Thesis Info

Thesis Title
Scholar: Cognitive Computing Approach for Question Answering
Rivindu Perera
2nd Author
3rd Author
BSc (Hons) Software Engineering
Number of Pages
Informatics Institute of Technology, University of Westminster
Thesis Supervisor
Udayangi Perera
Supervisor e-mail
udayangi AT
Other Supervisor(s)
Language(s) of Thesis
Department / Discipline
Department of Computer Science
Languages Familiar to Author
URL where full thesis can be found
Keywords
Question answering, Natural Language Processing, Information Extraction, Knowledge management, Cognitive computing, Unsupervised knowledge acquisition
Abstract: 200-500 words
Web-based automated question answering is an emerging paradigm at the intersection of Natural Language Processing and Information Extraction. Yet question answering systems remain incapable of matching human question answering ability, even though they are driven by tremendous amounts of information on the web. This study focuses on the question answering process and devises a solution by amalgamating current answer extraction techniques with promising cognitive computing techniques to develop a bio-inspired model for question answering. The research analyses the drawbacks and issues in current question answering systems and provides solutions through this interdisciplinary approach. Inherent features of existing question answering systems are also examined in order to improve the precision of the extracted answers. The empirical solution formulated by this research lies in the knowledge engineering arena of cognitive computing. Knowledge acquisition, representation, validation, inference, and justification are combined with Natural Language Processing techniques and with information management techniques such as clustering and similarity assessment to extract the answer to a given question. Going a few steps further, the solution is empowered with a novel unsupervised knowledge acquisition algorithm that retrieves knowledge from the web by continuously monitoring user preferences. The system is evaluated with seven different evaluation methods covering several aspects; notably, in a mean reciprocal rank based evaluation using past Text Retrieval Conference (TREC) question sets, the system achieved an average accuracy of 0.70 over a set of 280 questions, with a lowest individual accuracy of 0.52, which is higher than that of the compared system. Across all evaluation methods, the system is shown to make a recognisable scientific contribution to the research domain.
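The mean reciprocal rank (MRR) metric used in the TREC-based evaluation above can be sketched as follows. This is an illustrative implementation of the standard metric, not the thesis's own evaluation code; the function and variable names are hypothetical:

```python
def mean_reciprocal_rank(ranked_results, gold_answers):
    """Standard MRR: average over questions of 1/rank of the
    first correct answer (0 if no correct answer is returned).

    ranked_results: list of ranked candidate-answer lists, one per question.
    gold_answers:   list of correct answers, one per question.
    """
    total = 0.0
    for candidates, gold in zip(ranked_results, gold_answers):
        for rank, answer in enumerate(candidates, start=1):
            if answer == gold:
                total += 1.0 / rank
                break  # only the first correct hit counts
    return total / len(gold_answers)


# Example: correct answer at rank 2 for Q1, rank 1 for Q2.
score = mean_reciprocal_rank([["a", "b"], ["x", "y"]], ["b", "x"])
# (1/2 + 1/1) / 2 = 0.75
```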