Posted on 2021-12-01, 00:00. Authored by Sahisnu Mazumder.
Dialogue systems, commonly called chatbots, have gained escalating popularity in recent times due to their widespread use in carrying out chit-chat conversations with users and accomplishing tasks as personal assistants. These systems are typically trained on manually labeled data and/or written with handcrafted rules, and often use explicit knowledge bases (KBs), compiled by human experts, to help generate quality responses and support information-seeking conversations. However, because the compiled data and handcrafted rules are limited by the human effort available, they cannot cover all possible variations of natural language. Moreover, the KBs are inherently incomplete and remain fixed during conversation, which limits the systems' ability to answer user questions and to ground knowledge in conversation modeling. As a result, user satisfaction with deployed systems is often low.
In this thesis, we propose methods that give chatbots the ability to continuously and interactively learn, during conversation, i.e., "on the job" by themselves, (1) new world knowledge and (2) new language expressions grounded to actions, so that as the systems chat with more users they become more knowledgeable and improve their performance over time. To formalize this idea of on-the-job continual and interactive learning, we propose a novel paradigm called Lifelong INteractive Learning in Conversation (LINC) and design frameworks for factual knowledge learning and language learning in dialogues. We treat the situation in which a user asks the dialogue system a question it cannot answer (because of missing knowledge in its current KB) as an opportunity to learn new factual knowledge, and propose two frameworks, namely Continuous and Interactive Learning of Knowledge (CILK) and Interactive Knowledge Acquisition and Inference (IKAI), to enable interactive factual knowledge learning in dialogues. We also propose a novel framework called Command Matching and Learning (CML) to automatically build self-adaptive natural language interfaces (NLIs) for any API-driven application. CML continually learns new language expressions in multi-turn dialogues with end users after model deployment and improves its natural language understanding and grounding abilities over time. We build simulated users (from publicly available standard datasets) to interact with the systems, i.e., to answer the questions the systems ask, and use them for online evaluation. Our experimental evaluation and analysis show that as more knowledge accumulates over time, the systems learn better, answering more factual questions from users and improving the grounding of natural language commands.
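To make the interactive learning loop concrete, below is a minimal, hypothetical Python sketch of the kind of knowledge acquisition described above: when a user question cannot be answered from the KB, the system asks the (simulated) user and stores the reply as a new fact. The class, function names, and triple-based KB here are illustrative assumptions, not the CILK/IKAI implementations from the thesis.

from typing import Optional, Set, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)


class InteractiveKBAgent:
    """Toy agent that answers from a triple KB and learns missing facts by asking."""

    def __init__(self, kb: Set[Triple]):
        self.kb = kb

    def answer(self, head: str, relation: str) -> Optional[str]:
        """Return a tail entity if the KB already contains a matching triple."""
        for h, r, t in self.kb:
            if h == head and r == relation:
                return t
        return None

    def handle_question(self, head: str, relation: str, ask_user) -> str:
        """Answer from the KB, or turn the knowledge gap into a learning opportunity."""
        tail = self.answer(head, relation)
        if tail is not None:
            return tail
        # The system cannot answer: ask the user and add the reply to the KB.
        tail = ask_user(f"I don't know: what is the '{relation}' of '{head}'?")
        self.kb.add((head, relation, tail))  # knowledge accumulates over time
        return tail


# A stand-in for the simulated users built from standard datasets for online evaluation.
simulated_user_facts = {("Chicago", "located_in"): "Illinois"}

def simulated_user(prompt: str) -> str:
    # Toy oracle that answers the system's question from a fixed fact table.
    for (head, relation), tail in simulated_user_facts.items():
        if head in prompt and relation in prompt:
            return tail
    return "unknown"

agent = InteractiveKBAgent(kb=set())
print(agent.handle_question("Chicago", "located_in", simulated_user))  # learned from the user
print(agent.answer("Chicago", "located_in"))                           # now answered from the KB

In this sketch the learning signal comes directly from the user's reply; the thesis frameworks additionally perform inference over the accumulated knowledge, which is omitted here.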
History
Advisor
Liu, Bing
Chair
Liu, Bing
Department
Computer Science
Degree Grantor
University of Illinois at Chicago
Degree Level
Doctoral
Degree name
PhD, Doctor of Philosophy
Committee Member
Yu, Philip S.
Zhang, Xinhua
Parde, Natalie
Riva, Oriana