University of Illinois Chicago

Towards Modeling Collaborative Task Oriented Multimodal Human-human Dialogues

Thesis, posted on 2014-10-28, authored by Lin Chen
This research took place in the larger context of building effective multimodal interfaces to help elderly people live independently. The final goal was to build a dialogue manager that could be deployed on a robot that would help elderly people perform Activities of Daily Living (ADLs), such as cooking dinner and setting a table. In particular, I focused on building dialogue-processing modules to understand such multimodal dialogues. Specifically, I investigated the functions of gestures (e.g., pointing gestures and Haptic-Ostensive actions, which involve force exchange) in dialogues concerning collaborative tasks in ADLs.

This research employed an empirical approach: the machine-learning-based modules were built using collected human experiment data. The ELDERLY-AT-HOME corpus was built from a data collection of human-human collaborative interactions in the elderly-care domain. Multiple categories of annotations were then conducted to build the Find corpus, which contained only the experiment episodes in which two subjects collaboratively searched for objects (e.g., a pot or a spoon), an essential task in performing ADLs.

This research developed three main modules: coreference resolution, Dialogue Act classification, and task state inference. The coreference resolution experiments showed that modalities other than language play an important role in bringing antecedents into the dialogue context. The Dialogue Act classification experiments showed that multimodal features, including gestures, Haptic-Ostensive actions, and subject location, significantly improve accuracy; they also showed that dialogue games help improve performance, even when the dialogue games were inferred dynamically. A heuristic, rule-based task state inference system using the results of Dialogue Act classification and coreference resolution was designed and evaluated; the experiments showed reasonably good results.
Compared to previous work, the contributions of this research are as follows:

  • Built a multimodal corpus focusing on human-human collaborative task-oriented dialogues.
  • Investigated coreference resolution from language to objects in the real world.
  • Experimented with Dialogue Act classification using utterances, gestures, and Haptic-Ostensive actions.
  • Implemented and evaluated a task state inference system.
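To illustrate the kind of heuristic, rule-based task state inference the abstract describes, the following is a minimal sketch of a tracker that consumes Dialogue Act labels and resolved referents from a Find episode. All names here (the `DialogueAct` values, the `TaskState` fields, the specific rules) are illustrative assumptions, not the thesis's actual label set or rule inventory.

```python
# Hypothetical sketch: a rule-based task state tracker driven by the outputs
# of Dialogue Act classification and coreference resolution.
# Labels and rules are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskState:
    target: Optional[str] = None              # object currently being searched for
    found: set = field(default_factory=set)   # objects already located

def update_state(state: TaskState, dialogue_act: str,
                 referent: Optional[str] = None) -> TaskState:
    """Apply one heuristic rule per (Dialogue Act, resolved referent) pair."""
    if dialogue_act == "request-find" and referent:
        # One subject asks the other to find an object.
        state.target = referent
    elif dialogue_act == "confirm-found" and state.target:
        # The search episode for the current target is complete.
        state.found.add(state.target)
        state.target = None
    return state

state = TaskState()
update_state(state, "request-find", "pot")   # "Can you find the pot?"
update_state(state, "confirm-found")         # "Here it is."
print(state.target, state.found)             # → None {'pot'}
```

In this sketch the classifier and coreference resolver are assumed to run upstream; the tracker itself only applies deterministic rules to their symbolic outputs, which matches the abstract's description of a heuristic system layered on those two modules.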

History

Advisor

Di Eugenio, Barbara

Department

Computer Science

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Doctoral

Committee Member

  • Zefran, Milos
  • Leigh, Jason
  • Gmytrasiewicz, Piotr
  • Chai, Joyce Y.

Submitted date

2014-08

Language

  • en

Issue date

2014-10-28
