File(s) under embargo until file(s) become available
FACS-Based Automated Pain Detection From Spontaneous Facial Expressions
Thesis posted on 01.05.2020, 00:00 by Zhanli Chen
Patient pain can be detected highly reliably from facial expressions using a set of facial muscle-based action units (AUs) defined by the Facial Action Coding System (FACS). A key characteristic of the facial expression of pain is the simultaneous occurrence of pain-related AU combinations, whose automated detection would be highly beneficial for efficient and practical pain monitoring. Existing general Automated Facial Expression Recognition (AFER) systems prove inadequate when applied specifically to pain detection: they either detect individual pain-related AUs but not their combinations, or they bypass AU detection by training a binary pain classifier directly on pain intensity data and are then limited by the lack of sufficient labeled data for satisfactory training. Our research is inspired by the clinical demand for automated pain evaluation in end-of-life patient care. As the major contribution of this research, the proposed system decouples the pain detection problem into two consecutive tasks: FACS-based prediction of pain-related AUs at the frame level, and sequence-level pain detection from the low-dimensional frame-level AU predictions. The two sub-tasks are handled by an AFER system and an Automated Pain Detection (APD) system, which are trained independently. This architecture of two independent machine learning networks not only improves data utilization across existing pain-oriented video datasets, but also eases the fusion of newly acquired data in the future, addressing the most challenging problem arising from data insufficiency. The decoupled architecture is also notable for its flexibility in customization.
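The two-stage decoupling described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the AU list, the threshold, and both stand-in functions are hypothetical and do not reproduce the thesis's trained models; they only show the data flow from per-frame AU scores to a sequence-level decision.

```python
import numpy as np

# Illustrative AU indices often associated with pain expressions
# (assumed for this sketch, not the thesis's exact configuration).
PAIN_RELATED_AUS = [4, 6, 7, 9, 10, 43]

def afer_stage(frames):
    """Stand-in AFER system: one AU-score vector per frame.

    A real system would run a trained multi-label AU classifier here;
    random scores are used only to show the interface.
    """
    rng = np.random.default_rng(0)
    return rng.random((len(frames), len(PAIN_RELATED_AUS)))

def apd_stage(au_scores, threshold=0.5):
    """Stand-in APD system: sequence-level decision from frame-level scores.

    A real system would use a multiple-instance-learning model; this
    sketch simply flags the sequence if any frame shows a pain-related
    AU combination (several AUs active at once).
    """
    active = au_scores > threshold            # per-frame active AUs
    combo_frames = active.sum(axis=1) >= 3    # >= 3 AUs co-occurring
    return bool(combo_frames.any())

frames = [None] * 16                          # placeholder frame sequence
decision = apd_stage(afer_stage(frames))
```

Because the stages communicate only through the low-dimensional AU-score matrix, either one can be retrained or swapped independently, which is the flexibility the decoupled design is aiming for.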
The AFER system is realized with three types of configurations: one conventional CVML system and two deep learning architectures. Automated pain detection is modeled as a weakly supervised problem, and the APD system is realized by two multiple instance learning frameworks (MIL and MCIL). AU combinations are encoded from single AU scores by two novel data structures (Compact and Clustered), and the multiple instance learning frameworks are trained with low-dimensional features based on the pain-related AU combinations. We followed an end-to-front research strategy to develop the decoupled pain detection system in three research phases, with the ultimate goal of establishing a robust and generic automated pain analysis system for clinical applications. Experimental results on the UNBC-McMaster Shoulder Pain Expression dataset show that the deep learning-based multi-label AFER system outperforms state-of-the-art AFER systems based on classical machine learning (ML) techniques. Further tests on the Wilkie video dataset of lung cancer patients suggest that the proposed decoupled framework holds strong promise for effective pain monitoring in clinical settings, where segment-level patient self-reported pain is the only available ground truth.
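The weakly supervised setup above can be sketched in generic form. The thesis's Compact and Clustered encodings are not reproduced here; as a hypothetical stand-in, each frame's thresholded AU scores are packed into a single integer bitmask so that a co-occurring AU combination becomes one low-dimensional instance code, and frames are grouped into a bag carrying only the sequence-level (weak) pain label. All names and the threshold are illustrative assumptions.

```python
import numpy as np

def encode_combination(au_scores, threshold=0.5):
    """Pack one frame's AU activations into an integer code (assumed
    stand-in for the thesis's combination encodings)."""
    bits = (np.asarray(au_scores) > threshold).astype(int)
    return int("".join(map(str, bits)), 2)

def make_bag(sequence_scores, label):
    """MIL bag: one instance code per frame, one weak sequence label.

    Under the standard MIL assumption, a bag is positive if at least
    one frame (instance) in it expresses pain.
    """
    instances = [encode_combination(f) for f in sequence_scores]
    return {"instances": instances, "label": label}

# Two frames: AUs 1 and 3 active in the first, none in the second.
bag = make_bag([[0.9, 0.2, 0.8], [0.1, 0.1, 0.1]], label=1)
# bag["instances"] == [0b101, 0b000] == [5, 0]
```

Training then only needs segment-level labels, matching the clinical setting where patient self-report per segment is the sole ground truth.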