Posted on 2019-12-01. Authored by Ruixuan Jiang.
With increasing access to the internet and technology in the general population, movement toward online data collection continues to grow. This trend includes studies that assess preferences for services and goods, including health care and the product of health care, i.e., health outcomes. Online data collection is convenient, but there may be trade-offs in the validity and generalizability of the data compared to traditional approaches, such as face-to-face interviews or mail-out pen-and-paper surveys. This dissertation aimed to compare online and in-person health preference elicitation using the time trade-off (TTO) and discrete choice experiment (DCE), as well as a variation on the composite TTO (cTTO) to the standard cTTO, in terms of data validity and preferences.
This dissertation is composed of 5 chapters. Chapter 1 introduces the motivations and concepts underlying this dissertation, including preference elicitation task types and potential differences in data validity and selection biases in respondent recruitment between modes of data collection. It also describes the conceptual model that ties together the 3 dissertation studies, as well as specific aims for each study.
The first study (Chapter 2) investigated whether a cTTO administered to unsupervised, online respondents can replicate the data quality and elicited values of a face-to-face (F2F), supervised cTTO. In this manuscript, the face-to-face, supervised cTTO was considered the gold standard. The online cTTO was modeled after the face-to-face cTTO task and used the same TTO experimental design, including the selection of hypothetical health states valued and the iterative procedure of varying the time associated with Life A and Life B. In comparison to the F2F sample, the online sample had a lower proportion of TTO values less than 0 (worse than dead; Online: 2.8%, F2F: 22.7%; p<0.001) and larger proportions of values at 0 (Online: 15.2%, F2F: 5.3%; p<0.001) and 1 (Online: 32.0%, F2F: 22.2%; p<0.001). These patterns may indicate lower respondent engagement, as respondents were unwilling to proceed to worse-than-dead values (fewer worse-than-dead values) and/or to thoroughly consider their preference for health states (preferences concentrated at a few values). Additional evidence of decreased engagement online was seen in the greater proportion of online respondents completing their tasks within 3 trade-offs (Online: 15.8%, F2F: 3.7%; p<0.001). The online TTO values also had poorer validity. Online respondents were more likely to have at least 1 inconsistency (i.e., where a dominated, logically worse health state was given a higher TTO value than the state that dominates it) for any health state (Online: 61.1%, F2F: 16.0%; p<0.001), as well as at least 1 inconsistency involving 55555, the worst EQ-5D-5L health state (Online: 41.3%, F2F: 3.1%; p<0.001). As expected, the estimated value set for the online sample also demonstrated poor face validity. The value for 11111, the best EQ-5D-5L health state with no problems on any of the 5 dimensions, was 0.846, far below 1, the full health anchor on the utility scale.
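The inconsistency check described above can be illustrated with a short Python sketch (not from the dissertation; the state codes, values, and function names are illustrative). An EQ-5D-5L state is coded as five digits from 1 (no problems) to 5 (extreme problems), so one state dominates another when it is at least as good on every dimension and strictly better on at least one; a respondent's values are inconsistent when a dominated state receives a strictly higher value:

```python
def dominates(a: str, b: str) -> bool:
    """State a dominates state b if a is at least as good (lower digit)
    on every EQ-5D-5L dimension and the two states differ."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def count_inconsistencies(values: dict) -> int:
    """Count dominated pairs where the logically worse state got a higher value."""
    states = list(values)
    return sum(
        1
        for a in states
        for b in states
        if dominates(a, b) and values[b] > values[a]
    )

# Hypothetical respondent: 21111 dominates 55555, yet 55555 is valued higher.
vals = {"21111": 0.5, "55555": 0.8, "11121": 0.9}
print(count_inconsistencies(vals))  # 1 inconsistent dominated pair
```

In the dissertation's terms, a respondent with a count of at least 1 over their valued states would be flagged as having "at least 1 inconsistency".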
The estimated online mean value for 55555 was 0.400; in contrast, no country-specific value set elicited face to face has a positive value for 55555. Further, a range of scale of 0.446 across the 3,125 health states described by the EQ-5D-5L is approximately a third of the range of scale estimated by most EQ-5D-5L value sets to date. For the purpose of this comparison (which used different modeling methods than the published US EQ-5D-5L value set in order to apply the same estimation method to the face-to-face and online analytic samples), the F2F value set had a value of 0.963 for 11111 and -0.307 for 55555, giving it better face validity than the online sample. Independent of comparisons to the F2F value set, the online value set had poor face validity, but its problems become even clearer when measured against the F2F sample.
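The range-of-scale figures quoted above follow directly from the reported anchors: the range of scale is the predicted value of the best state (11111) minus that of the worst state (55555). A quick arithmetic check on the reported values:

```python
# Range of scale = value(11111) - value(55555), per sample.
online_range = round(0.846 - 0.400, 3)    # online sample anchors
f2f_range = round(0.963 - (-0.307), 3)    # face-to-face sample anchors
print(online_range, f2f_range)  # 0.446 1.27
```

The online range of 0.446 is roughly 35% of the F2F range of 1.27, consistent with the "approximately a third" comparison in the text.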
The second study (Chapter 3) compared a cTTO task without any minimum requirements (base case arm) to the same task with 3 imposed minimum trade-offs/clicks (engagement arm), both collected from online, unsupervised respondents. Respondent carelessness or inattentiveness is a concern when collecting data online, so we explored whether imposing a minimum of 3 TTO trade-offs helps to improve the validity of cTTO data collected online. The visual presentation of tasks to the “base case” and “engagement” arms was identical. Task engagement and data validity were similar between arms. For example, 5.4% of base case respondents completed every task with the fewest possible trade-offs compared to 4.6% of the engagement arm (p=0.55), and 61.1% of base case respondents had at least 1 inconsistency compared to 63.5% of engagement arm respondents (p=0.43). Respondents in the engagement arm did not demonstrate any additional consideration of the cTTO tasks. The distribution of cTTO values in the engagement arm was not smoother (i.e., it did not have fewer “spikes”) than the base case; a smoother distribution may indicate greater consideration of tasks. The face validity of the engagement arm value set was even poorer than that of the base case value set: with the “3-click minimum”, the value for 11111 decreased further, from 0.846 to 0.783. Thus, imposing a 3-click minimum level of respondent engagement decreased the range of scale rather than improving it.
The third study (Chapter 4) compared the quality and validity of discrete choice experiment (DCE) data collected from face-to-face, supervised respondents and online, unsupervised respondents. The online DCE visual presentation was based on the face-to-face DCE; the DCE experimental design (i.e., the health states valued) and the type of task (i.e., choosing the preferred of 2 alternatives, with no opt-out option) were identical between modes of data collection. Unlike for the TTO, face-to-face data collection is not necessarily deemed the standard for the DCE; thus, this study informs whether preferences collected face-to-face and online using the DCE can be considered comparable. Aspects of face validity that compare the choices overall (i.e., the proportion of choices for A and B by severity difference between alternatives) were similar between the face-to-face and online samples. However, choices differed by data collection approach when compared task by task. Suspicious choice patterns (i.e., all As, all Bs, alternating A and B beginning with A, and alternating A and B beginning with B) were more prevalent online (Online: 5.3%, F2F: 2.9%; p=0.005). The online and face-to-face preference weights differed by more than a scaling factor. In the online sample, the dimension percent contribution was larger for Mobility (Online: 29%, F2F: 22%) and lower for Anxiety/Depression (Online: 15%, F2F: 22%). The general face validity did not significantly change with the move to online, DCE-based preference elicitation. Although the prevalence of potentially suspicious choice patterns was greater in the online sample, it was still low overall in both arms, indicating that a practical difference in data validity is unlikely. However, as Mobility is presented first and Anxiety/Depression last in each alternative, the online sample preferences may indicate the presence of an ordering effect.
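The four suspicious choice patterns named above can be screened for programmatically. A minimal sketch, assuming each respondent's choices are stored as a list of "A"/"B" strings (the function name is illustrative, not from the dissertation):

```python
def is_suspicious(choices: list) -> bool:
    """Flag the four degenerate response patterns: all A, all B, and the
    two strictly alternating sequences (starting with A or with B)."""
    n = len(choices)
    all_a = choices == ["A"] * n
    all_b = choices == ["B"] * n
    alt_from_a = choices == ["A" if i % 2 == 0 else "B" for i in range(n)]
    alt_from_b = choices == ["B" if i % 2 == 0 else "A" for i in range(n)]
    return all_a or all_b or alt_from_a or alt_from_b

print(is_suspicious(["A", "B", "A", "B"]))  # True: strictly alternating from A
print(is_suspicious(["A", "A", "B", "A"]))  # False: no degenerate pattern
```

The reported prevalences (Online: 5.3%, F2F: 2.9%) would correspond to the share of respondents whose full choice sequence matches one of these four patterns.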
Chapter 5 revisits the results in the context of the framework proposed in the introduction and concludes the dissertation with general conclusions, implications, and future directions for research. Overall, online TTO, as conceptualized, did not replicate the preference validity and respondent engagement of the face-to-face TTO. An additional engagement requirement of 3 minimum trade-offs online also failed to improve these aspects of data collection. Online DCE was similar to face-to-face DCE in terms of the face validity of raw, elicited preferences, but modeled preferences differed between modes of data collection.
Language
en
Advisor
Pickard, A. Simon
Chair
Pickard, A. Simon
Department
Pharmacy Systems, Outcomes, and Policy
Degree Grantor
University of Illinois at Chicago
Degree Level
Doctoral
Degree name
PhD, Doctor of Philosophy
Committee Member
Walton, Surrey
Lee, Todd
Kohlmann, Thomas
Muehlbacher, Axel