Individuals with serious mental illnesses experience functional impairment that interferes with activities such as work, independent living, and self-care (1). They often alternate between periods of elevated symptoms with significant impairment and periods of remission with improved functioning (2). Patients, caregivers, and providers have emphasized the need for interventions that help people manage their illnesses successfully and live meaningful lives (3,4).
Illness self-management interventions have been found to improve people’s knowledge about their illness and to help them develop coping skills that reduce the severity and distress associated with persistent symptoms (5). One well-known evidence-based illness self-management program is Wellness Recovery Action Planning (WRAP) (6), which uses a peer-led, group-based approach. Sessions follow a sequenced curriculum, and discussion topics and examples draw on the personal experiences of the participants and cofacilitators in attendance. Group facilitators help participants incorporate personal wellness tools into a written plan, which includes daily maintenance activities, identification of triggers and methods to avoid them, warning signs and response options, and a crisis management plan.
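For readers who think in structural terms, the components of a written WRAP plan can be summarized in a short sketch. The Python representation below is purely illustrative (WRAP plans are personal written documents, not software), and all field names are our own assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class WrapPlan:
    """Hypothetical structural summary of a written WRAP plan."""
    daily_maintenance: list[str] = field(default_factory=list)  # wellness tools used every day
    triggers: list[str] = field(default_factory=list)  # events that can set back wellness
    trigger_responses: list[str] = field(default_factory=list)  # methods to avoid or defuse triggers
    warning_signs: list[str] = field(default_factory=list)  # early indicators that symptoms are worsening
    warning_sign_responses: list[str] = field(default_factory=list)  # response options for warning signs
    crisis_plan: list[str] = field(default_factory=list)  # instructions for supporters during a crisis
```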
Despite the effectiveness of interventions such as WRAP, clinic-based approaches may pose barriers to participation for some service users. Mobile health (mHealth) interventions use smartphones, wearables, and other mobile devices to support health. Recent evidence shows that mHealth technologies can successfully deliver psychosocial interventions outside the traditional clinic setting and help overcome access barriers (7,8). A growing body of work supports the feasibility and clinical promise of various mHealth approaches among persons with serious mental illness (9–14). FOCUS is a smartphone-based illness management system designed specifically for people with serious mental illness. FOCUS provides users with daily self-assessments, illness management practice and intervention content, and Web-based summary reports accessible to an authorized mHealth support specialist. FOCUS leverages smartphone video and audio media players to enhance users’ experience and to “bring the intervention to life” (7).
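To make the described data flow concrete, the following sketch models how daily self-assessments might feed the Web-based summary reports reviewed by an mHealth support specialist. The types, field names, and aggregation are illustrative assumptions, not the actual FOCUS implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SelfAssessment:
    """One daily FOCUS self-assessment (hypothetical structure)."""
    user_id: str
    day: date
    responses: dict[str, int]  # e.g., symptom and coping ratings

def summarize(assessments: list[SelfAssessment]) -> dict[str, float]:
    """Aggregate daily ratings into per-item averages, standing in for the
    Web-based summary report a support specialist might review."""
    totals: dict[str, list[int]] = {}
    for assessment in assessments:
        for item, rating in assessment.responses.items():
            totals.setdefault(item, []).append(rating)
    return {item: sum(vals) / len(vals) for item, vals in totals.items()}
```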
This article reports qualitative findings from a comparative effectiveness trial comparing FOCUS (9) and WRAP (6). The main outcomes from the trial served as a point of departure for this qualitative analysis. As previously reported, FOCUS and WRAP produced comparable clinical outcomes, and both had high satisfaction ratings; however, individuals assigned to FOCUS commenced the intervention at significantly higher rates than those assigned to WRAP (9). Our objective was to integrate qualitative findings with the previously reported main outcomes (10).
We have previously reported on how participants used FOCUS in their everyday lives to self-manage mental illness (8). The qualitative study described here aimed to compare experiences with FOCUS and WRAP. We sought to examine whether people with serious mental illness notice and care about specific features of these interventions and how the interventions shape experiences of symptoms, recovery, and quality of life. Qualitative methods facilitated insight into first-person perspectives on the two psychosocial interventions (11). To our knowledge, this study was the first to use qualitative methods to compare an mHealth and a clinic-based illness self-management intervention for serious mental illness.
Methods
A qualitative substudy was nested within the main comparative effectiveness trial, which was conducted between June 2015 and September 2017, to provide additional insight into patient engagement and satisfaction with FOCUS and WRAP and to augment understanding of clinical outcomes through patient narratives about the perceived impact of the interventions. The qualitative research design was guided by meaning-centered medical anthropological approaches (15) to elicit in-depth illness narratives and treatment experiences. The study was approved by the Dartmouth College Committee for the Protection of Human Subjects and the University of Washington Institutional Review Board and was monitored by an independent safety monitoring board at Dartmouth’s Department of Psychiatry.
Individuals were eligible for the main comparative effectiveness trial if they had a chart diagnosis of schizophrenia, schizoaffective disorder, bipolar disorder, or major depressive disorder; were age 18 or older; and had a rating of ≤3 on one of three items that constitute the domination-by-symptoms factor of the Recovery Assessment Scale (16), which indicates a need for services. Individuals were excluded if they had a hearing, vision, or motor impairment that made it impossible to operate a smartphone; had an English reading level below the fourth grade; or had received the WRAP or FOCUS intervention in the past 3 years. Participants in the main trial were recruited by 20 clinical teams at Thresholds, a large agency that provides services to people with serious mental illness living in Chicago.
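For illustration only, these inclusion and exclusion rules can be encoded as a simple screening function. The sketch below is hypothetical; the field names and data structure are our own and were not part of the study’s screening procedures:

```python
# Illustrative encoding of the trial's eligibility rules.
# Field names (e.g., "diagnosis", "ras_domination_items") are hypothetical.

ELIGIBLE_DIAGNOSES = {
    "schizophrenia",
    "schizoaffective disorder",
    "bipolar disorder",
    "major depressive disorder",
}

def is_eligible(candidate: dict) -> bool:
    """Apply the trial's inclusion and exclusion criteria to one candidate."""
    included = (
        candidate["diagnosis"] in ELIGIBLE_DIAGNOSES
        and candidate["age"] >= 18
        # Inclusion requires a rating of <=3 on at least one of the three
        # domination-by-symptoms items of the Recovery Assessment Scale.
        and any(item <= 3 for item in candidate["ras_domination_items"])
    )
    excluded = (
        candidate["impairment_prevents_smartphone_use"]
        or candidate["reading_grade_level"] < 4
        or candidate["received_wrap_or_focus_past_3_years"]
    )
    return included and not excluded
```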
For the qualitative study, we used purposive sampling to select individuals from the main trial with varying levels of engagement with FOCUS and WRAP in order to capture both positive and negative sentiments about the interventions. High engagers were those who used the intervention for at least 9 of 12 weeks; individuals who did not meet this criterion were categorized as low engagers. Potential participants for the qualitative interviews were identified by engagement level prior to recruitment by using attendance data for WRAP and app usage data for FOCUS. Researchers invited individuals by telephone to participate in qualitative interviews, and those who expressed interest were scheduled for an interview. Prior to the interview, we provided detailed information about the qualitative study’s purpose, risks, and procedures. Participants provided written informed consent and were compensated $30 for completing the interview. Interviews were conducted after participants had completed the interventions, to minimize any potential impact that in-depth reflection about an intervention might have had on engagement or satisfaction. Interviews were conducted throughout the study in successive “waves,” mirroring the enrollment pattern of the trial.
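A minimal sketch of this engagement classification, assuming weekly usage indicators derived from WRAP attendance or FOCUS app usage data (the input format is a hypothetical simplification):

```python
def classify_engagement(weeks_used: list[bool]) -> str:
    """Label a participant as a high or low engager.

    `weeks_used` holds one boolean per week of the 12-week intervention
    period, True if the participant attended WRAP or used the FOCUS app
    that week. The threshold (at least 9 of 12 weeks) follows the study's
    definition of high engagement.
    """
    assert len(weeks_used) == 12
    return "high" if sum(weeks_used) >= 9 else "low"

# Example: a participant active in 10 of 12 weeks is a high engager.
print(classify_engagement([True] * 10 + [False] * 2))  # -> "high"
```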
The qualitative sample consisted of 31 participants (FOCUS, N=16; WRAP, N=15). Our determination of sample size was guided by expectations for thematic saturation. Although no gold standard exists for determining sample size in qualitative research (17), some research suggests that saturation is typically achieved after 12 to 18 interviews (18). With this in mind, and because we purposively sampled to include persons with a range of engagement levels, we expected to enroll 30 participants in the qualitative study.
We conducted semistructured interviews to generate a nuanced understanding of participants’ experiences with the interventions. Semistructured interviews were selected because this approach enables researchers to inquire into similar topics across participants while allowing flexibility and probing to elicit rich accounts. Interviews were conducted by researchers trained in qualitative methods who had experience working with people with serious mental illness. Interview questions were organized into the following domains: overall perspectives on the intervention, engagement with the intervention, FOCUS or WRAP in relation to illness experience, and FOCUS or WRAP in the context of mental health services. Examples of questions include the following: “Thinking back over the past 3 months of participating in [WRAP or FOCUS], what were your overall impressions?” “What has this experience meant to you?” “What did you like about [WRAP or FOCUS]?” “What did you not like about [WRAP or FOCUS]?” “In what ways did [WRAP or FOCUS] impact how you manage your symptoms?”
Interviews produced detailed accounts of how participants engaged with the interventions, what they found useful and liked, and what they found challenging. The interview guides were constructed to obtain comparable information across the two interventions, although some questions were tailored to FOCUS and some to WRAP. The guides were revised through collaborative discussion after the initial round of interviewing to refine questions for clarity and relevance to the research aims. Interviews lasted 45–60 minutes and were audio recorded; brief field notes were taken, including researchers’ reflections and observations. Audio recordings were transcribed by a transcription service and checked for accuracy by a research assistant.
Three of the authors (E.C.S., G.J., and R.B.) reviewed the first round of transcripts and independently generated a list of initial concepts and categories. Involving multiple analysts strengthens the rigor of qualitative interpretation by bringing multiple perspectives to the analysis (19). On the basis of this review, the study aims, and the domains of the interview guide, a codebook of 37 initial codes was developed and used to code the transcripts with a thematic analytic approach (20). Provisional concepts and patterns identified in the early interviews were used to identify areas of investigation in subsequent interviews (e.g., comparison of the intervention with previous forms of treatment to provide additional insight into impact). Through continued immersion in the data set, we constructed eight additional codes via review and discussion. The final codebook had 45 codes, grouped into 12 domains. For the analysis reported here, we focused on the following domains: indicators of acceptability of the interventions, indicators of unacceptability of the interventions, challenges with the interventions, and impact of the interventions. [A table in an online supplement to this article presents details of the codes and definitions included in each domain.] We selected these domains for their relevance in providing insight into patient engagement, satisfaction with the interventions, and perceived impact of the interventions. We used Dedoose, a qualitative analytic software program, to manage and code the data (21). Coding was led by one of the authors (G.J.), with ongoing supervision and review by the lead author (E.C.S.), an experienced qualitative researcher, as a check on trustworthiness. We held regular meetings to discuss and review codes and coded data, and questions regarding the application of codes were resolved through consensus.
Qualitative analysis involves more than coding the data set. Whereas codes are labels that capture an idea associated with a segment of data, themes capture common, recurring patterns across the data set. To develop themes, coded excerpts were aggregated in Dedoose into reports. Two authors (E.C.S. and G.J.) independently reviewed and annotated these reports and wrote analytic memos to identify patterns. Analytic memos were shared and subsequently revised and refined into data summary reports that outlined the main thematic findings and included emblematic quotations.
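As a conceptual illustration of this aggregation step (plain Python rather than Dedoose, whose internals we are not assuming, with invented example records):

```python
from collections import defaultdict

# Hypothetical coded records: (participant_id, code, excerpt text).
coded_excerpts = [
    ("P01", "acceptability", "I liked that I could use it anytime."),
    ("P02", "challenges", "The content started to feel repetitive."),
    ("P03", "acceptability", "The group made me feel less alone."),
]

# Aggregate excerpts by code so analysts can review all data for a code
# together, annotate patterns, and draft analytic memos.
reports: dict[str, list[tuple[str, str]]] = defaultdict(list)
for participant_id, code, excerpt in coded_excerpts:
    reports[code].append((participant_id, excerpt))

for code, excerpts in sorted(reports.items()):
    print(f"--- Report for code: {code} ({len(excerpts)} excerpts) ---")
    for participant_id, excerpt in excerpts:
        print(f"[{participant_id}] {excerpt}")
```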
Discussion
We applied qualitative methods to provide additional insight into the main outcomes of a comparative effectiveness trial of mHealth and clinic-based self-management interventions for persons with serious mental illness (9). Overall, participant narratives provided evidence that both interventions were well received by many participants and offered opportunities to learn new illness management skills. Participants were drawn to the hallmark characteristics of the two interventions: the 24/7 accessibility of FOCUS and the social support and peer learning of WRAP. Although the high overall satisfaction ratings for both interventions are promising and many participants described the interventions as having had a meaningful impact, qualitative analyses also identified aspects of the interventions that participants found unhelpful, problematic, or challenging. A few participants noted dissatisfaction with repetitive content and a lack of personal connection in FOCUS. mHealth interventions such as FOCUS could benefit from periodic content updates or staggered unlocking of content based on the user’s evolving needs (one possible approach is sketched below). Providing mHealth users with opportunities to integrate their own content into the intervention (e.g., functionality to upload photos of loved ones, insert individually selected inspirational quotes, and integrate favorite music into modules) may also increase relevance and personalization.
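As one hypothetical illustration of the staggered-unlocking idea, content could be released gradually as a user progresses rather than all at once; the function, module names, and pacing below are assumptions, not features of FOCUS:

```python
# Release new module content as the user progresses, rather than all at
# once, to reduce the sense of repetition. Thresholds are hypothetical.

def unlocked_modules(all_modules: list[str], weeks_active: int,
                     modules_per_week: int = 2) -> list[str]:
    """Return the subset of modules available after `weeks_active` weeks."""
    n = min(len(all_modules), (weeks_active + 1) * modules_per_week)
    return all_modules[:n]

modules = ["coping with voices", "mood", "sleep", "medication", "social"]
print(unlocked_modules(modules, weeks_active=0))  # first two modules
print(unlocked_modules(modules, weeks_active=1))  # four modules unlocked
```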
Interview accounts also suggested that the group-based structure of WRAP is especially preference sensitive. Participants with negative attitudes toward groups did not initiate WRAP. More rigorous screening to determine goodness of fit for the group intervention may help prevent such experiences. In contrast, the smartphone-based delivery of FOCUS—accessed in one’s own environment rather than administered in a treatment center, largely automated rather than person delivered, and available on demand rather than on a schedule—was attractive to some participants and posed few barriers to entry. Combined with the quantitative results of the comparative effectiveness trial (9), these findings suggest that clinical programming and policy efforts should support continued development of mHealth solutions and innovation in digital health payment and reimbursement models so that individuals gain opportunities to benefit from mental health resources that they might otherwise not receive through standard care.
The study had some limitations. It was designed to enhance the credibility and transferability of findings (22) by sampling participants with a range of engagement levels, interviewing until saturation was reached, and involving multiple researchers to allow for multiple perspectives (19,23). However, the findings have limited generalizability because of the small sample and single study site. In addition, interviewing participants only once may have limited our ability to build rapport, and responses may have reflected some social desirability bias. Finally, a few participants were highly symptomatic during their interviews, and interviews may not have been the best approach in those circumstances.