
User Engagement in Mental Health Apps: A Review of Measurement, Reporting, and Validity

Abstract

Objective:

Despite the potential benefits of mobile mental health apps, real-world results indicate engagement problems in the form of low uptake and low sustained use. This review examined how studies have measured and reported on user engagement indicators (UEIs) for mental health apps.

Methods:

A systematic review of multiple databases was performed in July 2018 for studies of mental health apps for depression, bipolar disorder, schizophrenia, and anxiety that reported on UEIs, namely usability, user satisfaction, acceptability, and feasibility. The subjective and objective criteria used to assess UEIs, among other data, were extracted from each study.

Results:

Of 925 results, 40 studies were eligible. Every study reported positive results for the usability, satisfaction, acceptability, or feasibility of the app. Of the 40 studies, 36 (90%) employed 371 distinct subjective criteria that were assessed with surveys, interviews, or both, and 23 studies used custom subjective scales rather than preexisting standardized assessment tools. A total of 25 studies (63%) used objective criteria, comprising 71 distinct measures. No two studies used the same combination of subjective or objective criteria to assess UEIs of the app.

Conclusions:

The high heterogeneity and the use of custom criteria to assess mental health apps in terms of usability, user satisfaction, acceptability, or feasibility present a challenge for understanding the low real-world uptake of these apps. Every study reviewed claimed that UEIs for the app were rated highly, which suggests a need for the field to focus on engagement by creating reporting standards and scrutinizing such claims more carefully.

HIGHLIGHTS

In response to the low uptake and low sustained use of mobile mental health apps, this systematic review examined studies evaluating mental health apps for depression, bipolar disorder, schizophrenia, and anxiety.
Of the 40 studies reviewed, 36 used at least some subjective criteria (371 subjective questions) and 25 used at least some objective criteria (71 measures) to assess user engagement.
Every study concluded that the user engagement indicators for the app were positive; however, no two studies used the same combination of criteria or the same thresholds to evaluate their apps.

Mobile technologies are increasingly owned and utilized by people around the world. With this rise in pervasiveness comes the potential to increase access to and augment delivery of mental health care. This can occur in multiple ways, including patient-provider communication, self-management, diagnosis, and even treatment (1). Early evidence concerning the efficacy of mobile mental health apps has created a wave of enthusiasm and support (2). The potential scalability of these app-based interventions has been proposed as a means of addressing the global burden of mental illnesses and offering services to those who are in need but previously have not been able to access care (3). Even in developed countries, where access to mental health services remains inadequate, app-based interventions have been proposed as innovative research, screening, prevention, and care delivery platforms (4). The 10,000 mental health apps currently available for immediate download from the Apple iTunes or Google Play marketplaces speak to their easy availability, as well as to the high interest in them (5).
But potential, interest, or availability alone has not translated into the often-forecasted digital revolution for mental health. Many possible explanations exist, and one factor is the poor uptake of mental health apps (6). User engagement studies can offer valuable insight here. Many studies that evaluate mental health apps include an examination of usability, user satisfaction, acceptability, or feasibility. These “user engagement indicators” (UEIs) are meant to represent the ability of an app to engage and sustain user interactions. However, the lack of guidelines, consensus, or specificity regarding user engagement in mental health research introduces the concerning potential for UEIs to be selected inappropriately, presented with bias, or interpreted incorrectly. Thus it is difficult to interpret, let alone compare or pool data on, engagement metrics related to these smartphone apps. For example, in one study, participants described an app as “buggy” and “clunky” and said it “didn’t really work” during qualitative interviews (7). Nevertheless, when the same participants were asked specifically whether the app was “user friendly” and “easy to use,” five of seven reported that it was. The authors used the responses to the second set of metrics as the basis for their conclusion that the app had positive UEIs, which masked potentially serious usability and safety concerns.
To both assess the current state of reporting and inform future efforts, we performed a systematic review of how the UEIs of apps designed for persons with depression, bipolar disorder, schizophrenia, and anxiety are evaluated. We hypothesized that there would be conflations in the definitions and criteria for common types of UEIs (namely, usability, satisfaction, acceptability, and feasibility), inconsistent subjective and objective criteria used to evaluate UEIs, and inconsistent thresholds of UEI ratings across studies.

Methods

Search String and Selection Criteria

We conducted a systematic search of PsycINFO, Ovid MEDLINE, the Cochrane Central Register of Controlled Trials, AMED, Embase, and HMIC on July 14, 2018, using terms synonymous with mobile apps for mental health. The full search algorithm is presented in Table 1. Inclusion criteria were as follows: report original qualitative or quantitative data; primarily involve a mobile application; be designed for people with depression, bipolar disorder, schizophrenia, or anxiety (including posttraumatic stress disorder and obsessive-compulsive disorder); include a conclusion about UEIs for the app (including usability, satisfaction, acceptability, or feasibility); and have a study length of at least 7 days. Reviews, conference reports, protocols, and dissertations were excluded, as were non-English-language publications and publications that did not focus on the technologies or diseases of interest. All publications were screened by two authors (MMN and JT), and any disagreements were resolved through discussion resulting in consensus.
TABLE 1. Search algorithm for studies of mobile mental health apps, by PICO framework categorya

Category | Search wordsb
Population | Depression, depressive, mental illness, mental health, mood disorder, affective disorder, anxiety, phobia, bipolar, psychosis, schizophr*
Intervention | Smartphone*, smart phone*, mhealth, mobile phone*, iphone*, android, mhealth, mobile app*, phone app*
Comparator | [any]
Outcome | Usability, user interface, ui, feasib*, pilot, engag*, acceptability

a PICO, population, intervention, comparator, outcome.
b Within each category, terms were combined with “OR.”
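To make the structure of the search concrete, the sketch below assembles the Table 1 terms into a single boolean string, assuming the conventional PICO construction: “OR” within each category (per footnote b) and “AND” across categories, with the open “[any]” comparator contributing no clause. The exact operator, quoting, and wildcard syntax varies by database (Ovid, PsycINFO, Embase, etc.), so this is illustrative only, not the string submitted to any one database.

```python
# Illustrative sketch: combine the Table 1 search terms into one boolean
# query string. Assumes OR within each PICO category and AND across
# categories; real database syntax differs by vendor.

PICO_TERMS = {
    "population": [
        "depression", "depressive", "mental illness", "mental health",
        "mood disorder", "affective disorder", "anxiety", "phobia",
        "bipolar", "psychosis", "schizophr*",
    ],
    "intervention": [
        "smartphone*", "smart phone*", "mhealth", "mobile phone*",
        "iphone*", "android", "mobile app*", "phone app*",
    ],
    # comparator is "[any]" in Table 1, so it contributes no clause
    "outcome": [
        "usability", "user interface", "ui", "feasib*", "pilot",
        "engag*", "acceptability",
    ],
}

def build_query(terms_by_category: dict[str, list[str]]) -> str:
    """OR the terms within a category, then AND the category clauses."""
    clauses = ["(" + " OR ".join(terms) + ")"
               for terms in terms_by_category.values()]
    return " AND ".join(clauses)

print(build_query(PICO_TERMS))
```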

Data Extraction and Synthesis

A tool was developed to systematically extract data, and the following data were gathered by two authors (MMN and JT). Study details included the study design (e.g., single arm or randomized controlled trial), sample size, inclusion criteria, and clinical characteristics of participants. Intervention details included information about the app, length of the intervention, and device type used. Data on objective UEIs included usage frequency, response to prompts, and trial retention. Data on subjective UEIs included satisfaction questionnaires, interviews about usability, and other similar data. Data were also gathered on factors that might influence usability, such as whether patients were involved in the app design process, incentives for participation, and other similar factors. Institutional review board approval was not required for this literature review.
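As a minimal sketch of what one row of such an extraction tool might look like (the authors' actual instrument is not published, so every field name below is hypothetical, chosen only to mirror the categories listed above):

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical extraction record mirroring the categories described above.
# The authors' real tool is not published; all field names are illustrative.
@dataclass
class ExtractionRecord:
    study_design: str                 # e.g., "single arm" or "randomized controlled trial"
    sample_size: int
    inclusion_criteria: str
    clinical_characteristics: str
    app_details: str                  # information about the app itself
    intervention_length_days: Optional[int]  # None if participant-determined
    device_type: str
    objective_ueis: list[str] = field(default_factory=list)    # e.g., usage frequency, trial retention
    subjective_ueis: list[str] = field(default_factory=list)   # e.g., satisfaction questionnaires
    usability_factors: list[str] = field(default_factory=list) # e.g., patient involvement, incentives
```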

Results

Included Studies

The initial database search returned 925 results. (A PRISMA chart in an online supplement to this article shows the full study selection process.) The 925 articles were reduced to 882 after duplicates were removed. A further 778 articles were excluded after the titles and abstracts were reviewed for eligibility. Full-text versions were retrieved for 104 articles, of which 64 were ineligible for various reasons (see PRISMA chart in online supplement).
Thus a total of 40 studies reporting UEIs of mental health apps for persons with mental illness were included (7–47). Of these, nine apps were designed for individuals with depression (11, 12, 19, 21, 27, 29, 40, 43, 46), four for those with bipolar disorder (16, 20, 22, 47), seven for those with schizophrenia (23, 25, 28, 33, 38, 39, 45), and seven for those with anxiety (15, 26, 30–32, 35, 44). Thirteen apps were designed for two or more populations with different mental illnesses (8–10, 13, 14, 17, 18, 24, 34, 36, 37, 41, 42).
The mean number of participants enrolled was 32 per study (range 2–163). Among studies that reported a fixed or mean study length (some studies lasted as long as participants chose to use the app), the mean length was 58 days.

UEIs: Usability, Satisfaction, Acceptability, and Feasibility

Every study performed an evaluation of the usability, satisfaction, acceptability, or feasibility of an app. Although we refer to these criteria as UEIs, the studies reviewed did not use UEI as a term or a framework. Across studies, conflations were noted in the definitions of and criteria for usability, satisfaction, acceptability, or feasibility. Some studies referred to these types of UEI interchangeably. For example, two studies used the phrase “usability/acceptability” (23, 24), and another used the phrase “acceptability/usability” (25). One referred to a “satisfaction/usability interview” (44). Another study first used the phrase “tolerability and usability” and later switched to “acceptability and tolerability” (47). Another first noted that “acceptability was measured by examining self-reports and user engagement with the program” but later stated that “acceptability was measured by examining users’ self-reported attitudes and satisfaction” (43). Yet another study used the technology acceptance model to partly evaluate the usability of an app (44).
Some studies treated certain UEIs as determinants of others. One study stated, “The BeyondNow app was also shown to be feasible given the high level of usability” (36). Another noted, “To evaluate acceptability of using a smartphone application as part of EP [early psychosis] outpatient care, participants completed self-report surveys at the end of the study evaluating satisfaction” (14). And under the subheading “Aim I–Feasibility: Mobile App Satisfaction,” another study reported, “Participants provided high usability ratings for the mobile app based on the SUS [System Usability Scale]” (8).
Most studies evaluated multiple UEIs at once. Eight drew conclusions about one type of UEI (e.g., usability only) (10, 12, 18, 21, 27, 30, 37, 41), 11 about two types (e.g., feasibility and acceptability) (9, 17, 19, 29, 31, 36, 38, 40, 44, 46, 47), 11 about three types (11, 14–16, 20, 23, 24, 28, 34, 39, 45), and 10 about four types (8, 13, 22, 25, 26, 32, 33, 35, 42, 43). Furthermore, most studies used the same criteria to evaluate multiple UEIs. For instance, one stated, “Satisfaction, usability and acceptability were calculated based on the percentage of answers of the Likert-scale” (22). The fact that most studies used similar methods to evaluate more than one type of UEI speaks to the lack of precision and distinction between evaluation methods.

Types of Criteria: Subjective and Objective

The criteria used to draw conclusions about UEIs varied widely across studies, as shown in Figure 1. Of the 40 studies reviewed, 15 (38%) concluded that the app had positive UEIs entirely on the basis of subjective criteria (10, 12, 15, 18, 23, 27, 29, 30–32, 37, 41, 44, 46, 47). Four (10%) concluded that the app had positive UEIs entirely on the basis of objective criteria (9, 17, 21, 28), and 21 (53%) concluded that the app had positive UEIs on the basis of a combination of subjective and objective criteria (8, 11, 13, 14, 16, 19, 20, 22, 24–26, 33–36, 38–40, 42, 43, 45).
FIGURE 1. Types of criteria used to evaluate user engagement indicators in 40 studies of mental health apps

Subjective criteria.

The 36 studies (90%) that evaluated UEIs entirely or partially on the basis of subjective criteria relied on 371 distinct questions (see table in online supplement), which were administered through surveys, interviews, or both. As shown in Table 2, a total of 13 studies drew on one or more preexisting assessment tools (48–58). The remaining 23 studies did not rely on preexisting tools to evaluate subjective criteria, suggesting that they developed their own custom questions. This assortment of both subjective criteria and methodologies for evaluating UEIs demonstrates that there is no gold standard.
TABLE 2. Preexisting assessment tools used to evaluate user engagement indicators in studies of mental health appsa

Tool | Study
System Usability Scale (48) | Levin et al., 2017 (8); Ben-Zeev et al., 2014 (25); Bauer et al., 2018 (41); Birney et al., 2016 (46)
Client Satisfaction Questionnaire (49) | Possemato et al., 2017 (15); Wenze et al., 2016 (16); Boisseau et al., 2017 (32)
Credibility and Expectancy Scale (50) | Wenze et al., 2016 (16); Watts et al., 2013 (27); Boisseau et al., 2017 (32)
Usefulness, Satisfaction, and Ease Questionnaire (51) | Ben-Zeev et al., 2014 (25); Corden et al., 2016 (40)
Adaptation of another study's assessment tool (25, 52) | Ben-Zeev et al., 2016 (23); Cernvall et al., 2018 (30)
Post-Study System Usability Questionnaire (53) | Ben-Zeev et al., 2014 (25)
Technology Acceptance Model (54) | Price et al., 2017 (44)
Technology Assessment Model Measurement Scale (55) | Ben-Zeev et al., 2014 (25)
Therapeutic Alliance Scales for Children–Revised (56) | Pramana et al., 2014 (26)
Client Evaluation of Services Questionnaire (57) | Pramana et al., 2014 (26)
Computer System Usability Questionnaire (58) | Pramana et al., 2014 (26)

a There was no consensus among the studies on the best tool for evaluating user engagement indicators.

Objective criteria.

The 25 studies (63%) that evaluated UEIs entirely or partially on the basis of objective criteria relied on 71 distinct measures of usage data (see online supplement). Of these 25 studies, five set a target usage goal in advance (8, 28, 34, 38, 39) and 20 considered usage data retrospectively (9, 11, 13, 14, 16, 17, 19–22, 24–26, 33, 35, 36, 40, 42, 43, 45) to determine positive UEIs.
Across all studies, a wide array of objective criteria was taken into account, including “average number of peer and coach interactions” (11), “length of time in clinic at enrollment” (14), “(reliable) logging of location” (19), “(number of) active users” (22) and “percentage of participants who were able to use both system-initiated (i.e., in response to prompts) and participant-initiated (i.e., on-demand) videos independently and in their own environments for a minimum of 3 days after receiving the smartphone” (33).
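None of these measures is computed the same way twice across studies. As a minimal sketch of how reproducible objective criteria could be derived from raw usage logs (all field names and values here are hypothetical; the 33% and 70% targets echo the divergent goals discussed under “Thresholds of UEIs” below):

```python
# Hypothetical per-participant usage logs; none of these fields or values
# comes from a study in this review.
usage_logs = [
    {"participant": "p01", "prompts_sent": 60, "prompts_answered": 25,
     "days_active": 18, "completed_study": True},
    {"participant": "p02", "prompts_sent": 60, "prompts_answered": 48,
     "days_active": 27, "completed_study": False},
]

def prompt_response_rate(log: dict) -> float:
    """Fraction of system-initiated prompts the participant answered."""
    return log["prompts_answered"] / log["prompts_sent"]

def retention_rate(logs: list[dict]) -> float:
    """Fraction of enrolled participants who completed the study."""
    return sum(log["completed_study"] for log in logs) / len(logs)

# The same data can pass one study's target and fail another's:
for threshold in (0.33, 0.70):
    met = [log["participant"] for log in usage_logs
           if prompt_response_rate(log) > threshold]
    print(f"answered >{threshold:.0%} of prompts: {met}")
print(f"retention: {retention_rate(usage_logs):.0%}")
```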

Thresholds of UEIs

All 40 studies concluded that their app had positive UEIs. However, the studies came to the same conclusion in different ways: they evaluated various types of UEIs with different methodologies, from the criteria used (such as subjective ratings and objective data) to the means of assessment (such as a survey, interview, or usage data). In other words, inconsistencies in the UEI evaluation process cast doubt on the studies’ claims that their apps were usable, satisfactory, acceptable, or feasible.

Subjective criteria.

Because of the range of both subjective criteria and their evaluation methods, it is impossible to compare the ratings of UEIs across studies. However, it is clear that studies utilized different thresholds for concluding that their app had positive UEIs. For example, of studies that evaluated the subjective criterion “ease of use,” the percentage of users reportedly satisfied with ease of use ranged from 60% (18) to 100% (13). Similarly, the satisfaction scores for ease of use ranged from 79.7% (46) to 92.6% (16). Despite the range of perceptions about the ease of use of an app, every study concluded that its app had positive UEIs.

Objective criteria.

Differences were noted across studies in objective criteria, such as target usage goals and frequency of usage. For example, of studies that set a target usage goal pertaining to task completion, two studies sought completion of over 33% of prompted tasks (28, 39) and another study sought completion of over 70% of prompted tasks (38). Despite this variability, all the studies that set a target usage goal concluded that the apps had positive UEIs on the basis of the usage data. Similarly, studies that considered frequency of usage as an objective criterion reported frequencies ranging from once per day (8) to once every other day (45) and an average of 5.64 times per participant over the course of 2 months (36). Yet each of these trials concluded that its app had positive UEIs.

Discrepancies between thresholds.

Even when an app seems to meet the threshold for positive UEIs on the basis of subjective criteria, it might not meet the threshold on the basis of objective criteria. One study raised the issue of possible discrepancies arising from evaluating UEIs solely on the basis of subjective versus objective criteria: “Analysis of objective use data for another study utilizing PTSD Coach indicates that although app users report positive feedback on usability and positive impact on symptom distress, only 80% of first-time users reach the home screen and only 37% progress to one of the primary content areas” (15). This is an issue not only within studies but also across studies. For instance, five studies that used retention rate as an objective criterion reported retention rates of 80% (35), 83% (11), 91.5% (21), 100% (38), and 100% (45). Yet studies that did not rely on retention rate as a criterion had retention rates as low as 35% (13) and 65.7% (27). All of these studies concluded that their apps had positive UEIs.

Discussion

Despite the real-world challenges of mental health app usability, engagement, and usage, all 40 studies included in this review reported that their app had positive UEIs. These positive reports of usability, satisfaction, acceptability, or feasibility were made regardless of whether the studies based their claims on subjective criteria, objective criteria, or a combination of the two, and the studies unfailingly interpreted the UEI ratings as positive even when the self-reports and usage data varied widely. These findings suggest that the authors of the studies either did not establish a threshold indicating a positive UEI or set such thresholds quite low. The inconsistency of the methodologies makes it difficult to define user engagement and to determine how best to design for it. Furthermore, it calls into question the practices used to evaluate mental health apps.
The findings of this review indicate the lack of consensus about what constitutes usability, satisfaction, acceptability, and feasibility for mental health apps. This lack of consensus makes it difficult to compare results across studies, hinders understanding of what makes apps engaging for different users, and limits their real-world uptake. A great deal of ambiguity currently characterizes the distinctions between various types of UEI (see online supplement), which reduces the usefulness of these descriptors. There is thus a clear and urgent need to formulate standards for reporting and sharing UEIs so that accurate assessments and informed decisions regarding app research, funding, and clinical use can be made.
It is concerning that 15 of the 40 studies (38%) concluded that their app had positive UEIs without considering objective data (Figure 1). Qualitative data are unquestionably valuable for creating a fuller, more nuanced picture of participants, because their characteristics, such as language, disorder, and age, largely inform their ability to use an app and their unique experience of it. However, there is also a need for objective measurements that can be reproduced to validate initial results and to create a baseline for generalizing the results of any single study. Consequently, a combination of subjective and objective criteria may be most useful for offering insight into user engagement.
All studies concluded that their apps had positive UEIs on the basis of vastly different subjective and objective criteria (see online supplement). Although the thresholds for assessing a UEI as positive must depend on the specific purpose of the app (e.g., one study claimed that the use of a suicide prevention app by a single individual at a critical moment could be adequate [36]), predetermined thresholds for interpreting UEIs are urgently required for any meaningful conclusions to be drawn. Every study reviewed here claimed that its app had positive UEIs, which makes it difficult to understand the current challenges surrounding usability, engagement, and usage and hinders progress in the field.
This review had several limitations. After our search retrieved 925 studies, we reviewed only those from academic sources that focused on depression, bipolar disorder, schizophrenia, and anxiety. This restricted our discussion to how the academic community views engagement, as opposed to other industries, and limited the types of mental health apps we took into account. In addition, we assumed that it would be possible and useful for at least some dimensions of user engagement to be measured and reported consistently across mental health apps. Of course, apps that are developed for different purposes require their own specific criteria for determining whether they are engaging users or not. However, if every study claims that its app has positive UEIs and no studies use the same evaluation methods, as found in this review, it is difficult to understand and improve the real-world low uptake of these apps. Although publication bias may explain some of the results, the need for reporting standards is still clear. With more than 10,000 mental health apps in the commercial marketplaces, few of which have ever been studied or had assessment results published (5), the number of black boxes is immense when it comes to user engagement in mobile mental health apps. Examining different mental health conditions beyond those targeted in this review may have also yielded different results.

Conclusions

The experience of mental illness is personal, and the technology literacy of individuals is variable, meaning that no single scale or measurement will ever perfectly capture all engagement indicators for all people. But the future of the field of mobile mental health apps depends on user engagement, and the lack of clear definitions and standards for UEIs is harmful—not only to the field, where progress is impeded, but also to patients, who may not know which app to trust. This review has confirmed the necessity of generating more clarity regarding UEIs, which can both promote app usage and enable researchers to learn from each other’s work and design better mental health apps.
This challenge is compounded by the need to design specifically for individuals with mental illness. On the topic of Web site design, one study reported, “Commonly prescribed design models and guidelines produce websites that are poorly suited and confusing to persons with serious mental illnesses” (59). Given that smartphone apps are often more complex and interactive than Web sites, it is reasonable to assume that truly usable apps for mental illnesses may look different from apps designed for the general population. The inconsistencies illustrated in this study raise the possibility that none of the engagement indicators used were designed to account for the potentially unique cognitive, neurological, or motor needs arising from mental illnesses. For example, schizophrenia can lead to changes in cognition, depression can affect reward learning, and anxiety can affect working memory. Furthermore, it is important to consider how the intersectional identities of individuals with mental illness also shape their engagement with mental health apps. Combining lessons from technology design with knowledge about mental illnesses, such as schizophrenia (60), and applying these lessons to evaluations of UEIs could serve as a useful starting point. Other fields have found solutions, and the popularity of the engineering-derived System Usability Scale (used in several studies in this review) indicates the potential of simple but well-validated metrics. Convening a representative body of patients, clinicians, designers, and technology makers to propose collaborative measures would be a welcome first step.
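To illustrate how simple such a validated metric is, here is a sketch of the standard published scoring rule for the 10-item System Usability Scale (48): odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5 to yield a 0–100 score. The example responses are invented.

```python
def sus_score(responses: list[int]) -> float:
    """Score the 10-item System Usability Scale (48).

    `responses` are the ten 1-5 Likert answers in questionnaire order.
    Odd-numbered items are positively worded and contribute (r - 1);
    even-numbered items are negatively worded and contribute (5 - r).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)  # i == 0 is item 1
                     for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

# Invented example: 4s on positive items, 2s on negative items -> 75.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))
```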

Supplementary Material

File (appi.ps.201800519.ds001.pdf)

References

1. Bhugra D, Tasman A, Pathare S, et al: The WPA-Lancet Psychiatry Commission on the Future of Psychiatry. Lancet Psychiatry 2017; 4:775–818
2. Firth J, Torous J, Nicholas J, et al: The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017; 16:287–298
3. Naslund JA, Aschbrenner KA, Araya R, et al: Digital technology for treating and preventing mental disorders in low-income and middle-income countries: a narrative review of the literature. Lancet Psychiatry 2017; 4:486–500
4. Chan S, Godwin H, Gonzalez A, et al: Review of use and integration of mobile apps into psychiatric treatments. Curr Psychiatry Rep 2017; 19:96
5. Torous J, Roberts LW: Needed innovation in digital health and smartphone applications for mental health: transparency and trust. JAMA Psychiatry 2017; 74:437–438
6. Torous J, Nicholas J, Larsen ME, et al: Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements. Evid Based Ment Health 2018; 21:116–119
7. Mackie C, Dunn N, MacLean S, et al: A qualitative study of a blended therapy using problem solving therapy with a customised smartphone app in men who present to hospital with intentional self-harm. Evid Based Ment Health 2017; 20:118–122
8. Levin ME, Haeger J, Pierce B, et al: Evaluating an adjunctive mobile app to enhance psychological flexibility in acceptance and commitment therapy. Behav Modif 2017; 41:846–867
9. Fortuna KL, DiMilia PR, Lohman MC, et al: Feasibility, acceptability, and preliminary effectiveness of a peer-delivered and technology supported self-management intervention for older adults with serious mental illness. Psychiatr Q 2018; 89:293–305
10. Korsbek L, Tønder ES: Momentum: a smartphone application to support shared decision making for people using mental health services. Psychiatr Rehabil J 2016; 39:167–172
11. Schlosser DA, Campellone TR, Truong B, et al: The feasibility, acceptability, and outcomes of PRIME-D: a novel mobile intervention treatment for depression. Depress Anxiety 2017; 34:546–554
12. Dahne J, Kustanowitz J, Lejuez CW: Development and preliminary feasibility study of a brief behavioral activation mobile application (Behavioral Apptivation) to be used in conjunction with ongoing therapy. Cognit Behav Pract 2018; 25:44–56
13. Bauer AM, Iles-Shih M, Ghomi RH, et al: Acceptability of mHealth augmentation of collaborative care: a mixed methods pilot study. Gen Hosp Psychiatry 2018; 51:22–29
14. Niendam TA, Tully LM, Iosif AM, et al: Enhancing early psychosis treatment using smartphone technology: a longitudinal feasibility and validity study. J Psychiatr Res 2018; 96:239–246
15. Possemato K, Kuhn E, Johnson EM, et al: Development and refinement of a clinician intervention to facilitate primary care patient use of the PTSD Coach app. Transl Behav Med 2017; 7:116–126
16. Wenze SJ, Armey MF, Weinstock LM, et al: An open trial of a smartphone-assisted, adjunctive intervention to improve treatment adherence in bipolar disorder. J Psychiatr Pract 2016; 22:492–504
17. Mohr DC, Tomasino KN, Lattie EG, et al: IntelliCare: an eclectic, skills-based app suite for the treatment of depression and anxiety. J Med Internet Res 2017; 19:e10
18. Rohatagi S, Profit D, Hatch A, et al: Optimization of a digital medicine system in psychiatry. J Clin Psychiatry 2016; 77:e1101–e1107
19. Dang M, Mielke C, Diehl A, et al: Accompanying depression with FINE: a smartphone-based approach. Stud Health Technol Inform 2016; 228:195–199
20. Schwartz S, Schultz S, Reider A, et al: Daily mood monitoring of symptoms using smartphones in bipolar disorder: a pilot study assessing the feasibility of ecological momentary assessment. J Affect Disord 2016; 191:88–93
21. Hung S, Li MS, Chen YL, et al: Smartphone-based ecological momentary assessment for Chinese patients with depression: an exploratory study in Taiwan. Asian J Psychiatr 2016; 23:131–136
22. Hidalgo-Mazzei D, Mateu A, Reinares M, et al: Psychoeducation in bipolar disorder with a SIMPLe smartphone application: feasibility, acceptability and satisfaction. J Affect Disord 2016; 200:58–66
23. Ben-Zeev D, Wang R, Abdullah S, et al: Mobile behavioral sensing for outpatients and inpatients with schizophrenia. Psychiatr Serv 2016; 67:558–561
24. Macias C, Panch T, Hicks YM, et al: Using smartphone apps to promote psychiatric and physical well-being. Psychiatr Q 2015; 86:505–519
25. Ben-Zeev D, Brenner CJ, Begale M, et al: Feasibility, acceptability, and preliminary efficacy of a smartphone intervention for schizophrenia. Schizophr Bull 2014; 40:1244–1253
26. Pramana G, Parmanto B, Kendall PC, et al: The SmartCAT: an m-health platform for ecological momentary intervention in child anxiety treatment. Telemed J E Health 2014; 20:419–427
27. Watts S, Mackenzie A, Thomas C, et al: CBT for depression: a pilot RCT comparing mobile phone vs computer. BMC Psychiatry 2013; 13:49
28. Palmier-Claus JE, Ainsworth J, Machin M, et al: The feasibility and validity of ambulatory self-report of psychotic symptoms using a smartphone software application. BMC Psychiatry 2012; 12:172
29. Burns MN, Begale M, Duffecy J, et al: Harnessing context sensing to develop a mobile intervention for depression. J Med Internet Res 2011; 13:e55
30. Cernvall M, Sveen J, Bergh Johannesson K, et al: A pilot study of user satisfaction and perceived helpfulness of the Swedish version of the mobile app PTSD Coach. Eur J Psychotraumatol 2018; 9(suppl 1):1472990
31. Reger G, Skopp N, Edwards-Stewart A, et al: Comparison of Prolonged Exposure (PE) Coach to treatment as usual: a case series with two active duty soldiers. Mil Psychol 2015; 27:287–296
32. Boisseau CL, Schwartzman CM, Lawton J, et al: App-guided exposure and response prevention for obsessive compulsive disorder: an open pilot trial. Cogn Behav Ther 2017; 46:447–458
33. Ben-Zeev D, Brian RM, Aschbrenner KA, et al: Video-based mobile health interventions for people with schizophrenia: bringing the “pocket therapist” to life. Psychiatr Rehabil J 2018; 41:39–45
34. Ramsey AT, Wetherell JL, Depp C, et al: Feasibility and acceptability of smartphone assessment in older adults with cognitive and emotional difficulties. J Technol Hum Serv 2016; 34:209–223
35. Hicks TA, Thomas SP, Wilson SM, et al: A preliminary investigation of a relapse prevention mobile application to maintain smoking abstinence among individuals with posttraumatic stress disorder. J Dual Diagn 2017; 13:15–20
36. Melvin GA, Gresham D, Beaton S, et al: Evaluating the feasibility and effectiveness of an Australian safety planning smartphone application: a pilot study within a tertiary mental health service. Suicide Life Threat Behav 2018
37. Ben-Zeev D, Brian RM, Jonathan G, et al: Mobile health (mHealth) versus clinic-based group intervention for people with serious mental illness: a randomized controlled trial. Psychiatr Serv 2018; 69:978–985
38. Kreyenbuhl J, Record E, Himelhoch S, et al: Development and feasibility testing of a smartphone intervention to improve adherence to antipsychotic medications. Clin Schizophr Relat Psychoses (Epub ahead of print, July 25, 2016)
39. Bucci S, Barrowclough C, Ainsworth J, et al: Actissist: proof-of-concept trial of a theory-driven digital intervention for psychosis. Schizophr Bull 2018; 44:1070–1080
40. Corden ME, Koucky EM, Brenner C, et al: MedLink: a mobile intervention to improve medication adherence and processes of care for treatment of depression in general medicine. Digit Health 2016; 2:2055207616663069
41. Bauer AM, Hodsdon S, Bechtel JM, et al: Applying the principles for digital development: case study of a smartphone app to support collaborative care for rural patients with posttraumatic stress disorder or bipolar disorder. J Med Internet Res 2018; 20:e10048
42. Kumar D, Tully LM, Iosif AM, et al: A mobile health platform for clinical monitoring in early psychosis: implementation in community-based outpatient early psychosis care. JMIR Ment Health 2018; 5:e15
43. Baumel A, Tinkelman A, Mathur N, et al: Digital peer-support platform (7Cups) as an adjunct treatment for women with postpartum depression: feasibility, acceptability, and preliminary efficacy study. JMIR Mhealth Uhealth 2018; 6:e38
44. Price M, van Stolk-Cooke K, Ward HL, et al: Tracking post-trauma psychopathology using mobile applications: a usability study. J Technol Behav Sci 2017; 2:41–48
45. Schlosser D, Campellone T, Kim D, et al: Feasibility of PRIME: a cognitive neuroscience-informed mobile app intervention to enhance motivated behavior and improve quality of life in recent onset schizophrenia. JMIR Res Protoc 2016; 5:e77
46. Birney AJ, Gunn R, Russell JK, et al: MoodHacker mobile web app with email for adults to self-manage mild-to-moderate depression: randomized controlled trial. JMIR Mhealth Uhealth 2016; 4:e8
47. Saunders KE, Bilderbeck AC, Panchal P, et al: Experiences of remote mood and activity monitoring in bipolar disorder: a qualitative study. Eur Psychiatry 2017; 41:115–121
48. Brooke J: SUS: A Quick and Dirty Usability Scale. Reading, United Kingdom, Redhatch Consulting Ltd, 1996; https://hell.meiert.org/core/pdf/sus.pdf
49. Larsen DL, Attkisson CC, Hargreaves WA, et al: Assessment of client/patient satisfaction: development of a general scale. Eval Program Plann 1979; 2:197–207
50. Devilly GJ, Borkovec TD: Psychometric properties of the credibility/expectancy questionnaire. J Behav Ther Exp Psychiatry 2000; 31:73–86
51. Lund AM: Measuring Usability With the USE Questionnaire. Fairfax, VA, Society for Technical Communication, Usability and User Experience Special Interest Group, 2001. https://www.researchgate.net/profile/Arnold_Lund/publication/230786746_Measuring_Usability_with_the_USE_Questionnaire/links/56e5a90e08ae98445c21561c/Measuring-Usability-with-the-USE-Questionnaire.pdf
52. Kuhn E, Greene C, Hoffman J, et al: Preliminary evaluation of PTSD Coach, a smartphone app for post-traumatic stress symptoms. Mil Med 2014; 179:12–18
53. Lewis JR: Psychometric evaluation of the Post-Study System Usability Questionnaire: the PSSUQ. Proc Hum Factors Ergon Soc Annu Meet 1992; 36:1259–1260
54. Davis FD: User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int J Man Mach Stud 1993; 38:475–487
55. Venkatesh V, Davis FD: A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci 2000; 46:186–204
56. Shirk SR, Gudmundsen G, Kaplinski HC, et al: Alliance and outcome in cognitive-behavioral therapy for adolescent depression. J Clin Child Adolesc Psychol 2008; 37:631–639
57. Nguyen TD, Attkisson CC, Stegner BL: Assessment of patient satisfaction: development and refinement of a Service Evaluation Questionnaire. Eval Program Plann 1983; 6:299–313
58. Lewis J: IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int J Hum Comput Interact 1995; 7:57–78
59. Rotondi AJ, Sinkule J, Haas GL, et al: Designing websites for persons with cognitive deficits: design and usability of a psychoeducational intervention for persons with severe mental illness. Psychol Serv 2007; 4:202–224
60. Lindberg S, Jormfeldt H, Bergquist M: Unlocking design potential: design with people diagnosed with schizophrenia. Inform Health Soc Care 2019; 44:31–47

Information & Authors

Published In

Psychiatric Services
Pages: 538–544
PubMed: 30914003

History

Received: 17 November 2018
Revision received: 13 January 2019
Accepted: 14 February 2019
Published online: 27 March 2019
Published in print: July 01, 2019

Keywords

Depression, schizophrenia, bipolar disorder, computer technology

Authors

Michelle M. Ng, B.A.; Joseph Firth, Ph.D.; Mia Minen, M.D.; John Torous, M.D.
Division of Digital Psychiatry, Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston (Ng, Torous); National Institute of Complementary Medicine Health Research Institute, Western Sydney University, Penrith, New South Wales, Australia, and Division of Psychology and Mental Health, Faculty of Biology, Medicine, and Health, University of Manchester, Manchester, United Kingdom (Firth); Headache Center, Department of Neurology, NYU Langone Health, NYU School of Medicine, New York (Minen).

Notes

Send correspondence to Dr. Torous ([email protected]).

Competing Interests

The authors report no financial relationships with commercial interests.

Funding Information

Dr. Torous is supported by a career development award from the National Institutes of Health (NIH)/National Institute of Mental Health 1K23MH116130-01, and Dr. Minen is supported by 5K23AT009706 from the NIH/National Center for Complementary and Alternative Medicine.
