In 2020, nearly one in five U.S. adults lived with a mental, behavioral, or emotional disorder (1). Mental health treatment with a therapist can be highly effective for reducing symptoms and improving functioning among individuals experiencing psychological distress. The nature of the clinical encounter, however, is such that a therapy session may be disconnected from other parts of a patient’s life. Because many people regularly interact with social media, these interactions and written communications may be shared with the therapist and can help in assessing a patient’s mental state at the time of the communication by providing additional context (2, 3). Previous research has found that these digitally captured activities of everyday life may offer insights into individual thoughts and behaviors that could, in principle, enhance psychotherapy and associated outcomes (4, 5). Indeed, both patients and providers report being comfortable sharing and discussing digital data and electronic communication in therapy sessions (6, 7), but these sources of information are not routinely part of therapeutic encounters. Instead, clinicians providing psychosocial therapy typically rely on patients’ observable behaviors and their self-reports of thoughts and experiences—sources that are incomplete and subject to recall bias, social desirability bias, and misinterpretation (8–11).
We sought to determine whether a personalized dashboard of patients’ digital data, shared between patients and clinicians providing psychosocial therapy, would improve self-reported health-related quality of life (HRQoL) (primary outcome) as well as symptoms of depression and anxiety and the therapeutic alliance (secondary outcomes) (see Trial Protocol and Statistical Analysis Plan in the online supplement to this report). We posited that providing additional data to patients and providers might increase the likelihood of detecting clinically actionable targets; such increased detection could lead to treatment decisions that improve patients’ self-reported HRQoL. We hypothesized that integrating these data into therapy could improve the therapeutic alliance and reduce symptoms of depression and anxiety.
Methods
This was a single-blind, randomized controlled trial evaluating the effectiveness of a digital health dashboard in improving HRQoL for patients enrolled in mental health therapy versus usual care. We used a computer-generated algorithm to randomly assign participating patients to study arm in blocks of 2, 4, and 6. The study was conducted from October 2020 to December 2021. Each patient-clinician dyad was enrolled for 60 days and reassessed 30 days later. (The trial protocol and statistical analysis plan are in the online supplement.) The trial was approved by the institutional review board at the University of Pennsylvania (IRB protocol 831246) and followed CONSORT guidelines (see eFigure 1 in the online supplement).
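To make the allocation scheme concrete, the sketch below implements permuted-block randomization with blocks of 2, 4, and 6. The trial's actual allocation code was not published, so the details here (drawing each block size at random, the 1:1 allocation within blocks, and all function and arm names) are assumptions for illustration only.

```python
import random

def blocked_assignments(n_patients, block_sizes=(2, 4, 6), seed=None):
    """Hypothetical permuted-block randomization: within each block,
    half of the slots go to each arm, in a random order."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        size = rng.choice(block_sizes)   # assumed: block size drawn at random
        block = ["intervention", "usual care"] * (size // 2)
        rng.shuffle(block)               # permute the arms within the block
        assignments.extend(block)
    return assignments[:n_patients]

print(blocked_assignments(10, seed=7))
```

Because every block is balanced, the two arms stay nearly equal in size at any interim point, which is consistent with the 57/58 split reported below.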
Clinicians providing psychosocial therapy were identified by eligible study patients or were recruited from mental health clinics at the University of Pennsylvania or private practices. Eligible clinicians provided informed consent and completed a baseline survey of sociodemographic factors (e.g., gender identity and race-ethnicity), professional variables (e.g., years practicing and type of therapy), and the Working Alliance Inventory–Short Revised (WAI-SR) instrument (12). Clinicians were asked to complete the McLean Collateral Information and Clinical Actionability Scale (M-CICAS) after each therapy session (13). Each clinician could enroll up to 15 participating patients.
Patients were recruited with materials posted in mental health clinics at the University of Pennsylvania, on online research registries, through paid Facebook and Google advertisements, or by clinicians as indicated below. Eligible participants were ages ≥18 years, primarily English speaking, owned a smartphone, self-reported a diagnosis of anxiety or depression, self-reported an intention to remain in mental health therapy for 3 months, and were willing to share data from at least one digital source. Eligible participants provided informed consent and completed a baseline survey of sociodemographic variables, including gender identity and race-ethnicity questionnaires adapted from the U.S. Census Bureau. At 0, 60, and 90 days, participants completed the RAND 36-Item Short Form Health Survey (SF-36), a validated, widely used HRQoL scale (14); the eight-item Patient Health Questionnaire (PHQ-8), a multipurpose instrument for screening, diagnosing, monitoring, and measuring the severity of depression (15); the seven-item Generalized Anxiety Disorder (GAD-7) scale, a validated survey that measures anxiety symptoms and severity of generalized anxiety disorder on the basis of DSM-IV diagnostic criteria (16); and the WAI-SR, based on Bordin’s three-factor conceptualization of the provider-client relationship (12). All survey instruments were completed via REDCap and Way to Health (17), both HIPAA-compliant research platforms. Patients could receive up to $200 for sharing data and completing surveys. Clinicians could receive up to $200 per enrolled patient for completing surveys.
After completing informed consent and the baseline survey, all patient participants were asked to share data from at least one type of social media (e.g., Facebook wall posts) or digital data (i.e., Google search queries, YouTube video search queries, steps walked as determined by the smartphone’s built-in pedometer, and smartphone screen status, such as on or off and locked or unlocked) (see eFigure 2 in the online supplement). Patients and therapists allocated to the intervention arm received a digital health dashboard at least 24 hours before their session, with a reminder to jointly review the dashboard during the session.
The digital dashboard included at least three sections (see eFigure 2 in the online supplement): time spent on the smartphone between 12 a.m. and 4 a.m. each day, miles walked on weekdays and weekends each week, and the top five words from Facebook posts and YouTube and Google searches. A fourth section was populated if the patient had additional digital content (e.g., text messages and e-mails) to include in the dashboard. Two days before each scheduled therapy session, patients in the intervention arm received an appointment reminder via text message and a request to share any additional digital content for the dashboard. Before receiving their first dashboard, patients and clinicians watched a 5-minute video tutorial orienting them to the dashboard and its information sources. The dashboard was automatically updated with user-generated data before each weekly therapy session.
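As an illustration of the third dashboard section, here is a minimal sketch of extracting the top five words from pooled posts and search queries. The trial's actual text processing pipeline was not described, so the tokenization rule, the stop-word list, and all names here are assumptions.

```python
import re
from collections import Counter

# Assumed minimal stop-word list; the dashboard's actual list was not published.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "at",
              "is", "it", "for", "on"}

def top_words(texts, k=5):
    """Return the k most frequent non-stop words across posts and queries."""
    tokens = []
    for text in texts:
        tokens.extend(t for t in re.findall(r"[a-z']+", text.lower())
                      if t not in STOP_WORDS)
    return [word for word, _ in Counter(tokens).most_common(k)]

queries = ["trouble sleeping at night", "sleep anxiety tips", "better sleep schedule"]
print(top_words(queries))  # e.g., ['sleep', 'trouble', 'sleeping', 'night', 'anxiety']
```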
The primary outcome was the 60-day change in SF-36 score. Secondary outcomes included the 90-day change in SF-36 score and the 60- and 90-day changes in the other instrument scores. Our primary analysis followed an intention-to-treat, complete-case design. We used an unadjusted paired t test to compare the 60-day change in SF-36 score between arms. As a secondary analysis, we fit a linear mixed-effects model to account for clustering of patients within therapists, using robust (empirical) standard errors. As an additional secondary analysis, we accounted for missing data by using multiple imputation; after careful assessment of patterns of missingness, data were deemed to be missing at random. A p value ≤0.05 was considered statistically significant, but emphasis was placed on point estimates and CIs.
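For concreteness, the sketch below mirrors this analysis plan under stated assumptions: toy data stand in for the trial dataset, all column names are hypothetical, and the mixed-effects model is shown with statsmodels' default model-based standard errors rather than the robust (empirical) errors the trial used.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Toy data standing in for the trial dataset; all column names are assumptions.
df = pd.DataFrame({
    "therapist_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "arm": ["intervention", "usual care"] * 5,
    "sf36_baseline": [60, 55, 70, 65, 50, 62, 58, 61, 66, 57],
    "sf36_day60":    [61, 54, 69, 66, 52, 60, 59, 63, 64, 58],
})
df["sf36_change"] = df["sf36_day60"] - df["sf36_baseline"]

# Paired t test of baseline vs. day-60 SF-36 scores within each arm
for arm, grp in df.groupby("arm"):
    t, p = stats.ttest_rel(grp["sf36_day60"], grp["sf36_baseline"])
    print(f"{arm}: t={t:.2f}, p={p:.3f}")

# Unadjusted between-arm comparison of the 60-day change
a = df.loc[df["arm"] == "intervention", "sf36_change"]
b = df.loc[df["arm"] == "usual care", "sf36_change"]
print(stats.ttest_ind(a, b))

# Secondary analysis: mixed-effects model with a random intercept per
# therapist (default SEs shown; the trial reported robust/empirical SEs)
result = smf.mixedlm("sf36_change ~ arm", df, groups="therapist_id").fit()
print(result.summary())
```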
Results
In total, 115 patients were eligible and randomly assigned to a treatment condition: 57 to the intervention and 58 to usual care. Sixty-nine clinicians provided psychosocial therapy to the 115 participating patients. The mean±SD age of the patients was 31.3±10.5 years, 82% (N=94) were women, and 86% (N=99) had been in therapy for >1 year. Baseline characteristics were balanced between arms (see eTable 1 in the online supplement). All patient participants shared at least one type of digital data or social media (eTable 2 in the online supplement). Among the 57 intervention participants, a total of 352 dashboards were generated and sent (eTable 3 in the online supplement). Most intervention participants (N=48, 84%) received five or more dashboards (eTable 4 in the online supplement). Therapists previewed each dashboard at least once (N=352 dashboards), whereas fewer dashboards were downloaded (N=121) (eTable 5 in the online supplement).
We did not detect a statistically significant 60-day change in SF-36 score for patients randomly allocated to the intervention (mean difference=−0.39, 95% CI=−4.17 to 3.39) or to usual care (mean difference=−1.98, 95% CI=−5.74 to 1.77), and no significant between-arm difference was observed (between-arm difference=1.60, 95% CI=−3.67 to 6.86). Similarly, we found no statistically significant between-arm differences in 90-day changes in the SF-36 score or 60- or 90-day changes in the other measures (Table 1). Very similar results were obtained with imputed data. No significant difference was observed in the mean number of collateral sources (captured with the M-CICAS measure) reviewed in session (control=1.04 sources, intervention=1.20 sources).
Discussion
Despite the absence of significant between-arm differences in the assessed HRQoL measures, this study yielded two main insights. First, patients were willing to share their digital data with clinicians and agreed to jointly review dashboards in sessions with their therapists. Second, patients receiving the intervention and those receiving usual care had similar changes in HRQoL, depression, anxiety, and working alliance scores.
Several reasons may account for the intervention’s lack of effect on the measured variables: benefits of the intervention could have accrued in unmeasured areas or over longer periods; the digital dashboards may not have reflected the most relevant digital information; the volume of digital data shared varied across participants; dyads could have varied in how they used the digital dashboard; self-report measures were used to examine treatment outcomes; the enrolled participants were heterogeneous in terms of mental health conditions, time enrolled in therapy, educational attainment, race-ethnicity, and access to technology; and the COVID-19 pandemic may have made the study context unrepresentative. Even though the study participants provided informed consent, privacy concerns may have arisen. Previous research has reported that patients thought that sharing social media and digital data with their therapist could be “a little creepy” and noted concerns about being “watched” (18).
Limitations of this study included a relatively small sample size (N=115 patient participants), the social desirability and self-presentation inherent in social media posts, diagnostic heterogeneity that likely diluted our ability to detect an effect, an inability to collect dashboard fidelity metrics for patient participants, and the Hawthorne effect (i.e., altering one’s behavior because of an awareness of being observed). Despite these limitations, the findings of this randomized controlled trial indicate that patients and therapists are interested in and comfortable with discussing digital data in therapy. Social media platforms provide an unstructured and accessible venue for patients to share their experiences and potentially to inform care.