
Abstract

Technology is ubiquitous in society and is now being extensively used in mental health applications. Both assessment and treatment strategies are being developed and deployed at a rapid pace. The authors review the current domains of technology utilization, describe standards for quality evaluation, and forecast future developments. This review examines technology-based assessments of cognition, emotion, functional capacity and everyday functioning, virtual reality approaches to assessment and treatment, ecological momentary assessment, passive measurement strategies including geolocation, movement, and physiological parameters, and technology-based cognitive and functional skills training. There are many technology-based approaches that are evidence based and are supported through the results of systematic reviews and meta-analyses. Other strategies are less well supported by high-quality evidence at present, but there are evaluation standards that are well articulated at this time. There are some clear challenges in selection of applications for specific conditions, but in several areas, including cognitive training, randomized clinical trials are available to support these interventions. Some of these technology-based interventions have been approved by the U.S. Food and Drug Administration, which has clear standards for which types of applications, and which claims about them, need to be reviewed by the agency and which are exempt.
Technology is ubiquitous in society and now mediates many forms of interpersonal and societal communication. It is no surprise that the numbers of technology-based interventions and strategies for treating psychiatric disorders are rapidly increasing. These technologies include evaluation of nearly all features of psychiatric disorders, including symptoms, cognitive performance, and everyday functioning. In fact, while technology-based assessments and intervention strategies initially were administered in-person at office visits, many of these strategies are now administered remotely using cloud-based applications.
Current technology allows for the structured delivery of material used for assessment and training in cognitive, social cognitive, and functional domains; two-way communication with video, short message services (SMS), such as Twitter and other software platforms, including remote therapy applications; paging using various technologies for assessment and intervention purposes using ecological momentary assessment (EMA) strategies; continuous passive monitoring of location and behavior (including activity and physiological signals such as heart rate and skin conductance); and presentation of reality-based computer simulations. These simulations include both immersive virtual reality (VR) simulations and more static simulations that allow performance assessment and, in some cases, training on veridical representations of technology-based tasks such as shopping, banking, traveling, and placing online orders. The devices on which such applications are now delivered range from computers to tablets to smartphones to wearable devices. Thus, technology in the context of this review refers to an array of different functions (messaging, monitoring) across a number of different platforms and operating systems (Windows, iOS, and Android).
In this review, we describe technology relevant to mental health applications, including both assessment and intervention applications. In the assessment domain, we focus on assessment of cognitive abilities, emotion regulation capacity, functional skills, and clinical symptoms, including thorough sampling of individual symptoms and activities through structured queries or observed experiences. Assessment technologies involve observational strategies, including EMA, paging, and passive measurement, and cues to engage in performance-based assessments in cognitive, social cognitive, or functional domains. We generally focus here on adult populations, with some mention of interventions for attention deficit hyperactivity disorder (ADHD) in children that may also apply to adults.
In the treatment domain, we present information on applications designed to deliver interventions as well as applications that are designed to augment other treatments. Examples of direct treatment delivery include performance-based training in cognition and functional skills, which are available across conditions ranging from ADHD to mild cognitive impairment to substance use disorders. Other therapeutic applications include immersive VR simulations as well as cognitive-behavioral therapy applications. Technology-based augmentation strategies include tools for self-monitoring between therapy sessions, delivery of reminders to reinforce therapeutic goals, and various ways to track adherence to treatments. This leads to a very broad-based review, which nonetheless captures only a portion of the work at the interface of technology and mental health assessment and treatment.
Our review of these applications and technologies includes data on their efficacy (when they are employed as treatments or assessment tools) as well as data on user tolerability. Any disparities between clinical trial results obtained with digital health technologies and outcomes arising from traditional in-person clinical trials require reconciliation and interpretation, and are likely related to 1) real-world engagement challenges for patients and 2) workflow challenges for clinicians (1, 2). Considering data on real-world effectiveness beyond just efficacy data will be critical to ensure that the field makes optimal use of emerging technologies (3).

Technology-Based Assessment of Cognition and Everyday Functioning

Cognitive Assessments

Computerized cognitive assessment strategies have been used for several decades. Multiple testing batteries are currently available, and these have been reviewed in detail elsewhere (4–6). Computerized assessments are appealing for several reasons, including systematic delivery of instructions and collection of responses, as well as automated scoring and norming of response data.
Computerized cognitive testing has been used in multiple clinical trials, and its use in routine clinical practice is also becoming more common. Certain tests have always been available exclusively in a fully computerized format (7). Other assessments, initially developed for administration using paper and pencil (e.g., the Brief Assessment of Cognition [8] and the Wechsler Intelligence Scales) were subsequently released as computerized applications (9, 10) developed to be convergent with the widely used paper versions. A significant advantage offered by many of these computer-based assessment tools is that the tester does not have to be a licensed professional; indeed, subprofessional clinical trainees can acquire the skills required to administer most computerized assessments and collect valid data. However, evaluation of whether results from computer-based cognitive assessments are convergent with the results of traditional in-person assessments remains an important consideration (11). Some recent data suggest substantial challenges with certain tasks, particularly if there is an attempt to sustain fidelity to paper-and-pencil assessments while performing a remote assessment (12). As a result, there is a need for careful consideration of whether all legacy cognitive assessments can be performed remotely.

Functional Capacity Assessments

A variety of computer-based strategies examine the ability to perform skills that are critical for everyday functioning, referred to as functional capacity. Available assessment tools evaluate performance on a range of tasks through structured simulations of everyday activities, veridical simulations of everyday tasks, and VR-based simulations. The stand-alone task batteries usually have a structured assessment sequence with individually scorable tasks and are normable in a manner similar to that applied to responses on neuropsychological tests. While the VR assessments (described below) are commonly more realistic and more flexible, they are, in many cases, less amenable to normative standards. In the stand-alone tests, such as the Virtual Reality Functional Capacity Assessment Tool (VRFCAT) (13), touch-screen responses are used to assess the subject’s ability to perform a sequence of skilled acts using simulation formats (e.g., looking in cabinets for specific target items, developing a shopping list, utilizing public transportation, and navigating a shopping experience in a virtual retail store). In another, the computerized functional skills assessment and training system (CFSAT) (14), the specific components of skilled acts are examined, such as entering a personal identification number on an automatic teller machine screen, selecting the correct ticket choice on a computer kiosk, and using the keypad on a simulated mobile phone to enter responses to a telephone voice menu. Data have consistently shown that performance on these computerized simulations of everyday activities is correlated with cognitive function (12, 13) measured with standard or digital strategies. These assessments have a variety of functions, including use in clinical trials of cognitive enhancement where evidence of functional relevance is required and in clinical settings to directly measure improvements in functional skills in individuals receiving rehabilitative interventions (15, 16).

Remote Delivery of Technology-Based Cognitive and Functional Assessments

As the assessment technology reviewed here is already available for either remote or in-person assessment settings, we briefly address the feasibility of remote delivery of cognitive assessments. Several different formats are used for remote assessment, including tester administration of tasks over a videoconferencing application and exclusively remote self-administration of all assessments by the subject. There are several challenges inherent in each approach. For exclusively remote, self-administered assessments, the participant needs to be comfortable with, and capable of using, the required technology.
Videoconference administration of tests that were designed to be administered in-person requires consideration of the technological demands of the conferencing application and the ability of participants to use the technology as well as to perform the critical skills. It is certainly possible to perform certain types of cognitive assessments over the telephone (e.g., measurement of verbal responses in tests of working memory or episodic memory, and measurement of auditory processing speed on tasks such as the Oral Trail Making Task [17]). For videoconferencing applications in cases where the participant is asked to perform cognitive tests on the device while simultaneously receiving remote instructions and supervision, the participant need only be able to manage the technology-based delivery of the assessment program, which can possibly be facilitated by another person who is on-site with the participant at the time of testing (18). These challenges may be difficult to eliminate entirely.
In the case of assessments designed for fully remote self-administration, there are other potential challenges. Several such studies have found significantly more missing data than observed with in-person, paper-and-pencil assessments (19, 20). One possibility is that some participants, particularly those with severe mental illness or other forms of cognitive impairment, find the computerized assessments difficult to comprehend and/or are less motivated, particularly without another person present to receive instructions and facilitate subject engagement. We recently validated methods for remote delivery of neurocognition (21) and social cognition (i.e., emotion recognition) (22) testing embedded in an EMA application. Participants were seen in person and trained on the use of the technology at the start of participation. In a sample that demonstrated the baseline ability to utilize this technology, we found that subsequent adherence to the EMA cognitive assessments (75% for neurocognition [N=168] and 80% for emotion recognition [N=86]) was high and data quality was on average excellent. Adherence was not correlated with diagnosis (major depression, bipolar disorder, schizophrenia), age, sex, or presence of psychosis, negative symptoms, or suicidal ideation. Although these data are quite positive, strategies for determining an individual’s capacity to be assessed remotely seem to be an important clinical topic.

Clinical Virtual Reality

Over the past 25 years, researchers and clinicians have pursued the use of VR as a tool to advance clinical assessment, intervention, and scientific research (23–31). This effort was inspired by the intuitively obvious opportunity for VR environments to address specific challenges inherent in the provision of usual clinical strategies for mental health, rehabilitation, and general medical care. At its core, VR technology, along with other related simulation-based formats (e.g., augmented/mixed reality), offers new capabilities that did not exist a decade ago. Many recently developed VR-based testing, training, teaching, and treatment approaches would be difficult, if not impossible, to deliver without leveraging the power of modern computing, three-dimensional (3D) graphics, body tracking sensors, novel user interfaces, gaming/narrative principles, big data analytics, and artificial intelligence. Such VR-enabling technologies allow for the creation of highly realistic, interactive, engaging, and systematically controllable digital simulation environments. Users can be immersed in VR simulations and interact with content for the purposes of clinical assessment and intervention. VR technology is thus well matched to the requirements of various clinical targets and psychiatric contexts.

Defining Virtual Reality

Since the inception of VR, a large and evolving scientific literature has emerged regarding the outcomes and effects associated with what we now refer to as clinical VR applications that target psychological, cognitive, motor, and functional impairments or symptoms across a wide range of health conditions. Continuing advances in the underlying enabling technologies for creating and delivering VR applications have resulted in their widespread availability as consumer products, sometimes at a very low cost (e.g., Oculus Quest 2).
The concept and definition of VR have been debated over the years. VR has been very generally defined as a way to visualize, manipulate, and interact with technology and complex data in a more naturalistic and intuitive manner (32). From this baseline perspective, VR can be seen as an advanced form of human-computer interaction that allows a user to interact with computers beyond what is typically afforded by standard mouse–keyboard–touchscreen interface devices. An engaged VR user experience can be created through unique combinations of interaction devices, sensory display systems, and the type of content presented in the virtual environment; the automated observation of these interactions constitutes the assessment component of VR therapies. Thus, there are two common types of VR.
Nonimmersive VR is the most basic format and is similar to the experience of playing a video game. Virtual content is delivered on a standard computer monitor, tablet, mobile phone, or television as users interact with 3D computer graphics using a game pad, joystick, mouse, keyboard, or specialized interface devices (e.g., other handheld devices, data gloves, treadmills). Modern computer games that support user interaction and navigation within 3D graphics can be considered to be VR environments. Tasks such as the VRFCAT described above are nonimmersive VR assessment strategies.
Immersive VR integrates head-mounted displays, body-tracking sensors, specialized interface devices, and 3D graphics (33). Users operate within a simulated environment that changes in a natural or intuitive way based on the user’s motion and interaction. The head-mounted display occludes the user’s view of the outside world while head- and body-tracking technology senses the user’s position and movement. These user movement data are rapidly sent to a computing system, which uses the movement and interaction data to update the sensory stimuli, which are presented to the user via the head-mounted display. When users are immersed in computer-generated visual imagery and sounds of a simulated virtual scene, their interaction produces an experience that corresponds to what they would see and hear if the scene were real.
Regardless of the technical approach, the key aim of these immersive systems is to perceptually replace the outside world with the virtual world to psychologically engage users with simulated digital content designed to create a specific user experience. Immersive VR is typically the choice for applications where a controlled stimulus environment is desirable for constraining a user’s perceptual experience to a synthetic world. This format has often been used in clinical VR applications for assessment of anxiety disorder or PTSD severity and subsequent exposure therapy, as distraction for patients undergoing acutely painful medical procedures, and in the physical/cognitive assessment/rehabilitation domain. The research potential, for example for studying neural processes during brain imaging or neurosurgery, is also clear.
In a related domain, recent work has involved the creation of virtual human characters (sometimes called avatars or autonomous agents) that allow users to engage in clinical interactions within both nonimmersive and immersive simulations. The creation of virtual humans has evolved from research showing their clinical usefulness as stimuli for exposure therapy for social phobias (34, 35), for role-play training for social skills in people on the autism spectrum (36–38), for activities for addressing intimate partner violence (39), and for teaching self-compassion in persons with depression (40). More complex virtual humans infused with varying levels of natural language processing and artificial intelligence have shown effectiveness in the role of virtual patients that novice clinicians can use in practicing the skills required for challenging diagnostic interviews (41) and motivational interviewing (42). They have also been created to produce online virtual human health care guides (43, 44) and as clinical interviewers, with automated sensing of facial, gestural, and vocal behaviors that are useful for inferring the state of the user interacting with these virtual human entities (45) and for assessing clinician empathetic behavior (46).

Current VR Clinical Treatment Areas

The field of clinical VR has expanded dramatically as the technology has evolved. Clinical VR has been shown to be effective in fear reduction in persons with specific phobias (e.g., 47, 48), treatment for posttraumatic stress disorder (e.g., 49–52), and cue exposure for addiction treatment and relapse prevention (53–55). VR has also been effective in treating depression (40), paranoid delusions (56), and body image disturbances in patients with eating disorders (27, 28). Cognitive and physical rehabilitation research using VR has produced promising results when applied to navigation and spatial training in children and adults with motor impairments (57), functional skill training and motor rehabilitation in patients with central nervous system dysfunction (e.g., stroke, traumatic brain injury, cerebral palsy, multiple sclerosis) (58), and for the rehabilitation of attention, memory, spatial skills, and other cognitive functions (59, 60).

VR Assets for Advancing Clinical Interventions: Expose, Distract, Motivate, Measure, and Engage

On a very general level, VR leverages core processes that are relevant across a variety of clinical domains. These processes can be summarized as the capacity to expose, distract, motivate, measure, and engage users. Expose refers to clinical applications designed to provide exposure therapy for anxiety disorders and PTSD, to support practice of social interactions in order to reduce paranoid delusions, and to deliver cue exposure for addiction treatment and relapse prevention. VR used for exposure therapy is supported by strong evidence, with a recent meta-analysis suggesting efficacy across a wide range of phobias, anxiety, and trauma- and stressor-related disorders (61). A further meta-analysis supports the efficacy of virtual reality for anxiety-related disorders, although the research base is still relatively small (62). Distract refers to methods for distracting attention from painful medical procedures to reduce pain perception, promote reduction of discomfort, and provide respite from bleak hospital settings. A recent systematic review supported the potential of VR for pain management but noted that the studied effects are often for acute pain, and less is known about longitudinal analgesia for chronic pain (63). Motivate refers to the practice of promoting patient adherence to repetitive and sometimes boring or frustrating training tasks that need to be performed for cognitive or physical rehabilitation and chronic pain management by embedding these activities within game-like contexts. Measure underscores the capability that VR simulations provide for quantifying activity and/or performance in response to controlled simulations of fearful experiences. Finally, engage is generally seen as the end result of capturing attention or action that is useful for encouraging participation with clinical applications where users relate to and interact with virtual content as if it were physically real—sometimes referred to as the sense of presence (64). For example, learning the “skill” of achieving a “mindful” state typically requires multiple sessions before the user perceives a rewarding change in their mental or emotional state. VR has been used to create engaging experiences within which users may be more compelled to practice and learn this skill.

The Future of Clinical VR

Scientific support for the clinical use of VR for mental health and rehabilitation has evolved as the costs and complexity of developing VR applications have gone down and the capacity of the technology has increased. A complex VR headset and hand controllers that might have cost tens of thousands of dollars in 2000 now cost well under $800. This trend should be accelerated by recent developments in “standalone” VR headsets (e.g., Oculus Quest, Pico Neo, Vive Focus). Such low-cost VR display systems do not require a tethered computer, as all the graphic and interaction processing takes place onboard the device. These lower-cost devices will promote adoption and enable larger-scale clinical studies that can generate the effectiveness data necessary for VR to establish a solid evidence base for guiding future clinical implementation. As we look to the future, we also see growing clinician awareness, acceptance, and adoption of clinical VR methods. For example, Norcross et al. (65) surveyed 70 psychotherapy experts regarding interventions they predicted to increase in the next decade; VR was ranked 4th out of 45 options, with other computer-supported methods (teletherapy, mobile apps, online cognitive-behavioral therapy self-help) occupying three of the other top five positions. Moreover, the COVID-19 crisis has certainly accelerated the exploration and acceptance of these technologies to amplify access to care, and that interest will likely continue after the pandemic has passed (66). Thus, in view of the current enthusiasm for VR generally across society, and specifically in the clinical community, coupled with emerging scientific support and lower system costs, clinical VR applications have the potential to become standard tools for psychiatry researchers and possibly to be utilized more widely by practitioners.

Ecological Momentary Assessment (EMA)

Ecological momentary assessment (EMA), also referred to as the experience sampling method (ESM), has been a tool for understanding fluctuating phenomena and within-person dynamics, and the ubiquity of the smartphone has greatly accelerated the accessibility of this method for clinical applications (67). Programs for the delivery of EMA surveys have become more widely available, and the tools for analysis of intensive longitudinal data have proliferated. At the earlier stages of EMA, the focus was typically on the recording of behaviors (e.g., activity, sleep, smoking) or daily life experiences, such as stressors, through diaries (68–70). The data gathered enabled examination of within-person change, but required user input and did little to reduce the biases inherent to self-report (70). These older assessment strategies had no way to accurately time-stamp the reports that were collected. Anecdotal reports of people arriving 20 minutes early for their appointments and completing 14 days’ worth of assessments are confirmed by the results of research studies comparing reported and observed adherence to paper diary assessments (71).
Personal digital assistants such as early Palm Pilot–like devices automated these processes (72, 73). Moreover, prompts to complete surveys could now be timed and momentary responses could be time-stamped (74). With the translation of EMA to smartphones, surveys could be delivered according to different contexts experienced by the individual (and indexed by the geolocation features of the device), thus enabling more personalized information to be gathered. The ability to tailor probes based on the individual’s momentary state generated a new field of ecological momentary intervention (75), and several trials have evaluated personalized automated interventions that leverage momentary data (76, 77). Some researchers have moved beyond self-reports to intensively repeated objective measures, including brief cognitive tests embedded in the EMA programs as described above (21, 22).
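To make these mechanics concrete, the following Python sketch illustrates, under simplified assumptions, how a smartphone EMA platform might schedule prompts within a daily window, tailor a survey to a geolocation-derived context label, and time-stamp each momentary report. The function names, survey items, and context labels are hypothetical and do not correspond to any specific commercial platform.

```python
import random
from datetime import datetime, time, timedelta

# Hypothetical sketch: schedule a day's EMA prompts at random times within a
# fixed window, and tailor the survey to the participant's current context.

def schedule_prompts(day_start=time(9, 0), day_end=time(21, 0), n_prompts=6):
    """Pick n_prompts random times between day_start and day_end (random-interval sampling)."""
    today = datetime.now().date()
    start = datetime.combine(today, day_start)
    end = datetime.combine(today, day_end)
    window = (end - start).total_seconds()
    offsets = sorted(random.uniform(0, window) for _ in range(n_prompts))
    return [start + timedelta(seconds=s) for s in offsets]

def select_survey(geofence_label):
    """Deliver a context-tailored survey based on a geolocation-derived label (illustrative items)."""
    if geofence_label == "home":
        return ["Who are you with right now?", "What are you doing?"]
    return ["Where are you right now?", "How stressed do you feel (1-7)?"]

def record_response(prompt_time, answers):
    """Time-stamp the momentary report at the moment it is completed."""
    return {
        "prompted_at": prompt_time.isoformat(),
        "answered_at": datetime.now().isoformat(),  # rules out back-filled paper-diary-style entries
        "answers": answers,
    }

if __name__ == "__main__":
    for t in schedule_prompts():
        print(t, select_survey("home"))
```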
The kinds of questions that researchers have been able to ask with these new tools have led to new insights into fundamental questions in mental health. Sometimes these findings are at odds with prevailing theories. It is commonly believed that smokers relapse because of nicotine withdrawal symptoms. Shiffman et al. (78) evaluated smoking behavior in non-daily smokers and found that negative affect was more important than withdrawal symptoms in relapse, which is critical for understanding which factors to target to sustain smoking cessation. It is commonly believed that suicidal ideation arises from feelings of hopelessness. Kleiman et al. (79) found that suicidal thoughts varied markedly throughout the day and that variation in candidate predictors (e.g., hopelessness) did not predict the emergence of this ideation, a finding that had been produced previously in a hospitalized sample (80). Depp et al. (81) found that social isolation and number of social interactions did not predict onset of suicidal ideation in people with schizophrenia, but that the anticipation of being alone later was associated with an increase in ideation. Granholm et al. (82) found that people with schizophrenia (N=100) spent considerably more time at home and alone than healthy control subjects (N=71) and, even when home and alone, engaged in fewer productive behaviors. In a follow-up analysis of this sample, Strassnig et al. (83) found that people with schizophrenia reported fewer activities, spent considerably more time sitting and less time standing, and were considerably more likely to sleep during the daytime hours. However, listening to music and watching television were not differentially common in healthy and schizophrenia participants, suggesting that the additional time in participants with schizophrenia was spent on activities even less productive than passive recreation.
More general lifespan questions can also be addressed by EMA. Using a measurement burst design in which bouts of EMA are integrated with a longitudinal follow-up period, Koffer et al. (84) found that older age was associated with greater ability to buffer against the effect of stress on affect.
These are just a few examples from a burgeoning field, highlighting the degree to which active EMA paradigms can be used to advance understanding of the dynamic processes underlying psychiatric diagnoses, extending and sometimes challenging prevailing theories. EMA is a useful strategy to identify targeted features of different conditions on a momentary basis. For example, repeated assessment can identify the proportion of prompts that are answered at home versus away and in the presence of other people versus alone. As these are the central indices of social isolation and social avoidance, the socially relevant impact of negative symptoms in schizophrenia (85) and current depression in mood disorders can be directly indexed. Research suggests excellent correlations between clinical ratings of symptoms from structured interviews and EMA data, while identifying fluctuations in symptoms that are missed by more widely spaced assessments (86, 87). These strategies can also be used to examine health-relevant behaviors in mental health populations, as described above. Given the reduced life expectancy associated with severe mental illness and the high prevalence of metabolic syndrome, EMA can be used to estimate the amount of time spent sitting versus standing or otherwise engaged in active behaviors. Given that contemporary EMA can collect the occurrence of multiple different activities since the last survey, it is quite easy to see whether only one activity has occurred since the last survey or whether participants are engaging in multiple concurrent activities, including physical activities (88). When paired with the passive digital phenotyping described below, a comprehensive EMA assessment can examine location and social context, refine measurements of activity (exercise vs. agitation), detect sleeping during the daytime and not at night, and assess concurrent subjective emotional responses to these activities.
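As a minimal, hypothetical illustration of the momentary indices described above, the short Python sketch below computes the proportion of answered prompts reported at home and alone for each participant; the data frame and column names are invented for demonstration only.

```python
import pandas as pd

# Hypothetical EMA records: one row per answered prompt, with self-reported context.
ema = pd.DataFrame({
    "participant": ["p01"] * 5 + ["p02"] * 5,
    "location":    ["home", "home", "away", "home", "home",
                    "away", "home", "away", "away", "home"],
    "alone":       [True, True, False, True, True,
                    False, False, True, False, False],
})

# Simple momentary indices of social isolation: proportion of answered prompts
# reported at home, and reported alone, for each participant.
summary = ema.groupby("participant").agg(
    prop_home=("location", lambda s: (s == "home").mean()),
    prop_alone=("alone", "mean"),
)
print(summary)
```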

Passive Digital Phenotyping

A more recent breakthrough involves quantifying clinical outcomes using “passive” digital phenotyping (i.e., unobtrusively collecting data via the internal sensors of a smartphone, a wrist-worn smart band, or another device). Passive measures can reduce certain limitations associated with interview- and questionnaire-based clinical assessments (e.g., cognitive impairment, social desirability, cultural biases [89]). Numerous passive measures have been evaluated in psychiatric populations (e.g., geolocation, accelerometry, ambient speech recorded from the environment, phone call and text logs, screen on/off time, social media activity, Bluetooth-based proximity social sensing) (90–96). However, the validity of these passive measures is only beginning to be established. Goldsack et al. (97) proposed the V3 framework for determining the validity of passive digital biomarkers, which involves three components: verification, analytical validation, and clinical validation. These components, as reviewed below, provide a useful heuristic for determining whether the level of validity achieved for various passive measures meets clinical standards.
The first component of the V3 model, verification (i.e., efficacy), is a quality-control step for the device of interest that is performed by the manufacturer. It occurs before testing is conducted on human subjects. The goal is to determine whether the sensor captures data accurately and to verify that the software accurately outputs data within a predetermined range of values. For example, accelerometry could be verified by placing a smart band on an object programmed to accelerate at a prespecified rate. Verification is typically done by device/software manufacturers against a reference standard. However, the results of these tests and the analytic methods supporting the devices are typically not published or made available for evaluation, which presents replication challenges. Additionally, common standards do not exist for verifying passive digital phenotyping sensors of interest, and sensors embedded in different models will often be different. Since devices and sensors may require differing levels of verification (e.g., required accuracy) for various clinical purposes, evaluating verification data is a critical step that should occur before passive digital phenotyping measures are applied in studies in clinical populations. For medical devices, such as medical decision-making software, this process may be handled by the U.S. Food and Drug Administration (FDA) as part of Good Manufacturing Practice (GMP) standards. Making test results and analytic methods underlying devices accessible to researchers will help disentangle whether failures of replication are true problems with reproducibility across clinical populations or simply differences in the technical quality of different devices used in studies.
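A bench verification step of the kind described above might, under simplified assumptions, look like the following Python sketch, which compares a device's accelerometer output with a programmed reference profile against a predetermined tolerance. The tolerance value and the simulated signals are illustrative only.

```python
import numpy as np

# Hypothetical bench-verification sketch: the device under test is mounted on a rig
# programmed to accelerate at a known rate, and its output is compared with the
# reference signal sample by sample against a predetermined tolerance.

def verify_accelerometer(device_samples, reference_samples, tolerance_g=0.05):
    """Summarize sample-wise error and report whether all samples fall within tolerance."""
    device = np.asarray(device_samples, dtype=float)
    reference = np.asarray(reference_samples, dtype=float)
    error = np.abs(device - reference)
    return {
        "max_abs_error_g": float(error.max()),
        "mean_abs_error_g": float(error.mean()),
        "within_tolerance": bool((error <= tolerance_g).all()),
    }

if __name__ == "__main__":
    reference = np.linspace(0.0, 1.0, 200)                 # programmed acceleration profile (g)
    device = reference + np.random.normal(0, 0.01, 200)    # simulated sensor output with noise
    print(verify_accelerometer(device, reference))
```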
The second component, analytical validation (i.e., effectiveness), involves behavioral or physiological validation of a device in human subjects in the real world. A key first step in this process is determining whether sample-level data output by the device are properly received and whether algorithms applied to those data perform as expected. The metric resulting from the algorithm, applied in real time or post hoc, should be evaluated against a relevant reference. Although agreed-upon reference standards have not been determined for validating passive digital phenotyping measures, there has been initial analytical validation of some passive measures. For example, phone-based geolocation and accelerometry recorded on the ExpoApp have been validated in relation to a reference wrist-worn actigraph and a travel/activity diary; time in microenvironments and physical activity from the diary demonstrated high agreement with phone-based geolocation and accelerometry measures (98). Huang and Onnela (92) analytically validated a phone accelerometer and gyroscope using a ground-truth standard. They had human participants engage in specific physical activities (e.g., sitting, standing, walking, and ascending and descending stairs) with a phone in their front and back pockets. Behavior was filmed throughout as an objective reference. The sensors accurately predicted video-recorded behavior in the reference standard. One ongoing challenge is that as smartphones are updated with new software and phone models with new sensors, prior validation efforts cannot be assumed to remain valid.
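The following hypothetical Python sketch illustrates the logic of such an analytical validation: epoch-level activity labels inferred from phone sensors are compared with labels coded from a video reference, and overall and per-activity agreement are summarized. The activity categories echo the example above, but the simulated data and accuracy level are invented.

```python
import numpy as np

# Hypothetical analytical-validation sketch: compare activity labels inferred from
# phone accelerometer/gyroscope data with labels coded from a video reference,
# epoch by epoch, and summarize overall and per-class agreement.

ACTIVITIES = ["sitting", "standing", "walking", "stairs_up", "stairs_down"]

def agreement(predicted, video_reference):
    predicted = np.asarray(predicted)
    video_reference = np.asarray(video_reference)
    overall = float((predicted == video_reference).mean())
    per_class = {
        a: float((predicted[video_reference == a] == a).mean())
        for a in ACTIVITIES if (video_reference == a).any()
    }
    return overall, per_class

if __name__ == "__main__":
    # Simulated 1-minute epochs: ground truth from video, predictions from the phone algorithm.
    truth = np.random.choice(ACTIVITIES, size=120)
    preds = np.where(np.random.rand(120) < 0.9, truth, np.random.choice(ACTIVITIES, size=120))
    overall, per_class = agreement(preds, truth)
    print(f"overall agreement: {overall:.2f}")
    print(per_class)
```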
The third component, clinical validation (i.e., implementation), involves determining whether the passive digital phenotyping variable of interest adequately predicts a specific clinical outcome within the population of interest. Preliminary evidence for clinical validation exists for several passive measures—although at times results have also been contradictory (99). For example, in bipolar disorder, incipient depressive symptoms have been predicted by changes in the number of outgoing text messages, the duration of incoming phone calls, geolocation-based mobility measures, and vocal features extracted during phone calls. Manic symptoms of bipolar disorder have been predicted by more outgoing texts and calls, acoustic properties of speech extracted during phone calls (e.g., standard deviation of pitch), and increased movement detected via accelerometry (100, 101). Clinically elevated and subthreshold depressive symptoms have been predicted by geolocation-derived measures of circadian rhythm, normalized entropy, and location variance, as well as phone usage frequency and speech-derived audio volume (102–105). Social anxiety has been predicted by reduced movement on accelerometry and fewer outgoing calls and texts (106). Relapse of psychotic disorders has been predicted by geolocation mobility metrics and text/call behavior (90). Negative symptoms of schizophrenia measured via EMA or clinical ratings have been predicted by geolocation-based mobility metrics, voice activity, and actigraphy-based metrics of gesture and activity level (99, 107–110). Combining passive measures with EMA surveys may further enhance clinical validation. For example, Raugh et al. (111) found that the combination of geolocation and EMA surveys was a stronger predictor of clinically rated negative symptoms in schizophrenia than either measure alone. Similarly, Faurholt-Jepsen et al. (101) found that combining vocal acoustic features extracted from phone calls with EMA reports improved the correct classification of mixed or manic mood states in bipolar disorder beyond either measure alone. Henson et al. (112) reported that a combination of EMA and passive data, when analyzed for congruence with anomaly detection methods, was associated with early warnings of relapse in people with schizophrenia. Thus, studies suggest that passive measures are promising tools for measuring clinical outcomes. However, there are numerous inconsistencies regarding the predictive value of specific metrics and measures for classifying individual disorders or symptom states, including geolocation, accelerometry, ambient speech, and ambulatory psychophysiology (113–116). For example, clinical data on sleep did not match sensor reports in one study (94), and results are not comparable across studies because of differences in sensors utilized, in the clinical targets, in time frames for calculating associations across assessment modalities (e.g., daily or monthly), and in the populations studied. There are also fundamental differences across studies in methods and analyses, such as controlling for multiple comparisons when examining correlational data.
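To illustrate the kinds of geolocation-derived features mentioned above, the following Python sketch computes two commonly reported metrics (location variance and normalized entropy across visited places) and correlates them with a symptom score across simulated participants. The feature definitions follow common conventions in this literature, but exact formulas vary across studies, the place-cluster labels are assumed to come from an upstream clustering step, and all data here are simulated.

```python
import numpy as np

# Hypothetical clinical-validation sketch: derive two geolocation features and
# correlate them with a symptom score across simulated participants.

def location_variance(lat, lon):
    """Log of the summed variance of latitude and longitude fixes."""
    return float(np.log(np.var(lat) + np.var(lon) + 1e-12))

def normalized_entropy(place_labels):
    """Entropy of time spent across visited places, scaled to [0, 1]."""
    _, counts = np.unique(place_labels, return_counts=True)
    p = counts / counts.sum()
    if len(p) < 2:
        return 0.0
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features, symptoms = [], []
    for _ in range(30):  # 30 simulated participants
        lat = rng.normal(40.0, rng.uniform(0.001, 0.05), 500)     # simulated GPS fixes
        lon = rng.normal(-74.0, rng.uniform(0.001, 0.05), 500)
        places = rng.integers(0, rng.integers(2, 6), 500)         # assumed upstream place clusters
        features.append([location_variance(lat, lon), normalized_entropy(places)])
        symptoms.append(rng.uniform(0, 27))  # placeholder symptom score; real studies use rated symptoms
    features, symptoms = np.asarray(features), np.asarray(symptoms)
    for i, name in enumerate(["location_variance", "normalized_entropy"]):
        r = np.corrcoef(features[:, i], symptoms)[0, 1]
        print(f"{name}: r = {r:.2f}")
```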
Clinical validation (i.e., implementation) is of particular concern for using passive measures as outcomes for clinical interventions. Unlike traditional interview- or questionnaire-based clinical outcome measures, standards for the level of psychometric evidence needed to say that a measure is clinically validated have not yet been determined for passive digital phenotyping. Proprietary data collection via devices (e.g., a custom wearable device [117]) and proprietary methods for analysis (e.g., a custom machine learning algorithm [118]) offer both innovation and a challenge to reproducible clinical research. Further complexity arises from the trend toward using more complex analytic methods with passive digital phenotyping because of the multilevel nature of the data. For example, machine learning is an increasingly common tool in the clinical validation process, and studies have employed various algorithms to predict a range of clinical outcomes (e.g., classification, regression, unsupervised clustering) (119). However, common standards for judging the level of psychometric evidence that constitutes clinical validation for machine learning are not yet uniformly applied across the field. Is predictive accuracy of 70% enough to declare clinical validation, or should a higher standard be set (e.g., 90% accuracy) (104, 106)? Similar considerations affect simpler analytic methods, such as simple correlations for passive data aggregated across a range of time (e.g., 1 week) to form a single value that can be correlated with clinical outcomes. It seems important that such aggregated values be adjusted for the extent of daily or time of day variation. These adjusted correlations tend to be statistically significant but lower (r values ∼0.3–0.5) than typical standards for convergent validity that would be applied within clinical rating scales or questionnaires (e.g., r values >0.80) (103, 104, 111). Do these lower correlations reflect inadequate convergent validity, even though they are statistically significant? Or is the lower correlation to be expected (and therefore acceptable) because it averages across differences in temporal variation across measures or method variance? We suggest that common guidelines for judging what constitutes clinical validation are clearly needed for passive digital phenotyping. There should also be an effort to ensure that clinical validation studies include representative samples of diverse individuals, so that algorithms are not primarily trained on populations whose demographic and personal characteristics do not overlap with the clinical populations of interest, and that methodological and analytic approaches are valid and consistent throughout the population.
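The question of what accuracy threshold should count as clinical validation can be made concrete with a small, hypothetical Python sketch: a cross-validated classifier is trained on simulated weekly-aggregated passive features, and its mean accuracy is compared against the illustrative 70% and 90% benchmarks discussed above. Everything here is simulated; it is not a validated pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical sketch: is cross-validated accuracy above a pre-specified benchmark
# sufficient to claim clinical validation? Features, labels, and benchmarks are illustrative.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))   # simulated weekly-aggregated passive features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)  # simulated symptom status

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="accuracy")
mean_acc = scores.mean()
print(f"cross-validated accuracy: {mean_acc:.2f}")
for benchmark in (0.70, 0.90):
    verdict = "meets" if mean_acc >= benchmark else "does not meet"
    print(f"{verdict} the {benchmark:.0%} benchmark")
```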
Feasibility of implementation is the next consideration, and barriers and facilitators such as cost, accessibility, tolerability, ease of use, and data failure rates are among the relevant factors. Few studies have evaluated user experience of interactions with passive measures. However, qualitative studies employing interviews designed to assess patient perceptions have indicated that while many see these technologies as holding promise for clinical detection and self-management, there may also be unintended barriers to use, such as increased stigma or anxiety (120, 121). One would expect that most passive measures would not be viewed by participants as burdensome, given that they are collected unobtrusively by the background sensors of their device and do not require direct participant action. However, there may be some instances where the device interface proves problematic in clinical populations. For example, in a study on outpatients with chronic schizophrenia, the participants had considerable difficulty with remedying Bluetooth unpairing of a smart band and smartphone (112). People with schizophrenia found this pairing issue more burdensome than did control subjects. It is also unclear whether certain clinical symptoms interact with the willingness to consent to participating in digital phenotyping studies. For example, by their nature, continuous geolocation and ambient speech monitoring raise questions about privacy and agency. It is unclear whether clinical populations, such as individuals with schizophrenia who have delusions of suspicion, experience such technologies as intrusive and whether they exacerbate symptoms or result in the individual not consenting to participate out of fear of being monitored. Some data suggest that the prevalence of answering prompts while acknowledging concurrent psychotic symptoms is reasonably high (86) and that EMA reports of location have been validated using GPS coordinates (108). More generally, issues of systemic racism and mistrust of how passive digital phenotyping information could be (mis)used by law enforcement or other systems of power may influence implementation of these methods in participants who are racial minorities. Thus, user experience should be carefully evaluated when administering these technologies in clinical populations. As we mention below, the general issue of access to the Internet and experience with any technology is a barrier that will need continuous attention.
Combinations of EMA and passive digital phenotyping seem likely to improve interventions and assessment. GPS location coordinates provide information about where one is, but not about who else is present. Proximity detection can determine whether another individual with a device is present, and ambient sound sampling can tell whether individuals are interacting or are simply in proximity to each other. Smart bands can detect activity but not the motivation for the activity (exercise vs. agitation). Combining mood sampling with geolocation information and EMA can help determine whether social isolation is due to depression or lack of motivation, and facial and vocal affect assessment from participant-captured samples can provide validation information for mood reports. A recent example (122) suggested that the combination of passive phenotyping and EMA prompts was feasible, with multiple different prompted responses collected, in conjunction with data regarding location, psychophysiological responses, and ambulatory acoustics (44 participants with schizophrenia and 19 with bipolar disorder). Thus, an array of different elements of functioning can be captured simultaneously and used to generate a wide-ranging picture of momentary functioning.
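A minimal sketch of this kind of data fusion, assuming simple tabular exports and invented column names, is shown below: each EMA report is joined to the most recent preceding passive record so that momentary mood carries its location and activity context.

```python
import pandas as pd

# Hypothetical sketch of aligning an EMA mood report with the nearest preceding
# passive samples (place cluster and step count). Column names are illustrative.

ema = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 10:05", "2024-05-01 14:10", "2024-05-01 19:02"]),
    "mood": [3, 5, 2],
    "alone": [True, False, True],
})

passive = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 10:00", "2024-05-01 14:00", "2024-05-01 19:00"]),
    "place_cluster": ["home", "work", "home"],
    "steps_last_hour": [120, 900, 40],
})

# merge_asof attaches the most recent passive record at or before each EMA prompt.
combined = pd.merge_asof(ema.sort_values("timestamp"),
                         passive.sort_values("timestamp"),
                         on="timestamp", direction="backward")
print(combined)
```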
A challenge in the domain of passive digital phenotyping is that application developers and scientific users are commonly at the mercy of the manufacturers, who can restrict access to phone features for applications or push out operating system upgrades that cause software to fail. Further, applications that monitor access to social media may also encounter restricted access or requirements that access be granted each time the application attempts to capture data. This is an area where collaboration with manufacturers will be required.

Adherence Monitoring

One of the major uses of technology in mental health treatment is as an adjunct to other therapies. In particular, treatment adherence monitoring is a clear area of need and has been a focus of both clinical trials and clinical treatments. For example, several studies have used mobile monitoring to check in with patients at high risk for nonadherence, including early-course psychosis patients (123) and patients with bipolar disorder (124). Some applications have been approved as medical devices by the FDA. For example, an application that monitors adherence to aripiprazole was approved by the FDA in 2017 (125) and involves a sensor embedded in the pill. Other strategies used in clinical trials include the use of digital photography to capture the moment of pill taking and chemical tags that can be detected after a medication is taken (126). Systematic studies are under way to examine the usefulness of these strategies for real-world adherence support.
One of the issues with adherence monitoring is that this cannot be a passive measurement strategy. Individuals whose adherence is monitored, in either research or clinical treatment, need to be fully informed and to agree to this monitoring, and their consent must be valid.

Smartphone Therapeutic Applications

There are many mobile apps designed around the principles of cognitive-behavioral therapy (CBT) or other evidence-based interventions but few randomized controlled trials demonstrating their efficacy for any disorder. There are a huge number of applications available that attempt to promote mindfulness or induce relaxation. As many of these applications are not tested empirically at all, we have focused on the translation of CBT strategies into applications. As CBT has a long history of being systematically manualized, the comparison of efficacy of applications to legacy in-person treatment is facilitated. The majority of existing data on the efficacy of mobile app-based interventions comes from randomized controlled trials assessing symptoms associated with a defined disorder or other mental health outcomes such as stress levels, well-being, and quality of life. There are some data to suggest that smartphone interventions can be effective in reducing depressive symptoms. A 2017 meta-analysis by Firth et al. (127) identified a small number of randomized controlled trials (N=18) that examined the efficacy of smartphone-based interventions in improving symptoms of depression. They found a significant reduction in depressive symptoms with smartphone interventions compared with waiting list or inactive control conditions (g=0.56) and a smaller effect in comparison to active control conditions (g=0.22). The use of interventions based on cognitive-behavioral techniques offered greater benefits for depression than computerized cognitive training applications. In a 2019 meta-analysis of randomized controlled trials assessing the efficacy of app-supported smartphone interventions for mental health disorders, Linardon et al. (128) found that smartphone interventions significantly outperformed control conditions in improving depressive symptoms. Similar to the Firth et al. meta-analysis, the effect size was larger when waiting list (g=0.32) or informational resources (g=0.39) were used as control conditions compared with attention or placebo control conditions, such as checking the weather on the phone (g=0.12). Of the 54 comparisons (smartphone vs. control) analyzed, 26 involved a CBT-based app; however, a subgroup analysis did not show them to be associated with larger effect sizes. CBT is an empirically and meta-analytically supported treatment for depression, but some researchers have suggested a low level of adherence to the core principles of CBT models and identified highly variable usability among CBT-based smartphone interventions as reasons for their lack of superiority (129). A 2021 review of studies of CBT smartphone apps for depression featuring a control group reported that results remain too heterogeneous to recommend for front-line care (130).
Similarly, a small but growing body of data suggests that smartphone interventions may be efficacious in the treatment of anxiety symptoms. Another 2017 meta-analysis (131) focused on randomized controlled trials involving smartphone-supported interventions to reduce anxiety symptoms and found significantly greater reductions in anxiety scores from smartphone interventions compared with control conditions across nine eligible randomized controlled trials. Effect sizes were significantly greater when studies made use of waiting-list or inactive control conditions (g=0.45) compared with those that used active control conditions (g=0.19). This discrepancy in effect sizes—like that seen in studies assessing depressive symptoms, as noted above—suggests the complexity of conducting digital mental health research and the possibility of a digital placebo effect by which use of a digital device in itself confers a degree of psychological benefit. The Linardon et al. meta-analysis (128) found 29 studies assessing efficacy in treating generalized anxiety symptoms, with eight studies specifically designed to target generalized anxiety symptoms. Across the 39 comparisons within the identified studies, the pooled effect size (g) was found to be 0.30 and statistically significant across all sensitivity analyses. Subgroup analyses again showed a smaller effect size for comparisons using an active comparison intervention (g=0.09) and a larger effect size with studies that used a CBT-based app, which included 16 of the 39 comparisons analyzed.
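For readers less familiar with the effect-size metric reported in these meta-analyses, the following Python sketch computes Hedges' g for a single hypothetical two-arm trial. The means, standard deviations, and sample sizes are invented for illustration, and meta-analytic pooling across trials involves additional weighting steps not shown here.

```python
import math

# Illustrative sketch of Hedges' g (standardized mean difference with a
# small-sample correction) for one hypothetical app-vs.-control comparison.

def hedges_g(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    d = (mean_ctrl - mean_tx) / pooled_sd           # lower symptom score = better outcome
    correction = 1 - 3 / (4 * (n_tx + n_ctrl) - 9)  # Hedges' small-sample correction
    return d * correction

if __name__ == "__main__":
    # Hypothetical post-treatment depression scores: app arm vs. waiting-list arm.
    print(f"g = {hedges_g(11.2, 5.0, 60, 14.0, 5.4, 58):.2f}")
```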
An intervention strategy that combines EMA principles with interactive smartphone technology is referred to as “just-in-time adaptive interventions” (132). These strategies involve consistent monitoring of behavior, activities, moods, and symptoms, using EMA strategies, but they also interactively offer interventions in real time. An example of such a strategy is the FOCUS intervention (133), which uses a mix of prompts directed to the participant and self-activated tools. The goal of this class of interventions is to sustain engagement while offering interventions in real time. As noted in several reviews, this strategy is being widely used, but the data are not yet at the stage where global statements about efficacy can be made.
Two other examples of smartphone-based interventions that have FDA-approved elements are recently introduced devices to promote smoking cessation and to reduce opioid abuse. PIVOT is a digital smoking cessation app that includes human coaching with text messages, combined with smartphone-based carbon monoxide (CO) monitoring (134). The CO sensor is an FDA-cleared medical device, and the program includes a multistage intervention following standard human-delivered smoking cessation strategies as well as nicotine supplementation.
reSET-O (135) is an application that is designed to be paired with buprenorphine treatment for opioid addiction. In a randomized clinical trial, 82% of participants who were randomized to the device remained in treatment, compared with 68% of those in treatment as usual. Abstinence was also higher in the active treatment group (77% vs. 63%). Given the typical attrition rates for opioid use disorder treatment (about 50% or more) (136), these are encouraging results. One of the challenges in these interventions is adherence and engagement. For example, in a naturalistic study of the reSET-O intervention in which data from 3,144 individuals with opioid use disorder were evaluated, 80% completed at least eight of the 67 possible therapeutic modules, 66% completed half of all modules, and 49% completed all modules (137). Although abstinence rates were quite good (about 65%), there is a clear difference in adherence compared with the randomized controlled trials that led to FDA approval. In a large-scale review of device-based interventions, Linardon and Fuller-Tyszkiewicz (128) reported that adherence was challenging in many of these interventions. The types of strategies that succeeded in increasing adherence were clearly associated with attempts to promote engagement at the outset of the intervention. Interventions that used online enrollment were particularly susceptible to poor adherence and dropout, while in-person and telephone recruitment strategies were better.

Computerized Cognitive Training and Cognitive Remediation Therapy

The core of computerized cognitive training (CCT) is software designed to engage and practice cognitive functions. Cognitive functioning is commonly defined in these applications as the set of abilities that would be measured with neuropsychological assessments and relate consistently to everyday functional outcomes. Some programs are explicitly aimed at a single cognitive domain, while others target an array of domains. A central feature of successful CCT programs is adaptive presentation of training stimuli, such that the level of difficulty tracks the participant’s current performance. The goal is to train increasingly difficult tasks while ensuring a success rate of about 80% on the target stimuli.
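The adaptive logic just described can be sketched in a few lines of Python; the difficulty scale, adaptation rule, and simulated trainee below are hypothetical and stand in for whatever cognitive exercise a given CCT program actually presents.

```python
import random

# Minimal sketch of adaptive difficulty: the task is nudged harder or easier so the
# trainee keeps succeeding on roughly 80% of trials. The "task" here is simulated.

TARGET_SUCCESS = 0.80
WINDOW = 10   # trials per adaptation block
STEP = 1      # difficulty increment

def run_block(difficulty, trainee_skill=12):
    """Simulate one block of trials; higher difficulty lowers the chance of success."""
    p_correct = max(0.05, min(0.95, 1 - (difficulty - trainee_skill) * 0.08 - 0.1))
    return sum(random.random() < p_correct for _ in range(WINDOW)) / WINDOW

def train(n_blocks=20, start_difficulty=5):
    difficulty = start_difficulty
    for block in range(n_blocks):
        success_rate = run_block(difficulty)
        # Adapt: harder when the trainee exceeds the target rate, easier when they fall below it.
        if success_rate > TARGET_SUCCESS:
            difficulty += STEP
        elif success_rate < TARGET_SUCCESS:
            difficulty = max(1, difficulty - STEP)
        print(f"block {block + 1:2d}: difficulty={difficulty:2d}, success={success_rate:.0%}")

if __name__ == "__main__":
    train()
```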
Computerized cognitive training has been widely studied in the past two decades, along with concurrent advances in computer technology, which has allowed for great strides forward in terms of control over the learning environment. Multiple studies have demonstrated CCT’s efficacy for improvement of cognition in multiple populations, with the bulk of the evidence in severe mental illness (138, 139) and supported by large-scale studies of healthy older people (140–142). There is considerably less information outside of schizophrenia, but studies in bipolar disorder (143) and major depression (144) have been published. For evaluation of CCT as a mental health treatment, there are several central considerations. These include the range of efficacy expected, how the intervention needs to be delivered, the dose required, and whether there are specific subpopulations who stand to make the most treatment gains. Further, there are several considerations about concurrent treatments that may be required to translate cognitive gains into improvements in everyday functioning. Finally, remote delivery of cognitive training has been studied in the past with some success.
As described in the meta-analyses, cognitive changes induced with CCT have generally been shown to have minimal efficacy for the improvement of everyday functioning in the absence of a targeted intervention aimed at functional skills. When CCT is combined with structured intervention programs, the term cognitive remediation therapy (CRT) is generally applied. CRT has been shown in meta-analyses to produce both cognitive and functional gains (139). There are multiple approaches to delivering CRT, but they all share common features. The intervention is delivered in person by a trainer, and other skills training is delivered as well, typically with a focus on vocational or social functioning. CCT combined with supported employment programs has proven in multiple studies to provide considerable benefits (e.g., 145), even in previous nonresponders (146). Hence, when delivered in a structured CRT program, the range of expected efficacy includes cognition and everyday functioning. Some studies have also trained social cognitive abilities, leading to improved social outcomes (15), and some have found that combined CCT and computerized social cognitive training (CSCT) lead to more substantial gains than CCT alone (147). However, a recent study using compensatory cognitive training combined with supported employment did not find employment gains (148).
Dosing of CCT has varied considerably across studies. In studies of severe mental illness, doses ranging from 15 to 135 training sessions have been delivered. One factor that may mediate the effect of dose is the extent of training engagement. Several studies have suggested that training engagement predicts the extent of training gains in CCT (149, 150). Even large doses of CCT may be ineffective if participants are not actively engaging with the procedure (151). Thus, monitoring of engagement, which is easily accomplished through the software in most training programs, is clearly recommended. There are insufficient data to draw conclusions regarding the likelihood that training engagement will either improve in poorly engaged patients or worsen in those who are initially engaged.
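Because most training software already logs session-level usage, engagement monitoring of the kind recommended above can be as simple as aggregating those logs. The sketch below is illustrative only: the Session fields and the numeric thresholds are assumptions chosen for the example, not validated cutoffs from any of the cited studies.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Session:
    participant_id: str
    minutes: float          # time spent actively training in the session
    trials_completed: int   # number of training trials finished

def engagement_summary(sessions: List[Session],
                       min_sessions: int = 20,
                       min_total_minutes: float = 300.0) -> Dict[str, dict]:
    # Aggregate simple per-participant engagement indices and flag participants
    # whose usage falls below the example thresholds so staff can follow up.
    summary: Dict[str, dict] = {}
    for s in sessions:
        rec = summary.setdefault(s.participant_id,
                                 {"sessions": 0, "minutes": 0.0, "trials": 0})
        rec["sessions"] += 1
        rec["minutes"] += s.minutes
        rec["trials"] += s.trials_completed
    for rec in summary.values():
        rec["flag_low_engagement"] = (rec["sessions"] < min_sessions
                                      or rec["minutes"] < min_total_minutes)
    return summary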
In terms of specific subpopulations with potential to benefit from CRT, prodromal (152), first-episode (153), and chronic (154) schizophrenia patients show equivalent cognitive gains when trained with a single CCT system. In a reanalysis of a larger randomized trial, patients with a shorter illness duration had a greater cognitive and functional response to a comprehensive CRT program (155). In contrast, several studies of patients with extended institutional stays (156, 157) have suggested that benefits are common and include both cognitive and functional improvements. Similarly, in mood disorders, both older and younger patients with a history of major depression and treatment resistance have benefited from CRT (158, 159). Thus, there are no clear illness characteristics that identify which patients will achieve maximum benefit; in the research to date, engagement has been a much stronger predictor of benefit than age.
Some rehabilitation facilities may not have access to computers for all participants, and some participants may prefer to train at home. Although the majority of structured CRT has been studied with in-person training, several studies suggest that home-based CCT can be accomplished with reasonable levels of adherence (70%) and with cognitive and social cognitive benefits (15, 152, 153). Train-at-home studies with nonpsychotic community-based populations have also been conducted (160); in one, a sample of 2,912 older community dwellers participated in an entirely online training program, with evidence of gains in both composite cognitive performance scores and everyday functioning. The dropout rate, however, was considerably higher than in the studies noted above, exceeding 50% at the 6-month follow-up.
A treatment for ADHD recently approved by the FDA, EndeavorRx (generic name, AKL-T01), also used a train-at-home performance-based training intervention (161). In a large-scale trial, AKL-T01 was delivered to children with ADHD in a video game–like interface via at-home play for 25 minutes per day, 5 days per week for 4 weeks. The outcome measure was performance on an ADHD-relevant cognitive task, the Test of Variables of Attention (TOVA) (162). The treatment was significantly superior to a video game control condition. AKL-T01 thus received approval to improve attention function as measured by computer-based testing in children ages 8–12 with primarily inattentive or combined-type ADHD who have a demonstrated attention issue. The sponsors of the treatment clearly state that it is designed to be used as augmentation therapy in addition to other treatments. One reason for this suggestion is that scores on ADHD rating scales did not show improvement after training. Thus, this intervention is very similar in terms of strategy to previous studies using CRT to improve cognitive functioning in other conditions, such as schizophrenia.

Pharmacological Augmentation of CRT

An important recent development has been systematic studies of pharmacological augmentation of cognitive training (163). These augmentation strategies have been found to be successful with stimulants (164), guanfacine (165), alertness-promoting agents (166), and memantine (167) in schizophrenia, and with vortioxetine in age-related cognitive decline (168). Interestingly, modafinil and memantine have been much less effective as monotherapies for cognitive impairments (169, 170). Other studies are examining compounds that have shown preliminary efficacy as monotherapy treatments to improve cognition in schizophrenia (171) and as adjuncts to CRT (172), and these therapies may have promise as augmentation strategies.
An additional important recent finding in the area of pharmacological augmentation of cognitive training involves the combination of long-acting injectable antipsychotic medications and CRT. In a study of first-episode patients randomized to either oral or long-acting medication and to either CRT or another augmented psychosocial intervention, an important interaction effect was found (173). The combination of long-acting medication and CRT led to considerably greater cognitive gains than CRT with oral medication. Further, the cognitive changes translated directly into functional gains, including work function. As this intervention also included vocational rehabilitation for all participants, the effects of CRT-associated cognitive gains on work outcomes, previously observed in more chronic patients, were reproduced. This is, to our knowledge, the first study demonstrating that clinical stability may be a factor associated with the efficacy of CRT.

Level of Evidence and Approval

As digital mental health technologies evolve, so do questions regarding the level of evidence needed to support their claims of efficacy. In response, some have proposed that digital health technologies may benefit from alternative endpoints and novel study designs in order to best capture their efficacy (174). The FDA’s Digital Health Software Precertification (Pre-Cert) Pilot Program is an attempt to reenvision how the agency approves such technologies (175), although questions remain about the real-world practicality of this approach, given that it remains a pilot project. In short, Pre-Cert seeks to expedite approval of software as a medical device by preapproving technology developers and using real-world data to assess the performance of the software after approval. Still, as noted above, there are smartphone apps, computer programs, and devices that have all been granted FDA marketing approval through more traditional pathways (section 510(k) and de novo) and trial designs.
There are other levels of FDA clearance for approvable technology. Under FDA guidelines, many technology-based interventions are viewed as general wellness applications, which fall outside the scope of regulation. These include technologies targeting adherence to scheduled therapeutic activities other than the act of taking medication, exercise, and certain elements of everyday functioning. These applications are generally similar to CCT applications in that they are not aimed at diagnosing or treating a disease. The FDA explicitly states that devices can be exempt from review because “when not associated with the diagnosis, cure, mitigation, treatment, or prevention of a disease the claim falls outside the scope of the definition of a medical device” (176, p. 8). Further, the FDA has a process referred to as enforcement discretion, in which the agency considers a product to be a medical device but does not require formal approval for its use. Example 1 in the FDA guidance states that the agency does not intend to require approval for:
Software functions that help patients with diagnosed psychiatric conditions (e.g., post-traumatic stress disorder (PTSD), depression, anxiety, obsessive compulsive disorder) maintain their behavioral coping skills by providing a “Skill of the Day” behavioral technique or audio messages that the user can access when experiencing increased anxiety (176, p. 23).
Thus, devices that do not attempt to replace an approved treatment or to eliminate the need for medical care fall under this heading. Technologies cleared as general wellness applications or as medical devices that fall under enforcement discretion are not likely to be eligible for direct insurance reimbursement, although they could be part of other bundled services. While in many clinical settings this would not be relevant because therapeutic activities are not billed on a session-by-session basis, in some practice settings it would be more of a challenge. For example, computerized cognitive training is covered by some insurance plans for neurological conditions, such as persistent traumatic brain injury, but not for psychotic or depressive disorders. Similarly, certain adherence applications have been approved by the FDA, but they are linked only to a single medication because the software actually detects the presence of a chip that is ingested along with the pill. Finally, as noted above, the AKL-T01 application for ADHD was approved only as an adjunctive treatment, not as a stand-alone treatment.
As regulation seeks to catch up to the mental health technology space, clinicians and patients must make choices today. Various frameworks have been proposed for such evaluation, including one endorsed by APA (177). Several score-based databases have also emerged, although research suggests low rates of concordance between such scoring systems as well as an inability to keep pace with the rate of technology change (178). Newer educational initiatives aim to help patients and clinicians make informed decisions based on available data (179). The Federal Trade Commission (FTC) continues to sue technology vendors for false marketing claims (notably Lumosity, in relation to brain training, in 2016 [180], and a fertility-tracking app in 2021 [181]) and offers consumer guidance as well.

Conclusions

Technology-based assessment and intervention strategies are proliferating, and the COVID-19 pandemic has accelerated the process. These strategies are based on technology that is newly developed and continuing to evolve. Technological strategies are likely to expand the potential of clinical assessment and intervention and to allow clinicians to deliver more service in the same time frame. Even purportedly nontechnological interventions involve technology today, including electronic health records and video conferences, but this review addresses some of the ways that technology will continue to expand in the immediate future.
Development is faster than validation, and advertisements are less expensive than research. A reasonable approach is to weigh the evidence for an application before adopting it, keeping in mind that exaggerated claims are common in the technology area. While some of these claims have been the target of investigations by the FTC, the more common challenge is applications that are marketed without extravagant claims but also without adequate data. As a field, we need to develop standards for what we adopt now and what we defer until stronger evidence accrues.
There are several issues to follow into the future. One is the research–clinical deployment gap: many technologies are well validated in research settings but are not as actively used in the clinic. Over time, this situation can change, and the case of CCT is a perfect example: the advent of better computer technology and the feasibility of remote administration of training have enabled the expansion of general community access to CCT. This process may have been kicked off by CCT providers who made exaggerated efficacy claims, as described above, but the result is that the general community is now quite aware of CCT.
Another critical issue is access. While both age and socioeconomic status used to be barriers to technology access, many more older people now have access to the Internet and use smartphone technology. The lack of access among lower-income and rural populations was clearly highlighted during the COVID-19 pandemic, and until this disparity is resolved, many people will not be reachable with these interventions. Importantly, these are the same factors that create barriers to mental health services in general; given the promise of technology for increasing access to mental health services, increasing access to technology will be a critical first step.
In summary, these technological developments are exciting, and they show efficacy in controlled studies and are increasingly designed to be acceptable to patients. There is likely more to come in this broad area, and assessments and interventions that would have seemed like science fiction in the past are entirely commonplace now.

References

1.
Linardon J, Cuijpers P, Carlbring P, et al: The efficacy of app-supported smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials. World Psychiatry 2019; 18:325–336
2.
Torous J, Firth J: Bridging the dichotomy of actual versus aspirational digital health. World Psychiatry 2018; 17:108–109
3.
Firth J, Torous J, Nicholas J, et al: The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017; 16:287–298
4.
Arrieux JP, Cole WR, Ahrens AP: A review of the validity of computerized neurocognitive assessment tools in mild traumatic brain injury assessment. Concussion 2017; 2:CNC31
5.
Buckley RF, Sparks KP, Papp KV, et al: Computerized cognitive testing for use in clinical trials: a comparison of the NIH Toolbox and Cogstate C3 batteries. J Prev Alzheimers Dis 2017; 4:3–11
6.
Wild K, Howieson D, Webbe F, et al: Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement 2008; 4:428–437
7.
Robbins TW, James M, Owen AM, et al: Cambridge Neuropsychological Test Automated Battery (CANTAB): a factor analytic study of a large sample of normal elderly volunteers. Dementia 1994; 5:266–281
8.
Keefe RSE, Goldberg TE, Harvey PD, et al: The Brief Assessment of Cognition in Schizophrenia: reliability, sensitivity, and comparison with a standard neurocognitive battery. Schizophr Res 2004; 68:283–297
9.
Atkins AS, Tseng T, Vaughan A, et al: Validation of the tablet-administered Brief Assessment of Cognition (BAC App). Schizophr Res 2017; 181:100–106
10.
Pearson: Q-interactive: easily administer and score clinical assessments using 2 iPads. https://www.pearsonassessments.com/professional-assessments/digital-solutions/q-interactive/about.html
11.
Cole WR, Arrieux JP, Ivins BJ, et al: A comparison of four computerized neurocognitive assessment tools to a traditional neuropsychological test battery in service members with and without mild traumatic brain injury. Arch Clin Neuropsychol 2018; 33:102–119
12.
Russell MT, Funsch KM, Springfield CR, et al: Validity of remote administration of the MATRICS Consensus Cognitive Battery for individuals with severe mental illness. Schizophr Res Cogn 2021; 27:100226
13.
Keefe RSE, Davis VG, Atkins AS, et al: Validation of a computerized test of functional capacity. Schizophr Res 2016; 175:90–96
14.
Harvey PD, Forero DB, Ahern LB, et al: The computerized functional skills assessment and training program: sensitivity to global cognitive impairment, correlations with cognitive abilities, and factor structure. Am J Geriatr Psychiatry 2021; 29:395–404
15.
Nahum M, Lee H, Fisher M, et al: Online social cognition training in schizophrenia: a double-blind, randomized, controlled multi-site clinical trial. Schizophr Bull 2021; 47:108–117
16.
Czaja SJ, Kallestrup P, Harvey PD: Evaluation of a novel technology-based program designed to assess and train everyday skills in older adults. Innov Aging 2020; 4:igaa052
17.
Kaemmerer T, Riordan P: Oral adaptation of the Trail Making Test: a practical review. Appl Neuropsychol Adult 2016; 23:384–389
18.
Salinas CM, Bordes Edgar V, Berrios Siervo G, et al: Transforming pediatric neuropsychology through video-based teleneuropsychology: an innovative private practice model pre-COVID-19. Arch Clin Neuropsychol 2020; 35:1189–1195
19.
Keefe RSE, Bilder RM, Davis SM, et al: Neurocognitive effects of antipsychotic medications in patients with chronic schizophrenia in the CATIE Trial. Arch Gen Psychiatry 2007; 64:633–647
20.
Silverstein SM, Jaeger J, Donovan-Lepore AM, et al: A comparative study of the MATRICS and IntegNeuro cognitive assessment batteries [published correction appears in J Clin Exp Neuropsychol 2010; 32:1149]. J Clin Exp Neuropsychol 2010; 32:937–952
21.
Parrish EM, Kamarsu S, Harvey PD, et al: Remote ecological momentary testing of learning and memory in adults with serious mental illness. Schizophr Bull 2021; 47:740–750
22.
Depp CA, Kamarsu S, Filip TF, et al: Ecological momentary facial emotion recognition in psychotic disorders. Psychol Med (Online ahead of print, January 12, 2021)
23.
Hoffman HG, Patterson DR, Carrougher GJ, et al: Effectiveness of virtual reality–based pain control with multiple treatments. Clin J Pain 2001; 17:229–235
24.
Hoffman HG, Meyer WJ III, Drever SA, et al: Virtual reality distraction to help control acute pain during medical procedures, in Virtual Reality for Psychological and Neurocognitive Interventions: Virtual Reality Technologies for Health and Clinical Applications. Edited by Rizzo A, Bouchard S. New York, Springer, 2019, pp 195–208
25.
Rizzo AA, Buckwalter JG: Theoretical and practical issues for the use of virtual reality in the cognitive rehabilitation of persons with acquired brain injuries: an update, in Proceedings of the 3rd International Conference on Virtual Reality and Persons With Disabilities. Edited by Murphy HJ. California State University Northridge, 1995. http://www.csun.edu/∼hfdss006/conf/1995/proceedings/0004.htm
26.
Rizzo AS, Koenig ST: Is clinical virtual reality ready for primetime? Neuropsychology 2017; 31:877–899
27.
Riva G: Virtual environment for body image modification: virtual reality system for the treatment of body image disturbances. Comput Hum Behav 1998; 14:477–490
28.
Riva G, Gutiérrez-Maldonado J, Dakanalis A, et al: Virtual reality in the assessment and treatment of weight-related disorders, in Virtual Reality for Psychological and Neurocognitive Interventions: Virtual Reality Technologies for Health and Clinical Applications. Edited by Rizzo A, Bouchard S. New York, Springer, 2019, pp 163–194
29.
Rothbaum BO, Hodges LF, Kooper R, et al: Virtual reality graded exposure in the treatment of acrophobia: a case report. Behav Ther 1995; 26:547–554
30.
Rothbaum BO, Price M, Jovanovic T, et al: A randomized, double-blind evaluation of d-cycloserine or alprazolam combined with virtual reality exposure therapy for posttraumatic stress disorder in Iraq and Afghanistan War veterans. Am J Psychiatry 2014; 171:640–648
31.
Slater M, Neyret S, Johnston T, et al: An experimental study of a virtual reality counselling paradigm using embodied self-dialogue. Sci Rep 2019; 9:10903
32.
Aukstakalnis S, Blatner D: Silicon Mirage: The Art and Science of Virtual Reality. Berkeley, Calif, Peachpit Press, 1992
33.
Slater M, Wilbur S: A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Teleoperators Virtual Environments 1997; 6:603–616
34.
Bouchard S, Dumoulin S, Robillard G, et al: Virtual reality compared with in vivo exposure in the treatment of social anxiety disorder: a three-arm randomised controlled trial. Br J Psychiatry 2017; 210:276–283
35.
Botella C, García‐Palacios A, Villa H, et al: Virtual reality exposure in the treatment of panic disorder and agoraphobia: a controlled study. Clin Psychol Psychotherapy 2007; 14:164–175
36.
Burke SL, Bresnahan TL, Li T, et al: Using Virtual Interactive Training Agents (ViTA) with adults with autism and other developmental disabilities. J Autism Dev Disord 2018; 48:905–912
37.
Rutten A, Cobb S, Neale H, et al: The AS interactive project: single‐user and collaborative virtual environments for people with high‐functioning autistic spectrum disorders. J Visualization Computer Anim 2003; 14:233–241
38.
Yang YJD, Allen T, Abdullahi SM, et al: Neural mechanisms of behavioral change in young adults with high‐functioning autism receiving virtual reality social cognition training: a pilot study. Autism Res 2018; 11:713–725
39.
Gonzalez-Liencres C, Zapata LE, Iruretagoyena G, et al: Being the victim of intimate partner violence in virtual reality: first- versus third-person perspective. Front Psychol 2020; 11:820
40.
Falconer CJ, Rovira A, King JA, et al: Embodying self-compassion within virtual reality and its effects on patients with depression. BJPsych Open 2016; 2:74–80
41.
Talbot T, Rizzo A: Virtual human standardized patients for clinical training, in Virtual Reality for Psychological and Neurocognitive Interventions: Virtual Reality Technologies for Health and Clinical Applications. Edited by Rizzo A, Bouchard S. New York, Springer, 2019, pp 387–405
42.
Reger GM, Norr AM, Rizzo AS, et al: Virtual standardized patients vs academic training for learning motivational interviewing skills in the US Department of Veterans Affairs and the US military: a randomized trial. JAMA Netw Open 2020; 3:e2017348
43.
Mozgai S, Hartholt A, Rizzo A: An adaptive agent–based interface for personalized health interventions, in Proceedings of the 2020 ACM Intelligent User Interfaces Conference. Association for Computing Machinery, 2020, pp 118–119
44.
Rizzo AA, Shilling R, Forbell ED, et al: Autonomous virtual human agents for healthcare information support and clinical interviewing, in Artificial Intelligence in Mental Healthcare Practice. Edited by Luxton DD. Oxford, Academic Press, 2015, pp 53–80
45.
Rizzo AA, Scherer S, DeVault D, et al: Detection and computational analysis of psychological signals using a virtual human interviewing agent. J Pain Manage 2016; 9:311–321
46.
Yao H, de Siqueira AG, Foster A, et al: Toward automated evaluation of empathetic responses in virtual human interaction systems for mental health scenarios, in Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents. Association for Computing Machinery, 2020, pp 1–8
47.
Parsons TD, Rizzo AA: Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: a meta-analysis. J Behav Ther Exp Psychiatry 2008; 39:250–261
48.
Powers MB, Rothbaum BO: Recent advances in virtual reality therapy for anxiety and related disorders: introduction to the special issue. J Anxiety Disord 2019; 61:1–2
49.
Rizzo AS, Difede J, Rothbaum BO, et al: Development and early evaluation of the Virtual Iraq/Afghanistan exposure therapy system for combat-related PTSD. Ann N Y Acad Sci 2010; 1208:114–125
50.
Rizzo AA, John B, Newman B, et al: Virtual reality as a tool for delivering PTSD exposure therapy and stress resilience training. Mil Behav Health 2013; 1:48–54
51.
Rizzo AA, Cukor J, Gerardi M, et al: Virtual reality exposure therapy for PTSD due to military combat and terrorist attacks. J Contemp Psychother 2015; 45:255–264
52.
Rothbaum BO, Hodges LF, Ready D, et al: Virtual reality exposure therapy for Vietnam veterans with posttraumatic stress disorder. J Clin Psychiatry 2001; 62:617–622
53.
Bordnick PS, Washburn M: Virtual environments for substance abuse assessment and treatment, in Virtual Reality for Psychological and Neurocognitive Interventions: Virtual Reality Technologies for Health and Clinical Applications. Edited by Rizzo A, Bouchard S. New York, Springer, 2019, pp 131–162
54.
Hone-Blanchet A, Wensing T, Fecteau S: The use of virtual reality in craving assessment and cue-exposure therapy in substance use disorders. Front Hum Neurosci 2014; 8:844
55.
Yoon JH, De La Garza R II, Bordnick PS, et al: A pilot study examining the efficacy of virtual-reality based relapse prevention among alcohol-dependent veterans with traumatic brain injury. Drug Alcohol Depend 2014; 140:e247
56.
Freeman D, Bradley J, Antley A, et al: Virtual reality in the treatment of persecutory delusions: randomised controlled experimental study testing how to reduce delusional conviction. Br J Psychiatry 2016; 209:62–67
57.
John NW, Pop SR, Day TW, et al: The implementation and validation of a virtual environment for training powered wheelchair manoeuvres. IEEE Trans Vis Comput Graph 2018; 24:1867–1878
58.
Maggio MG, Latella D, Maresca G, et al: Virtual reality and cognitive rehabilitation in people with stroke: an overview. J Neurosci Nurs 2019; 51:101–105
59.
Rizzo AA, Bowerly T, Buckwalter JG, et al: A virtual reality scenario for all seasons: the virtual classroom. CNS Spectrums 2006; 11:35–44
60.
Rizzo AA, Chen JZ, Wang J, et al: Normative data for a next generation virtual classroom for attention assessment in children with ADHD and beyond, in Proceedings of the International Conference on Disability, Virtual Reality, and Associated Technologies. http://studio.hei-lab.ulusofona.pt/archive/2021/ICDVRAT2021_Full_Proceedings_13thConf_FinalVersion.pdf
61.
Dellazizzo L, Potvin S, Luigi M, et al: Evidence on virtual reality–based therapies for psychiatric disorders: meta-review of meta-analyses. J Med Internet Res 2020; 22:e20889
62.
Carl E, Stein AT, Levihn-Coon A, et al: Virtual reality exposure therapy for anxiety and related disorders: a meta-analysis of randomized controlled trials. J Anxiety Disord 2019; 61:27–36
63.
Mallari B, Spaeth EK, Goh H, et al: Virtual reality as an analgesic for acute and chronic pain in adults: a systematic review and meta-analysis. J Pain Res 2019; 12:2053–2085
64.
Skarbez R, Brooks FP Jr, Whitton MC: A survey of presence and related concepts. ACM Comput Surv (CSUR) 2017; 50:1–39
65.
Norcross JC, Pfund RA, Prochaska JO: Psychotherapy in 2022: a Delphi poll on its future. Prof Psychol Res Pract 2013; 44:363–370
66.
Rizzo A, Hartholt A, Mozgai S: From combat to COVID-19: managing the impact of trauma using virtual reality. J Tech Hum Serv 2021; 39:341–347
67.
Doherty K, Balaskas A, Doherty G: The design of ecological momentary assessment technologies. Interact Comput 2020; 32:257–278
68.
Stone AA, Schwartz JE, Neale JM, et al: A comparison of coping assessed by ecological momentary assessment and retrospective recall. J Pers Soc Psychol 1998; 74:1670–1680
69.
O’Connell KA, Gerkovich MM, Cook MR, et al: Coping in real time: using ecological momentary assessment techniques to assess coping with the urge to smoke. Res Nurs Health 1998; 21:487–497
70.
Kamarck TW, Schwartz JE, Shiffman S, et al: Psychosocial stress and cardiovascular risk: what is the role of daily experience? J Pers 2005; 73:1749–1774
71.
Stone AA, Shiffman S, Schwartz JE, et al: Patient compliance with paper and electronic diaries. Control Clin Trials 2003; 24:182–199
72.
Dunton GF, Huh J, Leventhal AM, et al: Momentary assessment of affect, physical feeling states, and physical activity in children. Health Psychol 2014; 33:255–263
73.
Weinstein SM, Mermelstein RJ, Hedeker D, et al: The time-varying influences of peer and family support on adolescent daily positive and negative affect. J Clin Child Adolesc Psychol 2006; 35:420–430
74.
Depp CA, Kim DH, de Dios LV, et al: A pilot study of mood ratings captured by mobile phone versus paper-and-pencil mood charts in bipolar disorder. J Dual Diagn 2012; 8:326–332
75.
Myin-Germeys I, Klippel A, Steinhart H, et al: Ecological momentary interventions in psychiatry. Curr Opin Psychiatry 2016; 29:258–263
76.
Depp CA, Mausbach B, Granholm E, et al: Mobile interventions for severe mental illness: design and preliminary data from three approaches. J Nerv Ment Dis 2010; 198:715–721
77.
Moore DJ, Montoya JL, Blackstone K, et al: Preliminary evidence for feasibility, use, and acceptability of individualized texting for adherence building for antiretroviral adherence and substance use assessment among HIV-infected methamphetamine users. AIDS Res Treat 2013; 2013:585143
78.
Shiffman S, Scholl SM, Mao J, et al: Ecological momentary assessment of temptations and lapses in non-daily smokers. Psychopharmacology (Berl) 2020; 237:2353–2365
79.
Kleiman EM, Turner BJ, Fedor S, et al: Examination of real-time fluctuations in suicidal ideation and its risk factors: results from two ecological momentary assessment studies. J Abnorm Psychol 2017; 126:726–738
80.
Ben-Zeev D, Young MA, Depp CA: Real-time predictors of suicidal ideation: mobile assessment of hospitalized depressed patients. Psychiatry Res 2012; 197:55–59
81.
Depp CA, Moore RC, Perivoliotis D, et al: Social behavior, interaction appraisals, and suicidal ideation in schizophrenia: the dangers of being alone. Schizophr Res 2016; 172:195–200
82.
Granholm E, Holden JL, Mikhael T, et al: What do people with schizophrenia do all day? Ecological momentary assessment of real-world functioning in schizophrenia. Schizophr Bull 2020; 46:242–251
83.
Strassnig MT, Harvey PD, Miller ML, et al: Real world sedentary behavior and activity levels in patients with schizophrenia and controls: an ecological momentary assessment study. Mental Health Phys Act 2021; 20:100364
84.
Koffer R, Drewelies J, Almeida DM, et al: The role of general and daily control beliefs for affective stressor-reactivity across adulthood and old age. J Gerontol B Psychol Sci Soc Sci 2019; 74:242–253
85.
Harvey PD, Miller ML, Moore RC, et al: Capturing clinical symptoms with ecological momentary assessment: convergence of momentary reports of psychotic and mood symptoms with diagnoses and standard clinical assessments. Innov Clin Neurosci 2021; 18:24–30
86.
Cohen AS, Schwartz E, Le TP, et al: Digital phenotyping of negative symptoms: the relationship to clinician ratings. Schizophr Bull 2021; 47:44–53
87.
Targum SD, Sauder C, Evans M, et al: Ecological momentary assessment as a measurement tool in depression trials. J Psychiatr Res 2021; 136:256–264
88.
Durand D, Strassnig MT, Moore RC, et al: Self-reported social functioning and social cognition in schizophrenia and bipolar disorder: using ecological momentary assessment to identify the origin of bias. Schizophr Res 2021; 230:17–23
89.
Insel TR: Digital phenotyping: a global tool for psychiatry. World Psychiatry 2018; 17:276–277
90.
Barnett I, Onnela JP: Inferring mobility measures from GPS traces with missing data. Biostatistics 2020; 21:e98–e112
91.
Fulford D, Mote J, Gonzalez R, et al: Smartphone sensing of social interactions in people with and without schizophrenia. J Psychiatr Res 2021; 137:613–620
92.
Huang EJ, Onnela J-P: Augmented movelet method for activity classification using smartphone gyroscope and accelerometer data. Sensors (Basel) 2020; 20:3706
93.
Hswen Y, Gopaluni A, Brownstein JS, et al: Using Twitter to detect psychological characteristics of self-identified persons with autism spectrum disorder: a feasibility study. JMIR Mhealth Uhealth 2019; 7:e12264
94.
Staples P, Torous J, Barnett I, et al: A comparison of passive and active estimates of sleep in a cohort with schizophrenia. NPJ Schizophrenia 2017; 3:37
95.
Yan Z, Yang J, Tapia EM: Smartphone Bluetooth based social sensing, in Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication. Association for Computing Machinery, 2013, pp 95–98
96.
Wang Y, Ren X, Liu X, et al: Examining the correlation between depression and social behavior on smartphones through usage metadata: empirical study. JMIR Mhealth Uhealth 2021; 9:e19046
97.
Goldsack JC, Coravos A, Bakker JP, et al: Verification, analytical validation, and clinical validation (V3): the foundation of determining fit-for-purpose for biometric monitoring technologies (BioMeTs). NPJ Digit Med 2020; 3:55
98.
Donaire-Gonzalez D, Valentín A, van Nunen E, et al: ExpoApp: an integrated system to assess multiple personal environmental exposures. Environ Int 2019; 126:494–503
99.
Adler DA, Ben-Zeev D, Tseng VWS, et al: Predicting early warning signs of psychotic relapse from passive sensing data: an approach using encoder-decoder neural networks. JMIR Mhealth Uhealth 2020; 8:e19962
100.
Ebner-Priemer UW, Mühlbauer E, Neubauer AB, et al: Digital phenotyping: towards replicable findings with comprehensive assessments and integrative models in bipolar disorders. Int J Bipolar Disord 2020; 8:35
101.
Faurholt-Jepsen M, Busk J, Frost M, et al: Voice analysis as an objective state marker in bipolar disorder. Transl Psychiatry 2016; 6:e856
102.
Saeb S, Zhang M, Karr CJ, et al: Mobile phone sensor correlates of depressive symptom severity in daily-life behavior: an exploratory study. J Med Internet Res 2015; 17:e175
103.
Saeb S, Lattie EG, Schueller SM, et al: The relationship between mobile phone location sensor data and depressive symptom severity. PeerJ 2016; 4:e2537
104.
Sarda A, Munuswamy S, Sarda S, et al: Using passive smartphone sensing for improved risk stratification of patients with depression and diabetes: cross-sectional observational study. JMIR Mhealth Uhealth 2019; 7:e11041
105.
Di Matteo D, Fotinos K, Lokuge S, et al: The relationship between smartphone-recorded environmental audio and symptomatology of anxiety and depression: exploratory study. JMIR Form Res 2020; 4:e18751
106.
Jacobson NC, Summers B, Wilhelm S: Digital biomarkers of social anxiety severity: digital phenotyping using passive smartphone sensors. J Med Internet Res 2020; 22:e16875
107.
Parrish EM, Depp CA, Moore RC, et al: Emotional determinants of life-space through GPS and ecological momentary assessment in schizophrenia: what gets people out of the house? Schizophr Res 2020; 224:67–73
108.
Depp CA, Bashem J, Moore RC, et al: GPS mobility as a digital biomarker of negative symptoms in schizophrenia: a case control study. NPJ Digit Med 2019; 2:108
109.
Umbricht D, Cheng WY, Lipsmeier F, et al: Deep learning–based human activity recognition for continuous activity and gesture monitoring for schizophrenia patients with negative symptoms. Front Psychiatry 2020; 11:574375
110.
Wee ZY, Yong SWL, Chew QH, et al: Actigraphy studies and clinical and biobehavioural correlates in schizophrenia: a systematic review. J Neural Transm (Vienna) 2019; 126:531–558
111.
Raugh IM, James SH, Gonzalez CM, et al: Geolocation as a digital phenotyping measure of negative symptoms and functional outcome. Schizophr Bull 2020; 46:1596–1607
112.
Henson P, D’Mello R, Vaidyam A, et al: Anomaly detection to predict relapse risk in schizophrenia. Transl Psychiatry 2021; 11:28
113.
Fraccaro P, Beukenhorst A, Sperrin M, et al: Digital biomarkers from geolocation data in bipolar disorder and schizophrenia: a systematic review. J Am Med Inform Assoc 2019; 26:1412–1420
114.
Or F, Torous J, Onnela JP: High potential but limited evidence: using voice data from smartphones to monitor and diagnose mood disorders. Psychiatr Rehabil J 2017; 40:320–324
115.
Raugh IM, Chapman HC, Bartolomeo LA, et al: A comprehensive review of psychophysiological applications for ecological momentary assessment in psychiatric populations. Psychol Assess 2019; 31:304–317
116.
Tazawa Y, Wada M, Mitsukura Y, et al: Actigraphy for evaluation of mood disorders: a systematic review and meta-analysis. J Affect Disord 2019; 253:257–269
117.
Moshe I, Terhorst Y, Opoku Asare K, et al: Predicting symptoms of depression and anxiety using smartphone and wearable data. Front Psychiatry 2021; 12:625247
118.
Dagum P: Digital biomarkers of cognitive function. NPJ Digit Med 2018; 1:10
119.
Benoit J, Onyeaka H, Keshavan M, et al: Systematic review of digital phenotyping and machine learning in psychosis spectrum illnesses. Harv Rev Psychiatry 2020; 28:296–304
120.
Simblett SK, Bruno E, Siddi S, et al: Patient perspectives on the acceptability of mHealth technology for remote measurement and management of epilepsy: a qualitative analysis. Epilepsy Behav 2019; 97:123–129
121.
Torous J, Wisniewski H, Bird B, et al: Creating a digital health smartphone app and digital phenotyping platform for mental health and diverse healthcare needs: an interdisciplinary and collaborative approach. J Technol Behav Sci 2019; 4:73–85
122.
Raugh IM, James SH, Gonzalez CM, et al: Geolocation as a digital phenotyping measure of negative symptoms and functional outcome. Schizophr Bull 2020; 46:1596–1607
123.
Bonet L, Torous J, Arce D, et al: ReMindCare, an app for daily clinical practice in patients with first episode psychosis: a pragmatic real-world study protocol. Early Interv Psychiatry 2021; 15:183–192
124.
Menon V, Selvakumar N, Kattimani S, et al: Therapeutic effects of mobile-based text message reminders for medication adherence in bipolar I disorder: are they maintained after intervention cessation? J Psychiatr Res 2018; 104:163–168
125.
US Food and Drug Administration: FDA approves pill with sensor that digitally tracks if patients have ingested their medication. https://www.fda.gov/news-events/press-announcements/fda-approves-pill-sensor-digitally-tracks-if-patients-have-ingested-their-medication. November 13, 2017
126.
Morey TE, Booth M, Wasdo S, et al: Oral adherence monitoring using a breath test to supplement highly active antiretroviral therapy. AIDS Behav 2013; 17:298–306
127.
Firth J, Torous J, Nicholas J, et al: The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017; 16:287–298
128.
Linardon J, Fuller-Tyszkiewicz M: Attrition and adherence in smartphone-delivered interventions for mental health problems: a systematic and meta-analytic review. J Consult Clin Psychol 2020; 88:1–13
129.
Huguet A, Rao S, McGrath PJ, et al: A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLoS One 2016; 11:e0154248
130.
Hrynyschyn R, Dockweiler C: Effectiveness of smartphone-based cognitive behavioral therapy among patients with major depression: systematic review of health implications. JMIR Mhealth Uhealth 2021; 9:e24703
131.
Firth J, Torous J, Nicholas J, et al: Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. J Affect Disord 2017; 218:15–22
132.
Nahum-Shani I, Smith SN, Spring BJ, et al: Just-in-time adaptive interventions (JITAIs) in mobile health: key components and design principles for ongoing health behavior support. Ann Behav Med 2018; 52:446–462
133.
Ben-Zeev D, Scherer EA, Gottlieb JD, et al: mHealth for schizophrenia: patient engagement with a mobile phone intervention following hospital discharge. JMIR Ment Health 2016; 3:e34
134.
Marler JD, Fujii CA, Utley DS, et al: Initial assessment of a comprehensive digital smoking cessation program that incorporates a mobile app, breath sensor, and coaching: cohort study. JMIR Mhealth Uhealth 2019; 7:e12609
135.
Maricich YA, Bickel WK, Marsch LA, et al: Safety and efficacy of a prescription digital therapeutic as an adjunct to buprenorphine for treatment of opioid use disorder. Curr Med Res Opin 2021; 37:167–173
136.
Hser YI, Evans E, Huang D, et al: Long-term outcomes after randomization to buprenorphine/naloxone versus methadone in a multi-site trial. Addiction 2016; 111:695–705
137.
Maricich YA, Xiong X, Gerwien R, et al: Real-world evidence for a prescription digital therapeutic to treat opioid use disorder. Curr Med Res Opin 2021; 37:175–183
138.
McGurk SR, Twamley EW, Sitzer D, et al: A meta-analysis of cognitive remediation in schizophrenia. Am J Psychiatry 2007; 164:1791–1802
139.
Wykes T, Huddy V, Cellard C, et al: A meta-analysis of cognitive remediation for schizophrenia: methodology and effect sizes. Am J Psychiatry 2011; 168:472–485
140.
Rebok GW, Ball K, Guey LT, et al: Ten-year effects of the Advanced Cognitive Training for Independent and Vital Elderly Cognitive Training Trial on cognition and everyday functioning in older adults. J Am Geriatr Soc 2014; 62:16–24
141.
Wolinsky FD, Vander Weg MW, Howren MB, et al: The effect of cognitive speed of processing training on the development of additional IADL difficulties and the reduction of depressive symptoms results from the IHAMS randomized controlled trial. J Aging Health 2015; 27:334–354
142.
Smith GE, Housen P, Yaffe K, et al: A cognitive training program based on principles of brain plasticity: results from the Improvement in Memory With Plasticity-Based Adaptive Cognitive Training (IMPACT) study. J Am Geriatr Soc 2009; 57:594–603
143.
Bellani M, Biagianti B, Zovetti N, et al: The effects of cognitive remediation on cognitive abilities and real-world functioning among people with bipolar disorder: a systematic review. J Affect Disord 2019; 257:691–697
144.
Motter JN, Grinberg A, Lieberman DH, et al: Computerized cognitive training in young adults with depressive symptoms: effects on mood, cognition, and everyday functioning. J Affect Disord 2019; 245:28–37
145.
McGurk SR, Mueser KT, Feldman K, et al: Cognitive training for supported employment: 2–3 year outcomes of a randomized controlled trial. Am J Psychiatry 2007; 164:437–441
146.
McGurk SR, Mueser KT, Xie H, et al: Cognitive enhancement treatment for people with mental illness who do not respond to supported employment: a randomized controlled trial. Am J Psychiatry 2015; 172:852–861
147.
Lindenmayer JP, Khan A, McGurk SR, et al: Does social cognition training augment response to computer-assisted cognitive remediation for schizophrenia? Schizophr Res 2018; 201:180–186
148.
Twamley EW, Thomas KR, Burton CZ, et al: Compensatory cognitive training for people with severe mental illnesses in supported employment: a randomized controlled trial. Schizophr Res 2019; 203:41–48
149.
Harvey PD, Balzer AM, Kotwicki RJ: Training engagement, baseline cognitive functioning, and cognitive gains with computerized cognitive training: a cross-diagnostic study. Schizophr Res Cogn 2020; 19:100150
150.
Biagianti B, Fisher M, Neilands TB, et al: Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia. Neuropsychology 2016; 30:998–1008
151.
Mahncke HW, Kim SJ, Rose A, et al: Evaluation of a plasticity-based cognitive training program in schizophrenia: results from the eCaesar trial. Schizophr Res 2019; 208:182–189
152.
Loewy R, Fisher M, Schlosser DA, et al: Intensive auditory cognitive training improves verbal memory in adolescents and young adults at clinical high risk for psychosis. Schizophr Bull 2016; 42(suppl 1):S118–S126
153.
Fisher M, Loewy R, Carter C, et al: Neuroplasticity-based auditory training via laptop computer improves cognition in young individuals with recent onset schizophrenia. Schizophr Bull 2015; 41:250–258
154.
Fisher M, Mellon SH, Wolkowitz O, et al: Neuroscience-informed auditory training in schizophrenia: a final report of the effects on cognition and serum brain-derived neurotrophic factor. Schizophr Res Cogn 2016; 3:1–7
155.
Bowie CR, Grossman M, Gupta M, et al: Cognitive remediation in schizophrenia: efficacy and effectiveness in patients with early versus long-term course of illness. Early Interv Psychiatry 2014; 8:32–38
156.
Lindenmayer JP, McGurk SR, Mueser KT, et al: A randomized controlled trial of cognitive remediation among inpatients with persistent mental illness. Psychiatr Serv 2008; 59:241–247
157.
Thomas ML, Bismark AW, Joshi YB, et al: Targeted cognitive training improves auditory and verbal outcomes among treatment refractory schizophrenia patients mandated to residential care. Schizophr Res 2018; 202:378–384
158.
Morimoto SS, Altizer RA, Gunning FM, et al: Targeting cognitive control deficits with neuroplasticity-based computerized cognitive remediation in patients with geriatric major depression: a randomized, double-blind, controlled trial. Am J Geriatr Psychiatry 2020; 28:971–980
159.
Bowie CR, Gupta M, Holshausen K, et al: Cognitive remediation for treatment-resistant depression: effects on cognition and functioning and the role of online homework. J Nerv Ment Dis 2013; 201:680–685
160.
Corbett A, Owen A, Hampshire A, et al: The effect of an online cognitive training package in healthy older adults: an online randomized controlled trial. J Am Med Dir Assoc 2015; 16:990–997
161.
Kollins SH, DeLoss DJ, Cañadas E, et al: A novel digital intervention for actively reducing severity of paediatric ADHD (STARS-ADHD): a randomised controlled trial. Lancet Digit Health 2020; 2:e168–e178
162.
Leark RA, Dupuy TR, Greenberg LM, et al: Test of Variables of Attention: Professional Manual (Edition Number 9.1-12-g63538d5). Los Alamitos, Calif, TOVA Corporation, December 30, 2020
163.
Swerdlow NR: Beyond antipsychotics: pharmacologically augmented cognitive therapies (PACTs) for schizophrenia. Neuropsychopharmacology 2012; 37:310–311
164.
Swerdlow NR, Tarasenko M, Bhakta SG, et al: Amphetamine enhances gains in auditory discrimination training in adult schizophrenia patients. Schizophr Bull 2017; 43:872–880
165.
McClure MM, Graff F, Triebwasser J, et al: Guanfacine augmentation of a combined intervention of computerized cognitive remediation therapy and social skills training for schizotypal personality disorder. Am J Psychiatry 2019; 176:307–314
166.
Michalopoulou PG, Lewis SW, Drake RJ, et al: Modafinil combined with cognitive training: pharmacological augmentation of cognitive training in schizophrenia. Eur Neuropsychopharmacol 2015; 25:1178–1189
167.
Swerdlow NR, Bhakta SG, Talledo J, et al: Memantine effects on auditory discrimination and training in schizophrenia patients. Neuropsychopharmacology 2020; 45:2180–2188
168.
Lenze EJ, Stevens A, Waring JD, et al: Augmenting computerized cognitive training with vortioxetine for age-related cognitive decline: a randomized controlled trial. Am J Psychiatry 2020; 177:548–555
169.
Bhakta SG, Chou HH, Rana B, et al: Effects of acute memantine administration on MATRICS Consensus Cognitive Battery performance in psychosis: testing an experimental medicine strategy. Psychopharmacology (Berl) 2016; 233:2399–2410
170.
Ortiz-Orendain J, Covarrubias-Castillo SA, Vazquez-Alvarez AO, et al: Modafinil for people with schizophrenia or related disorders. Cochrane Database Syst Rev 2019; 12:CD008661
171.
Fleischhacker WW, Podhorna J, Gröschl M, et al: Efficacy and safety of the novel glycine transporter inhibitor BI 425809 once daily in patients with schizophrenia: a double-blind, randomised, placebo-controlled phase 2 study. Lancet Psychiatry 2021; 8:191–201
172.
Harvey PD, Bowie CR, McDonald S, et al: Evaluation of the efficacy of BI 425809 pharmacotherapy in patients with schizophrenia receiving computerized cognitive training: methodology for a double-blind, randomized, parallel-group trial. Clin Drug Investig 2020; 40:377–385
173.
Nuechterlein KH, Ventura J, Subotnik KL, et al: A randomized controlled trial of cognitive remediation and long-acting injectable risperidone after a first episode of schizophrenia: improving cognition and work/school functioning. Psychol Med 2022; 52:1517–1526
174.
Coravos A, Goldsack JC, Karlin DR, et al: Digital medicine: a primer on measurement. Digital Biomarkers 2019; 3:31–71
175.
US Food and Drug Administration: Digital Health Software Precertification (Pre-Cert) Program. https://www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-pre-cert-program. May 6, 2021
176.
US Food and Drug Administration: Mobile Medical Applications: Guidance for Industry and FDA Staff. Washington, DC, US Food and Drug Administration, February 9, 2015
177.
Torous JB, Chan SR, Gipson SYMT, et al: A hierarchical framework for evaluation and informed decision making regarding smartphone apps for clinical care. Psychiatr Serv 2018; 69:498–500
178.
Carlo AD, Hosseini Ghomi R, Renn BN, et al: By the numbers: ratings and utilization of behavioral health mobile applications. NPJ Digit Med 2019; 2:54–58
179.
Lagan S, Aquino P, Emerson MR, et al: Actionable health app evaluation: translating expert frameworks into objective metrics. NPJ Digit Med 2020; 3:100
180.
Federal Trade Commission: Lumosity to pay $2 million to settle FTC deceptive advertising charges for its “brain training” program. January 5, 2016. https://www.ftc.gov/news-events/press-releases/2016/01/lumosity-pay-2-million-settle-ftc-deceptive-advertising-charges
181.
Federal Trade Commission: Developer of popular women’s fertility-tracking app settles FTC allegations that it misled consumers about the disclosure of their health data. January 13, 2021. https://www.ftc.gov/news-events/press-releases/2021/01/developer-popular-womens-fertility-tracking-app-settles-ftc

Information & Authors

Published In

American Journal of Psychiatry
Pages: 897 - 914
PubMed: 36200275

History

Received: 24 December 2021
Revision received: 14 March 2022
Revision received: 2 May 2022
Accepted: 17 May 2022
Published online: 6 October 2022
Published in print: December 01, 2022

Keywords

  1. Cognition/Learning/Memory
  2. Emotion
  3. Psychotherapy
  4. Assessment and Interviewing
  5. Neurodevelopmental Disorders
  6. Attention Deficit Hyperactivity Disorder (ADHD)
  7. Posttraumatic Stress Disorder (PTSD)
  8. Depressive Disorders
  9. Bipolar and Related Disorders
  10. Schizophrenia Spectrum and Other Psychotic Disorders

Authors

Details

Philip D. Harvey, Ph.D. [email protected]
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).
Colin A. Depp, Ph.D.
Albert A. Rizzo, Ph.D.
Gregory P. Strauss, Ph.D.
David Spelber, M.D.
Linda L. Carpenter, M.D.
Ned H. Kalin, M.D.
John H. Krystal, M.D.
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).
William M. McDonald, M.D.
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).
Charles B. Nemeroff, M.D., Ph.D.
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).
Carolyn I. Rodriguez, M.D., Ph.D.
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).
Alik S. Widge, M.D., Ph.D.
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).
John Torous, M.D.
Department of Psychiatry, University of Miami Miller School of Medicine, Miami, and Miami VA Medical Center (Harvey); Department of Psychiatry, UC San Diego Medical Center, La Jolla (Depp); USC Institute for Creative Technologies, University of Southern California, Los Angeles (Rizzo); Department of Psychology, University of Georgia, Athens (Strauss); Department of Psychiatry, Dell Medical Center, University of Texas at Austin (Spelber, Nemeroff); Department of Psychiatry and Human Behavior, Alpert Medical School of Brown University, Providence, R.I. (Carpenter); Department of Psychiatry, University of Wisconsin Medical School, Madison (Kalin); Department of Psychiatry, Yale University School of Medicine, New Haven, Conn. (Krystal); Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta (McDonald); Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford; Veterans Affairs Palo Alto Health Care System, Palo Alto (Rodriguez); Department of Psychiatry and Behavioral Sciences and Medical Discovery Team–Addictions, University of Minnesota, Minneapolis (Widge); Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston (Torous).

Notes

Send correspondence to Dr. Harvey ([email protected]).

Competing Interests

Dr. Harvey has served as a consultant for or received travel reimbursements from Acadia, Alkermes, BioXcel, Boehringer Ingelheim, Karuna Pharma, Merck Pharma, Minerva Pharma, Roche Pharma, and Sunovion (DSP) Pharma; he receives royalties for the Brief Assessment of Cognition in Schizophrenia (owned by WCG Verasci, Inc., and contained in the MATRICS Consensus Cognitive Battery); he is chief scientific officer of i-Function and serves as a scientific consultant for EMA Wellness; he owns stock options in Mindstrong and equity interest in i-Function and EMA Wellness; he is compensated as an editor-in-chief by Elsevier; and he serves on the board of Skyland Trail. Dr. Rizzo has served as a consultant for Penumbra and Cognitive Leap. Dr. Strauss has served as a consultant for Acadia, Boehringer Ingelheim, Lundbeck, Minerva Neuroscience, Otsuka, and Sunovion; he is a research adviser at Quantic Innovations; he receives royalties in relation to commercial use of the Brief Negative Symptom Scale (BNSS), which are donated to the Brain and Behavior Research Foundation; and he has conducted trainings for MedAvante-ProPhase in conjunction with use of the BNSS. Dr. Carpenter has research grants or clinical trial contracts with Affect Neuro, Janssen, NIH, Neuronetics, and Neurolief; she has served as a consultant for Affect Neuro, Janssen, Neuronetics, Neurolief, Nexstim, Otsuka, Sage Therapeutics, and Sunovion; and she has received in-kind support for research projects from Affect Neuro, Janssen, Neuronetics, and Neurolief, and an equipment loan from Nexstim. Dr. Kalin is Editor-in-Chief of the American Journal of Psychiatry, and Drs. McDonald and Rodriguez are Deputy Editors; the Editors’ disclosures are reported in the April issue of the Journal. Dr. Krystal has served as a consultant for Aptinyx, Biogen, Bionomics, Boehringer Ingelheim, Epiodyne, EpiVario, Janssen Research and Development, Jazz Pharmaceuticals, Otsuka America Pharmaceutical, Spring Care, and Sunovion Pharmaceuticals; he is listed as co-inventor on a patent licensed by Yale to Spring Health; he serves on the board of directors of Freedom Biosciences and on scientific advisory boards for Biohaven Pharmaceuticals, BioXcel Therapeutics (clinical advisory board), Cerevel Therapeutics, Delix Therapeutics, Eisai, EpiVario, Jazz Pharmaceuticals, Neumora Therapeutics, Neurocrine Biosciences, Novartis, PsychoGenics, Tempero Bio, and Terran Biosciences; he holds stock in Biohaven Pharmaceuticals and Spring Care and stock options in Biohaven Pharmaceuticals Medical Sciences, EpiVario, Neumora Therapeutics, Tempero Bio, and Terran Biosciences; he serves on the editorial board of Biological Psychiatry; he is named on U.S. patents 5,447,948, 8,778,979, and 9,592,207, patent application nos. 15/379,013, 61/973,961, 62/444,552, 62/719,935, and 63/125,181, and USPTO docket number Y0087.70116US00; and he has received study medications from AstraZeneca, Cerevel, and Novartis. Dr. Nemeroff has served as a consultant for AbbVie, Acadia Pharmaceuticals, Alfasigma, ANeuroTech (division of Anima BV), BioXcel Therapeutics, Corcept Therapeutics Pharmaceuticals Company, EcoR1, EMA Wellness, Engrail Therapeutics, GoodCap Pharmaceuticals, Intra-Cellular Therapies, Magstim, Navitor Pharmaceuticals, Neuritek, Pasithea Therapeutic Corp., Sage, Senseye, Signant Health, Silo Pharma, SK Life Science, and XW Pharma and on scientific advisory boards for ANeuroTech, the Anxiety and Depression Association of America (ADAA), the Brain and Behavior Research Foundation, Heading Health, the Laureate Institute for Brain Research, Magnolia CNS, Pasithea Therapeutics, Sage, Skyland Trail, Signant Health, and TRUUST Neuroimaging; he serves on boards of directors for ADAA, Gratitude America, Lucy Scientific Discovery, and Xhale Smart; he is a stockholder in Antares, BI Gen Holdings, Corcept Therapeutics Pharmaceuticals Company, EMA Wellness, Seattle Genetics, Naki Health, TRUUST Neuroimaging, and Xhale; and he holds patents on a method and devices for transdermal delivery of lithium (US 6,375,990B1) and on a method of assessing antidepressant drug therapy via transport inhibition of monoamine neurotransmitters by ex vivo assay (US 7,148,027B2). Dr. Widge has served as a consultant for Dandelion Science; he has received device donations from Medtronic; and he has unlicensed patents in the area of biomarkers and methods for tracking mental health symptoms. Dr. Torous has received support from Otsuka and is a cofounder of Precision Mental Wellness. The other authors report no financial relationships with commercial interests.
