Clinical Synthesis
Published Online: 24 April 2020

Real-Time Monitoring: A Key Element in Personalized Health and Precision Health

Abstract

Current management of psychiatric disorders relies heavily on retrospective, subjective reports provided by patients and their families. Consequently, psychiatric services are often delivered inefficiently and with suboptimal outcomes. Recent advances in computing and sensor technologies have enabled the development of real-time monitoring systems for the diagnosis and management of psychiatric disorders. The state of these technologies is rapidly evolving, with passive monitoring and predictive modeling as two areas that have great potential to affect psychiatric care. Although outpatient psychiatry probably stands to benefit the most from the use of real-time monitoring technologies, there are also several ways in which inpatient psychiatry may benefit. As the capabilities of these technologies increase and their use becomes more common, many ethical and legal issues will need to be considered. The role of governmental regulatory bodies and nongovernmental organizations in providing oversight of the implementation of these technologies is an active area of discussion.

History of Key Concepts

Real-time monitoring has seen varied adoption across medical specialties and conditions. In the treatment of some conditions, real-time monitoring has become so routine that forgoing it would be a significant deviation from usual practice and might not even register as an instance of real-time monitoring. For example, an endocrinologist managing insulin therapy for patients with diabetes who did not incorporate at-home glucose monitoring into usual practice and instead checked levels only at quarterly office visits would likely be considered to be providing substandard care. Because psychiatric disorders are characterized by a complex integration of subjective and objective phenomena, the use of real-time monitoring technologies in psychiatry is more complicated than in many other specialties.
Several concepts inform our current understanding of and approaches to the use of real-time monitoring in psychiatry (Table 1). One of the oldest is ecological momentary assessment (EMA), which “involves repeated sampling of subjects’ current behaviors and experiences in real time, in subjects’ natural environments” (1). The principles of EMA were first described by Stone and Shiffman in 1994 and addressed two key limitations of then-common assessment methods: that “laboratory studies . . . may not faithfully capture real-world phenomena” and that “[r]etrospective self-report data . . . are subject to a number of biases” (2). Stone and Shiffman focused on the collection of data in research settings; however, these limitations are also present in the typical model of outpatient psychiatric care, in which patients are seen in the office, asked to provide retrospective reports of their symptomatology, and administered a mental status exam. Because the patient reports are retrospective and subjective, they are subject to several biases, such as recency, novelty, and mood-congruent memory effects (3). Although the mental status exam provides more objective data, it is limited to those elements that can be assessed in the office setting. For example, a mental status exam may provide objective information about a patient’s thought process and affect regulation, but a patient may demonstrate adequate functioning in these domains in a quiet office, and how that translates to the patient’s day-to-day life is not always clear. Furthermore, the mental status exam provides an assessment at a single point in time; depending on the frequency of exams, there may be gaps between assessments during which salient changes occur. Although early efforts in EMA tended to focus on delivering prompts to patients via phones and personal digital assistants, it was also recognized that these devices could be used to capture other data streams, such as “audio, video, geographical positioning, and (through attachments) some physiological and biological data” (3).
TABLE 1. Key terms in real-time monitoring
Ecological momentary assessment: A methodology in which clinically relevant data are obtained from patients in real time (or nearly real time) as they go about their usual daily routines
Biomarker: Anything that can be measured from a patient that can be reliably associated with a given disorder or prognosis
Endophenotype: A type of biomarker that has a demonstrated association with a specific disorder as well as a genetic basis
Digital phenotype: A type of biomarker that is obtained by analyzing the data resulting from a patient’s use of digital technologies
Digital exhaust: Data passively created by people’s use of digital technologies; historically, such data were byproducts of the use of the technology (e.g., website access logs)
Digital exhaust, also described as data exhaust, is a more recently developed concept that has led to new paradigms of data collection. These terms refer to data that are passively generated as byproducts of people’s interactions with digital technologies (e.g., website or mobile app log files) that store granular records of the various actions users take when interacting with the digital resource in question (4). It is now recognized that these data can be analyzed to generate useful information about user behaviors and preferences. Although capturing these data may require software written specifically for that purpose, collecting digital exhaust is not typically the primary reason the technology is used. In this regard, digital exhaust data are passively and unobtrusively collected. This contrasts with active data collection, in which a user might be asked to perform a specific task to assess his or her cognition or even simply to report his or her mood.
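To make the passive-collection idea concrete, the sketch below derives a simple behavioral feature (the fraction of logged events that occur overnight) from the kind of event log an app already keeps as a byproduct of normal use. This is a minimal illustration only: the log format, field names, and the feature itself are assumptions made for the example and do not correspond to any published digital exhaust pipeline.

```python
from collections import Counter
from datetime import datetime

# Hypothetical app log: one event per line, "ISO timestamp<TAB>user id<TAB>event name".
RAW_LOG = """\
2020-02-01T23:55:10\tu42\tapp_open
2020-02-02T00:03:41\tu42\tmessage_sent
2020-02-02T02:47:02\tu42\tapp_open
2020-02-02T13:10:55\tu42\tapp_open
"""

def nocturnal_activity_fraction(raw_log: str, night_hours=range(0, 6)) -> float:
    """Fraction of logged events occurring between midnight and 6 a.m.

    The log was not created to measure sleep or mood; the feature is derived
    entirely from records the app keeps anyway (i.e., digital exhaust).
    """
    timestamps = [datetime.fromisoformat(line.split("\t")[0])
                  for line in raw_log.strip().splitlines()]
    at_night = Counter(ts.hour in night_hours for ts in timestamps)
    return at_night[True] / len(timestamps)

if __name__ == "__main__":
    print(f"Nocturnal activity fraction: {nocturnal_activity_fraction(RAW_LOG):.2f}")
```

Nothing is asked of the user here; the contrast with active data collection would be a prompt that explicitly asks the user to rate his or her mood.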
The most recently developed concept regarding the use of real-time monitoring in psychiatry is the digital phenotype. Whereas a traditional phenotype is defined as the product of the interaction between an organism’s genotype and its environment, a digital phenotype is the product of the interaction between an individual’s psychiatric disorder(s) and that individual’s use of digital technologies (5). Like a traditional phenotype, the digital phenotype is a rather broad and abstract concept: many technologies and data streams can be analyzed, and a multitude of methods can be used to conduct these analyses. Consequently, typical implementations of digital phenotypes focus on specific disorders and digital technologies.
What unifies all these concepts is that they describe methods for obtaining views into people’s daily experiences and behaviors by capturing data in real time; thus, they can all be considered as different, specific frameworks for conceptualizing real-time monitoring. Exactly what data are collected and how they are collected will vary depending on the specific implementation of these frameworks.

Theoretical Underpinnings

Psychiatric disorders are heterogeneous in both their presentations and their etiologies. One way in which the field has attempted to address this heterogeneity is through the concept of endophenotypes, first described by Gottesman and Gould (6). The essence of this idea is that there are reliably measurable traits or behaviors associated with specific psychiatric disorders and that these traits or behaviors have a genetic basis. When a given endophenotype can be used to separate people with a disorder into different subgroups, those groups can be thought of as different subtypes of the disorder (6, 7). It is worth noting that Gottesman and Gould’s original specification of endophenotypes included rather strict criteria and that the term is sometimes used more loosely to describe any characteristic that can be reliably associated with a given disorder; such characteristics would be more properly described as biomarkers (8). It is also worth noting that several biomarkers and purported endophenotypes have been found to be transdiagnostic and thus would not meet the stricter definition proposed by Gottesman and Gould (9, 10).
When considered in the context of endophenotypes and biomarkers, digital phenotypes can be regarded as a form of biomarker. The development of biomarkers in psychiatry has a rather storied history, with many candidates but nothing that has been widely accepted into clinical use. Many of the most promising biomarkers in psychiatry involve the use of neuroimaging or the assay of a physical analyte, such as blood or cerebrospinal fluid (11). Digital phenotypes distinguish themselves from these “traditional” biomarkers in that they can be sampled at high frequency and without the need for highly specialized equipment or testing procedures. This creates great potential for the use of digital biomarkers in population screening and management as well as in tracking the progression of a disorder.
One of the difficulties in using biomarkers to make clinically relevant predictions is that the presence of group-level differences in a biomarker does not necessarily mean that the biomarker will be useful in making a prediction about an individual. Arbabshirani et al. described this phenomenon with regard to neuroimaging-based biomarkers, but the same general principles apply to digital biomarkers (12). One approach that at least partially addresses this issue is the “n-of-1” paradigm. In an n-of-1 study, rather than trying to identify group-level parameters, the primary unit of observation is the individual (13). These types of studies are often used in intervention trials in which the aim is to optimize a given clinical outcome for an individual. There are many design options for n-of-1 studies; in some designs, subjects are randomized to intervention arms multiple times over the course of the study, with this randomization guided in real time by the response to previous interventions. The goal of such studies is to create “just-in-time adaptive interventions” (JITAIs), which, because of their context sensitivity and high level of personalization, could theoretically be more effective than traditional interventions (14). Although the primary goal of n-of-1 studies is to draw inferences for individuals, their results can be pooled to estimate population-level parameters (15). Given the complexity of the signals generated by many real-time monitoring technologies and the complexity of psychiatric disorders themselves, n-of-1 trials may prove to be an important research methodology for fully leveraging real-time monitoring in understanding psychiatric disorders; however, discerning the population-level outcomes that would increase our understanding of these disorders would likely require large numbers of subjects and longitudinal monitoring. A recent review of the use of n-of-1 trials in schizophrenia suggests that the methodology is underutilized (16).
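To illustrate the kind of repeated, response-guided randomization described above, the following sketch simulates a single participant whose probability of receiving a prompt at each decision point is nudged up or down according to the proximal response observed after previous prompts. The effect size, update rule, and function names are hypothetical and are not a reproduction of any cited trial design or of the micro-randomized trial methodology of Klasnja et al.

```python
import random

def simulate_n_of_1(n_decision_points: int = 60, start_prob: float = 0.5, seed: int = 42):
    """Toy within-person (n-of-1) simulation with adaptive randomization.

    At each decision point the single participant is randomized to receive
    (or not receive) an intervention prompt; the randomization probability is
    then adjusted based on the proximal outcome, mimicking real-time guidance
    by response to previous interventions. All numbers are illustrative.
    """
    rng = random.Random(seed)
    prob = start_prob
    records = []
    for t in range(n_decision_points):
        delivered = rng.random() < prob
        # Hypothetical proximal outcome (higher = better); prompts help on average.
        outcome = rng.gauss(0.3 if delivered else 0.0, 1.0)
        records.append((t, delivered, prob, outcome))
        if delivered:
            # Deliver more often if the last prompt appeared helpful, less often if not.
            prob = min(0.9, prob + 0.05) if outcome > 0 else max(0.1, prob - 0.05)
    return records

if __name__ == "__main__":
    for t, delivered, prob, outcome in simulate_n_of_1()[:5]:
        print(f"t={t:02d}  delivered={delivered!s:5}  p={prob:.2f}  outcome={outcome:+.2f}")
```

Pooling many such individual records across participants, as described by Zucker et al. (15), is what would allow population-level parameters to be estimated.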

Current Efforts and Challenges

Arguably, the technology that has most enabled the development and use of real-time monitoring is the smartphone. Because of its ubiquity and its use in many daily tasks, it serves as an ideal platform for collecting data streams that can be used in models of psychiatric functioning. A nonexhaustive list of the types of data that have been investigated with regard to their utility for building such models includes typing kinematics (17–20), acoustic characteristics of speech (21), number of phone calls (22, 23), number of text messages (22, 23), pattern of phone calls to contacts stored on the smartphone (23), locations visited as measured via global positioning system signal (23, 24), and patterns of app usage (23, 25). Stand-alone wearable sensors have also been investigated, including off-the-shelf activity trackers (26, 27) and custom devices to measure electrodermal activity and galvanic skin response (28, 29); however, the majority of the research published to date has focused on smartphone-derived sensor data.
Recently, there have been efforts to systematically review findings to date (30, 31); however, as discussed by Rohani et al. in their review of the concordance between sensor data and depressive symptoms, heterogeneity in measurement and analytic methods makes it difficult to draw generalizable conclusions (32). This difficulty is exacerbated by the nature of technologies such as smartphones: sensor functionality and the types of data available often depend on the phone model and software version, making it harder to create robust, generalizable models.
Although outpatient psychiatry is likely to benefit the most from the integration of real-time monitoring technologies, there is also opportunity for inpatient psychiatry. Given that patients hospitalized on a psychiatric inpatient unit are acutely ill and may have disorders characterized by paranoia, the question becomes whether real-time monitoring would be acceptable to psychiatric inpatients. Ben-Zeev et al. examined this question and found that of 20 inpatients with schizophrenia or schizoaffective disorder approached for enrollment in a study in which participants carried a special study phone for real-time monitoring, 13 expressed interest. Notably, two of the 13 were found to have insufficient capacity to consent to participation (33). Another feasibility and acceptability study, in an inpatient adolescent population, was conducted by Kleiman et al. In this study, 50 participants were asked to wear a wrist-worn monitor as often as possible, and the study found that they wore the device an average of 18 hours per day (34). These are highly specific populations, so how well these findings generalize is not clear; nevertheless, these studies provide encouraging early evidence that the use of real-time monitoring technologies with inpatient populations may be a viable approach.
One of the interesting aspects of applying real-time monitoring to the inpatient setting is that it allows for the use of nonwearable sensors incorporated into the environment. One such possibility is the use of closed-circuit video cameras. Tracking people across video frames is a well-studied problem in the field of computer vision, with increasingly sophisticated algorithms being developed (35–37). Information derived from such algorithms could be used to quantify psychiatrically relevant measures such as overall activity levels, psychomotor agitation and retardation, stereotyped behaviors such as pacing, and the amount and frequency of interpersonal interactions. Other information that could be derived from video includes emotion recognition based on facial expressions (38) and gait analysis (39). There are ethical and legal questions to be considered in the deployment of any of these systems, such as how patients and nonpatients (e.g., staff, visitors) being monitored by the same video system could provide consent to different types of analysis and how any information derived by the algorithms should be handled. Also, the utility of the information would need to be demonstrated and weighed against that of the current practice of behavioral observation reports provided by inpatient staff.
In terms of currently available systems for real-time monitoring, there are an estimated 10,000 to 15,000 mental health apps available for download from various app stores (40, 41), but it is not clear how many of these include some type of real-time monitoring. This lack of precision raises the question of what, exactly, constitutes real-time monitoring. If we include apps that allow users to track things such as their mood, sleep, menstrual cycle, and other potentially psychiatrically relevant functions, the number of apps is probably quite large.
To be clinically useful, real-time monitoring data must be distilled into relevant information and made available to psychiatric practitioners. For most psychiatric practitioners today, the primary repository for clinical data is the electronic health record (EHR); however, EHR vendors and academic researchers are only beginning to explore the possibility of importing patient-generated data from other digital systems, such as smartphone apps, into the EHR. Among the challenges raised by this function is the development of data standards that can ensure the integrity and fidelity of these data and track their provenance and veracity. Consider, for example, importing sleep data captured by a patient’s smartphone app. Such data could help a clinician determine whether a patient is in a mood episode and may be more accurate than the patient’s own retrospective self-reports. A simple list of the number of hours the patient has slept each night since the last visit would probably not be helpful, so some relevant summary measures would need to be created. It would also be important to know whether the hours of sleep that feed into these summary measures come from patient self-report, are passively measured via some proxy such as phone usage, or are derived from an algorithm based on multiple data streams. In the case of an algorithm, it would also be important to know whether the algorithm has changed over time and, if so, which version was used. All of these questions will need to be answered to establish the validity of the metric. Of course, the ultimate goal is for most of this to be hidden from the clinician end user, much as a clinician today can look at a lab value and trust that the assays and algorithms used to calculate it were accurate. The Office of the National Coordinator for Health Information Technology recently published a white paper describing the various issues that would need to be addressed by a framework for incorporating such data into EHRs (42).
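One way to carry this provenance alongside the summary measures is to attach the data source and algorithm version to every nightly observation before summarizing. The sketch below is a minimal illustration of that idea; the field names, source labels, and summary statistics are assumptions made for the example and do not reflect any existing EHR interface or patient-generated health data standard.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import List

@dataclass
class SleepObservation:
    night: date
    hours: float
    source: str             # e.g., "self_report", "phone_usage_proxy", "multistream_algorithm"
    algorithm_version: str   # version of the estimation algorithm, if any ("" for self-report)

def summarize(observations: List[SleepObservation]) -> dict:
    """Reduce nightly observations to summary measures a clinician might review,
    while preserving how each underlying value was obtained."""
    hours = [o.hours for o in observations]
    return {
        "nights": len(observations),
        "mean_hours": round(mean(hours), 1),
        "min_hours": min(hours),
        "sources": sorted({o.source for o in observations}),
        "algorithm_versions": sorted({o.algorithm_version for o in observations if o.algorithm_version}),
    }

if __name__ == "__main__":
    observations = [
        SleepObservation(date(2020, 3, 1), 7.5, "multistream_algorithm", "1.2.0"),
        SleepObservation(date(2020, 3, 2), 4.0, "phone_usage_proxy", "1.2.0"),
        SleepObservation(date(2020, 3, 3), 6.0, "self_report", ""),
    ]
    print(summarize(observations))
```

The design point is simply that every summary value a clinician sees can be traced back to how it was obtained, which is what would allow the metric to be trusted in the same way a laboratory value is trusted today.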

Assessing App Quality

With the proliferation of mental health apps, there has been increasing recognition of the need for assessment of these apps beyond the user-submitted ratings found in app stores.
In January 2019, the Food and Drug Administration (FDA) launched the Software Precertification Pilot Program. The goal of this program is to “help inform the development of a future regulatory model that will provide more streamlined and efficient regulatory oversight of software-based medical devices developed by manufacturers who have demonstrated a robust culture of quality and organizational excellence” (43). An important feature of this proposed model is that, rather than focusing on vetting individual apps, the FDA would instead focus on vetting the companies that make the apps. Nine companies were selected to participate in the program, including Apple, Fitbit, and Verily (43). The next steps of the program include a comparison of the results of the proposed model with the traditional medical device clearance process (i.e., vetting individual apps). The FDA is currently soliciting public comments regarding the program (44).
The American Psychiatric Association (APA) endorses a five-step process for practitioners to follow in determining what advice to give patients who are considering using an app. This process is based on a hierarchical framework proposed by Torous et al. (45). The steps include assessing the risks associated with using an app, including the protection afforded to the patient’s private data; the evidence supporting the app’s proposed benefits; the usability of the app; and the app’s interoperability (i.e., the ease with which data collected by the app can be shared with other technologies, such as other apps or an EHR system) (46). At the time of writing, the APA is also planning to form a panel that will use this model to rate apps and publish its evaluations online (47).
Several projects have already published databases of behavioral health app reviews similar to what the APA is proposing (Table 2). Each of these projects maintains its own framework for assessing apps. Carlo et al. recently published an article analyzing the concordance between these frameworks for the most downloaded apps for the iOS and Android operating systems (48). They focused their analysis on the three projects that have rated the most apps: the Organisation for the Review of Care and Health Applications (ORCHA), PsyberGuide, and MindTools.io. Carlo et al. found that agreement between the projects was highest for the “credibility and evidence base” domain, with “fair agreement”; they found only “slight agreement” for the domains of “user experience” and “data use and security.” ORCHA had published reviews for the highest fraction of the top 25 most popular apps; however, PsyberGuide was the most visited of the review sites.
TABLE 2. Current projects providing publicly available reviews of apps for mental health
ORCHA (Organisation for the Review of Care and Health Applications), https://orcha.co.uk: privately held company based in the United Kingdom; 412 published mental health app reviews
PsyberGuide, https://psyberguide.org: project of the nonprofit organization One Mind; 198 published mental health app reviews
MindTools.io, https://mindtools.io: nonprofit website that spun off from an academic project; 96 published mental health app reviews
Review counts are as of September 19, 2019.

Conclusions and Future Directions

There is still much work to be done on validating the scientific basis of current findings. Additionally, translational and implementation work will need to be conducted to ensure that the information obtained via real-time monitoring systems is both relevant and incorporated into clinical workflows and information systems in such a way that it can affect clinical decision making and improve clinical outcomes. Throughout all of this, respect for principles of privacy and fairness will need to be maintained. For practitioners interested in incorporating real-time monitoring practices into their current clinical practice, the APA’s app evaluation model provides a useful framework for selecting appropriate apps.
Real-time monitoring paradigms, such as digital phenotyping, have great potential to transform the ways in which we study and treat psychiatric disorders. Although we have seen similar promises from past breakthroughs (e.g., genetics, epigenetics, neuroimaging), digital phenotyping distinguishes itself in that it can be much more readily and inexpensively deployed and that its feasibility is being facilitated by changes in our society—namely, the increasing use of digital technologies in more and more aspects of people’s daily lives. With these forces at play, it appears inevitable that we will see rising adoption of real-time monitoring technologies, in which case the relevant issue will become how to ensure that this adoption proceeds safely and effectively.

References

1.
Shiffman S, Stone AA, Hufford MR: Ecological momentary assessment. Annu Rev Clin Psychol 2008; 4:1–32
2.
Stone AA, Shiffman S: Ecological momentary assessment (EMA) in behavioral medicine. Ann Behav Med 1994; 16:199–202
3.
Trull TJ, Ebner-Priemer UW: Using experience sampling methods/ecological momentary assessment (ESM/EMA) in clinical assessment and clinical research: introduction to the special section. Psychol Assess 2009; 21:457–462
4.
Data Exhaust [Definition from Techopedia]. https://www.techopedia.com/definition/30319/data-exhaust. Accessed Feb 9, 2020
5.
Jain SH, Powers BW, Hawkins JB, et al.: The digital phenotype. Nat Biotechnol 2015; 33:462–463
6.
Gottesman II, Gould TD: The endophenotype concept in psychiatry: etymology and strategic intentions. Am J Psychiatry 2003; 160:636–645
7.
Cannon TD, Keller MC: Endophenotypes in the genetic analyses of mental disorders. Annu Rev Clin Psychol 2006; 2:267–290
8.
Beauchaine TP: The role of biomarkers and endophenotypes in prevention and treatment of psychopathological disorders. Biomark Med 2009; 3:1–3
9.
Pinto JV, Moulin TC, Amaral OB: On the transdiagnostic nature of peripheral biomarkers in major psychiatric disorders: a systematic review. Neurosci Biobehav Rev 2017; 83:97–108
10.
Beauchaine TP, Constantino JN: Redefining the endophenotype concept to accommodate transdiagnostic vulnerabilities and etiological complexity. Biomark Med 2017; 11:769–780
11.
Lozupone M, La Montagna M, D’Urso F, et al: The role of biomarkers in psychiatry; in Reviews on Biomarker Studies in Psychiatric and Neurodegenerative Disorders. Edited by Guest PC. Cham, Switzerland, Springer, 2019
12.
Arbabshirani MR, Plis S, Sui J, et al: Single subject prediction of brain disorders in neuroimaging: promises and pitfalls. Neuroimage 2017; 145(Pt B):137–165
13.
Lillie EO, Patay B, Diamant J, et al: The n-of-1 clinical trial: the ultimate strategy for individualizing medicine? Per Med 2011; 8:161–173
14.
Klasnja P, Hekler EB, Shiffman S, et al: Microrandomized trials: an experimental design for developing just-in-time adaptive interventions. Health Psychol 2015; 34:1220–1228
15.
Zucker DR, Ruthazer R, Schmid CH: Individual (N-of-1) trials can be combined to give population comparative treatment effect estimates: methodologic considerations. J Clin Epidemiol 2010; 63:1312–1323
16.
Marwick KFM, Stevenson AJ, Davies C, et al: Application of n-of-1 treatment trials in schizophrenia: systematic review. Br J Psychiatry 2018; 213:398–403
17.
Zulueta J, Piscitello A, Rasic M, et al: Predicting mood disturbance severity with mobile phone keystroke metadata: a BiAffect digital phenotyping study. J Med Internet Res 2018; 20:e241
18.
Cao B, Zheng L, Zhang C, et al: DeepMood: modeling mobile phone typing dynamics for mood detection. arXiv 2018; arXiv:1803.08986. http://arxiv.org/abs/1803.08986
19.
Stange JP, Zulueta J, Langenecker SA, et al: Let your fingers do the talking: passive typing instability predicts future mood outcomes. Bipolar Disord 2018; 20:285–288
20.
Ghosh S, Ganguly N, Mitra B, et al: TapSense: combining self-report patterns and typing characteristics for smartphone based emotion detection; in MobileHCI ’17: Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, New York, Association for Computing Machinery, 2017, 1–12. http://dl.acm.org/citation.cfm?doid=3098279.3098564
21.
Karam ZN, Provost EM, Singh S, et al: Ecologically valid long-term mood monitoring of individuals with bipolar disorder using speech; in Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway, NJ, IEEE, 2014, 4858–4862. http://ieeexplore.ieee.org/document/6854525/
22.
Faurholt-Jepsen M, Frost M, Vinberg M, et al: Smartphone data as objective measures of bipolar disorder symptoms. Psychiatry Res 2014; 217:124–127
23.
LiKamWa R, Liu Y, Lane ND, et al: MoodScope: building a mood sensor from smartphone usage patterns; in Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services—MobiSys ’13. New York, ACM Press, 2013, 389. http://dl.acm.org/citation.cfm?doid=2462456.2464449
24.
Saeb S, Zhang M, Karr CJ, et al: Mobile phone sensor correlates of depressive symptom severity in daily-life behavior: an exploratory study. J Med Internet Res 2015; 17:e175
25.
Alvarez-Lozano J, Osmani V, Mayora O, et al: Tell me your apps and I will tell you your mood; in Proceedings of the 7th International Conference on PErvasive Technologies Related to Assistive Environments—PETRA ’14. New York, ACM Press, 2014, 1–7. http://dl.acm.org/citation.cfm?doid=2674396.2674408
26.
Shin S, Yeom C-W, Shin C, et al: Activity monitoring using a mHealth device and correlations with psychopathology in patients with chronic schizophrenia. Psychiatry Res 2016; 246:712–718
27.
Cook JD, Prairie ML, Plante DT: Utility of the Fitbit Flex to evaluate sleep in major depressive disorder: a comparison against polysomnography and wrist-worn actigraphy. J Affect Disord 2017; 217:299–305
28.
Puiatti A, Mudda S, Giordano S, et al: Smartphone-centred wearable sensors network for monitoring patients with bipolar disorder; in Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Piscataway, NJ, IEEE, 2011, 3644–3647. http://ieeexplore.ieee.org/document/6090613/
29.
Kappas A, Küster D, Basedow C, et al: A Validation Study of the Affectiva Q-Sensor in Different Social Laboratory Situations. Presented at the 53rd Annual Meeting of the Society for Psychophysiological Research. Florence, Italy, 2013. https://www.researchgate.net/profile/Pasquale_Dente2/publication/264236855_A_validation_study_of_the_Affective_Q-Sensor_in_different_social_laboratory_situations/links/55814ae208aed40dd8cd4e44/A-validation-study-of-the-Affective-Q-Sensor-in-different-soci
30.
Seppälä J, De Vita I, Jämsä T, et al: Mobile phone and wearable sensor-based mHealth approaches for psychiatric disorders and symptoms: systematic review. JMIR Ment Health 2019; 6:e9819
31.
Reinertsen E, Clifford GD: A review of physiological and behavioral monitoring with digital sensors for neuropsychiatric illnesses. Physiol Meas 2018; 39:05TR01
32.
Rohani DA, Faurholt-Jepsen M, Kessing LV, et al: Correlations between objective behavioral features collected from mobile and wearable devices and depressive mood symptoms in patients with affective disorders: systematic review. JMIR Mhealth Uhealth 2018; 6:e165
33.
Ben-Zeev D, Wang R, Abdullah S, et al: Mobile behavioral sensing for outpatients and inpatients with schizophrenia. Psychiatr Serv 2016; 67:558–561
34.
Kleiman E, Millner AJ, Joyce VW, et al: Using wearable physiological monitors with suicidal adolescent inpatients: feasibility and acceptability study. JMIR Mhealth Uhealth 2019; (Epub ahead of print, Sep 24, 2019)
35.
Insafutdinov E, Andriluka M, Pishchulin L, et al: ArtTrack: articulated multi-person tracking in the wild; in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, IEEE, 2017. http://openaccess.thecvf.com/content_cvpr_2017/html/Insafutdinov_ArtTrack_Articulated_Multi-Person_CVPR_2017_paper.html
36.
Andriluka M, Iqbal U, Insafutdinov E, et al: PoseTrack: a benchmark for human pose estimation and tracking; in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, IEEE, 2018. http://openaccess.thecvf.com/content_cvpr_2018/html/Andriluka_PoseTrack_A_Benchmark_CVPR_2018_paper.html
37.
Yang M, Jia Y: Temporal dynamic appearance modeling for online multi-person tracking. Comput Vis Image Underst 2016; 153:16–28
38.
Ko BC: A brief review of facial emotion recognition based on visual information. Sensors 2018; 18:401
39.
Xu B, Wu H, Wu W, et al: Computer vision system for ambient long-term gait assessment. US patent 9,993,182 B2, filed Oct 3, 2016, and granted June 12, 2018. https://patentimages.storage.googleapis.com/c3/a1/c0/3b3f1490960d6a/US9993182.pdf
40.
Torous J, Roberts LW: Needed innovation in digital health and smartphone applications for mental health: transparency and trust. JAMA Psychiatry 2017; 74:437–438
41.
Anthes E: Mental health: there’s an app for that. Nature 2016; 532:20–23
42.
Conceptualizing a Data Infrastructure for the Capture, Use, and Sharing of Patient-Generated Health Data in Care Delivery and Research through 2024. Washington, DC, Office of the National Coordinator for Health Information Technology, 2018. https://www.healthit.gov/sites/default/files/onc_pghd_final_white_paper.pdf
43.
Digital Health Software Precertification (Pre-Cert) Program. Silver Spring, MD, U.S. Food and Drug Administration, 2019. https://www.fda.gov/medical-devices/digital-health/digital-health-software-precertification-pre-cert-program
44.
Software Precertification Program: 2019 Test Plan. Silver Spring, MD, U.S. Food and Drug Administration, 2019. https://www.fda.gov/media/119723/download
45.
Torous JB, Chan SR, Gipson SYT, et al: A hierarchical framework for evaluation and informed decision making regarding smartphone apps for clinical care. Psychiatr Serv 2018; 69:498–500
46.
App Evaluation Model. Washington, DC, American Psychiatric Association, 2019. https://www.psychiatry.org/psychiatrists/practice/mental-health-apps/app-evaluation-model
47.
App Advisor Expert Panel. Washington, DC, American Psychiatric Association, 2019. https://www.psychiatry.org/psychiatrists/practice/mental-health-apps/app-advisor-expert-panel
48.
Carlo AD, Hosseini Ghomi R, Renn BN, et al: By the numbers: ratings and utilization of behavioral health mobile applications. NPJ Digit Med 2019; 2:54

History

Published in print: Spring 2020
Published online: 24 April 2020

Keywords

  1. mHealth
  2. phenotype
  3. patient-specific modeling
  4. Biological Markers

Authors

Details

John Zulueta, M.D. [email protected]
Alex D. Leow, M.D., Ph.D.
Olusola Ajilore, M.D., Ph.D.
Department of Psychiatry, College of Medicine (all authors), and Department of Bioengineering and Computer Science, College of Engineering (Leow), all at the University of Illinois at Chicago.

Notes

Send correspondence to Dr. Zulueta ([email protected]).

Competing Interests

Dr. Leow reports serving on the advisory board for Buoy Health and being a cofounder of KeyWise. Dr. Ajilore reports serving on the advisory board of Embodied Labs and Blueprint Health, being a cofounder of KeyWise, and being a consultant for Quartet Health. Dr. Zulueta reports no financial relationship with commercial interests.
