The Diagnostic and Statistical Manual of Mental Disorders (DSM) employs a categorical diagnostic system with operationalized diagnostic criteria that has given the field of psychiatry a common clinical and research language. Despite this significant advantage, the limitations of the categorical system have become increasingly evident since the publication of DSM-III in 1980 (1, 2). Although much progress has been made in elucidating the neurobiology, genetics, and environmental influences involved in psychopathology and brain pathophysiology, the validity of the disorders in the DSM has not yet been demonstrated. Clinical treatments initially developed to treat one mental disorder are often found to be efficacious in the treatment of other disorders (for example, selective serotonin reuptake inhibitors and cognitive-behavioral therapies in the treatment of major depressive disorder, generalized anxiety disorder, and obsessive-compulsive disorder). In fact, DSM’s attempt to exhaustively describe the characteristics of psychopathology through categorical diagnosis has been criticized as limiting further progress in finding the underlying causes of mental disorders and developing effective treatments (3, 4).
One of the major problems of a strict categorical system has been demonstrated in clinical and epidemiological research showing high levels of symptom comorbidity crossing diagnostic boundaries. For example, depressive, anxiety, and somatic symptoms are frequently seen together in various combinations whether or not they meet diagnostic criteria (5). Anxiety symptoms are frequently seen in patients with major depressive disorder despite the lack of anxiety symptoms in the major depressive disorder diagnostic criteria; importantly, the presence of anxiety has been shown to affect the treatment outcomes for major depressive disorder (6). Mood symptoms are frequently seen in schizophrenia and also affect the prognosis of the disorder (7). Sleep problems pervade psychiatric practice, being seen in patients across many diagnostic categories (8). Some cross-cutting symptoms such as suicidal ideation, while not highly prevalent, are relevant to prognosis and treatment planning, sometimes requiring urgent intervention.
The impact of cross-cutting symptoms is seen in routine clinical practice. Clinicians use diagnoses for treatment planning and reporting, but they often treat clinically significant symptoms that do not correspond to a formal diagnosis (9). On the other hand, with its focus on categorical diagnoses, DSM may also contribute to co-occurring symptoms being missed in clinical evaluations (10, 11). There is currently limited guidance in DSM for the clinician to document the presence and nature of these symptoms in a systematic way. With the advent of measurement-based care (12), which includes patient-reported outcomes as an integral component, systematic measurement of common cross-cutting symptoms has the potential not only to help clinicians in documenting and justifying diagnostic and treatment decisions but also to increase patient involvement in these decisions (13). Providing clinicians with a method to measure cross-cutting symptoms was one of the recommendations of the DSM-5 Research Planning Conference on Dimensional Assessment (2) and of the DSM-5 Diagnostic Spectrum Study Group.
The proposed DSM-5 cross-cutting symptom assessment was developed with several principles in mind. First, the cross-cutting symptom assessment should call attention to common potential areas of mental health concern to both patients and clinicians. Second, it should be suitable for use with most patients in most clinical settings, with separate versions for adult and child populations. Whenever possible, information should be gathered from patient self-report, and the assessment should be self-administered. Finally, the assessment should be administered before a direct clinical contact is made in order to inform the subsequent clinical process. Here, we describe the cross-cutting symptom assessments developed for adult and child populations and their implementation and test-retest reliability in the DSM-5 Field Trials.
Method
Study Design
The DSM-5 Field Trials were a multisite test-retest reliability study conducted with adult patient populations at seven sites and with child and adolescent populations at four sites. The field trials were centrally designed and coordinated by the DSM-5 Research Group at the American Psychiatric Association (APA). Each site focused on four to seven study diagnoses. A stratified sampling approach was used, with stratification based on the patient’s existing DSM-IV diagnoses or, for disorders new to DSM, symptoms with a high probability of meeting criteria for the new disorders. Sites were asked to enroll a “fail-safe” sample size of 50 patients per diagnosis. In addition, each site was asked to enroll an “other diagnosis” group with a target sample size of 50 patients with none of the study diagnoses at that site. Detailed information on the rationale, design, stratification and other methods, and implementation of the DSM-5 Field Trials can be found in the companion article by Clarke et al. (14).
Study Population
Adult patients were considered eligible for the study if they were 18 years of age or older; could speak, read, and understand English well enough to complete the self-administered questions and participate in the diagnostic interview; and were currently symptomatic for one or more mental disorders. Proxy respondents were allowed for adult patients with cognitive impairments or other impaired capacity that prevented self-completion of the measures. Child and adolescent patients had to be 6 years old or older and currently symptomatic for one or more diagnoses, and they were required to have a parent or legal guardian able to read and communicate in English who would accompany the child to the study appointments and complete the study measures. Information on eligibility factors and clinical status was provided by patients’ treating clinicians, or in the case of patients new to the study site, by the intake clinician. The research coordinator at each site provided each eligible patient (or parent/legal guardian in the case of children and adolescents) with a complete description of the study before obtaining written informed consent. Written assent was obtained from children and adolescents after an age-appropriate description of the study was given. Measures for the protection of human subjects in the DSM-5 Field Trials were reviewed and approved by the institutional review board (IRB) of the American Psychiatric Institute for Research and Education as well as the IRBs of each study site.
Clinician Training and Test-Retest Visits
The test and retest diagnostic interviews were conducted by two independent and randomly assigned study clinicians who did not know the patient, had current human subjects training, and had completed the mandatory DSM-5 Field Trials clinician training. Clinician training involved basic instruction on the changes proposed for DSM-5 (examples of new disorders and criteria changes for existing disorders) and orientation regarding the DSM-5 cross-cutting symptom measures and their purpose in the DSM-5 diagnostic schema. The clinicians were given basic instructions on developing rapport with research participants, which entailed patient-friendly strategies for collecting data in the allotted time and not interfering with any ongoing treatment process. Importantly, clinicians were instructed to integrate the proposed DSM-5 criteria and measures into their usual diagnostic practices rather than use structured research instruments.
Clinicians were instructed to use the information obtained in the cross-cutting symptom measures as potentially important clinical information that should inform their clinical interviews. That is, after reviewing the results of the completed measures, the clinicians were instructed to start the interview as usual with the chief complaint (which may not have corresponded to the highest-scoring domains on the cross-cutting symptom measures) and to follow up on any areas of concern indicated in the cross-cutting symptom measures during the course of the interview. They were cautioned that using the cross-cutting symptom measures solely as diagnostic screeners would defeat the purpose of the measures. It was emphasized that because cross-cutting symptoms might be found in any number of disorders, a high score in a particular domain (for example, depression) should prompt the clinician to consider not only mood disorder diagnoses but also clinically significant but nondiagnostic levels of depressive symptoms co-occurring with other disorders. Clinicians were also instructed to complete their assessments of psychosis and level of suicide concern or risk during the interview with the patient present. Parent interviews were recommended for child patients, either alone or with the patient present as clinically indicated. More detailed information on the DSM-5 Field Trials study clinician training is documented in the companion article by Clarke et al. (14).
The test (visit 1) and retest (visit 2) diagnostic interviews occurred anytime from 4 hours to 14 days apart. All study clinicians were blind to the patient’s stratum assignment, and clinicians who conducted the diagnostic interviews were blind to each other’s ratings. At each study visit, before meeting with the assigned study clinician for the diagnostic interview, the patient, proxy respondent, or parent/guardian provided demographic information and completed the relevant version of the DSM-5 cross-cutting symptom measures on a tablet or laptop computer. The completed measures were computer-scored automatically and the results transmitted to the assigned study clinician via Research Electronic Data Capture (REDCap) (15), the electronic data collection system used in the study. Clinicians were given summary scores for each cross-cutting symptom domain with an interpretation and were also able to examine item-level scores for all measures before the start of the interview.
Patient- and Parent-Rated Cross-Cutting Symptom Measures
The cross-cutting symptom assessment is administered in two “levels.” For adults, level 1 includes 23 questions covering 13 domains (Table 1). For parents (Table 2) and children (Table 3), level 1 includes 25 questions covering 12 domains. Level 1 domains were chosen by the DSM-5 work groups and the Instrument Development Study Group, and the questions were usually developed de novo by the work groups. The questions in level 1 covered symptoms in the past 2 weeks, and participants were asked to respond on a 5-point scale as follows: 0=none/not at all; 1=slight/rare, less than a day or two; 2=mild/several days; 3=moderate/more than half the days; 4=severe/nearly every day. A rating of 2 or higher on the level 1 items was set as the threshold level for each domain, with the exception of “substance use” in adult and child patients and “attention” in child patients, which were set at a rating of 1 or higher. The items within the substance use and suicide domains were rated on a “0=No, 1=Yes” basis for child/adolescent raters and a “0=No, 1=Not Sure, 2=Yes” basis for parent/guardian raters; “Yes” was set as the threshold response for these domains. Respondents who answered at the threshold level or higher on any level 1 item within a domain were then asked to complete the corresponding level 2 assessment.
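To make the domain-level threshold logic concrete, the following Python sketch shows one way the level 2 triggering rule could be expressed; the thresholds mirror the description above, but the function name, domain labels, and data structures are illustrative assumptions, not the software used in the field trials.

```python
# Illustrative sketch (not the field-trial REDCap code): decide whether a
# respondent's level 1 answers trigger the corresponding level 2 measure.

# Default threshold is a rating of 2 ("mild") or higher on any item in the
# domain; the exceptions noted in the text use a threshold of 1.
DEFAULT_THRESHOLD = 2
DOMAIN_THRESHOLDS = {
    "substance_use_adult": 1,   # substance use (adult and child raters)
    "substance_use_child": 1,
    "attention_child": 1,       # attention (child raters)
}

def triggers_level2(domain, item_ratings):
    """Return True if any level 1 item in the domain meets or exceeds the
    domain's threshold, so the level 2 measure would be administered."""
    threshold = DOMAIN_THRESHOLDS.get(domain, DEFAULT_THRESHOLD)
    return any(rating >= threshold for rating in item_ratings)

# Example: two depression items rated 1 ("slight") and 3 ("moderate")
# would send the respondent on to the level 2 depression measure.
print(triggers_level2("depression_adult", [1, 3]))   # True
print(triggers_level2("depression_adult", [1, 1]))   # False
```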
The level 2 measures, also self-rated, represent more detailed assessments of certain symptom domains and were usually derived from existing measures, as noted in Tables 1–3. With the exceptions of cognition/memory problems, dissociation, personality functioning, psychosis, and suicide, each domain on the adult version of the DSM-5 cross-cutting symptom assessment had a corresponding level 2 measure. For the child/adolescent-rated version of the DSM-5 cross-cutting symptom assessment, there were no associated level 2 child-rated measures for the attention and psychosis domains. A level 2 assessment of attention was completed by the parent/guardian. The parent/guardian version of the DSM-5 cross-cutting symptom assessment did not include a level 2 measure of repetitive thoughts and repetitive behaviors. Suicide had corresponding child- and parent/guardian-rated level 2 assessments. The response options for level 2 items were usually based on a 5-point scale of symptom frequency in the past 7 days, with 0 representing “never” or “not at all” and 4 representing terms such as “nearly every day” or “always.” Regardless of the specific scaling and scoring of the level 2 assessments, a higher score represented higher symptom levels.
Clinician-Rated Cross-Cutting Symptom Measures
Clinician-rated cross-cutting assessments for psychosis and suicidality were also employed in the field trial study visits. The measure for psychosis asked the clinician to rate psychotic symptoms in all patients, as manifested by delusions, hallucinations, or disorganized speech over the past 2 weeks. These symptoms were rated on a 5-point scale ranging from 0 (none) to 4 (present, severe). The clinician rating of psychosis was completed on all patients regardless of patient or, for child patients, parent/guardian ratings of psychosis on the level 1 measures.
The second clinician-rated cross-cutting symptom measure was for level of concern about potential suicidal behavior in adults and for suicide risk severity in children age 11 and older. For the adult scale, study clinicians were asked to assess the presence of 14 clinical and environmental factors associated with suicide for all patients regardless of their self-rating of suicidality. Level of concern about potential suicidal behavior was then rated on a scale of “lowest concern,” “some concern,” “moderate concern,” “high concern,” and “imminent concern.” Descriptors for these anchor points were tied to the level of importance of suicide prevention in the current clinical management of the patient.
For child patients age 11 and over, the process for completing the suicide risk severity scale involved several steps. Before completing this scale, clinicians were asked to review the results of several relevant cross-cutting symptom measures, such as those for suicide, depression, and substance use, and to consider the patient’s current symptom and diagnostic status, history of suicide attempts, current suicidal thoughts and plans, and other risk factors. A table of high-risk and very high-risk indicators for suicide was provided, and, using this table as a guide, the clinician then completed the scale. A rating of 0 indicated minimal suicide risk, a rating of 2 indicated that some high-risk factors were present, and a rating of 4 indicated the presence of a very high-risk indicator. Intermediate ratings of 1 and 3 were also possible, although they were not specifically anchored.
Data Analysis
Weighted mean scores for each dimensional level 1 item were calculated for each site. The pooled mean scores and standard deviations were also calculated.
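As a minimal illustration of these descriptive statistics, the sketch below computes a weighted mean and weighted standard deviation for a single level 1 item at one site; the scores and weights are placeholder values, since the actual sampling weights follow the stratified design described by Clarke et al. (14).

```python
import numpy as np

# Illustrative only: weighted mean and SD of one level 1 item at one site.
scores = np.array([0, 2, 3, 1, 4, 2], dtype=float)   # item ratings (0-4)
weights = np.array([1.2, 0.8, 1.0, 1.5, 0.9, 1.1])   # placeholder sampling weights

mean = np.average(scores, weights=weights)
sd = np.sqrt(np.average((scores - mean) ** 2, weights=weights))
print(f"weighted mean = {mean:.2f}, weighted SD = {sd:.2f}")
```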
Test-retest reliability estimates for the continuous and ordinal cross-cutting symptom measures were obtained by using the parametric intraclass correlation coefficient (ICC) for stratified samples and are presented with two-tailed 95% confidence intervals (CIs); sampling weights and bootstrap methods were used as described by Clarke et al. (14). Two ICC models were used in this study: Type (1,1), a one-way random model of absolute agreement, and Type (2,1), a two-way random model of absolute agreement. Type (1,1) was used for the reliability estimates of the clinician-rated dimensional measures, since each patient was rated by a different, randomly selected clinician at test and retest. Type (2,1) was used for the reliability estimates for the patient-rated cross-cutting measures, since the rater was the same person at test and retest (i.e., the study patient him/herself, or other authorized respondent) (14, 28). The four substance use questions and two suicide questions asked of child respondents were rated on a yes/no basis. Intraclass kappa coefficients for stratified samples and their associated 95% CIs (using bootstrap methods) were used to calculate test-retest reliability estimates for these items (13).
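For readers who want the two ICC models in computational form, the sketch below implements the standard Shrout-Fleiss Type (1,1) and Type (2,1) single-rating ICCs from analysis-of-variance mean squares. It is an unweighted illustration on a toy test-retest matrix and does not reproduce the stratified sampling weights or bootstrap confidence intervals used in the field trial analyses.

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects, single-rating ICC (Shrout-Fleiss Type (1,1)).
    x: n_subjects x k_ratings array of scores."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_between = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_within = ss_total - ss_between
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-rating ICC
    (Shrout-Fleiss Type (2,1)). x: n_subjects x k_ratings array."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_raters
    ms_subjects = ss_subjects / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Example: test (column 0) and retest (column 1) scores for five patients.
scores = [[2, 3], [4, 4], [1, 1], [3, 2], [0, 1]]
print(f"ICC(1,1) = {icc_1_1(scores):.2f}, ICC(2,1) = {icc_2_1(scores):.2f}")
```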
Since level 2 assessments were triggered only if at least one level 1 item within a domain was endorsed at a level of “mild” or greater, the reliability of the level 2 assessment was examined as a combined score with the level 1 items. Specifically, in order to calculate ICCs for level 2 assessments, their average scores were combined with level 1 as follows (a code sketch of this scoring scheme follows the list):
1. A score of 0 on all level 1 items for a particular symptom domain results in a score of 0 on the combined level 1 and 2 score (level 2 was not administered if the level 1 score was 0).
2. A score of 1 at most (“slight”) on each of the level 1 items for a symptom domain results in a score of 1 on the combined level 1 and 2 score (level 2 was not administered if the level 1 score was 1).
3. A score of 2 (“mild”) or greater on one or more of the level 1 items for a particular symptom domain is combined with the level 2 score as follows:
   i. An average score <0.50 on the level 2 scale is coded as 0, resulting in a total score of 2 on the combined level 1 and 2 score.
   ii. An average score of 0.50–1.49 on the level 2 scale is coded as 1, resulting in a total score of 3 on the combined level 1 and 2 score.
   iii. An average score of 1.50–2.49 on the level 2 scale is coded as 2, resulting in a total score of 4 on the combined level 1 and 2 score.
   iv. An average score of 2.50–3.49 on the level 2 scale is coded as 3, resulting in a total score of 5 on the combined level 1 and 2 score.
   v. An average score ≥3.50 on the level 2 scale is coded as 4, resulting in a total score of 6 on the combined level 1 and 2 score.
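The scoring rules above translate directly into a small function. The sketch below is an illustrative implementation that assumes the level 2 average has already been computed on its native 0–4 metric; it is not the analysis code used in the field trials.

```python
def combined_level1_level2_score(level1_items, level2_average=None):
    """Combine the level 1 item ratings (0-4) for one symptom domain with the
    average level 2 score (0-4 metric) into the 0-6 combined score used in
    the reliability analyses. Illustrative sketch only."""
    max_level1 = max(level1_items)
    if max_level1 == 0:
        return 0          # level 2 not administered
    if max_level1 == 1:
        return 1          # "slight" at most; level 2 not administered
    # Level 1 rating of 2 ("mild") or greater: recode the level 2 average
    # to 0-4 and add it to a base score of 2, giving a combined score of 2-6.
    if level2_average < 0.50:
        recoded = 0
    elif level2_average < 1.50:
        recoded = 1
    elif level2_average < 2.50:
        recoded = 2
    elif level2_average < 3.50:
        recoded = 3
    else:
        recoded = 4
    return 2 + recoded

# Example: level 1 depression items rated 2 and 3 with an average level 2
# score of 1.8 yield a combined score of 4 (rule iii above).
print(combined_level1_level2_score([2, 3], 1.8))   # 4
```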
All analyses were performed at a site-specific level, and the data were then pooled by using a meta-analytic approach. However, if 25% or more of the data for a measure were missing at a site, the reliability coefficient was not calculated and therefore not included in the pooled estimate. When there was no variance in responses at a site, that site was not included either in the descriptive statistics or in the computation of the reliability coefficient. Otherwise the estimates were pooled across the sites. It should be noted, however, that there were site differences in the results for most responses; thus the pooled estimate represents the typical result over sites, rather than the result at each site. Results of the data analyses from adult respondents were tabulated separately from results from parent and child respondents to allow comparisons between parent and child respondents.
The ICC results were rounded to two decimal places, and the rounded estimates were interpreted as follows: 0–0.39=unacceptable, 0.40–0.59=questionable, 0.60–0.79=good, 0.80–1=excellent. Rounded intraclass kappa results were interpreted as follows: <0.20=unacceptable, 0.20–0.39=questionable, 0.40–0.59=good, 0.60–0.79=very good, 0.80–1=excellent. The underlying rationale for these interpretations can be found elsewhere (29).
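The site-exclusion rules and interpretive bands can likewise be expressed compactly. In the sketch below, the missing-data and variance checks mirror the rules stated above, while the simple precision-weighted pooling is only a stand-in for the meta-analytic method of Clarke et al. (14); the site estimates shown are hypothetical.

```python
import numpy as np

def usable_site(missing_fraction, response_variance):
    """Apply the exclusion rules described above: drop a site's estimate if
    25% or more of the data for the measure were missing, or if responses
    at that site showed no variance."""
    return missing_fraction < 0.25 and response_variance > 0

def pool_iccs(site_iccs, standard_errors):
    """Illustrative precision-weighted pooling of site-level ICCs (a stand-in
    for the meta-analytic approach of Clarke et al. (14))."""
    weights = 1.0 / np.square(standard_errors)
    return float(np.average(site_iccs, weights=weights))

def interpret_icc(icc):
    """Map a rounded ICC to the interpretive bands used in this article."""
    icc = round(icc, 2)
    if icc < 0.40:
        return "unacceptable"
    if icc < 0.60:
        return "questionable"
    if icc < 0.80:
        return "good"
    return "excellent"

# Example with hypothetical site-level estimates and standard errors:
site_iccs = np.array([0.72, 0.65, 0.81])
site_ses = np.array([0.08, 0.10, 0.07])
pooled = pool_iccs(site_iccs, site_ses)
print(f"pooled ICC = {pooled:.2f} ({interpret_icc(pooled)})")
```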
Other measures were tested in the DSM-5 Field Trials, including the World Health Organization Disability Assessment Schedule (30) and an inventory of maladaptive personality traits (31). As with the cross-cutting symptom measures, these measures were given to all adult patients and to the older child group, and reliability results will be presented in subsequent publications. Clinicians’ views on the acceptability and clinical utility of the DSM-5 criteria and new measures as well as patients’ views on the self-report measures were also gathered in the field trial, and these data along with the results presented in this article will be considered as final decisions are made for DSM-5.
Results
Supplemental Tables A–E (see the data supplement that accompanies the online edition of this article) show the pooled mean scores for level 1 items, combined level 1 and level 2 scores, and clinician-rated scales for adult, child, and parent respondents. Mean scores for level 1 items are shown in supplemental Tables A and B. Sleep problems and anger had relatively high mean scores from both adult participants and the parents of child participants. In addition, for the adult participants, items related to anxiety, depression, and personality functioning had relatively high mean scores, as did attention and irritability for parent respondents. For both adult participants and parent respondents, low mean scores (<1.0) were found for items on substance use, psychosis, suicide, and mania. Several other cross-cutting items had low mean scores on parent report, including items related to somatic distress, anxiety (avoidance), and repetitive thoughts and behaviors. Most of these items had high standard deviations relative to their means. For the level 1 items, children exhibited patterns of mean scores similar to those of their parents.
Pooled mean scores for the combined level 1 and level 2 items are presented online in supplemental tables C and D. As noted earlier, a combined score of 0 or 1 indicates that the respondent was not sent on to a level 2 assessment for that domain. A combined score of 2 indicates very low levels of symptoms on level 2, with higher scores reflecting increasingly higher levels of symptoms. At the adult sites, depression, anxiety, and sleep problems all had combined mean scores over 3, while mania, repetitive thoughts and behaviors, and “other” substance use had mean scores of less than 2. At the child sites, the only domains with mean scores above 3 were anger and inattentiveness for responding parents of children under age 11. These domains had the highest means for parents of older children as well, but both means were under 3. Child respondents were not administered the level 2 inattentiveness scale; otherwise their mean scores followed a pattern similar to that of the parent scores.
Finally, the pooled mean scores for the two clinician-rated cross-cutting measures, psychosis and suicide, were under 1 for both adult and child patients (supplemental table E). The mean for psychosis in children was very close to zero, indicating that clinicians rarely rated psychotic symptoms as present in the child subjects.
Tables 4–8 show the pooled test-retest reliability of the cross-cutting symptom measures. Level 1 reliabilities are presented first. All level 1 items were rated reliably by adult patients, with ICC estimates in the “good” range or better, except the two mania items, which were in the “questionable” range (Table 4). For parents of children under 11 years old, ICC estimates were in the good or excellent range for 19 of the 25 items in the cross-cutting symptom assessment (Table 5). Two items fell into the questionable range (anxiety item 3 [“cannot do things because of nervousness”] and repetitive thoughts item 1 [“unpleasant thoughts, images or urges entering mind”]), and one item had unacceptable reliability (“misuse of legal drugs”). Lack of variability in responses prevented ICC estimation for the remaining three substance use items in this age group (Table 5). Parents of children age 11 and over rated the cross-cutting items very reliably, with all ICCs in the good or excellent range except misuse of legal drugs. Reliabilities for child respondents were good or excellent for 17 items. Six items had questionable reliability: both mania items, anxiety item 3, somatic distress item 2 (“worried about health”), psychosis item 2 (“had a vision/saw things”), and repetitive thoughts item 1. Reliability coefficients for the remaining two substance use items (use of illegal drugs, misuse of legal drugs) are not presented because of instability of estimates at sites (i.e., the confidence interval range exceeded 0.5). There were no significant differences between child and parent reliability estimates, with the following exceptions: parents were more reliable reporters than children for somatic distress item 2, both psychosis items, and sleep, while children were more reliable in reporting “ever attempting suicide” (Table 5).
For adult patients, the pooled ICC of the combined level 1 and level 2 assessments for depression was excellent, while anger, anxiety, somatic distress, sleep, and other substance use performed in the good range. Conversely, reliabilities for mania and repetitive thoughts and behaviors were questionable (Table 6). Parents of children under 11 years old were reliable reporters for all cross-cutting domains tested except misuse of legal drugs, for which reliability could not be distinguished from chance agreement. Reliabilities for the other three substance use items could not be computed for this age group because of a lack of variability in responses. Similar results were obtained from parents of children age 11 and over, except that variability in the substance use responses allowed for estimation of ICCs with confidence intervals, with estimates in the good or excellent range. For child respondents, ICC estimates fell into the good or excellent range, except for mania, misuse of legal drugs, and suicidal ideation. Among the older child patients, the parents were significantly more reliable reporters of irritability, mania, and sleep than the children. Children were significantly more reliable reporters of illegal drug use, tobacco use, and suicide attempts (however, both parent and child reports had excellent reliabilities for the latter two domains) (Table 7).
For scales rated by clinicians, ICCs for the suicide scales were in the questionable range for adults and unacceptable, indistinguishable from chance agreement, for children. The ICCs for psychosis were in the good range at the adult sites and unacceptable at the child sites. The ICC for clinician-rated psychosis in children was based on only one site because of excessively large standard errors at the other three sites (Table 8).
Discussion
This article has presented the initial psychometric findings for the DSM-5 cross-cutting symptom measures, showing that a substantial majority of the level 1 and combined level 1 and 2 assessments demonstrated good or excellent test-retest reliability for adult, parent, and child respondents. These results support the inclusion of these measures in the DSM-5 diagnostic assessment recommendations as a standardized source of clinical data, available to the clinician as a mental health review of systems. The structure of the cross-cutting measures allows for less reliable scales to be removed for further development and possible inclusion in future versions of DSM-5 if their reliability can be improved.
The strengths of the DSM-5 Field Trials are enumerated elsewhere in detail (14), but those relevant for this article include random patient sampling, diverse clinical settings and patient samples, and testing under conditions anticipated to be close to the real-world conditions under which the various elements of the DSM-5 assessment strategy will be implemented. Further, because the cross-cutting measures were given to each participating patient or an informant, sample sizes were generally adequate to produce stable reliability estimates.
The limitations of the field trials relevant to the current analyses include the design of the test-retest study, which, in its focus on categorical diagnoses, allowed for a retest interval of up to 2 weeks. Symptom levels could be expected to change, especially at the upper end of this time frame, because of inherent fluctuations of symptoms over time and because ongoing treatment was being provided to the patients involved in the study. Nonetheless, while such change in symptom levels would be expected to result in underestimation of the ICC, the substantial majority of our reliability results were still in the good or excellent range. Another limitation is that the DSM-5 Field Trials were not designed to test the validity of the cross-cutting patient measures, although the level 2 scales, which assess symptoms in depth, were taken from existing measures with supporting validity data when available.
In contrast to the reliabilities of the self- and parent-reported measures, only the clinician rating of psychosis in adults had good reliability, while the reliability of the adult suicide concern scale was questionable. In children, the clinician ratings on both scales had unacceptable reliability. There are several possible explanations for the higher reliabilities of the self-administered measures. The level 1 cross-cutting items for patients contained relatively simple concepts concerning recent suicidal ideation, past suicide attempts, delusions, and hallucinations. Furthermore, the same patient rated the items at the test and retest visits. These factors would all be expected to enhance reliability for the patient-rated items. In contrast, clinicians were asked to synthesize a large amount of information in addition to the level 1 information for their ratings of suicide concern in adults, suicide risk in children, and level of psychosis. The complex factors involved in making clinical judgments (32), and the fact that two different clinicians were making these judgments at the test and retest visits, may have contributed to the lower reliability of the clinician-rated domains compared with the patient-rated domains. Logistic regression analyses did not show a significant effect of the time interval between test and retest visits on the differences in clinician scores at these visits. The low reliabilities of these scales, with the possible exception of the adult psychosis scale, suggest that the components used to determine a rating need to be revised, that the rating procedures need to be clarified, or that clinician training is required to achieve reliability.
The cross-cutting symptom measures tested in the DSM-5 Field Trials represent a first step in moving psychiatric diagnosis away from solely categorical descriptions toward assessments that recognize different levels of symptom frequency and intensity. They also reflect clinical and research evidence that any given patient may experience common psychopathological symptoms that are not listed in the criteria for his or her categorical diagnosis. The use of these measures has several potential advantages for the clinician. They help to ensure, in a relatively straightforward way, that a wide range of symptoms has been assessed, thereby decreasing the possibility of missed symptoms. They also have the potential to draw attention to mixed presentations with important treatment and prognostic implications, such as major depressive disorder with anxiety symptoms. Rates of spurious comorbidity and “not elsewhere classified” diagnoses may decrease if, for example, clinicians can diagnose major depressive disorder and specify the severity of additional anxiety symptoms, rather than diagnosing comorbid major depressive disorder and anxiety disorder not elsewhere classified. Documentation of significant levels of cross-cutting symptoms in addition to a diagnosis will also help clinicians to justify treatment decisions as measurement-based care is increasingly implemented.
Clinical research may also benefit from the assessment of cross-cutting symptoms along with categorical diagnoses. Having a standard assessment for these symptoms will facilitate research into the prevalence, course, underlying pathology, and treatment of various combinations of categorical diagnoses and cross-cutting symptoms. Such research can be expected to contribute to the development of new disorder boundaries, and eventually new conceptualizations of mental disorders, particularly as synergies develop with findings from basic neuroscience and behavioral science initiatives such as the NIMH Research Domain Criteria project.
Finally, although patient-reported experiences are the foundation of psychiatry (33), the proposed DSM-5 cross-cutting symptom measures are the DSM’s first attempt to systematically assess these experiences in self-administered questionnaires. It is hoped that these measures will enhance patients’ understanding of their symptoms and involvement in their treatments and that the combination of dimensional patient-reported symptoms, categorical diagnostic criteria, and the application of sound clinical judgment will facilitate the delivery of quality care.
Acknowledgments
The authors wish to acknowledge the extensive efforts of the participating clinicians at each of the DSM-5 Field Trial sites, including Principal Investigators: Bruce Pollock, M.D., Ph.D., F.R.C.P.C., Michael Bagby, Ph.D., C. Psych., and Kwame McKenzie, M.D. (Centre for Addiction and Mental Health, Toronto, Ont., Canada); Carol North, M.D., M.P.E., and Alina Suris, Ph.D., A.B.P.P. (Dallas VA Medical Center, Dallas, Tex.); Laura Marsh, M.D., and Efrain Bleiberg, M.D. (Michael E. DeBakey VA Medical Center and the Menninger Clinic, Houston, Tex.); Mark Frye, M.D., Jeffrey Staab, M.D., M.S., and Glenn Smith, Ph.D., L.P. (Mayo Clinic, Rochester, Minn.); Helen Lavretsky, M.D., M.S. (David Geffen School of Medicine, University of California Los Angeles, Los Angeles, Calif.); Mahendra Bhati, M.D. (Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pa.); Mauricio Tohen, M.D., Dr.P.H., M.B.A. (School of Medicine, The University of Texas San Antonio, San Antonio, Tex.); Bruce Waslick, M.D. (Baystate Medical Center, Springfield, Mass.); Marianne Wamboldt, M.D. (Children’s Hospital Colorado, Aurora, Colo.); Prudence Fisher, Ph.D. (New York State Psychiatric Institute, New York, N.Y.; Weill Cornell Medical College, Payne Whitney and Westchester Divisions, New York and White Plains, N.Y.; North Shore Child and Family Guidance Center, Roslyn Heights, N.Y.); Carl Feinstein, M.D., and Debra Safer, M.D. (Stanford University School of Medicine, Stanford, Calif.).
The authors also wish to acknowledge the contributions of the DSM-5 work group and study group members who provided the revised diagnostic criteria and cross-cutting measures for DSM-5. Chairs of these groups are Jack D. Burke, Jr., M.D., M.P.H. (Diagnostic Assessment Instruments); Dan Blazer, M.D., Ph.D., M.P.H. (Chair, Neurocognitive Disorders); William T. Carpenter, Jr., M.D. (Psychotic Disorders); F. Xavier Castellanos, M.D. (Co-Chair, ADHD and Disruptive Behavior Disorders); Thomas Crowley, M.D. (Co-Chair, Substance-Related Disorders); Joel E. Dimsdale, M.D. (Somatic Symptom and Related Disorders); Jan A. Fawcett, M.D. (Mood Disorders); Dilip V. Jeste, M.D. (Chair Emeritus, Neurocognitive Disorders); Charles O’Brien, M.D., Ph.D. (Chair, Substance-Related Disorders); Ronald Petersen, M.D., Ph.D. (Co-Chair, Neurocognitive Disorders); Katharine A. Phillips, M.D. (Anxiety, Obsessive-Compulsive and Related, Trauma and Stress-Related, and Dissociative Disorders); Daniel Pine, M.D. (Child and Adolescent Disorders); Charles F. Reynolds III, M.D. (Sleep-Wake Disorders); David Shaffer, M.D. (Chair, ADHD and Disruptive Behavior Disorders); Andrew E. Skodol, M.D. (Personality and Personality Disorders); Susan Swedo, M.D. (Neurodevelopmental Disorders); B. Timothy Walsh, M.D. (Eating Disorders); and Kenneth J. Zucker, Ph.D. (Sexual and Gender Identity Disorders).