
Abstract

Objective: The lack of an accepted standard for measuring cognitive change in schizophrenia has been a major obstacle to regulatory approval of cognition-enhancing treatments. A primary mandate of the National Institute of Mental Health’s Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) initiative was to develop a consensus cognitive battery for clinical trials of cognition-enhancing treatments for schizophrenia through a broadly based scientific evaluation of measures.

Method: The MATRICS Neurocognition Committee evaluated more than 90 tests in seven cognitive domains to identify the 36 most promising measures. A separate expert panel evaluated the degree to which each test met specific selection criteria. Twenty tests were selected as a beta battery. The beta battery was administered to 176 individuals with schizophrenia and readministered to 167 of them 4 weeks later so that the 20 tests could be compared directly.

Results: The expert panel ratings are presented for the initially selected 36 tests. For the beta battery tests, data on test-retest reliability, practice effects, relationships to functional status, practicality, and tolerability are presented. Based on these data, 10 tests were selected to represent seven cognitive domains in the MATRICS Consensus Cognitive Battery.

Conclusions: The structured consensus method was a feasible and fair mechanism for choosing candidate tests, and direct comparison of beta battery tests in a common sample allowed selection of a final consensus battery. The MATRICS Consensus Cognitive Battery is expected to be the standard tool for assessing cognitive change in clinical trials of cognition-enhancing drugs for schizophrenia. It may also aid evaluation of cognitive remediation strategies.
Despite the importance of cognitive deficits in schizophrenia, no drug has been approved for treatment of this aspect of the illness. The absence of a consensus cognitive battery has been a major impediment to standardized evaluation of new treatments to improve cognition in this disorder (1, 2). One of the primary goals of the National Institute of Mental Health’s (NIMH’s) Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) initiative was to develop a consensus cognitive battery for use in clinical trials in schizophrenia. The development of a standard cognitive battery through a consensus of experts was designed to establish an accepted way to evaluate cognition-enhancing agents, thereby providing a pathway for approval of such new medications by the U.S. Food and Drug Administration (FDA). It would also aid in standardized evaluation of other interventions to treat the core cognitive deficits of schizophrenia.
The desirable characteristics of the battery were determined through an initial survey of 68 experts (3). The MATRICS Neurocognition Committee then reviewed and integrated results from all available factor-analytic studies of cognitive performance in schizophrenia to derive separable cognitive domains (4). An initial MATRICS consensus conference involving more than 130 scientists from academia, government, and the pharmaceutical industry led to agreement on seven cognitive domains for the battery and on five criteria for test selection (1). The criteria emphasized characteristics required for cognitive measures in the context of clinical trials: test-retest reliability; utility as a repeated measure; relationship to functional status; potential changeability in response to pharmacological agents; and practicality for clinical trials and tolerability for patients. The seven cognitive domains included six from multiple factor-analytic studies of cognitive performance in schizophrenia—speed of processing; attention/vigilance; working memory; verbal learning; visual learning; and reasoning and problem solving (4). The seventh domain, social cognition, was included because it was viewed as an ecologically important domain of cognitive deficit in schizophrenia that shows promise as a mediator of neurocognitive effects on functional outcome (5, 6), although studies of this domain in schizophrenia are too new for such measures to have been included in the various factor-analytic studies. Participating scientists initially provided more than 90 nominations of cognitive tests that might be used to measure performance in the seven cognitive domains.
In this article, we describe the procedures and data that the MATRICS Neurocognition Committee employed to select a final battery of 10 tests—the MATRICS Consensus Cognitive Battery (MCCB)—from among the nominated cognitive tests. These procedures involved narrowing the field to six or fewer tests per cognitive domain; creating a database of existing test information; using a structured consensus process involving an interdisciplinary panel of experts to obtain ratings of each test on each selection criterion; selecting the 20 most promising tests for a beta version of the battery; administering the beta battery to individuals with schizophrenia at five sites to directly compare the 20 tests (phase 1 of the MATRICS Psychometric and Standardization Study); and selecting the final battery based on data from this comparison.
A second article in this issue (7) describes the development of normative data for the MCCB using a community sample drawn from the same five sites, stratified by age, gender, and education (phase 2 of the MATRICS Psychometric and Standardization Study). This step was critical to making the consensus battery useful in clinical trials.
During the MATRICS process, it became apparent that the FDA would require that a potential cognition-enhancing agent demonstrate efficacy on a consensus cognitive performance measure as well as on a “coprimary” measure that reflects aspects of daily functioning. Potential coprimary measures were therefore evaluated, as reported in a third article in this issue (8).
Because of space limitations, a brief description of the methods used to evaluate the nominated cognitive measures is presented here; more details are available in a data supplement that accompanies the online edition of this article.

From Test Nominations to a Beta Battery

Summary of Methods and Results

Initial evaluation

The MATRICS Neurocognition Committee, cochaired by Drs. Nuechterlein and Green and including representatives from academia (Drs. Barch, Cohen, Essock, Gold, Heaton, Keefe, and Kraemer), NIMH (Drs. Fenton, Goldberg, Stover, Weinberger, and Zalcman), and consumer advocacy (Dr. Frese), initially evaluated the extent to which the 90 nominated tests met the test selection criteria based on known reliability and validity as well as feasibility for clinical trials. Because the survey established that the battery would optimally not exceed 90 minutes, individual tests with high reliability and validity that took less than 15 minutes were sought. In this initial review, 36 candidate tests across seven cognitive domains were selected.

Expert panel ratings

Procedures based on the RAND/UCLA appropriateness method were used to systematically evaluate the 36 candidate tests (9, 10). This method begins with a review of all relevant scientific evidence and then iteratively applies techniques that increase agreement among members of an expert panel representing key stakeholder groups. A summary of available published and unpublished information on each candidate test, including information relevant to each test selection criterion, was compiled into a database by the MATRICS staff (see the Conference 3 database at www.matrics.ucla.edu).
Using this database, an expert panel then evaluated and rated the extent to which each of the 36 candidate tests met each of the five selection criteria. The panel included experts on cognitive deficits in schizophrenia, clinical neuropsychology, clinical trials methodology, cognitive science, neuropharmacology, clinical psychiatry, biostatistics, and psychometrics. These preconference ratings were then examined to identify any that reflected a lack of consensus. Twenty of the 180 ratings indicated a notable lack of consensus, so the expert panel discussed each of these and completed the ratings again. Dispersion decreased, and the median values for nine ratings changed. The median values of all final ratings are presented in Table 1, grouped by cognitive domain.

Selection of beta battery

The Neurocognition Committee used the expert panel ratings to select the beta version of the battery for the MATRICS Psychometric and Standardization Study. The goal was to select two to four measures per domain. The resulting beta version of the MCCB included 20 tests (see Table 2; for more details, see the online data supplement and http://www.matrics.ucla.edu/matrics-psychometrics-frame.htm).

From Beta Battery to Final Battery

Summary of Methods

The MATRICS Psychometric and Standardization Study was conducted to directly compare the tests’ psychometric properties, practicality, and tolerability to allow the best representative(s) of each domain to be selected for the final battery. Details of the study’s methods are provided in the online data supplement.

Sites and Participants

The study sites had extensive experience with schizophrenia clinical trials and expertise in neuropsychological assessment: University of California, Los Angeles; Duke University, Durham, N.C.; Maryland Psychiatric Research Center, University of Maryland, Baltimore; Massachusetts Mental Health Center, Boston; and University of Kansas, Wichita. Each site contributed at least 30 participants with schizophrenia or schizoaffective disorder, depressed type, who were tested twice, 4 weeks apart.

Study Design and Assessments

Potential participants received a complete description of the study and then provided written informed consent, as approved by the institutional review boards of all study sites and the coordinating site. Next, the Structured Clinical Interview for DSM-IV (24) was administered to each potential participant. If entry criteria were met, baseline assessments were scheduled. Participants were asked to return 4 weeks later for a retest.
In addition to the 20 cognitive performance tests, data collected included information about clinical symptoms (from the Brief Psychiatric Rating Scale [BPRS; 25, 26]), self-report measures of community functioning (from the Birchwood Social Functioning Scale [27] supplemented with the work and school items from the Social Adjustment Scale [28]), measures of functional capacity, and interview-based measures of cognition (8, 29, 30). See the online data supplement for descriptions of alternate cognitive test forms and staff training for neurocognitive assessments, symptom ratings, and community functioning measures.

Results

Participants

Across the five study sites, 176 patients were assessed at baseline, and 167 were assessed again at the 4-week follow-up (a 95% retention rate). Participants’ mean age was 44.0 years (SD=11.2), and their mean educational level was 12.4 years (SD=2.4). Three-quarters (76%) of the participants were male. The overall ethnic/racial distribution of the sample was 59% white (N=104), 29% African American (N=51), 6% Hispanic/Latino (N=11), 1% Asian or Pacific Islander (N=2), <1% Native American or Alaskan (N=1), and 4% other (N=7).
Based on the diagnostic interviews, 86% of participants received a diagnosis of schizophrenia and 14% a diagnosis of schizoaffective disorder, depressed type. At assessment, 83% were taking a second-generation antipsychotic, 13% a first-generation antipsychotic, and 1% other psychoactive medications only; current medication type was unknown for 3%. Almost all participants were outpatients, but at one site patients in a residential rehabilitation facility predominated.
As expected for clinically stable patients, symptom levels were low. At the initial assessment, the mean BPRS thinking disturbance factor score was 2.6 (SD=1.3), and the mean BPRS withdrawal-retardation factor score was 2.0 (SD=0.9). Ratings were similar at the 4-week follow-up: the mean thinking disturbance score was 2.4 (SD=1.2), and the mean withdrawal-retardation score was 2.0 (SD=0.8).

Dimensions of Community Functional Status

A principal-components analysis with the seven domain scores from the Social Functioning Scale and a summary measure of work or school functioning from the Social Adjustment Scale yielded a three-factor solution (social functioning, independent living, and work functioning; see supplementary Table 1 in the online data supplement) that explained 59% of the variance and was consistent with previous findings (31, 32). Factor scores from these three outcome domains, as well as a summary score across domains, were used as dependent measures for functional outcome.
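The dimension reduction described above can be sketched in code. This is a minimal illustration, not the study's analysis: the score matrix is random stand-in data (176 cases by 8 variables, matching the sample size and the 7 Social Functioning Scale domains plus the Social Adjustment Scale work/school summary), and three components are retained simply because the text reports a three-factor solution.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 8 functional-status variables (176 cases)
scores = rng.normal(size=(176, 8))

# Principal components via SVD of the standardized score matrix
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
u, s, vt = np.linalg.svd(z, full_matrices=False)
var_explained = s**2 / (s**2).sum()   # proportion of variance per component
factors = z @ vt[:3].T                # scores on the first three components,
                                      # usable as outcome measures
print(factors.shape, round(var_explained[:3].sum(), 2))
```

In the actual analysis the retained components were interpreted as social functioning, independent living, and work functioning and used (with a cross-domain summary) as dependent measures.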

Site Effects

Cognitive performance was generally consistent across sites; only four of the 20 analyses of variance (ANOVAs) showed a significant site effect, and the differences were relatively small. In contrast, there were clear site differences in community functioning. ANOVAs revealed significant differences in social outcome (F=3.36, df=4, 170, p<0.02) and independent living (F=4.18, df=4, 170, p<0.01). Work outcome showed a similar tendency (F=2.35, df=4, 170, p=0.06), and three of the pairwise comparisons were significant.
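As a hedged sketch of the site-effect analyses, a one-way ANOVA across five sites might be run as below. The site means and per-site n are invented for illustration; only the degrees-of-freedom structure (df=4, 170, i.e., 5 groups, 175 cases) mirrors the study's analyses.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# Hypothetical functional-status factor scores at 5 sites (35 participants each)
sites = [rng.normal(loc=m, scale=1.0, size=35)
         for m in (0.0, 0.4, -0.3, 0.6, -0.5)]

f, p = f_oneway(*sites)  # one-way ANOVA for a site effect
print(f"F={f:.2f}, df=4, {sum(len(s) for s in sites) - 5}, p={p:.3f}")
```

With real data, a significant F would be followed by pairwise comparisons, as reported for the work outcome above.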

Test-Retest Reliability

At MATRICS consensus meetings, high test-retest reliability was considered the most important test feature in a clinical trial. Test-retest reliability data are summarized in Table 2. We considered both Pearson’s r and the intraclass correlation coefficient, which takes into account changes in mean level (for Pearson’s r values, see supplementary Table 2 in the online data supplement). Alternate forms were used for five of the tests. Test-retest reliabilities were generally good. The committee considered an r value of 0.70 to be acceptable test-retest reliability for clinical trials. Most of the tests achieved at least that level.
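The distinction between Pearson's r and an intraclass correlation that accounts for mean-level change can be sketched as follows. ICC(2,1) with absolute agreement is used here as one common choice (the article does not specify the ICC variant), and the simulated baseline/retest scores with a small practice effect are illustrative, not study data.

```python
import numpy as np

def icc_absolute(t1, t2):
    """ICC(2,1), absolute agreement: unlike Pearson's r, it is lowered by a
    systematic mean shift (e.g., a practice effect) between test and retest."""
    x = np.column_stack([t1, t2]).astype(float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # occasions
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)
baseline = rng.normal(50, 10, size=167)
retest = baseline + rng.normal(2, 5, size=167)   # small practice effect + noise

r = np.corrcoef(baseline, retest)[0, 1]
icc = icc_absolute(baseline, retest)
print(round(r, 2), round(icc, 2))
```

Because the retest scores carry a systematic upward shift, the ICC comes out somewhat below Pearson's r, which is insensitive to mean-level change.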

Utility as a Repeated Measure

Tests were considered useful for clinical trials if they showed relatively small practice effects or, if they had notable practice effects, scores did not approach ceiling performance. We considered performance levels at baseline and 4-week follow-up, as well as change scores, magnitude of change, and the number of test administrations with scores at ceiling or floor. Practice effects were generally quite small (Table 3), but several were statistically significant. Some tests in the speed of processing and the reasoning and problem-solving domains showed small practice effects (roughly one-fifth of a standard deviation), but even so, there were no noticeable ceiling effects or constrictions of variance at the second testing.

Relationship to Self-Reported Functional Outcome

As mentioned above, the sites differed substantially in participants’ functional status. At one site, only one participant was working, and another site largely involved patients in a residential rehabilitation program. As a result, correlations between cognitive measures and functional outcome showed considerable variation from site to site. Statistical analyses indicated that the heterogeneity of correlation magnitudes across sites was somewhat greater than would be expected by chance, particularly for the independent living factor. Pooled correlations weighted by sample size, however, were very similar to the overall correlations across sites. Given this variability in the strength of relationships, the MATRICS Neurocognition Committee examined the correlations both by combining participants across sites and by looking at the five sites separately and considering the median correlation among sites. Both methods may underestimate the correlations that might be achieved without such site variations (e.g., nine of the tests had correlations >0.40 with global outcome at one or more sites). Furthermore, the restriction of the functional outcome measures to self-report data may have reduced the correlation magnitudes. However, the data did allow direct comparisons among the tests within the same sample, which was the primary goal (Table 4). The correlations tended to be larger for work outcome and smaller for social outcome, consistent with other studies (33, 34).
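One standard way to compute a sample-size-weighted pooled correlation across sites is to average Fisher z-transformed correlations with weights n−3 and back-transform; the article does not state which weighting scheme was used, so this is offered only as a plausible sketch. The site correlations below echo the composite-score values reported in the Discussion; the per-site ns are invented.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Pool per-site correlations via Fisher's z transform,
    weighting each site by n - 3 (the inverse variance of z)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)          # Fisher z transform
    w = ns - 3
    return np.tanh((w * z).sum() / w.sum())

# Illustrative site correlations between cognition and global outcome
site_r = [0.36, 0.38, 0.44, 0.03, 0.11]
site_n = [35, 34, 36, 30, 32]   # hypothetical per-site sample sizes
print(round(pooled_correlation(site_r, site_n), 2))
```

The pooled value lands between the strong and weak sites, which is why pooling across heterogeneous sites can understate the correlation achievable at sites without restricted functional variance.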

Practicality and Tolerability

Practicality refers to the test administrator’s perspective and includes test setup, staff training, administration, and scoring. Practicality was rated on a 7-point Likert scale in three categories (setup, administration, and scoring) and with a global score; each tester made these ratings after data collection for the entire sample was complete. Tolerability refers to the participant’s view of a test. It can be influenced by the length of the test and by any feature that makes completing it more or less pleasant, such as unusual difficulty or excessive repetitiveness. Immediately after each test, participants pointed to a number on a 7-point Likert scale (illustrated by unhappy to happy drawings of faces; 1=extremely unpleasant, 7=extremely pleasant) to indicate how unpleasant or pleasant they found the test.
Table 5 presents the practicality and tolerability results as well as the mean time it took to administer each test. Despite some variability, most tests were considered to be both practical and tolerable. This result likely reflects the efforts to take these factors into consideration in the earlier stages of test selection.

Selection of the Final Battery

The MATRICS Neurocognition Committee used the data in Tables 2–5 to select the tests that make up the final MCCB. Two site principal investigators of the Psychometric and Standardization Study who were not already part of the MATRICS Neurocognition Committee (Drs. Baade and Seidman) were added to the decision-making group to maximize input from experts in neurocognitive measurement. After a discussion of results in a given cognitive domain, committee members independently ranked each candidate test through an e-mail ballot. Members who had a conflict of interest with any test within a domain recused themselves from the vote on that domain. The 10 tests in the final battery are presented in Table 6 in the recommended order of administration. Based on the time the individual tests took to administer during the MATRICS Psychometric and Standardization Study, total testing time (without considering rest breaks) is estimated to be about 65 minutes. Training to administer these 10 tests should take no more than 1 day, including didactic instruction and hands-on practice. Below, we briefly summarize the reasons that these 10 tests were selected.

Speed of processing

The committee had planned to include two types of measures in this category: a verbal fluency measure and a graphomotor speed measure. The measures were psychometrically comparable in most respects. The Brief Assessment of Cognition in Schizophrenia symbol coding subtest was selected because it showed a smaller practice effect than the WAIS-III digit symbol coding subtest. Given the brief administration time and high tolerability of these measures, the committee decided to include an additional graphomotor measure with a different format (the Trail Making Test, Part A), for a total of three tests (the Trail Making Test, Part A; the Brief Assessment of Cognition in Schizophrenia symbol coding subtest; and the category fluency test).

Attention/vigilance

The Continuous Performance Test—Identical Pairs Version was selected for its high test-retest reliability and the absence of a ceiling effect.

Working memory

The spatial span subtest of the Wechsler Memory Scale, 3rd ed., was selected for nonverbal working memory because of its practicality, its brief administration time, and the absence of a practice effect. For verbal working memory, the Letter-Number Span test was selected because of its high reliability and its somewhat stronger relationship to global functional status.

Verbal learning

The committee considered the verbal learning tests to be psychometrically comparable. The Hopkins Verbal Learning Test—Revised was selected because of the availability of six forms, which may be helpful for clinical trials with several testing occasions.

Visual learning

The Brief Visuospatial Memory Test—Revised was selected because it had higher test-retest reliability, a brief administration time, and the availability of six forms.

Reasoning and problem solving

The Neuropsychological Assessment Battery mazes subtest was selected for its high test-retest reliability, small practice effect, and high practicality ratings.

Social cognition

The managing emotions component of the Mayer-Salovey-Caruso Emotional Intelligence Test was selected for its relatively stronger relationship to functional status.
With selection of the final battery, it became possible to calculate test-retest reliabilities for cognitive domain scores that involve multiple tests and for an overall composite score for all 10 tests. Test scores were transformed to z-scores using the schizophrenia sample and averaged to examine the reliability of the composite scores. Four-week test-retest intraclass correlation coefficients were 0.71 for the speed of processing domain score, 0.85 for the working memory domain score, and 0.90 for the overall composite score.
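The composite construction described above can be sketched as follows: each test is z-scored against the patient sample and the z-scores are averaged, and, as the reliabilities above illustrate, a composite of several tests is more reliable than its individual components because independent measurement error averages out. All numbers below are simulated, not study data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_tests = 167, 10
trait = rng.normal(size=(n_sub, 1))                       # stable ability
noise = lambda: rng.normal(scale=0.6, size=(n_sub, n_tests))
t1, t2 = trait + noise(), trait + noise()                 # baseline, 4-week retest

def zscore(x):
    # Standardize each test against the (simulated) patient sample,
    # analogous to z-scoring against the schizophrenia sample for the MCCB
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

composite_r = np.corrcoef(zscore(t1).mean(axis=1),
                          zscore(t2).mean(axis=1))[0, 1]
single_rs = [np.corrcoef(t1[:, j], t2[:, j])[0, 1] for j in range(n_tests)]
print(round(float(np.median(single_rs)), 2), round(float(composite_r), 2))
```

In this simulation the 10-test composite's test-retest correlation exceeds that of any single test, consistent with the pattern reported above (0.90 for the overall composite versus lower values for individual domains).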

Discussion

Discussions among representatives of the FDA, NIMH, and MATRICS indicated that the absence of a consensus cognitive battery had been an overriding obstacle to regulatory approval of any drug as a treatment for the core cognitive deficits of schizophrenia. The steps and data presented here moved the process from an initial nomination list of more than 90 cognitive tests to the selection of the final 10 tests. The process involved the participation of a large number of scientists in diverse fields to achieve a consensus of experts. To ensure a fair and effective process for selecting the best brief cognitive measures for clinical trials, the methods included both a structured consensus process to evaluate existing data and substantial new data collection.
One clear point to emerge from the MATRICS consensus meetings was the importance of reliable and valid assessment of cognitive functioning at the level of key cognitive domains (1, 4). Although some interventions may improve cognitive functioning generally, evidence from cognitive neuroscience and neuropharmacology suggests that many psychopharmacological agents may differentially target a subset of cognitive domains (35, 36). A battery assessing cognitive performance at the domain level requires more administration time than one that seeks to measure only global cognitive change. One challenge was to select a battery that adequately assessed cognitive domains and was still practical for large-scale multisite trials. The beta battery was twice as long as the final battery yet was well tolerated by participants, with 95% returning as requested for retesting. Furthermore, no participant who started a battery failed to complete it. The final battery offers reliable and valid measures in each of the seven cognitive domains and should be even easier to complete. If a clinical trial involves an intervention hypothesized to differentially affect specific cognitive domains, our experience with the beta battery suggests that the MCCB could be supplemented with additional tests to assess the targeted domains without substantial data loss through attrition.
The psychometric study sites varied substantially in the level and variance of their participants’ functional status. In contrast, cognitive performance showed few significant site effects. Local factors may influence functional status in ways that are not attributable to cognitive abilities. Thus, cognitive abilities establish a potential for everyday functional level, while environmental factors (e.g., the local job market, employment placement aid, and housing availability) may influence whether variations in cognitive abilities manifest themselves in differing community functioning. In this instance, the sites varied in the extent to which treatment programs and local opportunities encouraged independent living and return to work, and this variation contributed to differences across sites in the magnitude of correlations between individual measures and functional outcome. To further examine variability across sites, we considered the correlation for each site between the overall composite cognitive score and global functional status. Three sites showed clear relationships (with r values of 0.36, 0.38, and 0.44) and two did not (r values of 0.03 and 0.11). Calculating the correlations across sites probably led to a low estimate of the true magnitude of this correlation in people with schizophrenia. Nevertheless, the correlations allowed a reasonable direct comparison of the tests.
Another factor in the magnitude of correlations observed between cognitive performance and functional outcome may have been the restriction of community functioning measures to self-report data, which may include biases. Use of informants to broaden the information base or use of direct measures of functional abilities in the clinic may yield stronger relationships to cognitive performance (8).
The final report of the MATRICS initiative identified the components of the MCCB and recommended its use as the standard cognitive performance battery for clinical trials of potential cognition-enhancing interventions. This recommendation was unanimously endorsed by the NIMH Mental Health Advisory Council in April 2005, and it was accepted by the FDA’s Division of Neuropharmacological Drug Products. To facilitate ease of use and distribution, the tests in the MCCB were placed into a convenient kit form by a nonprofit company, MATRICS Assessment, Inc. (37). The MCCB is distributed by Psychological Assessment Resources, Inc., Multi-Health Systems, Inc., and Harcourt Assessment, Inc.
The primary use of the MCCB is expected to be in clinical trials of potential cognition-enhancing drugs for schizophrenia and related disorders. Use of the MCCB in trials of cognitive remediation would facilitate comparison of results across studies. Another helpful application would be as a reference battery in basic studies of cognitive processes in schizophrenia, as it may aid evaluation of sample variations across studies.
The last step in the development of the MCCB was a five-site community standardization study to establish co-norms for the tests, examine demographic correlates of performance, and allow demographically corrected standardized scores to be generated. This stage is described in the next article (7) .

Footnotes

Received Jan. 7, 2007; revisions received July 7 and Sept. 5, 2007; accepted Sept. 11, 2007 (doi: 10.1176/appi.ajp.2007.07010042). From the Department of Psychology and the Semel Institute for Neuroscience and Human Behavior, Geffen School of Medicine at UCLA, Los Angeles; VA Greater Los Angeles Healthcare System, Los Angeles; University of Kansas School of Medicine, Wichita; Departments of Psychology and Psychiatry, Washington University, St. Louis; Department of Psychology, Princeton University, Princeton, N.J.; Department of Psychiatry, University of Pittsburgh, Pittsburgh; Department of Psychiatry, Mount Sinai School of Medicine, New York; Mental Illness Research, Education, and Clinical Center, James J. Peters VA Medical Center, Bronx, New York; NIMH, Bethesda, Md.; Department of Psychiatry, Northeastern Ohio Universities College of Medicine, Rootstown, Ohio; Maryland Psychiatric Research Center, University of Maryland, Baltimore; Psychiatry Research Division, Zucker Hillside Hospital, Glen Oaks, New York; Department of Psychiatry, University of California, San Diego; Department of Psychiatry and Behavioral Sciences, Duke University, Durham, N.C.; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, Calif.; Massachusetts Mental Health Center, Division of Public Psychiatry, Beth Israel Deaconess Medical Center, Boston; and Department of Psychiatry and Massachusetts General Hospital, Harvard Medical School, Boston. Address correspondence and reprint requests to Dr. Nuechterlein, UCLA Semel Institute for Neuroscience and Human Behavior, 300 Medical Plaza, Rm. 2251, Los Angeles, CA 90095-6968; [email protected] (e-mail).
Drs. Nuechterlein and Green have leadership positions but no financial interest in MATRICS Assessment, Inc., a nonprofit company formed after selection of the final MATRICS Consensus Cognitive Battery to allow its publication. Dr. Kern receives some financial support from MATRICS Assessment, Inc. Drs. Keefe, Gold, and Goldberg were developers of, and receive royalties from, the Brief Assessment of Cognition in Schizophrenia tests. Drs. Gold and Nuechterlein were developers of, but have no financial interest in, the Letter-Number Span test and the 3–7 Continuous Performance Test, respectively. Dr. Nuechterlein has received research funding from Janssen LLP. Dr. Green has consulted for Lundbeck, Abbott, Astellas, Bristol-Myers Squibb, Eli Lilly, Pfizer, Memory Pharmaceuticals, Otsuka, Roche, Sanofi-Aventis, and Solvay. Dr. Keefe reports having received research funding, consulting fees, advisory board payments, or lecture or educational fees from Abbott, Acadia, AstraZeneca, Bristol-Myers Squibb, Cephalon, Dainippon Sumitomo Pharma, Eli Lilly, Forest Labs, Gabriel Pharmaceuticals, GlaxoSmithKline, Johnson & Johnson, Lundbeck, Memory Pharmaceuticals, Merck, NIMH, Orexigen, Otsuka, Pfizer, Repligen, Saegis, Sanofi-Aventis, and Xenoport. Dr. Marder has served as a consultant for Bristol-Myers Squibb, Otsuka, Pfizer, Merck, Roche, Solvay, and Wyeth. All other authors report no competing interests.
Dr. Fenton died in September 2006.
Funding for the MATRICS Initiative was provided through NIMH contract N01MH22006 to the University of California, Los Angeles (Dr. Marder, principal investigator; Dr. Green, co-principal investigator; Dr. Fenton, project officer). Funding for this study came from an option (Dr. Green, principal investigator; Dr. Nuechterlein, co-principal investigator) to the NIMH MATRICS Initiative.
The authors thank Dr. Jim Mintz and the UCLA Semel Institute Biostatistical Unit for developing the data entry systems and Bi-Hong Deng for her help in data management and preparation of the tables. The following staff members at the UCLA-VA site coordinated the training and quality assurance procedures: Karen Cornelius, Psy.D., Kimmy Kee, Ph.D., Mark McGee, Ayala Ofek, and Mark Sergi, Ph.D.

Supplementary Material

File (ajp_165_2_203_01.pdf)

References

1. Green MF, Nuechterlein KH, Gold JM, Barch DM, Cohen J, Essock S, Fenton WS, Frese F, Goldberg TE, Heaton RK, Keefe RSE, Kern RS, Kraemer H, Stover E, Weinberger DR, Zalcman S, Marder SR: Approaching a consensus cognitive battery for clinical trials in schizophrenia: the NIMH-MATRICS conference to select cognitive domains and test criteria. Biol Psychiatry 2004; 56:301–307
2. Marder SR, Fenton WS: Measurement and treatment research to improve cognition in schizophrenia: NIMH MATRICS initiative to support the development of agents for improving cognition in schizophrenia. Schizophr Res 2004; 72:5–10
3. Kern RS, Green MF, Nuechterlein KH, Deng BH: NIMH-MATRICS survey on assessment of neurocognition in schizophrenia. Schizophr Res 2004; 72:11–19
4. Nuechterlein KH, Barch DM, Gold JM, Goldberg TE, Green MF, Heaton RK: Identification of separable cognitive factors in schizophrenia. Schizophr Res 2004; 72:29–39
5. Sergi MJ, Rassovsky Y, Nuechterlein KH, Green MF: Social perception as a mediator of the influence of early visual processing on functional status in schizophrenia. Am J Psychiatry 2006; 163:448–454
6. Brekke J, Kay DD, Lee KS, Green MF: Biosocial pathways to functional outcome in schizophrenia. Schizophr Res 2005; 80:213–225
7. Kern RS, Nuechterlein KH, Green MF, Baade LE, Fenton WS, Gold JM, Keefe RSE, Mesholam-Gately R, Mintz J, Seidman LJ, Stover E, Marder SR: The MATRICS Consensus Cognitive Battery, part 2: co-norming and standardization. Am J Psychiatry (published online January 2, 2008)
8. Green MF, Nuechterlein KH, Kern RS, Baade LE, Fenton WS, Gold JM, Keefe RSE, Mesholam-Gately R, Seidman LJ, Stover E, Marder SR: Functional co-primary measures for clinical trials in schizophrenia: results from the MATRICS Psychometric and Standardization Study. Am J Psychiatry (published online January 2, 2008)
9. Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, van het Loo M, McDonnell J, Vader JP, Kahan JP: The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, Calif, RAND, 2001
10. Young AS, Forquer SL, Tran A, Starzynski M, Shatkin J: Identifying clinical competencies that support rehabilitation and empowerment in individuals with severe mental illness. J Behav Health Serv Res 2000; 27:321–333
11. Spreen O, Strauss E: A Compendium of Neuropsychological Tests. New York, Oxford University Press, 1991
12. Army Individual Test Battery: Manual of Directions and Scoring. Washington, DC, Adjutant General’s Office, War Department, 1944
13. Wechsler D: Wechsler Adult Intelligence Scale—3rd ed (WAIS-III): Administration and Scoring Manual. San Antonio, Tex, Psychological Corp, 1997
14. Keefe RSE: Brief Assessment of Cognition in Schizophrenia (BACS) Manual—A: Version 2.1. Durham, NC, Duke University Medical Center, 1999
15. Nuechterlein KH, Edell WS, Norris M, Dawson ME: Attentional vulnerability indicators, thought disorder, and negative symptoms. Schizophr Bull 1986; 12:408–426
16. Cornblatt BA, Risch NJ, Faris G, Friedman D, Erlenmeyer-Kimling L: The Continuous Performance Test, Identical Pairs version (CPT-IP), I: new findings about sustained attention in normal families. Psychiatry Res 1988; 26:223–238
17. Gold JM, Carpenter C, Randolph C, Goldberg TE, Weinberger DR: Auditory working memory and Wisconsin Card Sorting Test performance in schizophrenia. Arch Gen Psychiatry 1997; 54:159–165
18. Wechsler D: The Wechsler Memory Scale, 3rd ed. San Antonio, Tex, Psychological Corp (Harcourt), 1997
19. Hershey T, Craft S, Glauser TA, Hale S: Short-term and long-term memory in early temporal lobe dysfunction. Neuropsychology 1998; 12:52–64
20. White T, Stern RA: Neuropsychological Assessment Battery: Psychometric and Technical Manual. Lutz, Fla, Psychological Assessment Resources, Inc, 2003
21. Brandt J, Benedict RHB: The Hopkins Verbal Learning Test—Revised: Professional Manual. Odessa, Fla, Psychological Assessment Resources, Inc, 2001
22. Benedict RHB: Brief Visuospatial Memory Test—Revised: Professional Manual. Odessa, Fla, Psychological Assessment Resources, Inc, 1997
23. Mayer JD, Salovey P, Caruso DR: Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) User’s Manual. Toronto, MHS Publishers, 2002
24. First MB, Spitzer RL, Gibbon M, Williams JBW: Structured Clinical Interview for DSM-IV Axis I Disorders, Patient Edition (SCID-P), version 2. New York, New York State Psychiatric Institute, Biometrics Research, 1997
25. Ventura J, Lukoff D, Nuechterlein KH, Liberman RP, Green MF, Shaner A: Brief Psychiatric Rating Scale (BPRS), expanded version (4.0): scales, anchor points, and administration manual. Int J Methods Psychiatr Res 1993; 3:227–243
26. Overall JE, Gorham DR: The Brief Psychiatric Rating Scale. Psychol Rep 1962; 10:799–812
27. Birchwood M, Smith J, Cochran R, Wetton S, Copestake S: The social functioning scale: the development and validation of a new scale of social adjustment for use in family intervention programs with schizophrenic patients. Br J Psychiatry 1990; 157:853–859
28. Weissman M, Paykel E: The Depressed Woman: A Study of Social Relationships. Chicago, University of Chicago Press, 1974
29. Green MF, Kern RS, Heaton RK: Longitudinal studies of cognition and functional outcome in schizophrenia: implications for MATRICS. Schizophr Res 2004; 72:41–51
30. McKibbin CL, Brekke JS, Sires D, Jeste DV, Patterson TL: Direct assessment of functional abilities: relevance to persons with schizophrenia. Schizophr Res 2004; 72:53–67
31. Brekke JS, Long JD: Community-based psychosocial rehabilitation and prospective change in functional, clinical, and subjective experience variables in schizophrenia. Schizophr Bull 2000; 26:667–680
32. Brekke JS, Long JD, Nesbitt N, Sobel E: The impact of service characteristics from community support programs for persons with schizophrenia: a growth curve analysis. J Consult Clin Psychol 1997; 65:464–475
33. Kee KS, Green MF, Mintz J, Brekke JS: Is emotional processing a predictor of functional outcome in schizophrenia? Schizophr Bull 2003; 29:487–497
34. Gold JM, Goldberg RW, McNary SW, Dixon LB, Lehman AF: Cognitive correlates of job tenure among patients with severe mental illness. Am J Psychiatry 2002; 159:1395–1402
35. Geyer MA, Tamminga CA: Measurement and treatment research to improve cognition in schizophrenia: neuropharmacological aspects. Psychopharmacology (Berl) 2004; 174:1–2
36. Gazzaniga MS (ed): The Cognitive Neurosciences III, 3rd ed. Cambridge, Mass, MIT Press, 2004
37. Nuechterlein KH, Green MF: MATRICS Consensus Cognitive Battery. Los Angeles, MATRICS Assessment, Inc, 2006

Information & Authors

Information

Published In

American Journal of Psychiatry
Pages: 203 - 213
PubMed: 18172019

History

Published online: 1 February 2008
Published in print: February, 2008

Authors

Keith H. Nuechterlein, Ph.D.
Michael F. Green, Ph.D.
Deanna M. Barch, Ph.D.
Jonathan D. Cohen, M.D., Ph.D.
Frederick J. Frese, III, Ph.D.
Robert K. Heaton, Ph.D.
Richard S.E. Keefe, Ph.D.
Raquelle Mesholam-Gately, Ph.D.
Larry J. Seidman, Ph.D.
Daniel R. Weinberger, M.D.
Alexander S. Young, M.D., M.S.H.S.
Stephen R. Marder, M.D.
