To the Editor: We are grateful to our colleagues for their letters. Overall, they raised seven substantial points, all of which had already been discussed in considerable detail during the planning of the review, the design of its protocol, and the interpretation of the results.
Chronis-Tuscano et al. (1) raise a number of interesting and important points in their letter, specifically in relation to the analysis of behavioral interventions, although the issues raised are also relevant for the other intervention domains covered in the review.

Were we justified in restricting trials in the meta-analysis to a carefully selected subset of the wider literature?

Chronis-Tuscano et al. highlight the fact that the “behavioral intervention studies included in this meta-analysis constitute only a small fraction of the larger treatment literature.” It is unclear whether this represents a comment on the restrictive nature of our protocol or the way it was implemented. With regard to the latter, given our exhaustive search and the rigor of our study selection procedures, we are confident that we have reviewed all the relevant literature in each domain (see the data supplement that accompanies the online edition of our article [2]). With regard to the protocol itself, we chose to include only trials that were directly relevant to ADHD and were undertaken using a randomized controlled design. To our surprise, only a small proportion of the trials constituting “more than 40 years of solid evidence for the efficacy of behavioral interventions” employed such a design, which is regarded by many as a minimum design requirement for inclusion in meta-analyses of treatment efficacy. Clearly we agree with Chronis-Tuscano et al. that the field of behavioral interventions for ADHD needs far more rigorously designed trials.

What is efficacy in relation to ADHD treatments?

Claims made for the efficacy of interventions for ADHD are often expressed in rather general terms. Such claims rarely specify which outcome they are referring to, and it is quite reasonable that many assume that such claims are being made in relation to the treatment of ADHD per se and not some more general feature of child psychopathology (e.g., functional impairment). It was to start to clarify this question—efficacy in relation to what?—that we undertook what we agree were necessarily highly circumscribed analyses. In this regard, Chronis-Tuscano et al. are incorrect when they write that we “concluded that limited evidence is available for behavioral interventions.” Our conclusions, for all domains, were drawn very specifically in relation to core ADHD symptoms. We agree, of course, that other outcomes can be equally important and that these may be improved by behavioral interventions—we explicitly state this in the discussion: “Third, although not effective for ADHD symptoms themselves, behavioral interventions may result in other positive effects (e.g., by reducing oppositional behavior)” (2). We could also have referred to the parent-child relationship as an example here, as mentioned by the authors. It would indeed be surprising if parent training, for instance, did not reduce negative parenting and increase positive parenting. What is less clear is whether this will have secondary effects on the behavior and functioning of children with ADHD. We are currently undertaking secondary analyses in relation to additional outcomes of behavioral interventions, including child functional impairment. Despite the claims for the importance of functional impairment as a driver of referral and an outcome in clinical practice, we have found that validated measures of this construct are rarely included as outcome measures in trials (only one study that met our entry criteria for the meta-analysis included a standard measure of functional impairment, and even then impairment scores were reported only at baseline). In summary, our hope is that our analysis, circumscribed as it is, has started to bring some clarity to the question of whether current evidence supports the use of behavioral interventions for the treatment of core ADHD symptoms.

What is a behavioral intervention?

Chronis-Tuscano et al. (1) were critical of our decision to draw the boundary broadly around the construct of behavioral interventions on the grounds that child-focused behavioral treatments for ADHD are known not to have efficacy (3). Doing this, the authors claim, “may grossly undermine the estimated effects of behavioral interventions.” Our justification for this approach is that we were interested in looking at those interventions employing behavioral or social learning principles rather than focusing on predefined specific types of interventions that previous analyses had shown to be efficacious. In practice, this decision had little effect on our reported effect size estimates. Of the 15 behavioral intervention trials included in our meta-analysis, only one exclusively used a child-focused approach of this sort. Excluding this trial in a sensitivity analysis did not change the results (most proximal outcome standardized mean difference=0.41, 95% confidence interval [CI]=0.23–0.61; probably blinded outcome standardized mean difference=0.01, 95% CI=−0.28 to 0.33).
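For readers less familiar with the metric behind these figures, the display below is a minimal sketch of how a standardized mean difference and its random-effects pooled estimate are conventionally computed; the notation is ours and is offered for orientation only, not as a description of the exact model reported in the original article (2).

\[
d_i \;=\; \frac{\bar{x}_{T,i}-\bar{x}_{C,i}}{s_i},
\qquad
s_i \;=\; \sqrt{\frac{(n_{T,i}-1)\,s_{T,i}^{2}+(n_{C,i}-1)\,s_{C,i}^{2}}{n_{T,i}+n_{C,i}-2}}
\]

\[
\hat{\theta} \;=\; \frac{\sum_i w_i\,d_i}{\sum_i w_i},
\qquad
w_i \;=\; \frac{1}{v_i+\hat{\tau}^{2}},
\qquad
\text{95\% CI} \;=\; \hat{\theta}\pm 1.96\Big(\textstyle\sum_i w_i\Big)^{-1/2}
\]

Here \(v_i\) is the sampling variance of the trial-level effect \(d_i\) and \(\hat{\tau}^{2}\) the estimated between-trial variance. A leave-one-out sensitivity analysis of the kind reported above simply recomputes \(\hat{\theta}\) and its confidence interval with the excluded trial omitted from the sums.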

Can blinded assessment be disentangled from the effect of setting?

We agree with the letter writers that a major challenge facing the field is the development of valid assessment approaches that are blind to treatment allocation, providing an unbiased measure of clinical improvement. Parents cannot provide these when interventions are implemented at home, especially for interventions with major parental involvement. Including a meta-analysis of assessments that are probably blinded represented a step in the right direction, but it is not the ultimate solution to the problem—an analytical sticking plaster over a deep methodological wound. One complication was that three of our seven probably blinded measures in the behavioral interventions meta-analysis used teacher ratings. Relying on such measures does not allow us to disentangle the effects of treatment setting from assessment blinding. It may be that behavioral interventions have the “largest effects in the setting in which they are implemented,” as suggested in the letter (1). However, this claim itself is largely based on the comparison of setting-confounded teacher and parent data and is challenged by a closer look at our results. In the discussion we wrote, “Another possibility is that parents’ unblinded most proximal assessments accurately captured treatment effects established in the therapeutic setting but that these effects did not generalize to the settings in which probably blinded assessments were made. If so, we would expect the four behavioral intervention trials that had blind assessments made by independent trained observers within the home-based therapeutic setting to show significant treatment effects. This was not the case” (2).
Having carefully considered the points raised by Chronis-Tuscano et al., we feel even more confident in our conclusion that “Better evidence for efficacy from blinded assessments is required for behavioral interventions … before they can be supported as treatments for core ADHD symptoms” (2). At the same time, we continue to accept that a more finely grained analysis of different behavioral intervention approaches, employing rigorous meta-analytic methods and including a broader range of outcomes over and above ADHD symptoms, is still required.
The letter by Arns and Strehl (4) focuses on our neurofeedback meta-analysis, which indicated a trend for probably blinded outcomes that did not reach standard levels of significance. The letter highlights the importance of three general issues (again, relevant across all treatment domains in the meta-analysis) and makes two specific criticisms of the way we implemented our protocol with regard to neurofeedback.

Can an optimized active treatment be a control condition?

Their first criticism relates to our selection of the waiting list, rather than the adaptive cognitive training group, as the control condition for the calculation of effect size in the Steiner et al. study (5); they claim that this breaches the protocol, which calls for the selection of the most stringent control condition. However, in the context of this meta-analysis, this type of cognitive training was not considered a control condition but rather an optimized active ADHD treatment. In the study protocol (which applied to all six domains of treatment), we were most careful to include only studies that had a genuine control condition and to exclude head-to-head comparisons of two optimized active ADHD treatments. For instance, we did not include estimates of treatment effects based on comparisons of nonpharmacological treatments with optimized medication. The situation in the Steiner et al. study is similar, because the authors carefully selected the individual components of their cognitive training condition for reported positive effects on ADHD symptoms (i.e., it was not a genuine control condition, in contrast to other studies that used the same generic training program as a control condition). Furthermore, the Steiner et al. cognitive training condition was also included in our meta-analysis of the effects of cognitive training. Clearly, including the same cognitive training condition as a treatment in one analysis (i.e., cognitive training) and as a control in another (i.e., neurofeedback) would be inconsistent.

Can the effects of medication be accounted for?

Arns and Strehl (4) also raise the possibility that different patterns of medication use in the treatment and comparison groups may have had an impact on the effect size reported in the Steiner et al. study (5). This point is well taken. Two-thirds of the medicated patients in the neurofeedback and cognitive training conditions reduced their medication, but no patients in the control condition did so. This may have led to an underestimation of the effects of treatment on core ADHD symptoms. Our published protocol directly addressed this issue by including an additional analysis of trials with no or low levels of medication. However, there were too few such trials in the neurofeedback domain for this analysis to be performed.

What constitutes neurofeedback?

Arns and Strehl (4) also felt that the Lansbergen et al. study (6) should have been excluded because it used a “nonstandard” neurofeedback approach. This criticism is similar to that raised by Chronis-Tuscano et al. (1) with regard to child-focused interventions in the behavioral intervention analysis. In fact, the Lansbergen et al. study mainly used standard theta and beta frequencies for neurofeedback (theta suppression and enhancement of the sensorimotor rhythm, a low-beta rhythm, for all but one patient). This approach is recommended (7), and the frequency ranges are similar to those used in other studies included in the meta-analysis. Some additional individualization (as also used in this study) is common and is claimed to improve outcomes (8). However, the rapid automatic threshold adaptation employed in the Lansbergen et al. trial was discussed as a possible limitation by the authors themselves (6). Still, many neurofeedback parameters, including threshold adjustments, have not been systematically studied and standardized, even though they may have the potential to contribute to training success or failure. Our study protocol did not introduce revised neurofeedback standards, but we consider it critical that future trials “implement adequately blinded designs that do not compromise the quality of the treatment” (2).
In summary, the proposals by Arns and Strehl (4) to use adaptive cognitive training as the control condition in the Steiner et al. study (5) and to exclude the Lansbergen et al. study (6) for its use of nonstandard elements in its neurofeedback approach would have given a more positive result, as they would eliminate the trials with the smallest effect sizes. However, rather than “strictly adhering” to our protocol, this would have constituted a serious breach of it.
Given the potential impact that our meta-analysis (2) could have on practice, it is vital that our work is held up to the highest level of scrutiny. We are grateful to the letter writers for raising these points so that we could again reflect on and review our decisions and interpretations. In each case raised, we are confident that we made the appropriate decisions during the development of the protocol and the interpretation of the results. However, it is essential that the results of the meta-analysis are interpreted in a circumscribed manner, in keeping with the highly specific question we addressed (i.e., in relation to core ADHD symptoms) and the limitations of the literature we were reviewing. We are certain that both sets of letter writers would echo our conclusion that “Properly powered, randomized controlled trials with blinded, ecologically valid outcome measures are urgently needed, especially in the psychological treatment domain.”

References

1. Chronis-Tuscano A, Chacko A, Barkley R: Key issues relevant to the efficacy of behavioral treatment for ADHD (letter). Am J Psychiatry 2013; 170:799
2. Sonuga-Barke E, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Döpfner M, Dittmann R, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong I, Sergeant J; European ADHD Guidelines Group: Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. Am J Psychiatry 2013; 170:275–289
3. Pelham WE, Fabiano GA: Evidence-based psychosocial treatments for attention-deficit/hyperactivity disorder. J Clin Child Adolesc Psychol 2008; 37:184–214
4. Arns M, Strehl U: Evidence for efficacy of neurofeedback in ADHD? (letter). Am J Psychiatry 2013; 170:799
5. Steiner NJ, Sheldrick RC, Gotthelf D, Perrin EC: Computer-based attention training in the schools for children with attention deficit/hyperactivity disorder: a preliminary trial. Clin Pediatr (Phila) 2011; 50:615–622
6. Lansbergen MM, van Dongen-Boomsma M, Buitelaar JK, Slaats-Willemse D: ADHD and EEG-neurofeedback: a double-blind randomized placebo-controlled feasibility study. J Neural Transm 2011; 118:275–284
7. Moriyama TS, Polanczyk G, Caye A, Banaschewski T, Brandeis D, Rohde LA: Evidence-based information on the clinical use of neurofeedback for ADHD. Neurotherapeutics 2012; 9:588–598
8. Arns M, Drinkenburg W, Leon Kenemans J: The effects of QEEG-informed neurofeedback in ADHD: an open-label pilot study. Appl Psychophysiol Biofeedback 2012; 37:171–180

Information & Authors

Published in: American Journal of Psychiatry, July 2013, pp 800–802 (PubMed: 23820834)
Accepted: April 2013; published online: 1 July 2013; published in print: July 2013

Authors

Edmund Sonuga-Barke, Ph.D.
Daniel Brandeis, Ph.D.
Samuele Cortese, M.D., Ph.D.
Marina Danckaerts, M.D., Ph.D.
Manfred Döpfner, Ph.D.
Maite Ferrin, M.D., Ph.D.
Martin Holtmann, M.D., Ph.D.
Saskia Van der Oord, Ph.D.
European ADHD Guidelines Group

Competing Interests

The authors’ disclosures and affiliations accompany the original article.
