Brief Report
Published Online: 23 August 2023

Fidelity Assessment of Peer-Delivered Cognitive-Behavioral Therapy for Postpartum Depression

Abstract

Objective:

Fidelity assessment of peer-administered interventions (PAIs) by expert therapists can be costly and limit scalability. This study’s objective was to determine whether peer facilitators could assess the fidelity of peer-delivered group cognitive-behavioral therapy (CBT) for postpartum depression as effectively as an expert psychiatrist or a trained graduate student.

Methods:

Intervention adherence and competence were assessed by three peers (N=9 sessions) and by one expert psychiatrist and one graduate student (N=18 sessions). Interrater reliability was assessed with intraclass correlation coefficients (ICCs).

Results:

ICCs were good to excellent (0.88–0.98) for adherence and competence ratings among the three types of raters (psychiatrist vs. peers, psychiatrist vs. student, and student vs. peers).

Conclusions:

Trained peers may be able to reliably rate the fidelity of a PAI for postpartum depression. This preliminary study represents the first step toward peer-led feedback as an alternative to expert-led supervision of peer-delivered group CBT for postpartum depression.

HIGHLIGHTS

Numerous barriers hinder fidelity assessment of peer-administered interventions.
Trained peers were found to reliably rate the fidelity of peer-delivered group cognitive-behavioral therapy for postpartum depression, as demonstrated by good to excellent interrater reliability among the three types of raters.
Shifting fidelity assessment from experts to peer facilitators can serve as the first step toward training peers to provide feedback to other peers, representing an alternative to costly expert-conducted fidelity assessment and supervision.
Peer-administered interventions (PAIs), delivered by nonprofessionals with a history of mental illness (1), have been gaining acceptance (2). PAIs can increase the availability, affordability, and scalability of psychotherapeutic treatments. PAIs are more commonly used in low- and middle-income countries than in high-income ones, but the fidelity of these interventions has rarely been assessed (1, 3).
Treatment fidelity has been defined as the extent to which a treatment is delivered as intended and is composed of two elements: adherence and competence (4, 5). Adherence represents the degree to which the therapeutic techniques used are consistent with the treatment protocol, and competence represents the level of skill and judgment shown by therapists (4, 5). Fidelity assessment can increase confidence that the changes seen during treatment are due to the intervention, inform whether a provider is delivering therapy effectively, and determine whether therapist training needs to be modified (6).
Treatment manuals, validated rating scales, and supervision are recommended to optimize fidelity (7). However, fidelity assessment in clinical settings remains challenging because of the time required to perform it (4); its cost; and the effort required to develop rating systems, to train raters, and to establish rating reliability (8). Although the task-shifting of psychotherapy to peer facilitators could help address treatment gaps, the dearth of professionals available to assess fidelity and to provide psychotherapy supervision remains a barrier to widespread implementation of PAIs (3, 9).
Shifting the task of fidelity assessment and supervision to peers could partially mitigate these bottlenecks (9). However, peer-led supervision is only possible if peers can effectively evaluate intervention fidelity. Prior studies (2, 9) have suggested that peer-led fidelity evaluation is feasible in low-resource settings, but this task-shifting has not been investigated for PAIs for postpartum populations in high-income countries. The objective of this study was to determine whether individuals who have recovered from postpartum depression (i.e., peers) can effectively and reliably assess the fidelity of a peer-delivered online group cognitive-behavioral therapy (CBT) intervention, compared with an expert psychiatrist and with a trained graduate student.

Methods

The study took place from March 28, 2022, to May 30, 2022. Ethical approval was obtained from the Hamilton Integrated Research Ethics Board (approval 3781). Mothers who had recovered from postpartum depression were recruited and trained to deliver a structured 9-week online group CBT intervention to mothers with current postpartum depression (10). The intervention’s weekly 2-hour sessions consisted of teaching and practicing CBT skills, followed by discussing topics relevant to people with postpartum depression (e.g., sleep, support, transitions). For this study, two 9-week groups were held simultaneously and were each delivered by two randomly selected peer facilitators.
One expert psychiatrist (R.J.V.L., who developed the treatment used in this study), one psychiatry graduate student (Z.B.), and three peers who had delivered this intervention previously individually viewed video recordings of the peer-delivered CBT sessions and rated the sessions (10). All study participants signed an online consent form.
Adherence and competence scales were developed by an expert psychologist (P.J.B.) and a psychiatrist (R.J.V.L.). The development of these scales was based on these experts’ experience in developing and delivering group CBT and in providing supervision (11), as well as on fidelity measures developed for another group CBT intervention for depression—Building Recovery by Improving Goals, Habits, and Thoughts (5).
Because each of the intervention’s nine sessions varied in content, the adherence scale was composed of different items for each session and assessed topics such as agenda setting, content delivery, and homework review. Individual items were rated either on a 3-point Likert scale ranging from 0, not covered at all, to 2, adequate coverage, or on a 4-point Likert scale ranging from 0, not covered at all, to 3, thorough coverage, with higher scores indicating greater adherence. The maximum possible adherence score therefore varied by session, ranging from 15 (sessions 4, 6, and 9) to 31 (session 1).
The competence scale was composed of the same seven items for all nine sessions, with individual item scores ranging from 0, low competence, to 6, expert/high competence. Competence was assessed on structure and use of time, genuineness, empathy, collaboration, guided discovery, group participation, and emotional expression elicited. Possible competence scores for each session ranged from 0 to 42, with higher scores indicating greater competence.
The graduate student and peer facilitators were trained by the expert psychiatrist; training consisted of two 3-hour sessions. The first training session familiarized the trainees with the concept of fidelity and with the measures. During the second session, the trainees independently rated two previously recorded 2-hour peer-delivered sessions, and their ratings were compared and discussed.
After the training sessions, recorded group CBT sessions were sent to the expert psychiatrist, the graduate student, and the three peer facilitators weekly. One peer rated all nine sessions of one group; the two other peers rated all nine sessions of the other group. The expert psychiatrist and graduate student rated all 18 sessions. Each rater independently used the adherence and competence scales to rate the sessions and then sent their ratings to the research coordinator. Adherence and competence rating scores were standardized for each session and rater by dividing them by the total possible score for a given session. Interrater reliability was calculated for the expert psychiatrist versus the graduate student, for the expert psychiatrist versus the peer facilitators, and for the graduate student versus the peer facilitators.
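As a minimal sketch of the standardization step described above (the raw score and session maximum below are hypothetical, chosen only to illustrate the arithmetic), each raw rating is divided by the maximum possible score for that session:

```python
# Standardize raw adherence/competence ratings to a 0-1 scale by dividing
# by the maximum possible score for that session. Session maxima varied
# (adherence maxima ranged from 15 to 31; the competence maximum was 42).
def standardize(raw_score: float, max_score: float) -> float:
    return raw_score / max_score

# Hypothetical example: a raw adherence score of 24 on a session with a
# maximum possible score of 31.
score = standardize(24, 31)
print(round(score, 2))  # 0.77
```

Standardizing in this way puts every session on a common 0-1 scale, which is what allows ratings from sessions with different maxima to be averaged and compared.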
Interrater reliability was calculated by using intraclass correlation coefficients (ICCs) for the three types of raters across sessions. Because not all peers rated the same sessions, a one-way random-effects model was used to calculate ICCs between the expert psychiatrist and the peer facilitators and between the graduate student and the peer facilitators. A two-way random-effects model was used to calculate ICCs between the expert psychiatrist and the graduate student because both rated all 18 sessions. We used nonparametric analyses to account for our small number of total raters (N=5) and sessions (N=18). Wilcoxon signed-rank tests were used to compare differences between the mean adherence and competence ratings of the three types of raters. Data were analyzed with SPSS Statistics, version 28, and statistical significance was set at p<0.05.
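The analyses above were run in SPSS; as an illustrative sketch only (not the authors' code, and with hypothetical ratings), the one-way random-effects ICC for a single rater (ICC(1,1) in Shrout-Fleiss notation), the model used when raters differed across sessions, can be computed from a sessions-by-raters matrix as follows. The function name `icc_one_way` and the example values are assumptions for illustration:

```python
import numpy as np

def icc_one_way(ratings: np.ndarray) -> float:
    """One-way random-effects ICC for a single rater (Shrout-Fleiss ICC(1,1)).

    ratings: shape (n_sessions, k_raters); each row holds the standardized
    scores that k raters assigned to the same session.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-session mean square (df = n - 1)
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    # Within-session mean square (df = n * (k - 1))
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical standardized adherence scores: three sessions, two raters.
ratings = np.array([[0.76, 0.78],
                    [0.60, 0.62],
                    [0.90, 0.88]])
print(round(icc_one_way(ratings), 2))
```

For the psychiatrist-versus-student comparison, where both raters rated all 18 sessions, a two-way random-effects model (ICC(2,1)) applies instead, and `scipy.stats.wilcoxon` provides the paired nonparametric test used to compare mean ratings.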

Results

Interrater reliability for adherence was excellent between the expert psychiatrist and the peer facilitators (ICC=0.98) and between the psychiatrist and the graduate student (ICC=0.91). Good interrater reliability for adherence between the student and peers (ICC=0.88) was noted.
Excellent interrater reliability for competence ratings between the psychiatrist and the peers (ICC=0.96) and between the student and the peers (ICC=0.92) was observed. Good interrater reliability was found for competence between the psychiatrist and the student (ICC=0.88).
No statistically significant differences in adherence or competence rating scores were found between groups. Mean adherence ratings for the three types of raters varied by only 0.02, and mean competence ratings varied by only 0.03 (Table 1). Score ranges for adherence and competence among the three types of raters were also narrow.
TABLE 1. Mean adherence and competence rating scores for peer-delivered group cognitive-behavioral therapy for postpartum depression, stratified by type of rater^a

Variable, rater type          M    SD   Range
Adherence
  Expert psychiatrist (N=1)  .76  .17  .45–1.00
  Graduate student (N=1)     .78  .11  .60–1.00
  Peer (N=3)                 .77  .19  .50–1.00
Competence
  Expert psychiatrist (N=1)  .67  .08  .48–.81
  Graduate student (N=1)     .70  .11  .48–.83
  Peer (N=3)                 .67  .11  .42–.86

^a Adherence and competence rating scores were standardized for each session and rater by dividing them by the total possible score for a given session. Possible standardized adherence and competence scores for each session range from 0 to 1, with higher scores indicating greater adherence or competence, respectively. Mean scores for all sessions (N=9 for peers and N=18 for the expert psychiatrist and graduate student) are reported.

Discussion

The results of this small study suggest that, with sufficient training and practice, peer facilitators can rate the fidelity of peer-delivered group CBT for postpartum depression as effectively as an expert and a trained student rater. These results represent the first step toward training peer facilitators to provide feedback to other peer facilitators during expert-led supervision and toward eventually shifting the task of supervision from experts to peers.
Our results were consistent with those of Singla and colleagues (9), who compared peer and expert ratings of a peer-delivered behavioral intervention for perinatal depression. The structured scales used by Singla et al. to measure treatment-specific (e.g., homework assignment) and general skills (e.g., peer-client collaboration) were similar to our adherence and competence scales. In that study, peer providers used their ratings to guide the provision of supervision for select sessions later in the trial. Singla et al. found that, with training and practice, peers were able to rate sessions as reliably as experts, and they reported that the use of structured scales to rate therapy quality enabled effective supervision. In another study, Singla and colleagues (2) examined the interrater reliability of therapy quality ratings for an intervention for depression and alcohol use and found that rating consistency between the expert and peers was achieved after 8 months of practice.
In addition to the use of structured scales, thorough training is also important for effective fidelity assessment. In our study, after two training sessions, the three peer facilitators evaluated fidelity as effectively as did the expert psychiatrist and the trained graduate student. In the studies by Singla and colleagues (2, 9), peers needed more than 6 months of practice and supplementary training to rate sessions as effectively as did the experts. This difference may have been caused by the large numbers of raters in those studies or may reflect varying education and experience levels among raters.
The primary limitation of this preliminary study was its small number of sessions and peer raters, challenging the generalizability of our results to other peers and to future therapy sessions. In addition, use of the ICC method to calculate agreement among a small sample of raters may have yielded an overestimation of interrater reliability. Furthermore, we believe that our scales, developed and revised by a perinatal psychiatrist and by a psychologist with extensive experience in scale development and validation, have face validity and content validity. We did not formally assess the criterion or construct validity of these scales, however, given the lack of a gold standard with which to compare our measures (5). Because of the limited data collected, the analysis was conducted at only the whole-scale level rather than for individual items on the scales. Moreover, our study was conducted in a high-income Western country, which may limit generalizability of the results to other parts of the world.

Debate is ongoing about whether a distinction should be made between adherence and competence or whether a broader term, such as therapy quality, might be more meaningful (12). We acknowledge that, in clinical practice, high adherence is of little use in the presence of low competence (i.e., doing the right things poorly), as is high competence in the presence of low adherence (i.e., doing the wrong things well). In our context of research on a question involving a treatment delivered by nonprofessionals (i.e., peers), we decided to distinguish between applying the right psychotherapeutic procedures (i.e., adherence) and implementing the procedures skillfully (i.e., competence).

Conclusions

The results of this small study suggested that trained peer facilitators who have recovered from postpartum depression may effectively evaluate the fidelity of peer-delivered group CBT for postpartum depression. The next step is to conduct a larger study, with more peers and sessions, to confirm our preliminary interrater reliability results. Future work should also incorporate item-level analysis to obtain more specific insight into treatment integrity and investigate potential drawbacks of distinguishing between adherence and competence. Furthermore, future studies are needed to provide further support for shifting the task of supervision of peer facilitators from experts to trained peers. Finally, criterion validity and construct validity of the scales used to measure adherence and competence require formal assessment.

References

1. Bryan AE, Arkowitz H: Meta-analysis of the effects of peer-administered psychosocial interventions on symptoms of depression. Am J Community Psychol 2015; 55:455–471
2. Singla DR, Weobong B, Nadkarni A, et al: Improving the scalability of psychological treatments in developing countries: an evaluation of peer-led therapy quality assessment in Goa, India. Behav Res Ther 2014; 60:53–59
3. Atif N, Nisar A, Bibi A, et al: Scaling-up psychological interventions in resource-poor settings: training and supervising peer volunteers to deliver the ‘Thinking Healthy Programme’ for perinatal depression in rural Pakistan. Glob Ment Health 2019; 6:e4
4. Couturier J, Kimber M, Barwick M, et al: Assessing fidelity to family-based treatment: an exploratory examination of expert, therapist, parent, and peer ratings. J Eat Disord 2021; 9:12
5. Hepner KA, Stern S, Paddock SM, et al: A Fidelity Coding Guide for a Group Cognitive Behavioral Therapy for Depression. Santa Monica, CA, RAND, 2011. https://www.rand.org/content/dam/rand/pubs/technical_reports/2011/RAND_TR980.pdf
6. Borrelli B: The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. J Public Health Dent 2011; 71:S52–S63
7. Waltz J, Addis ME, Koerner K, et al: Testing the integrity of a psychotherapy protocol: assessment of adherence and competence. J Consult Clin Psychol 1993; 61:620–630
8. Carroll KM, Nich C, Sifry RL, et al: A general system for evaluating therapist adherence and competence in psychotherapy research in the addictions. Drug Alcohol Depend 2000; 57:225–238
9. Singla DR, Ratjen C, Krishna RN, et al: Peer supervision for assuring the quality of non-specialist provider delivered psychological intervention: lessons from a trial for perinatal depression in Goa, India. Behav Res Ther 2020; 130:103533
10. Amani B, Merza D, Savoy C, et al: Peer-delivered cognitive-behavioral therapy for postpartum depression: a randomized controlled trial. J Clin Psychiatry 2021; 83:21m13928
11. Van Lieshout RJ, Yang L, Haber E, et al: Evaluating the effectiveness of a brief group cognitive behavioural therapy intervention for perinatal depression. Arch Womens Ment Health 2017; 20:225–228
12. Fairburn CG, Cooper Z: Therapist competence, therapy quality, and therapist training. Behav Res Ther 2011; 49:373–378

Information & Authors


Published In

American Journal of Psychotherapy
Pages: 159 - 162
PubMed: 37608754

History

Received: 5 October 2022
Revision received: 10 May 2023
Revision received: 26 June 2023
Accepted: 14 July 2023
Published online: 23 August 2023
Published in print: December 11, 2023

Keywords

  1. Psychotherapy
  2. Cognitive-behavioral therapy
  3. Peer-administered interventions
  4. Intervention fidelity
  5. Pregnancy and childbirth

Authors

Zoryana Babiy, M.Sc. [email protected]
Donya Merza, M.Sc.
Haley Layton, M.P.H.
Peter J. Bieling, Ph.D.
Ryan J. Van Lieshout, M.D., Ph.D.

Neuroscience Graduate Program (Babiy, Merza), Health Research Methodology Graduate Program (Layton), and Department of Psychiatry and Behavioural Neurosciences (Bieling, Van Lieshout), McMaster University, Hamilton, Ontario, Canada.

Notes

Send correspondence to Ms. Babiy ([email protected]).

Competing Interests

The authors report no financial relationships with commercial interests.

Funding Information

This work was supported by a grant from the Canada Research Chairs Program (CRC-2021-00290).
