In 2003, Pinfold et al. (1) published a report of a contact-based education study conducted with a police force in England. In addition to completing an attitudinal measure at the 4-week follow-up, officers were asked whether the training had had any impact on their behavior and, if so, what. Fifty-nine percent identified ways in which it had affected police work, most commonly the way they communicated with people with mental health problems. Three-quarters of the sample recommended that the training be delivered to other forces.
Eleven years later, a contact-based education study by Hansson and Markström (2), conducted with Swedish police trainees, was published. This study used a control group for the immediate follow-up and a 6-month follow-up for the intervention group. The immediate follow-up showed positive effects of the training on stigma-related knowledge, attitudes, and intended behavior compared with the control group. Within-group comparison showed that these effects endured in the intervention group at 6 months. Because participants were still in training, they could not be asked during the study about the impact on their behavior at work; nevertheless, a behavioral outcome at the organizational level was observed, in that the police force adopted the training as routine.
The lesson from these studies is that if researchers inquire about behavioral impact among study populations in which such an impact is feasible, they may be able to record it. However, these data do not come from standardized measures, and the desire to rely exclusively on such measures may be one reason why Jorm (3) found few studies that reported behavioral outcomes.
The choice of outcome measures is problematic in many studies of contact-based interventions. For example, scales validated in the general population may not be suitable for use in specific target groups, such as health professionals (4), because of their specialized knowledge and because questions about behavioral intention with respect to friends or neighbors lack relevance to their work. The more general points Jorm makes about study quality are indeed important; reporting is at times inadequate, and this makes quality assessment difficult.
However, controlled trials that follow CONSORT guidelines are available, including those that show positive outcomes and outcomes sustained after several months of follow-up (5). For example, the Clement et al. (5) study suggests that at least some contact interventions are effective. Jorm's statement that the effects are short-lived acknowledges this but neglects then to pose the question, What are the qualities of studies that show lasting effects? Hansson and Markström made a point that may be critically important here, about the context of their study, which was conducted during a national antistigma campaign. Effectively, an unknown proportion of the intervention group was exposed to a further intervention between follow-up points, albeit one that was not targeted to them as police trainees. By contrast, samples that after a contact intervention are exposed only to aspects of public stigma, such as media stereotypes, just as they were before the intervention, may be less likely to experience any long-term impact.
In this sense, contact interventions may be likened to medications for long-term conditions, which are not disregarded merely because their effect is short term. The implication is that contact should be ongoing or repeated so that it continues to counter the ongoing influence of public stigma. Such contact then begins to resemble having a relationship with someone with a mental health problem, which is strongly associated with greater stigma-related knowledge, more positive attitudes, and less desire for social distance. Jorm, however, questions the direction of causality for these associations. This is a pertinent point regarding some friendships. However, people who report that the person closest to them with a mental health problem is a friend are a minority of those who know someone with a mental illness. The majority know a family member, a coworker, or someone else, such as a neighbor (6); these are relationships over which people usually have no choice.
A related question concerns a "dose effect." Jorm interprets the lack of association between effect and length of contact up to 105 minutes to mean that there is no dose effect and therefore that contact is ineffective. This logic is hard to follow. The length of a one-off intervention is unlikely to matter as long as it has effective components; hence, social marketing campaigns use very short advertisements, also known as public service announcements, that employ parasocial contact to communicate key messages and to generate empathy or reduce anxiety. What matters is that they are repeated over time so that the effect can continue, as discussed above. The effectiveness of parasocial contact means that national scale-up can be, and has been, done effectively, as with the Time to Change program in England. With regard to the number of people with whom study participants have contact, Knaak et al. (7), in a metaregression of interventions for health professionals, found evidence of a dose effect of contact with more than one presenter with lived experience.
Along with parasocial contact, other forms of nondirect contact shown to have an effect include extended-relationship contact (knowing someone who knows someone in the other group) and imagined contact. Jorm attributes these results to demand effects because they should have, but do not have, less effect than direct contact. This statement is problematic for several reasons. First, there is evidence that direct contact may generally be more effective (8). Second, his statement that the control condition is a topic unrelated to mental illness is not true of all studies. The contact-based intervention trials I am familiar with have used a lecture on stigma by an academic (5, 9) to provide education in the absence of contact, rather than a lecture on an unrelated topic, as he asserts without providing references. Third, the problem of the demand effect is not relevant in the case of extended contact (10). Fourth, a demand effect does not explain why some studies show a short-term impact whereas others show lasting effects (2, 5). Finally and most important, such an assumption risks closing off inquiry into how different forms of contact may work and what can influence their effects. There is plenty still to learn about different forms of delivery (extended, imagined, parasocial, direct) and about aspects of the content and the context that influence the effectiveness of contact interventions. There is scope for enormous variation that is likely to influence effectiveness, such as the relevance of the content conveyed to the target population and the extent to which those delivering contact share occupational and demographic features with the target audience (11). With the words "baby" and "bathwater" in mind, let us consider contact interventions more carefully before either dismissing this field or adding to it.