Here we address four issues that are frequently raised in discussions of the COVR software: reliance on self-report information, clinical interpretation of the risk estimate, communicating the risk estimate, and implications of the software for risk management.
Self-Report: Can it be Believed?
In both the original research in which the COVR’s multiple ICT methodology was developed and in the subsequent research in which this methodology was validated, the authors operated under the protection of a Federal Confidentiality Certificate and the patients studied were made aware of this protection. The certificate meant that most disclosures that the examinees made to the examiners were not discoverable in court and did not have to be reported to the police. In addition, in both the development study and the validation study, the examinees were notified that information from their police and hospital records was being collected, as well as information from interviewing a collateral informant who knew them well, to verify the information given by the examinee. Both of these components—the guarantee of confidentiality and the reliance on multiple information sources—may have encouraged examinees to provide ‘‘more truthful’’ information.
However, in the real world of clinical practice in which the COVR will be used, Federal Confidentiality Certificates will not be obtainable and police records and collateral informants will not be routinely available. Will examinees be as forthcoming and honest in answering questions when the COVR is used as a tool to make actual decisions on the nature or venue of their care as they were when the COVR was being constructed or validated and the answers had no personal impact? More to the point, what is the clinician to do when he or she has reason to doubt the truthfulness of an answer that a client gives to a question that the COVR presents?
Many examinees’ answers, of course, cannot be ‘‘verified,’’ because they relate to events in their interior lives (e.g., Does the patient really daydream often about harming someone?). Other answers are, in principle, capable of verification, but could be verified only with great difficulty in usual clinical situations (e.g., Were the patient’s parents arrested when the patient was a child?). Still other answers may be verifiable, to a greater or lesser degree, by recourse to data in the patient’s existing hospital chart or outpatient record. What is the clinician to do when a patient’s answer to a question posed by the COVR is contradicted or called into question by other data sources? Consider the following examples.
• When asked about using alcohol or other drugs, Patient A answers in the negative, yet the chart indicates that when he was admitted to the hospital on the previous evening, he had a blood alcohol level of .30 or had tested positive for cocaine in his urine.
• When asked about past arrests, Patient B denies ever having been arrested, yet the file contains previous evaluations for competence to stand trial on felony charges.
• When asked whether she was abused as a child, Patient C answers ‘‘Never,’’ but there is a notation in the chart that the patient’s mother and sister stated that the mother’s boyfriend repeatedly sexually assaulted the patient when she was a girl.
Four courses of action are theoretically possible in situations involving conflicting information. First, the clinician could simply enter into the COVR the patient’s answers as given, even if the clinician were convinced that the examinee was being untruthful.
Second, the clinician could enter into the COVR his or her best judgment as to the factually correct answer to the question asked. For example, the clinician could choose to credit the toxicology report indicating recent drug abuse over the patient’s self-reported denial of drug abuse, and enter ‘‘yes’’ to the appropriate drug abuse question.
Third, the clinician familiar with the contents of the chart could confront the patient when apparent discrepancies arise between chart information and the patient’s answers. For example, the clinician could say to the patient who denied ever having been arrested, ‘‘I have a problem. You say you’ve never been arrested, but in your hospital record there’s a competence report that says you were arrested for assault twice last year and once the year before. What about this?’’
Finally, the clinician, on being convinced from information available in the record that the examinee was being untruthful in his or her answers, could simply terminate the administration of the COVR and arrive at a risk estimate without the aid of this actuarial tool. For example, the clinician could base his or her clinical risk estimate entirely on information available in the chart or from other data sources that do not rely on the patient’s unreliable self-reports.
Which of these options is recommended? It is important to emphasize that the COVR was constructed and validated using a variant of the third option described above. Although information obtained from a collateral informant was never revealed to the examinee, examinees were confronted about any apparent inconsistencies between their answers and the information contained in the hospital chart.
The first option of simply entering into the program the examinee’s answers as given, even if the clinician is convinced that the examinee is being untruthful, is clinically and ethically inappropriate. If the clinician does not clearly note the patient’s apparent untruthfulness in the report that accompanies the COVR score, the clinician would knowingly be basing a risk estimate on information that, in his or her best judgment, is invalid, without warning potential users of the estimate of its uncertain foundation. Moreover, if the clinician does clearly note the patient’s apparent untruthfulness in the accompanying report, the clinician would, in effect, be telling the reader to disregard the risk estimate that the clinician had just offered.
The second option of the clinician entering into the COVR his or her best judgment as to the factually correct answer to the question asked, rather than the answer that the patient provided, is problematic for two reasons. First, it makes no attempt to determine whether there is an explanation for the apparent discrepancy that would indicate that the patient’s account is actually correct (e.g., someone else’s toxicology laboratory slip was mistakenly placed in the patient’s chart). Second, it varies from the procedures used to construct and validate the COVR, hence threatening the validity of the resulting estimate of risk. In the latter regard, to use clinician-generated answers rather than patient-generated answers raises the question of whether a clinician can accurately detect a patient’s deception when it occurs, without at the same time erroneously considering many true responses to be deceptive. On the other hand, as mentioned earlier, the guarantee of confidentiality and the reliance on multiple information sources that was used in developing and validating the COVR may have encouraged patients to provide us with ‘‘true’’ information, not just with ‘‘self-report’’ information. If this is the case, one can argue that, in clinical practice, entering information believed to be ‘‘true’’ is actually the more empirically appropriate strategy.
Our recommendation for the user of the COVR is the third option: the examiner familiar with the contents of the chart should confront the patient when apparent discrepancies arise between the chart information and the patient’s answers. If the discrepancy is satisfactorily resolved, the clinician would then enter into the COVR the answer that the patient gives. If the discrepancy is not satisfactorily resolved (e.g., if, after confrontation, the clinician still credits a recent toxicology report indicating drug abuse over the patient’s denial of ever having used drugs), then the clinician would enter ‘‘missing’’ for that piece of data, rather than entering either the patient’s self-report or the clinician’s own judgment about the accurate scoring of this item, being sure to note this action in an accompanying report.
Our rationale for this recommendation is that the COVR contains 10 classification tree models but can produce a reliable estimate of risk using only five of them (Banks et al., 2004). As long as at least five models contain no missing data due to the clinician disbelieving a patient’s answer (or for any other reason), the COVR will operate as designed. If more than five models contain missing data, the COVR will not produce an estimate of risk and the clinician will have to arrive at a risk estimate without the aid of this actuarial tool (i.e., choose the fourth option above).
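To make the missing-data rule concrete, the sketch below shows the decision logic in Python. It is an illustration only, under the assumption that each of the 10 classification tree models can be described by the list of items it requires; the function names and data structures are illustrative assumptions, not the COVR software’s actual interface.

```python
# Illustrative sketch of the missing-data rule described above.
# All names and data structures are assumptions for exposition only;
# they are not the COVR software's actual implementation.

MIN_COMPLETE_MODELS = 5  # an estimate requires at least 5 of the 10 tree models


def count_complete_models(item_answers, models):
    """Count the models whose required items were all answered.

    item_answers: dict mapping item name -> answer; None marks an item the
                  clinician entered as 'missing' after an unresolved discrepancy.
    models:       list of lists, each naming the items one classification
                  tree model requires.
    """
    return sum(
        all(item_answers.get(item) is not None for item in required_items)
        for required_items in models
    )


def covr_can_produce_estimate(item_answers, models):
    """True if enough models have complete data for an actuarial estimate."""
    return count_complete_models(item_answers, models) >= MIN_COMPLETE_MODELS
```

If too few models are complete, the clinician falls back on the fourth option above and documents in the accompanying report why the actuarial estimate could not be produced.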
Risk Communication: What are the Options?
Much recent research has addressed the manner in which risk estimates are communicated to decision makers (Heilbrun et al., 2004; Monahan et al., 2002; Monahan & Steadman, 1996; Slovic, Monahan, & MacGregor, 2000). Three format options for communicating risk are frequently discussed: (a) probability, (b) frequency, and (c) categories. Each of these three formats appears to have advantages and disadvantages. Because the field of risk communication is very new, no professional consensus has emerged on a standard way of communicating risk estimates. For this reason, the COVR report generates these multiple formats and then allows the user to choose the format in which risk estimates are conveyed: as a probability, as a frequency, as a category, or as some combination of these three formats.
In the probability format, risk is communicated in terms of percentages for the 95% confidence interval and the best point estimate. For example, the first of five levels of risk is communicated in the probability format as ‘‘The likelihood that the patient will commit a violent act toward another person in the next several months is estimated to be between 0% and 2%, with a best estimate of 1%.’’ For the other four levels of risk, the probabilities are expressed as ‘‘between 5% and 11%, with a best estimate of 8%,’’ ‘‘between 20% and 32%, with a best estimate of 26%,’’ ‘‘between 46% and 65%, with a best estimate of 56%,’’ and ‘‘between 65% and 86%, with a best estimate of 76%.’’
In the frequency format, risk is communicated in terms of whole numbers for the 95% confidence interval and the best point estimate. For example, the first of five levels of risk is communicated in the frequency format as ‘‘Of every 100 people like the patient, between 0 and 2 are estimated to commit a violent act toward another person in the next several months, with a best estimate of 1 person.’’ The other four levels of risk are analogously communicated using whole numbers.
In the categorical format, risk is communicated in terms of the group or class into which an individual is estimated to fall. The default options are the following.
• Category 1: very low risk [corresponding to a risk of 1%/1 of 100]
• Category 2: low risk [corresponding to a risk of 8%/8 of 100]
• Category 3: average risk [corresponding to a risk of 26%/26 of 100]
• Category 4: high risk [corresponding to a risk of 56%/56 of 100]
• Category 5: very high risk [corresponding to a risk of 76%/76 of 100].
Note that the labels given to each of the five categories are merely illustrative and can be defined by the user. For example, Category 5 could be re-labeled as ‘‘Discharge must be approved by Dr. Smith.’’
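To illustrate how these formats fit together, the sketch below composes the three statements in Python for each of the five risk levels quoted above. The numeric bounds, point estimates, and default labels are taken from the text; the function names and the user-supplied label mechanism are illustrative assumptions and not the COVR report generator’s actual code.

```python
# Illustrative sketch of the three risk-communication formats described above.
# The bounds, best estimates, and default labels come from the five levels
# quoted in the text; the rest is an assumption for exposition only.

RISK_LEVELS = {
    1: {"low": 0,  "high": 2,  "best": 1,  "label": "very low risk"},
    2: {"low": 5,  "high": 11, "best": 8,  "label": "low risk"},
    3: {"low": 20, "high": 32, "best": 26, "label": "average risk"},
    4: {"low": 46, "high": 65, "best": 56, "label": "high risk"},
    5: {"low": 65, "high": 86, "best": 76, "label": "very high risk"},
}


def probability_statement(level):
    r = RISK_LEVELS[level]
    return (f"The likelihood that the patient will commit a violent act toward "
            f"another person in the next several months is estimated to be "
            f"between {r['low']}% and {r['high']}%, with a best estimate of {r['best']}%.")


def frequency_statement(level):
    r = RISK_LEVELS[level]
    people = "person" if r["best"] == 1 else "people"
    return (f"Of every 100 people like the patient, between {r['low']} and "
            f"{r['high']} are estimated to commit a violent act toward another "
            f"person in the next several months, with a best estimate of "
            f"{r['best']} {people}.")


def categorical_statement(level, user_labels=None):
    # Category labels are user-definable, as noted above; defaults are illustrative.
    label = (user_labels or {}).get(level, RISK_LEVELS[level]["label"])
    return f"Category {level}: {label}"
```

For example, categorical_statement(5, {5: "Discharge must be approved by Dr. Smith"}) would render the re-labeled fifth category, while probability_statement(1) reproduces the first-level statement quoted above.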