Regular Articles
Published Online: 3 July 2019

Distress Tolerance and Symptom Severity as Mediators of Symptom Validity Failure in Veterans With PTSD

Publication: The Journal of Neuropsychiatry and Clinical Neurosciences

Abstract

Objective:

Performance validity tests (PVTs) and symptom validity tests (SVTs) are necessary in clinical and research contexts. The extent to which psychiatric distress contributes to failure on these tests is unclear. The authors hypothesized that the relation between posttraumatic stress disorder (PTSD) and validity would be serially mediated by distress tolerance and symptom severity.

Methods:

Participants included 306 veterans, 110 of whom met full criteria for current PTSD. PVTs included the Medical Symptom Validity Test (MSVT) and b Test. The Structured Inventory of Malingered Symptomatology (SIMS) was used to measure symptom validity.

Results:

MSVT failure was significantly and directly associated with PTSD severity (B=0.05, CI=0.01, 0.08) but not distress tolerance or PTSD diagnosis. b Test performance was not significantly related to any variable. SIMS failure was significantly associated with PTSD diagnosis (B=0.71, CI=0.05, 1.37), distress tolerance (B=−0.04, CI=–0.07, –0.01), and symptom severity (B=0.07, CI=0.04, 0.09). The serial mediation model significantly predicted all SIMS subscales.

Conclusions:

PTSD severity was associated with failing a memory-based PVT but not an attention-based PVT. Neither PVT was associated with distress tolerance or PTSD diagnosis. SVT failure was associated with PTSD diagnosis, poor distress tolerance, and high symptomatology. For veterans with PTSD, difficulty managing negative emotional states may contribute to symptom overreporting. This may reflect exaggeration or an inability to tolerate stronger negative affect, rather than a “cry for help.”
Inclusion of performance validity tests (PVTs) and symptom validity tests (SVTs) has been described as "an essential part of a neuropsychological evaluation" (1) and "important in all evaluations" (2). From 1995 through 2014 there was a significant increase in neuropsychological research on validity assessment. During this period, 25% of peer-reviewed papers in two major neuropsychology journals addressed the subject, highlighting the increased attention to and awareness of validity assessment issues (3). Historically, "SVT" was used to represent the validity of both self-reported symptoms and performance on cognitive or behavioral testing. More recently, "SVT" has come to represent self-report of mood, personality, symptoms, or behavior; in contrast, "PVT" reflects performance on objective tests of cognitive abilities, which is thought to be independent of, though not mutually exclusive with, symptom validity (4). The importance of assessing validity in clinical and forensic contexts is generally accepted, and a clinical or forensic evaluation that omits validity assessment may be considered substandard depending on context (1, 5); however, the use of validity testing in research settings has received less attention.
Stand-alone PVTs are relatively easy tests designed solely to identify invalid performance and are developed using known-groups paradigms or simulation studies. Some of the most widely used include the Test of Memory Malingering (TOMM), the Word Memory Test (WMT), and the Medical Symptom Validity Test (MSVT) (6). Additionally, numerous cognitive tests have scores that are sensitive to invalid performance, permitting the development of embedded validity indices (for a review, see Boone [7]). Because these embedded PVTs are part of established measures of cognitive performance, genuinely impaired individuals can appear invalid on them (see Erdodi and Lichtenstein [8]). This is less of a concern with stand-alone PVTs, except in cases of severe cognitive impairment, because they are relatively easy and were developed to measure validity rather than cognitive functioning. Conditions involving known cognitive impairment, inattention, or distress are less well studied with stand-alone PVTs, and the studies that do exist suggest that even very impaired individuals pass these measures (9).
A handful of studies have evaluated factors that could contribute to failed PVTs. For example, Batt and colleagues (10) conducted an experimental study in which patients with acquired brain injury completed the TOMM and the WMT under one of three conditions: control, simulated malingering, or distraction. The distraction group completed an auditory span addition task during the learning stage of the PVTs, which resulted in significantly poorer performance compared with the control group on the WMT but not the TOMM, suggesting that the WMT may be sensitive to situational distractors. The simulated malingering group had the worst performance on both measures. Another study, which experimentally induced acute pain with a cold pressor task, found that moderate pain did not affect PVT performance (11). Other potential explanations have also been hypothesized, such as stereotype threat, negative expectation biases, or hindsight biases, but empirical evidence for these explanations is lacking (12). Together, these studies point to possible reasons for PVT failure other than intentional response bias or poor effort.
Theoretically, psychiatric distress might impede performance on PVTs if symptom burden were severe. For example, in posttraumatic stress disorder (PTSD), hypervigilance or active flashbacks could potentially disrupt attention and impair performance. However, few studies have examined PVT failure in groups with PTSD. Specifically, a 2011 review cited eight studies of PTSD and cognition in veterans (13). Although a number of impairments were observed in different domains across those eight studies, none evaluated performance validity (14–21). When performance validity is accounted for, outcomes differ. For example, using a research sample of veterans meeting criteria for PTSD, Clark and colleagues (22) found that poor performance on stand-alone and embedded PVTs accounted for 10%–28% of the variance in cognitive performance. In a study of veterans clinically referred for neuropsychological evaluations, 69% of veterans with PTSD failed the WMT (23). Participants with PTSD who failed validity testing demonstrated cognitive impairment compared with healthy controls; there were no differences between participants with PTSD who passed validity testing and healthy controls.
More recent studies on cognition and PTSD that accounted for invalid performance have demonstrated variable results. For example, Stricker and colleagues (24) evaluated cognitive functioning in female veterans with and without PTSD, after excluding invalid performance. The PTSD group showed significantly poorer scores on intelligence, verbal learning (but not retention), executive functioning, and processing speed; however, all scores were within the average range for both groups. Similarly, Wrocklage and colleagues (25) found significantly poorer performance on measures of processing speed and executive functioning, but not memory. A longitudinal study (26) found associations between PTSD symptom severity and measures of recall and reaction time after excluding invalid participants, though clinical significance is unclear from the data that were presented. In addition, the relation between PTSD severity and memory functioning was bidirectional, resurrecting questions initially raised by Gilbertson and colleagues (17) regarding whether poorer memory serves as a risk factor for development of PTSD. A study by Verfaellie and colleagues (27) found “small but measurable neuropsychological performance decrements” associated with PTSD (p. 340). Despite statistically significant findings of associations between PTSD and various cognitive variables, none of these studies give a clear indication of the clinical significance of any findings, and none controlled for symptom validity. Although PTSD is not classified as a primary cognitive disorder, additional evaluation of PVT failure in PTSD samples is warranted.
SVTs include stand-alone measures of self-report items, structured interviews, and validity scales embedded in larger multiscale self-report inventories. Scales are generated from various types of items, such as rarely endorsed symptoms in different populations, implausible symptom combinations, or inconsistent responding across a measure (for further details, see Rogers [28]). Veterans with PTSD demonstrated higher rates of elevated scores on overreporting scales in several studies. For example, 50% of a sample of Vietnam veterans with PTSD had elevated MMPI F scores, compared with 8% of the non-PTSD group (29). In a study of Iraq and Afghanistan veterans seeking PTSD treatment, 54% had elevated (over 80 T) MMPI-2 F scores (30). One possible explanation is that elevated SVT scores reflect greater psychological distress. This theoretical phenomenon has been labeled a cry for help (31). Initially used to describe a pattern of invalid report on the MMPI, the term has been broadened to other measures of symptom exaggeration, though this causal explanation has yet to be verified in research. Like PVT failure, a better understanding of SVT failure rates is necessary. If elevated clinical distress (a potential treatment target) results in failure on validity measures in those diagnosed with PTSD, it would alter the interpretative approach to those measures.
The purpose of the present study was to examine the association between a PTSD diagnosis and performance on both SVTs and PVTs in a research setting that precluded overt secondary or clinical gain. We hypothesized a serial mediation model wherein the association of PTSD diagnosis and performance on PVT or SVT would be mediated by distress tolerance and PTSD symptom severity, respectively.

Methods

Participants

Data analyzed in this project were drawn from a larger study reviewed and approved by the local institutional review board. Informed consent was collected before participation, and participants were made fully aware that procedures were for research purposes only. Study procedures were fully explained, and participants provided verbal and written consent before any study activities. Welfare and privacy of participants were protected and maintained. All participants were screened initially by telephone for eligibility. They were then scheduled for an in-person assessment visit during which final eligibility for an imaging visit was determined. The data for these analyses were obtained at the interview visit, before final eligibility for the imaging visit was determined; therefore, it is possible that some participants met some of the exclusion criteria listed below.
Inclusion criteria for the larger study were deployment in support of the wars in Iraq and Afghanistan, English fluency, 18 years of age or older, ability to complete study tasks, and ability to provide informed consent. Exclusion criteria were brain injury history outside of deployment involving loss of consciousness, moderate to severe traumatic brain injury (TBI), neurological disorder, severe mental illness (e.g., psychotic disorder, bipolar disorder), current substance use disorder (past 30 days), or current psychotic symptoms. Participants completed cognitive testing, interviews, and self-report questionnaires. Tests were administered in a fixed order by trained master’s- or doctoral-level study staff, with consultation and oversight by board-certified neuropsychologists.

Measures

Psychological.

The Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) (32) is a 30-item structured interview for diagnosing PTSD according to DSM-5 criteria. Items assess symptom duration, level of distress, and effect on functioning and can be scored to reflect current functioning (past month) to provide a current diagnosis or worst functioning (worst month) to produce a lifetime diagnosis. Interviewers rate responses on a Likert-type scale (0=absent to 4=extreme/incapacitating). Participants meeting full DSM-5 criteria on the CAPS-5 at the time of the assessment were assigned a PTSD diagnosis. The PTSD Checklist-5 (PCL-5) (33) is a 20-item self-report questionnaire that assesses severity of symptoms occurring secondary to an identified trauma over the past month. Respondents use a Likert-type scale (0=not at all to 4=extremely); the total score ranges from 0 to 80, with higher scores reflecting greater symptom severity. Continuous scores were used in analyses to reflect level of symptom severity. The Distress Tolerance Scale (DTS) (34) is a 15-item self-report questionnaire measuring the ability to experience and tolerate negative affect. Respondents use a Likert-type scale (1=strongly agree to 5=strongly disagree); the total scale score ranges from 15 to 75, with higher scores reflecting better tolerance.
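To make the scoring of the two continuous measures concrete, the minimal sketch below sums item-level responses into the PCL-5 and DTS totals described above. The function names and example item values are hypothetical illustrations, not part of the study materials; actual scoring should follow each instrument's manual (any manual-specified reverse scoring is omitted here).

```python
# Minimal scoring sketch for the continuous self-report measures described
# above. Item values and function names are hypothetical; real scoring
# should follow the PCL-5 and Distress Tolerance Scale manuals.

def score_pcl5(items: list[int]) -> int:
    """Sum 20 PCL-5 items rated 0 ("not at all") to 4 ("extremely").

    Total ranges from 0 to 80; higher scores = greater PTSD symptom severity.
    """
    assert len(items) == 20 and all(0 <= i <= 4 for i in items)
    return sum(items)

def score_dts(items: list[int]) -> int:
    """Sum 15 DTS items rated 1 ("strongly agree") to 5 ("strongly disagree").

    Total ranges from 15 to 75; higher scores = better distress tolerance.
    """
    assert len(items) == 15 and all(1 <= i <= 5 for i in items)
    return sum(items)

# Made-up example responses
print(score_pcl5([2] * 20))  # 40 (moderate symptom severity)
print(score_dts([4] * 15))   # 60 (relatively good distress tolerance)
```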

Validity.

The MSVT (35) is a stand-alone PVT that employs a forced-choice verbal recognition memory format. Participants scoring below the cutoff scores published in the manual for the Immediate Recall, Delayed Recall, or Consistency subscales were classified as failing the test. The b Test (36) is a stand-alone PVT using letter recognition and discrimination. The manual provides cutoff scores for different groups; the current study used the recommended cutoff for the Normal-Effort Groups Combined (sensitivity=73.6%, specificity=85.1%). Participants scoring above the cutoff were classified as failing. The Structured Inventory of Malingered Symptomatology (SIMS) (37) is a stand-alone SVT that produces a Total Scale score and the following subscales: Psychosis, Neurologic Impairment, Amnestic Disorders, Low Intelligence, and Affective Disorders. Each subscale comprises 15 items scored true or false, for a total of 75 self-report items. The Total Scale score ranges from 0 to 75, with higher scores indicating more items endorsed. Participants scoring above the manual's clinical cutoff score for the Total Scale were classified as failing. Post hoc analyses used continuous scores for the SIMS subscales.
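The pass/fail coding just described reduces to simple threshold rules. The sketch below illustrates that coding; the cutoff values themselves are not reproduced from the test manuals (they are not stated in this article), so they appear only as placeholder parameters, and the function names are hypothetical.

```python
# Sketch of the dichotomous validity coding described above (1 = failure).
# Cutoff values are NOT reproduced here; they must be taken from each
# test's published manual and are passed in as placeholders.

def msvt_failed(immediate: float, delayed: float, consistency: float,
                cutoffs: dict) -> int:
    """MSVT: fail if Immediate Recall, Delayed Recall, or Consistency
    falls below its manual cutoff."""
    return int(immediate < cutoffs["IR"]
               or delayed < cutoffs["DR"]
               or consistency < cutoffs["CNS"])

def b_test_failed(score: float, cutoff: float) -> int:
    """b Test: fail if the score exceeds the Normal-Effort Groups
    Combined cutoff (higher scores are worse on this measure)."""
    return int(score > cutoff)

def sims_failed(total: int, cutoff: int) -> int:
    """SIMS: fail if the Total Scale score exceeds the manual's
    clinical cutoff."""
    return int(total > cutoff)
```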

Data Analysis

All analyses were conducted in SAS Enterprise Guide 7.1 (SAS Institute, Cary, N.C.). Serial mediation analysis was conducted using the PROCESS macro, version 3.1 (38). There were no missing data points. Hypothesis testing was conducted using three separate serial mediation analyses predicting b Test, MSVT, and SIMS failure (dichotomous, 1=failure). Post hoc tests analyzed continuous SIMS subscale scores as outcome variables. The independent variable was PTSD diagnosis (dichotomous, 1=present); distress tolerance and symptom severity served as the first and second serial mediators, respectively. All reported confidence intervals are 95% intervals, bootstrapped with 10,000 subsamples. Results were considered significant if the p value was <0.05 and 0 was not included in the confidence interval. The false discovery rate procedure (39) was used to adjust for multiple comparisons at an alpha of 0.05.
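For readers who want the analysis pipeline laid out explicitly, the sketch below approximates it in Python rather than the SAS PROCESS macro the authors actually used: ordinary least squares for the two mediator models, logistic regression for a dichotomous validity outcome, a percentile bootstrap for the serial indirect effect, and Benjamini-Hochberg false discovery rate adjustment for the post hoc tests. The column names (ptsd_dx, dts, pcl5, sims_fail) and the illustrative p values are assumptions, not the study's data or code.

```python
# Approximate re-expression of the analysis described above (not the
# authors' SAS/PROCESS code). Column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def serial_indirect(df: pd.DataFrame, x="ptsd_dx", m1="dts",
                    m2="pcl5", y="sims_fail") -> float:
    """Serial indirect effect X -> M1 -> M2 -> Y, i.e., a1 * d * b2."""
    a1 = sm.OLS(df[m1], sm.add_constant(df[[x]])).fit().params[x]
    d = sm.OLS(df[m2], sm.add_constant(df[[x, m1]])).fit().params[m1]
    # Dichotomous validity outcome -> logistic regression (log-odds scale).
    b2 = sm.Logit(df[y], sm.add_constant(df[[x, m1, m2]])).fit(disp=0).params[m2]
    return a1 * d * b2

def bootstrap_ci(df: pd.DataFrame, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(df)
    estimates = [serial_indirect(df.iloc[rng.integers(0, n, n)])
                 for _ in range(n_boot)]
    return np.quantile(estimates, [alpha / 2, 1 - alpha / 2])

# False discovery rate adjustment across the post hoc SIMS subscale models;
# these p values are placeholders for the fitted models' p values.
p_values = [0.001, 0.003, 0.02, 0.04, 0.20]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
```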
Although the focus of this article is on PTSD, TBI is also common in the veteran population. To assess potential effects of TBI, we repeated the analyses with mild TBI as a covariate; no veteran in this sample had a TBI more severe than mild. Outcomes did not change when accounting for TBI; therefore, the analyses presented do not include this covariate.

Results

Demographic characteristics of the study participants are presented in Table 1. Participants were 306 veterans (13.07% female) between 23 and 71 years old. Most participants identified racially as either white (57.52%, N=176) or black (39.54%, N=121). PTSD was diagnosed in 35.95% of the sample (N=110). Slightly less than half of participants had a history of mild TBI (47.71%, N=146). There was no significant difference in age between veterans with and without PTSD (p=0.453), though veterans with PTSD (mean=14.63 years [SD=1.81]) had fewer years of education than those without a PTSD diagnosis (mean=15.29 [SD=2.35]), t=2.56, df=304, p=0.011.
TABLE 1. Demographic and clinical characteristics of the study subjects (a)

                                   Total sample         Current PTSD absent   Current PTSD present
                                   (N=306)              (N=196)               (N=110)
Characteristic                     N or M    % or SD    N or M    % or SD     N or M    % or SD
Male                               266       86.93      175       89.29       91        82.73
Age (years), M ± SD                41.57     10.02      41.89     10.24       40.99     9.64
Education (years), M ± SD          15.05     2.19       15.29     2.35        14.63     1.81
Race/ethnicity (b)
  White                            176       57.52      114       58.16       62        56.36
  Black                            121       39.54      76        38.78       45        40.91
  Other                            18        5.88       9         4.59        9         8.18
Branch of service
  Air Force                        30        9.80       23        11.73       7         6.36
  Army                             224       73.20      140       71.43       84        76.36
  Marine Corps                     32        10.46      14        7.14        18        16.36
  Navy                             20        6.54       19        9.69        1         0.91
Current PTSD                       110       35.95
PCL-5 total score, M ± SD          32.10     19.78      23.97     17.34       46.56     15.05
DTS total score, M ± SD            51.01     13.55      54.21     12.74       45.32     13.13
Deployment mild TBI                146       47.71      82        41.84       64        58.18
Years since TBI (c), M ± SD        10.95     4.52       11.58     4.70        10.12     4.17
Validity
  Failed MSVT (b)                  21        6.86       7         3.57        14        12.73
  Failed b Test (b)                16        5.23       8         4.08        8         7.27
  Failed SIMS (b)                  131       42.81      53        27.04       78        70.91

(a) DTS=Distress Tolerance Scale, MSVT=Medical Symptom Validity Test, PCL-5=PTSD Checklist-5, PTSD=posttraumatic stress disorder, SIMS=Structured Inventory of Malingered Symptomatology, TBI=traumatic brain injury.
(b) Categories are not mutually exclusive.
(c) Only participants with mild TBI were included (N=146).
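As a quick arithmetic check, the education difference reported above can be reproduced from the Table 1 summary statistics with a pooled-variance two-sample t-test; the short sketch below does so. The choice of a pooled-variance test is an assumption, since the article does not state which t-test variant was used.

```python
# Reproduce the education comparison from Table 1 summary statistics
# using a pooled-variance two-sample t-test (test variant assumed).
import math

m_no, sd_no, n_no = 15.29, 2.35, 196      # veterans without PTSD
m_yes, sd_yes, n_yes = 14.63, 1.81, 110   # veterans with PTSD

pooled_var = ((n_no - 1) * sd_no**2 + (n_yes - 1) * sd_yes**2) / (n_no + n_yes - 2)
t = (m_no - m_yes) / math.sqrt(pooled_var * (1 / n_no + 1 / n_yes))
print(round(t, 2), n_no + n_yes - 2)  # ~2.55 with df=304, consistent with the
                                      # reported t=2.56 (difference is rounding)
```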
Overall, 6.86% of participants failed the MSVT (N=21), 5.23% failed the b Test (N=16), and 42.81% failed the SIMS (N=131). The mean score on the Distress Tolerance Scale was 51.01 (SD=13.55; range, 15–75) and on the PTSD Checklist-5 was 32.09 (SD=19.78; range, 0–77). Direct effects are summarized in Table 2. PTSD diagnosis was significantly associated with distress tolerance (p<0.001). Both PTSD diagnosis (p<0.001) and distress tolerance (p<0.001) were significantly associated with symptom severity.
TABLE 2. Direct associations between posttraumatic stress disorder (PTSD) diagnosis, distress tolerance, symptom severity, and validity outcomes among study participants (N=306) (a)

Path                            B       SEB    t or Z   p        95% CI
PTSD
  Distress tolerance (b)        -8.89   1.53   -5.79    <0.001   -11.91, -5.87
  Symptom severity (c)          15.30   1.60   9.54     <0.001   12.15, 18.46
  MSVT (Z score) (d)            0.50    0.54   0.92     0.359    -0.56, 1.55
  b Test (d)                    -0.01   0.59   -0.01    0.992    -1.17, 1.16
  SIMS (d)                      0.71    0.34   2.11     0.035    0.05, 1.37
    Neurologic impairment       -0.10   0.31   -0.31    0.754    -0.72, 0.52
    Amnestic disorders          0.71    0.35   2.03     0.044    0.02, 1.40
    Affective disorders         0.69    0.28   2.43     0.016    0.13, 1.24
    Low intelligence            -0.15   0.20   -0.75    0.455    -0.54, 0.25
    Psychosis                   0.06    0.21   0.30     0.765    -0.36, 0.48
Distress tolerance
  Symptom severity (e)          -0.82   0.06   -14.41   <0.001   -0.93, -0.71
  MSVT (f)                      0.00    0.02   0.05     0.957    -0.04, 0.04
  b Test (f)                    -0.04   0.03   -1.50    0.133    -0.09, 0.01
  SIMS (f)                      -0.04   0.02   -2.58    0.010    -0.07, -0.01
    Neurologic impairment       -0.02   0.01   -1.65    0.101    -0.05, 0.00
    Amnestic disorders          -0.03   0.01   -2.33    0.021    -0.06, -0.01
    Affective disorders         -0.02   0.01   -1.49    0.137    -0.04, 0.01
    Low intelligence            -0.01   0.01   -1.32    0.187    -0.03, 0.01
    Psychosis                   -0.02   0.01   -2.78    0.006    -0.04, -0.01
Symptom severity
  MSVT (g)                      0.05    0.02   2.59     0.009    0.01, 0.08
  b Test (g)                    0.02    0.02   0.75     0.452    -0.02, 0.06
  SIMS (g)                      0.07    0.01   5.33     <0.001   0.04, 0.09
    Neurologic impairment       0.08    0.01   7.88     <0.001   0.06, 0.10
    Amnestic disorders          0.07    0.01   5.92     <0.001   0.04, 0.09
    Affective disorders         0.06    0.01   7.18     <0.001   0.05, 0.08
    Low intelligence            0.02    0.01   2.59     0.010    0.00, 0.03
    Psychosis                   0.03    0.01   3.89     <0.001   0.01, 0.04

(a) B=unstandardized beta, MSVT=Medical Symptom Validity Test, PTSD=posttraumatic stress disorder, SEB=standard error of beta, SIMS=Structured Inventory of Malingered Symptomatology. Statistical significance is indicated in bold. The paths represent associations between variables as noted.
(b) Corresponds with path a1 (PTSD diagnosis and distress tolerance, as shown in Figure 1).
(c) Corresponds with path a2 (PTSD diagnosis and PTSD severity, as shown in Figure 1).
(d) Corresponds with path c′ (PTSD diagnosis and validity measures, as shown in Figure 1).
(e) Corresponds with path d (distress tolerance and symptom severity, as shown in Figure 1).
(f) Corresponds with path b1 (distress tolerance and validity, as shown in Figure 1).
(g) Corresponds with path b2 (PTSD severity and validity, as shown in Figure 1).
FIGURE 1. Serial mediation analysis with labeled paths (a)
(a) Posttraumatic stress disorder (PTSD) diagnosis was determined with the Clinician-Administered PTSD Scale for DSM-5, distress tolerance with the Distress Tolerance Scale total score, and PTSD severity with the PTSD Checklist for DSM-5. Validity indices included the Medical Symptom Validity Test (MSVT), the b Test, and the Structured Inventory of Malingered Symptomatology (SIMS). Separate exploratory analyses of SIMS subscale scores were also conducted. The paths represent the following associations between variables: a1=PTSD diagnosis and distress tolerance, a2=PTSD diagnosis and PTSD severity, d=distress tolerance and PTSD severity, b1=distress tolerance and validity, b2=PTSD severity and validity, c′=PTSD diagnosis and validity measures.
* Three validity measures were evaluated separately as outcomes (i.e., this model was repeated three times, with a different validity outcome each time).
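For readers unfamiliar with serial mediation, the labeled paths combine in the standard way into three indirect effects. The display below writes out that decomposition using the path labels from Figure 1; note that for the dichotomous validity outcomes the outcome model is logistic, so its coefficients are on the log-odds scale.

```latex
\begin{align*}
M_1 &= i_1 + a_1 X + e_1 && \text{(distress tolerance)}\\
M_2 &= i_2 + a_2 X + d\,M_1 + e_2 && \text{(PTSD symptom severity)}\\
Y   &= i_3 + c'X + b_1 M_1 + b_2 M_2 + e_3 && \text{(validity outcome)}\\[4pt]
\text{indirect effects:}\quad & a_1 b_1 \;(\text{via } M_1),\qquad
a_2 b_2 \;(\text{via } M_2),\qquad
a_1\, d\, b_2 \;(\text{via } M_1 \text{ then } M_2)
\end{align*}
```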

Performance Validity

MSVT failure was positively associated with PTSD severity (Table 2) but not with PTSD diagnosis or distress tolerance. The overall model was significant (Table 3). The significant indirect effect included PTSD severity (Table 4), indicating PTSD severity as the driving explanatory factor. Failure on the b Test was not significantly associated with PTSD diagnosis, distress tolerance, or symptom severity (Table 2).
TABLE 3. Model summary for full serial mediation model outcomes and post hoc tests (a)

Test                      Cox and Snell (b)   Nagelkerke (b)   p (c)
MSVT                      0.059               0.151            <0.001
b Test                    0.028               0.083            0.034
SIMS                      0.356               0.517            <0.001

Subscale                  R2      F        df        p
  Neurologic impairment   0.38    62.26    3, 302    <0.001
  Amnestic disorders      0.36    56.68    3, 302    <0.001
  Affective disorders     0.41    69.08    3, 302    <0.001
  Low intelligence        0.08    8.37     3, 302    <0.001
  Psychosis               0.22    28.52    3, 302    <0.001

(a) The data represent the model summary for full serial mediation (PTSD → distress tolerance → symptom severity → outcome variable). MSVT=Medical Symptom Validity Test, PTSD=posttraumatic stress disorder, SIMS=Structured Inventory of Malingered Symptomatology.
(b) Data represent pseudo R2 values.
(c) Statistical significance after adjusting for multiple comparisons using false discovery rate is indicated in bold.
TABLE 4. Summary of serial mediation indirect effects (a)

Outcome variable            B       Boot SE   95% CI
MSVT                        0.34    0.17      0.02, 0.68
b Test                      0.11    0.15      -0.19, 0.42
SIMS                        0.48    0.13      0.27, 0.77
  Neurologic impairment     0.57    0.13      0.33, 0.84
  Amnestic disorders        0.47    0.11      0.27, 0.72
  Affective disorders       0.47    0.11      0.27, 0.69
  Low intelligence          0.12    0.05      0.03, 0.22
  Psychosis                 0.19    0.07      0.05, 0.33

(a) The data represent the model summary for full serial mediation (PTSD → distress tolerance → symptom severity → outcome variable). B=unstandardized beta, MSVT=Medical Symptom Validity Test, PTSD=posttraumatic stress disorder, SIMS=Structured Inventory of Malingered Symptomatology. Statistical significance is indicated in bold.

Symptom Validity

SIMS failure was positively associated with a diagnosis of PTSD, negatively associated with distress tolerance, and positively associated with symptom severity (Table 2). The overall indirect effect of the serial mediation analysis of PTSD diagnosis on SIMS failure was significant such that presence of a PTSD diagnosis was associated with failure on the SIMS when distress tolerance was low and symptom severity was high (Table 4).
Post hoc analyses were conducted to determine if subscales of the SIMS were differentially predicted by the serial mediation model. Overall, the serial mediation model significantly predicted all subscales (Table 4). In comparison to the other subscales, the effect size for the Low Intelligence subscale was relatively small.

Discussion

PTSD severity was directly associated with failure on the memory-based PVT (MSVT) but not the PVT integrating visual attention and discrimination (b Test). Given that hypervigilance and poor concentration are symptoms of PTSD, these findings were unexpected. Results may reflect the generally higher sensitivity of the MSVT to validity threats, or they may reflect different patterns of performance across different types of tests. As noted by Boone (40), it is atypical for a noncredible test-taker to underperform on every type of task administered. There may be an assumption that PTSD should be associated with memory impairment, and conscious or unconscious expectations about cognitive limitations associated with PTSD may have resulted in underperformance. Evidence regarding the association between PTSD and learning and memory is mixed: results from studies excluding invalid participants have ranged from no significant decrements, to statistically but not clinically significant differences, to a complex bidirectional relation between PTSD and memory (23–26, 41, 42). Although significant steps were taken to minimize obvious secondary gain in the present study (e.g., a research-only sample was used, and participants were informed that research results would not be available for forensic or medical purposes), a perception of potential gain may have remained. Participants with PTSD may receive monetary compensation for the condition through the Veterans Benefits Administration, and it is possible that they remained unconvinced that results could not affect compensation status.
Conversely, failure on the SVT was directly associated with a current PTSD diagnosis, poorer self-reported distress tolerance, and higher PTSD symptom severity. Specifically, individuals with PTSD who reported higher symptom severity and poorer distress tolerance scored higher on subscales measuring atypical neurological problems, memory problems inconsistent with brain dysfunction, atypical mood and anxiety symptoms, general cognitive incapacity, and atypical psychotic symptoms. For veterans with PTSD, difficulty managing negative emotional states in general may contribute to symptom overreporting, perhaps reflecting an inability to modulate emotional experiences or tolerate stronger affect. This differs from a cry for help, which implies that the purpose of endorsing high distress is to communicate that the distress is unbearable or that services are needed. By definition, the cry-for-help phenomenon constitutes malingering because it involves conscious exaggeration as a means to an external incentive, namely mental health services. Our findings support an alternative explanation: overreporting results from a lack of distress tolerance or emotional control rather than from an internal or external motivator (i.e., clinical attention or services). It is noteworthy that others have cautioned against treating psychopathology as a superordinate explanation (i.e., using psychiatric symptoms as a causal explanation for overreporting); interpreting failed SVTs by invoking psychopathology as the cause of highly abnormal symptom report follows circular reasoning (43). In line with Merten and Merckelbach (43), it is unlikely that individuals with PTSD are actually experiencing a high number of very bizarre, atypical, or implausible symptoms; when they fail SVTs, their self-report must still be interpreted, more generally, as simply invalid.
On the basis of our findings, we recommend including both SVTs and PVTs in clinical evaluations and research. A current PTSD diagnosis and symptom self-report were related to PVT and SVT performance in our sample. Although this is consistent with existing clinical and forensic recommendations (1), it is often overlooked in research contexts. In our study, 43% of a research-only sample failed an SVT (compared with 53% in a study of treatment-seeking veterans with PTSD [44]), and failing was not directly related to a PTSD diagnosis. Additionally, clinicians and researchers should consider including PVTs that sample across multiple domains, not just memory or attention. In our sample, PTSD severity was associated with failing a memory-based PVT but not an attention-based PVT; inclusion of only one of these measures would lead to very different conclusions about the validity of cognitive results. If a client or participant fails a PVT, clinicians should refrain from attributing that failure to a PTSD diagnosis or a cry for help. Regarding treatment, these results suggest that veterans with PTSD who have difficulty tolerating negative affect (i.e., who report higher distress and symptomatology) might benefit from an intervention to strengthen distress tolerance skills. It is perhaps unsurprising that, in a disorder marked by avoidance, poor distress tolerance was associated with worse self-reported symptoms. Improved coping skills may contribute to improved mental health and reduced overreporting of psychological and cognitive symptoms.
Limitations of the present study include the low rate of failure on the b Test, which may have contributed to null findings for this PVT. Future studies are warranted to replicate our preliminary finding of differential performance across PVTs drawn from different domains. We did not examine contributions of comorbid psychiatric diagnoses or conditions such as depression and TBI. Additional studies could seek to replicate the current findings in other clinical populations, such as major depression and TBI, or investigate whether increasing comorbidity is related to increasing problems with distress tolerance or validity failure. We also did not assess interrater reliability for the CAPS-5. Finally, results may not generalize to other populations, such as older veterans, active-duty service members, and civilian samples.

Footnotes

Preliminary data were previously presented at the 28th Annual Meeting of the American Neuropsychiatric Association, Atlanta, March 8–11, 2017, and the 37th Annual Meeting of the National Academy of Neuropsychology, Boston, October 25–28, 2017.
Supported by the U.S. Army Medical Research and Materiel Command and the U.S. Department of Veterans Affairs (Chronic Effects of Neurotrauma Consortium) (award W81XWH-13-2-0095); the Department of Veterans Affairs Rehabilitation Research and Development Service (award I01RX002172-01); the Mid-Atlantic Mental Illness Research, Education and Clinical Center; the Department of Veterans Affairs Office of Academic Affiliations Advanced Fellowship Program in Mental Illness, Research, and Treatment; and the Salisbury Veterans Affairs Health Care System, Salisbury, N.C.

References

1.
Bush SS, Ruff RM, Tröster AI, et al: Symptom validity assessment: practice issues and medical necessity NAN policy and planning committee. Arch Clin Neuropsychol 2005; 20:419–426
2.
Heilbronner RL, Sweet JJ, Morgan JE, et al: American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. Clin Neuropsychol 2009; 23:1093–1129
3.
Martin PK, Schroeder RW, Odland AP: Neuropsychologists’ validity testing beliefs and practices: a survey of North American professionals. Clin Neuropsychol 2015; 29:741–776
4.
Van Dyke SA, Millis SR, Axelrod BN, et al: Assessing effort: differentiating performance and symptom validity. Clin Neuropsychol 2013; 27:1234–1246
5.
Carone DA, Bush S: Mild Traumatic Brain Injury: Symptom Validity Assessment and Malingering. New York, Springer, 2013
6.
Sollman MJ, Berry DT: Detection of inadequate effort on neuropsychological testing: a meta-analytic update and extension. Arch Clin Neuropsychol 2011; 26:774–789
7.
Boone KB: Assessment of Feigned Cognitive Impairment: A Neuropsychological Perspective. New York, Guilford Press, 2007
8.
Erdodi LA, Lichtenstein JD: Invalid before impaired: an emerging paradox of embedded validity indicators. Clin Neuropsychol 2017; 31:1029–1046
9.
Carone DA: Young child with severe brain volume loss easily passes the Word Memory Test and Medical Symptom Validity Test: implications for mild TBI. Clin Neuropsychol 2014; 28:146–162
10.
Batt K, Shores EA, Chekaluk E: The effect of distraction on the Word Memory Test and Test of Memory Malingering performance in patients with a severe brain injury. J Int Neuropsychol Soc 2008; 14:1074–1080
11.
Etherton JL, Bianchini KJ, Greve KW, et al: Test of Memory Malingering performance is unaffected by laboratory-induced pain: implications for clinical use. Arch Clin Neuropsychol 2005; 20:375–384
12.
Silver JM: Invalid symptom reporting and performance: what are we missing? NeuroRehabilitation 2015; 36:463–469
13.
Qureshi SU, Long ME, Bradshaw MR, et al: Does PTSD impair cognition beyond the effect of trauma? J Neuropsychiatry Clin Neurosci 2011; 23:16–28
14.
Vasterling JJ, Brailey K, Constans JI, et al: Attention and memory dysfunction in posttraumatic stress disorder. Neuropsychology 1998; 12:125–133
15.
Vasterling JJ, Duke LM, Brailey K, et al: Attention, learning, and memory performances and intellectual resources in Vietnam veterans: PTSD and no disorder comparisons. Neuropsychology 2002; 16:5–14
16.
Samuelson KW, Neylan TC, Metzler TJ, et al: Neuropsychological functioning in posttraumatic stress disorder and alcohol abuse. Neuropsychology 2006; 20:716–726
17.
Gilbertson MW, Paulus LA, Williston SK, et al: Neurocognitive function in monozygotic twins discordant for combat exposure: relationship to posttraumatic stress disorder. J Abnorm Psychol 2006; 115:484–495
18.
Hart J Jr, Kimbrell T, Fauver P, et al: Cognitive dysfunctions associated with PTSD: evidence from World War II prisoners of war. J Neuropsychiatry Clin Neurosci 2008; 20:309–316
19.
Barrett DH, Green ML, Morris R, et al: Cognitive functioning and posttraumatic stress disorder. Am J Psychiatry 1996; 153:1492–1494
20.
Yehuda R, Golier JA, Tischler L, et al: Learning and memory in aging combat veterans with PTSD. J Clin Exp Neuropsychol 2005; 27:504–515
21.
Koso M, Hansen S: Executive function and memory in posttraumatic stress disorder: a study of Bosnian war veterans. Eur Psychiatry 2006; 21:167–173
22.
Clark AL, Amick MM, Fortier C, et al: Poor performance validity predicts clinical characteristics and cognitive test performance of OEF/OIF/OND veterans in a research setting. Clin Neuropsychol 2014; 28:802–825
23.
Wisdom NM, Pastorek NJ, Miller BI, et al: PTSD and cognitive functioning: importance of including performance validity testing. Clin Neuropsychol 2014; 28:128–145
24.
Stricker NH, Keller JE, Castillo DT, et al: The neurocognitive performance of female veterans with posttraumatic stress disorder. J Trauma Stress 2015; 28:102–109
25.
Wrocklage KM, Schweinsburg BC, Krystal JH, et al: Neuropsychological functioning in veterans with posttraumatic stress disorder: associations with performance validity, comorbidities, and functional outcomes. J Int Neuropsychol Soc 2016; 22:399–411
26.
Vasterling JJ, Aslan M, Lee LO, et al: Longitudinal associations among posttraumatic stress disorder symptoms, traumatic brain injury, and neurocognitive functioning in Army soldiers deployed to the Iraq War. J Int Neuropsychol Soc 2018; 24:311–323
27.
Verfaellie M, Lafleche G, Spiro A, et al: Neuropsychological outcomes in OEF/OIF veterans with self-report of blast exposure: associations with mental health, but not MTBI. Neuropsychology 2014; 28:337–346
28.
Rogers R: Clinical Assessment of Malingering and Deception, 3rd ed. New York, Guilford Press, 2008
29.
Hyer L, Fallon JH, Harrison WR, et al: MMPI Overreporting by Vietnam Combat Veterans. Hoboken, NJ, John Wiley and Sons, 1987, pp 79–83
30.
Garcia HA, Franklin CL, Chambliss J: Examining MMPI-2 F-family Scales in PTSD-diagnosed Veterans of Operation Enduring Freedom and Operation Iraqi Freedom. Washington, DC, Educational Publishing Foundation, 2010, pp 126–129
31.
Berry DT, Adams JJ, Clark CD, et al: Detection of a cry for help on the MMPI-2: an analog investigation. J Pers Assess 1996; 67:26–36
32.
Weathers FW, Bovin MJ, Lee DJ, et al: The Clinician-Administered PTSD Scale for DSM-5 (CAPS-5): development and initial psychometric evaluation in military veterans. Psychol Assess 2018; 30:383–395
33.
Blevins CA, Weathers FW, Davis MT, et al: The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): development and initial psychometric evaluation. J Trauma Stress 2015; 28:489–498
34.
Simons JS, Gaher RM: The distress tolerance scale: development and validation of a self-report measure. Motiv Emot 2005; 29:83–102
35.
Green P: Green’s Word Memory Test for Windows User’s Manual. Edmonton, Canada, Green’s Publishing, 2005
36.
Boone KB, Lu P, Herzberg D: The b Test. Los Angeles, Western Psychological Services, 2002
37.
Widows MR, Smith GP: Structured Inventory of Malingered Symptomatology: Professional Manual. Odessa, Fla, Psychological Assessment Resources, 2005
38.
Hayes AF: PROCESS: a versatile computational tool for observed variable mediation, moderation, and conditional process modeling, 2012. http://www.afhayes.com/public/process2012.pdf
39.
Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B 1995; 57:289–300
40.
Boone KB: The need for continuous and comprehensive sampling of effort/response bias during neuropsychological examinations. Clin Neuropsychol 2009; 23:729–741
41.
Demakis GJ, Gervais RO, Rohling ML: The effect of failure on cognitive and psychological symptom validity tests in litigants with symptoms of post-traumatic stress disorder. Clin Neuropsychol 2008; 22:879–895
42.
Stricker NH, Lippa SM, Green DL, et al: Elevated rates of memory impairment in military service-members and veterans with posttraumatic stress disorder. J Clin Exp Neuropsychol 2017; 39:768–785
43.
Merten T, Merckelbach H: Symptom validity testing in somatoform and dissociative disorders: a critical review. Psychol Inj Law 2013; 6:122–137
44.
Freeman T, Powell M, Kimbrell T: Measuring symptom exaggeration in veterans with chronic posttraumatic stress disorder. Psychiatry Res 2008; 158:374–380

Information & Authors


Published In

The Journal of Neuropsychiatry and Clinical Neurosciences
Pages: 161 - 167
PubMed: 31266409

History

Received: 30 November 2017
Revision received: 21 December 2018
Revision received: 17 February 2019
Revision received: 11 March 2019
Accepted: 11 March 2019
Published online: 3 July 2019
Published in print: Spring 2020

Keywords

  1. Performance Validity
  2. Posttraumatic Stress Disorder
  3. Assessment

Authors


Holly M. Miskey, Ph.D. [email protected]
Sarah L. Martindale, Ph.D.
Robert D. Shura, Psy.D.
Katherine H. Taber, Ph.D.
The Salisbury Veterans Affairs Health Care System, Salisbury, N.C. (Miskey, Martindale, Shura, Taber); the Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, N.C. (Miskey, Martindale, Shura, Taber); and the Wake Forest School of Medicine, Winston-Salem, N.C. (Miskey, Martindale, Shura).

Notes

Send correspondence to Dr. Miskey ([email protected]).

Competing Interests

The authors report no financial relationships with commercial interests.
