To the Editor:
The clinicians and researchers of the PTSD Systems Biology Consortium have carried out an admirable study [1]. Their attempt to find a diagnostic tool for post-traumatic stress disorder based on biological data is highly original, labor-intensive, and potentially very influential work, and it is of interest far beyond the field of PTSD. With no fewer than a million markers tested, combined, and boiled down in sophisticated ways, it appears the authors marshalled everything biological psychiatry and artificial intelligence can currently muster in order to find a diagnostic tool for a psychiatric disorder. To me, this project represents the state of the art in our field's decades-long quest for biological methods suitable for diagnostic use and with an impact on clinical practice: a search that, so far, has been largely frustrating.
The authors rightly emphasize the sensitivity (85%) and specificity (77%) of their method. However, additional key characteristics are helpful in evaluating a diagnostic instrument (Table 1), for example: Cohen's kappa, a measure of a test's ability to find the diagnosis beyond chance; the positive likelihood ratio (LR+), the ratio of the true-positive rate to the false-positive rate; and the positive predictive value (PPV), the proportion of true positives among all positive test results and thus an approximation of an instrument's usefulness in clinical practice. Applied to the figures presented in the paper, the instrument's PPV amounts to 79%, indicating that about four in five patients with a positive diagnosis will indeed have PTSD. Cohen's kappa turns out to be 0.62, and the positive likelihood ratio is 3.67. According to commonly used definitions [2, 3], these values signify substantial (kappa) and small (LR+) diagnostic advances, respectively.
For comparison, the widely used clinical interview CAPS (Clinician-Administered PTSD Scale) reaches higher kappa values [4]. For example, in an older but similar sample of veterans, kappa amounted to 0.75, with a PPV above 90% [5]. In addition, Cohen's kappa and PPV are influenced by prevalence, and the study's sample comprised 50% PTSD patients [1], an unusually high proportion for most clinical and community environments. Prevalences of 25% or 12.5% are still high but more realistic, particularly in a military context, as conveyed in the introduction of the paper by Dean et al. [1]. At these prevalence rates, Cohen's kappa shrinks to 0.52 and 0.38, respectively: moderate and fair agreement, in the words of Landis and Koch [2]. In this scenario, the PPV falls to 55% and 34%, so only half or a third of patients assigned a PTSD diagnosis by the instrument would actually suffer from PTSD. If anything, the instrument, with its relatively low Cohen's kappa, its low PPV, and its higher negative predictive value (97% at 12.5% prevalence), seems better suited to ruling out subjects than to finding true cases.
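The prevalence-dependent figures above can be reproduced from the reported sensitivity and specificity alone; the short sketch below does so under the stated assumptions (sensitivity 0.85, specificity 0.77, and the three assumed prevalences). Small rounding differences remain: with these rounded inputs, LR+ works out to about 3.70 rather than the 3.67 quoted, presumably because the paper's unrounded sensitivity and specificity differ slightly.

```python
# Sketch: diagnostic metrics as functions of prevalence, assuming the
# rounded sensitivity (0.85) and specificity (0.77) reported in the letter.
SENS, SPEC = 0.85, 0.77

def metrics(prev):
    """Return (PPV, NPV, Cohen's kappa, LR+) at a given prevalence."""
    tp = SENS * prev               # true positives, as population fractions
    fn = (1 - SENS) * prev         # false negatives
    fp = (1 - SPEC) * (1 - prev)   # false positives
    tn = SPEC * (1 - prev)         # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    p_obs = tp + tn                # observed agreement (test vs. true status)
    p_pos = tp + fp                # fraction of the sample testing positive
    p_exp = p_pos * prev + (1 - p_pos) * (1 - prev)  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    lr_pos = SENS / (1 - SPEC)     # prevalence-independent
    return ppv, npv, kappa, lr_pos

for prev in (0.50, 0.25, 0.125):
    ppv, npv, kappa, lr = metrics(prev)
    print(f"prev={prev:5.3f}: PPV={ppv:.2f}  NPV={npv:.2f}  "
          f"kappa={kappa:.2f}  LR+={lr:.2f}")
```

Running this confirms the letter's arithmetic: PPV falls from 0.79 at 50% prevalence to 0.55 and 0.35 at 25% and 12.5%, kappa from 0.62 to 0.52 and 0.38, while NPV rises to 0.97 at the lowest prevalence.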
As the authors discuss, their tool's clinical usefulness may well decrease in samples without the artificially high clinical contrast between cases and controls included in this study (judging from the wide gap in CAPS scores), which may not reflect reality in clinical situations. What is more: Is it possible that the approach diagnoses stress rather than PTSD? After all, heart rate and hyperarousal were solid correlates of a positive diagnosis, and while the test results were relatively strong in patients with comorbid major depressive disorder (MDD), they were much less marked in patients without MDD [1].
In conclusion, despite all its ingenuity, the instrument’s diagnostic properties do not indicate a revolution in biology-based diagnosis in psychiatry, in particular, relative to diagnoses based on a structured clinical interview. Combining clinical interviews with biology-based approaches may yield higher retrieval rates of PTSD cases in the future, but that remains to be shown.
References
1. Dean KR, Hammamieh R, Mellon SH, Abu-Amara D, Flory JD, Guffanti G, et al. Multi-omic biomarker identification and validation for diagnosing warzone-related post-traumatic stress disorder. Mol Psychiatry. 2019. https://doi.org/10.1038/s41380-019-0496-z.
2. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
3. Grimes DA, Schulz KF. Refining clinical diagnoses with likelihood ratios. Lancet. 2005;365:1500–5.
4. Weathers FW, Keane TM, Davidson JRT. Clinician-Administered PTSD Scale: a review of the first ten years of research. Depress Anxiety. 2001;13:132–56.
5. Hyer L, Summers MN, Boyd S, Litaker M, Boudewyns P. Assessment of older combat veterans with the Clinician-Administered PTSD Scale. J Trauma Stress. 1996;9:587–93.
Conflict of interest
The author declares that he has no conflict of interest.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Baethge, C. What are the merits of a multi-omic approach to diagnosing PTSD?. Mol Psychiatry 25, 3127–3128 (2020). https://doi.org/10.1038/s41380-020-0694-8