Address for correspondence:
Jagat Narula, MD, PhD, Editor-in-Chief, JACC: Cardiovascular Imaging, 3655 Nobel Drive, Suite 630, San Diego, California 92112
In a recent prospective, double-blind, randomized multicenter phase 3 trial, ADVANCE (Adenosine versus Regadenoson Comparative Evaluation for Myocardial Perfusion Imaging), the A2A-selective adenosine receptor agonist regadenoson was shown to be noninferior to the nonselective vasodilator adenosine for detecting myocardial ischemia (1). The overall visual agreement was comparably low (in the low 60% range) for the adenosine–regadenoson and the adenosine–adenosine comparisons. Conversely, when quantitative analysis was applied, regadenoson produced virtually identical results to adenosine regarding left ventricular perfusion defect size and severity and the extent of ischemia (2). What are the regulatory implications of these findings? Should the regulatory bodies rely on subjective visual interpretation of myocardial perfusion studies, on objective quantitative programs to appraise the comparability between vasodilators, or both? What is the “true” standard?
In order to avoid the inherent biological variability of imaging a subject twice, Food and Drug Administration (FDA) clinical trials usually keep the time interval between serial studies to a minimum and maintain the same medical regimen and image acquisition parameters. For example, if the clinical trial is comparing 2 vasodilators, such as regadenoson and adenosine, the investigators would apply the same radiotracer, camera, and image acquisition protocol for the 2 serial imaging studies. On the other hand, if the comparison is between 2 radiotracers, the investigators would apply the same stressor (exercise or vasodilator) for the 2 serial imaging studies. When it comes to the interpretation of regional perfusion defects, there are 2 options: 1) visual interpretation, in which 2 or more blinded expert readers apply a predefined semiquantitative scoring system; or 2) automated quantitative analysis using previously validated software that employs a sex-specific normal database.
Currently, the regulatory agencies favor visual interpretation of images by expert readers. Because visual interpretation is the standard for interpreting myocardial perfusion studies in clinical practice, the regulatory body would like to mimic the “real-world” application as closely as possible. Visual analysis entails reviewing the raw stress and rest cardiac images, as well as the reconstructed and normalized paired tomographic images, to identify extracardiac and motion artifacts. Segmental scores are assigned, using a 17-segment model, to quantify the size and severity of perfusion defects on stress (summed stress score) and the extent of reversibility compared with rest images (summed difference score). Although the blinded readers may all agree on the final interpretation of the images, which essentially reflects the impression of a clinical report, there may be significant variability among them in the visual scoring of the anatomical extent, severity, and reversibility of the perfusion defects. The latter has been described as the human variability component (3). Thus, when it comes to visual interpretation and scoring, is “the devil in the details”?
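The summed-score arithmetic described above can be illustrated with a minimal sketch. The segment values below are hypothetical reads on the standard 0 (normal) to 4 (absent uptake) scale, not data from the trial; the function name is illustrative.

```python
# Illustrative sketch: computing summed scores from 17-segment visual
# readings. Each segment is scored 0 (normal) through 4 (absent uptake).

def summed_score(segment_scores):
    """Sum the per-segment scores of a 17-segment model."""
    assert len(segment_scores) == 17
    return sum(segment_scores)

# Hypothetical reads: a moderate stress defect that largely reverses at rest.
stress = [0] * 12 + [2, 3, 3, 2, 1]   # stress image segment scores
rest   = [0] * 12 + [0, 1, 1, 0, 0]   # rest image segment scores

sss = summed_score(stress)   # summed stress score: defect size/severity
srs = summed_score(rest)     # summed rest score
sds = sss - srs              # summed difference score: extent of reversibility

print(sss, srs, sds)  # → 11 2 9
```

Inter-reader variability enters through the per-segment scores themselves: two blinded readers assigning slightly different values to the same defect produce different summed scores even when their overall impressions agree.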
Because radionuclide imaging is inherently digital, it lends itself well to objective quantitative analysis. Quantitative programs are highly reproducible and may be better suited for assessing serial studies and for identifying differences in defect size and reversibility. On the other hand, several automated software programs use different methodologies for quantification, and their data are not necessarily interchangeable. Moreover, unlike visual assessment, automated programs cannot differentiate artifactual defects (e.g., from patient motion or subdiaphragmatic visceral activity) from true perfusion defects.
Because quantitative programs are almost universally available on nuclear cameras, perhaps the best approach is to go “hybrid”: that is, to combine the advantages of visual and quantitative analysis so that the data come closest to the “truth.” One of the key advantages of visual interpretation is its ability to differentiate true regional perfusion defects from artifacts. In contrast, an important advantage of automated analysis is that it objectively quantifies the presence, extent, and severity of myocardial perfusion defects. From a regulatory perspective, perhaps the blinded readers should review the stress and rest image sets for acquisition quality and artifacts. If the images pass visual inspection, either as acquired or after minor modification (e.g., reslicing the images or applying a motion correction algorithm), then the data are generated by the automated quantitative program in all 17 segments. On the other hand, if there is extracardiac activity (e.g., bowel activity adjacent to or overlying the inferior wall) that cannot be corrected, then the images are interpreted and scored visually. Such a hybrid approach should minimize the noise incurred by segmental human scoring while accounting for extracardiac artifacts that often go undetected, or are spuriously flagged as abnormal, by automated programs.
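The hybrid triage proposed above can be sketched as a simple decision rule. This is a hypothetical illustration of the workflow only; the function and its flags are not drawn from any FDA guidance document or vendor software.

```python
# Hypothetical sketch of the hybrid triage: route each stress/rest image
# set to automated quantification or to visual scoring by blinded readers.

def triage(passes_inspection, correctable_by_minor_fix, uncorrectable_artifact):
    """Decide the analysis path for one stress/rest image set.

    passes_inspection      -- images are clean as acquired
    correctable_by_minor_fix -- fixable by reslicing or motion correction
    uncorrectable_artifact -- e.g., bowel activity overlying the inferior wall
    """
    if uncorrectable_artifact:
        # Automated programs cannot distinguish this from a true defect.
        return "visual scoring by blinded readers"
    if passes_inspection or correctable_by_minor_fix:
        # Clean (or corrected) data go to 17-segment quantification.
        return "automated 17-segment quantitative analysis"
    return "visual scoring by blinded readers"

print(triage(True, False, False))   # clean study → quantitative
print(triage(False, True, False))   # motion-corrected → quantitative
print(triage(False, False, True))   # uncorrectable artifact → visual
```

The point of the sketch is that the human readers act as a quality gate rather than as the primary scorers, reserving segmental visual scoring for the minority of studies that automation would mishandle.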
At the present time, the role of a fully automated quantitative analysis, with or without visual interpretation, for a new imaging agent or vasodilator is not clearly defined in FDA guidance documents. Perhaps it is time for the best of both worlds!
References
1. Cerqueira M.D., Nguyen P., Staehr P., Underwood S.R., Iskandrian A.E., ADVANCE-MPI Trial Investigators.
2. Mahmarian J.J., Cerqueira M.D., Iskandrian A.E., et al.
3. Udelson J.E.