Author information
- Received August 11, 2009
- Revision received November 3, 2009
- Accepted November 6, 2009
- Published online May 1, 2010.
- Warren K. Laskey, MD⁎
- Ludwig E. Feinendegen, MD†,‡
- Ronald D. Neumann, MD§ and
- Vasken Dilsizian, MD∥
- ↵⁎Reprint requests and correspondence:
Dr. Warren K. Laskey, University of New Mexico School of Medicine, MSC10-5550, Albuquerque, New Mexico 87131
Clinical decision-making regarding the use of low-level ionizing radiation for diagnostic and/or therapeutic purposes in patients with cardiovascular disease must, as in all other clinical scenarios, weigh the full range of the risk–benefit ratio. Concerns regarding the late carcinogenic effects of exposure to low levels (i.e., <100 mSv) of ionizing radiation stem from extrapolation of exposure-outcome data in survivors of the World War II atomic bomb explosions. However, the ongoing debate over the true incremental risk to subjects exposed to doses currently administered in cardiovascular procedures fails to take into account both the uncertainty of the dose-response relationship in this lower range and the tissue-specific reparative responses that are also manifest at lower levels of exposure. The present discussion draws attention to both of these aspects as they relate to clinical decision-making.
Epidemiology, Clinical Decision-Making, and the Risk–Benefit Ratio
Clinicians are constantly weighing the risk of a procedure, or treatment, against the benefit. In deciding on the appropriate recommendation(s), the clinician resorts to the available “evidence”: that body of information derived from carefully obtained observations and measurements that themselves have been tested for their validity and predictive value. When such evidence is obtained from population-based observational or rigorous epidemiologic studies, the conclusions and predictions made on the basis of such observations will generally apply to an “average” member of a similar population. The problem arises when such population-derived data are then applied to an individual patient. This classic dilemma of clinical decision-making is mitigated, but not obviated, when most of the patient's characteristics match those of the studied population. However, even under ideal conditions of complete matching of such characteristics, the predictive probability of an event (outcome) in a given individual will always be characterized by greater uncertainty when compared to the probability of an event in a large sample (Fig. 1).
Uncertainty in the estimate of risk must be expressed in conditional probabilistic terms. Distributions for the parameter of interest, e.g., effective dose (ED), should be explicitly presented as probability density functions, and point estimates should be reported together with their associated credible intervals. Perhaps the best example of this uncertainty is the distinction between an "average" subject and a subject with specific characteristics exposed to ionizing radiation. This distinction is at the heart of the application, or misapplication, of ED at the level of the individual patient or test. The ED is, by definition, a calculated (as opposed to measured) quantity that reflects the "average" age, "average" gender (realizing there cannot be such an entity), and "average" relative tissue radiosensitivities (tissue weighting factors) in a given population exposed to a given amount of ionizing radiation.
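As a minimal illustration of this probabilistic framing, the sketch below represents a patient's ED as a probability density rather than a single number and reports a point estimate with a 95% credible interval. The nominal dose and its relative uncertainty are assumed values chosen purely for illustration, and the lognormal form is simply a convenient choice for a positive, right-skewed quantity.

```python
import math
import random
import statistics

random.seed(0)

# Assumed, for illustration only: a nominal ED of 10 mSv with ~30% relative
# uncertainty, modeled as a lognormal distribution (EDs are positive and
# typically right-skewed).
nominal_ed_msv = 10.0
relative_sd = 0.30

# Lognormal parameters chosen so that the median equals the nominal dose.
mu = math.log(nominal_ed_msv)
sigma = math.sqrt(math.log(1.0 + relative_sd ** 2))

samples = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))

point_estimate = statistics.median(samples)   # central (median) estimate
lo = samples[int(0.025 * len(samples))]       # 2.5th percentile
hi = samples[int(0.975 * len(samples))]       # 97.5th percentile

print(f"ED point estimate: {point_estimate:.1f} mSv")
print(f"95% credible interval: {lo:.1f} to {hi:.1f} mSv")
```

Reporting the interval alongside the point estimate makes explicit that two patients with the same nominal ED may face meaningfully different true exposures.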
Table 1 summarizes the influence of age and gender on the uncertainty in the risk of cancer-related mortality, as presented in the Biologic Effects of Ionizing Radiation (BEIR) VII report (1). The 2% increase in risk (relative risk [RR]: 1.02) associated with a dose of 0.1 Gy differs significantly from the long-standing International Commission on Radiological Protection–developed overall risk of 5% per sievert (2). However, an order of magnitude of error in the estimation of risk may not be unexpected when both exposure and risk are "low" and each is characterized by considerable uncertainty. For the clinician, the ED (and the associated risk of fatal cancer) must be viewed in broader terms and should not be used to assess the risk of any 1 test involving ionizing radiation or the risk to any 1 patient. The population-based data discussed earlier must be interpreted in that light, i.e., as population-based. Uncertainty in the "true" relationship between absorbed dose and risk at low doses further removes the concept of ED from a predictable single exposure–single patient context.
If, then, the risk to the individual patient is uncertain at low levels of ionizing radiation, the potential benefit of the radiation-based procedure or treatment must be clear-cut in order to effect sound decision-making. Unfortunately, too few ionizing radiation-based diagnostic or therapeutic modalities have been subjected to the rigor of randomized clinical trials. Few would argue with a favorable risk–benefit ratio for cardiac catheterization in the setting of acute ST-segment elevation myocardial infarction, where the life-saving benefit of the procedure is immediate, clear-cut, and overwhelming. At the other end of the clinical spectrum is the widespread use of computed tomography (CT) scanning as a screening tool in low- to medium-risk subjects in the general population. In this instance, where virtually no data point to a benefit of the procedure at either the individual or the population level, the objective benefit likely equals (at best) or falls below (at worst) the population-attributable risk of cancer induction. In "real-world" clinical medicine, the majority of risk–benefit decision-making resides in a gray area characterized by a lack of relevant data, clinical equipoise, and varying degrees of clinical exigency. The difficulty in predicting the risk of fatal cancer 40 or 50 years after nonsurgical, catheter-based correction of a life-altering congenital heart defect in a child might more properly be considered in terms of years of life lost or disability-adjusted life years. Such approaches are needed for these gray areas, where cancer-related mortality is, practically speaking, a nonissue, but morbidity and quality of life are very real, and immediate, issues for the individual patient. Against this background, a re-examination of the basic concepts of the interaction of ionizing radiation with tissue is in order.
The Concept(s) of Dose
Absorbed dose is defined as the amount of energy associated with ionizing radiation that is deposited per unit mass (of matter, tissue, etc.). The biological hazard of ionizing radiation is expressed as the ED and reflects not only the absorbed dose but also age at exposure, gender, cellular radiosensitivity, the specific type of radiation and its "biological effectiveness," the population in which the biological sequelae were ascertained, and the mathematical relationship between absorbed dose and biological response. The terms absorbed dose and ED are, unfortunately, often used interchangeably in the lay press as well as in the scientific literature. Although absorbed dose is the appropriate quantity for experimental and epidemiologic analyses of dose-response relationships, ED is the appropriate quantity for comparing an exposure to ionizing radiation with regulatory dose limits based on the risks of whole-body exposure (3). The risks, however, are based on mathematical modeling of the rates of incident cancers, mainly in the survivors of the atomic bomb explosions, as a function of (inferred) absorbed dose (4,5) and have been the subject of considerable discussion, if not frank controversy, at absorbed doses <100 mGy. The absorbed dose of ionizing radiation for diagnostic testing in cardiovascular medicine resides at the lower end of this range and presents the greatest challenge for meaningful interpretation of the previously noted epidemiologic relationship between dose and outcome. Thus, careful consideration of the sources of uncertainty in the assessment of ED, as well as a more critical understanding of the distinction between absorbed dose and ED, is imperative for an informed discussion of the risks associated with ionizing radiation for diagnostic purposes (Table 2).
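The distinction above can be made concrete with a short sketch of the ED arithmetic: the ED is the sum, over tissues, of each tissue's equivalent dose multiplied by its tissue weighting factor. The weighting factors below are a subset of the ICRP Publication 103 values, but the organ equivalent doses are hypothetical numbers chosen only to illustrate the calculation, not dosimetry for any real study.

```python
# Sketch of the ICRP-style effective dose calculation:
#   E = sum over tissues T of w_T * H_T
# where H_T is the equivalent dose to tissue T (mSv) and w_T is its
# tissue weighting factor (dimensionless).

# Subset of ICRP Publication 103 tissue weighting factors.
tissue_weights = {
    "lung": 0.12,
    "stomach": 0.12,
    "breast": 0.12,
    "red_bone_marrow": 0.12,
    "liver": 0.04,
    "thyroid": 0.04,
}

# Hypothetical equivalent doses (mSv), illustrative only; real values
# come from dosimetry models, not measurement at the patient.
organ_doses_msv = {
    "lung": 20.0,
    "stomach": 4.0,
    "breast": 15.0,
    "red_bone_marrow": 5.0,
    "liver": 6.0,
    "thyroid": 2.0,
}

effective_dose = sum(
    tissue_weights[t] * organ_doses_msv[t] for t in tissue_weights
)
print(f"Partial effective dose: {effective_dose:.2f} mSv")
```

Because the weighting factors are population averages, the same arithmetic applied to an individual patient inherits all of the "average age, average gender" caveats discussed earlier.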
Table 3 summarizes approximate EDs for the most commonly performed diagnostic and therapeutic studies in cardiovascular medicine. However, ongoing modifications in acquisition protocols for multidetector CT coronary angiography will result in somewhat lower EDs than those reported (6). Additionally, with current advances in both single-photon emission computed tomography and positron emission tomography instrumentation, diagnostic-quality images can be acquired using approximately half of the doses listed in Table 3.
Comparison of High- Versus Low-Dose Ionizing Radiation Exposure
All would agree that high absorbed doses of ionizing radiation are harmful and result in acute illness as well as late consequences, e.g., an elevated risk of cancer decades after irradiation (7,8). In contrast, the situation is quite different for short- or long-term exposures to absorbed doses below about 100 mGy. Not only are acute illnesses absent at this dose level, but late effects (such as cancer) have not been observed in populations such as the Japanese atomic bomb survivors (3,4,9), cohorts of nuclear workers (10), or populations living in geographic regions with high background radiation levels (11).
The difficulty in attributing an increased risk of cancer to low-dose exposure is due, in part, to the relatively high incidence of nonradiogenic cancer in industrialized countries, as well as to the fact that ionizing radiation is a weak carcinogen compared with many other toxins to which humans are exposed. Despite the widespread belief that the majority of cancer-related deaths following the atom bomb explosions in Hiroshima and Nagasaki were caused by radiation, only 10% of the 3,350 identified cancer-related deaths have been attributed to radiation exposure (7). What began as a well-intentioned effort to protect workers from exposure to ionizing radiation is now fostering widespread radiation phobia. Even with no increase in cancer recognizable at doses below about 100 mSv, many patients seeking medical help question the rationale of, and risk from, exposure to doses far below 100 mSv in the course of a diagnostic work-up, fearing that cancer may result from the exposure. Such fears are reinforced by publications such as a recent appraisal of medical radiation exposure, which concluded that 1.5% to 2% of all solid cancers in the U.S. might be caused by the widespread use of CT for diagnostic testing (12).
Only by way of modeling under certain assumptions, such as the linear no-threshold (LNT) hypothesis, can epidemiological data be made to fit the notion that any amount of radiation absorbed by the body potentially causes a malignant disease. The LNT hypothesis is widely accepted and forms the basis for current radiation protection regulations and guidelines, as proposed by the International Commission on Radiological Protection (ICRP) to ensure that no one suffers harm from exposure to ionizing radiation (13). However, more recent discoveries on low-dose effects in biological systems challenge the validity of the model based on this hypothesis.
LNT Hypothesis Versus Radiation Hormesis
Ionizing radiation primarily perturbs molecules in cells in a manner proportional to dose, with potential amplification of damage at higher levels. Cellular defenses operate at each level and are aimed at scavenging toxins, repairing damage (especially that of DNA), and removing damaged components, with replacement of lost elements essential for maintaining the structural and functional integrity of the organism. Whereas cancer was generally understood to develop from primary and/or secondary DNA damage, through cell transformation and subsequent stages to clinical malignancy, in the face of rather constant function of defense mechanisms at the various levels, newer research indicates that low doses of ionizing radiation can upgrade these protective mechanisms to operate also against nonradiogenic, i.e., "spontaneous," cancer (14,15). This upgrading of defenses, called adaptive protection, or radiation hormesis, functions under genetic guidance against the abundant endogenous (mainly metabolic) sources of DNA damage. A malignant process arises only when cells with unrepaired or misrepaired DNA damage undergo malignant transformation and escape the cellular and tissue-level functions of protection. Therefore, the risk of clinical cancer induction after low-dose irradiation appears as the difference between the cancer risk calculated on the basis of the LNT hypothesis and the cancer prevented by the aforementioned protective mechanisms.
Damage to DNA and Its Repair
The immediate DNA damage and cellular responses to that damage include molecular cross-links of various kinds, base changes, sugar modifications, single-strand breaks, and the more serious double-strand breaks (8). It must be stressed that the radiation-induced physicochemical damage to DNA increases linearly with dose. The reason is the particular impact of ionizing radiation in microdose events in biological tissue. The spectrum of these microdoses is characteristic of a given type of radiation. As the radiation energy transfer increases, the number of microdose events increases and, with it, the number of individual damage sites caused by each of the events. A dose-effect curve for chemical DNA damage in tissue actually conforms to a linear "Impact-Number-Effectiveness Function" without threshold (16). This linear function, however, is not identical in all cell types, owing to differing genomic activity. Moreover, secondary DNA damage may arise in bystander cells and add to the deviation from linearity.
Within minutes after irradiation, there is a plethora of DNA and chromatin modifications. Within 24 h after low-dose exposure, double-strand breaks decrease in number, approaching that of the background "spontaneous" double-strand breaks (17). At typical background radiation levels, the probability of a radiation-induced double-strand break per day per cell is, on average, about 1 in 10,000 (18). The capacity of normal cells to repair damage to DNA and other cellular components is genetically determined and may vary individually. More than 150 genes have been implicated in DNA repair at high and low doses (19,20). Some genes are active only in low-dose stress responses; others are modulated only after high doses (20,21). These data contradict a fundamental assumption of the LNT hypothesis, namely, that long-term health detriment can be assessed solely as a function of dose. Moreover, low-dose-irradiated confluent cells in vitro appear to stall DNA repair until cell proliferation begins again (17). Indeed, immediate induction of DNA repair has been detected in proliferating cells at low doses of about 1 mGy of X- and gamma radiation (21,22). In general, then, initial nonlethal radiation damage is answered in normal individuals by immediate attempts at structural reconstitution with regained functional homeostasis. Radiation effects involving DNA are ultimately determined by the final extent of DNA damage, with sequence mutations, and by the protective response of the organism. Multiple malfunctioning genes or gene cassettes are required for cancer induction, invasion, and metastasis to occur.
Figure 2 shows, as a function of dose, the 2 opposing effects: the risk of developing cancer and the degree of adaptive protection against cancer, with the baseline showing the level of lifetime nonradiation-induced cancer observed in industrialized countries. Whereas the degree of protection, according to experimental observations, rises with increasing doses to a maximum at about 100 mGy and then falls toward 0 (23), the cancer risk is assumed to rise linearly with dose if one takes existing protections against cancer to remain constant at low doses in the exposed system. The latter approach conforms to the LNT hypothesis. The figure also includes detrimental bystander effects and their consequences for cancer risk in the low-dose range, and acknowledges immediate repair induction, as discussed earlier.
Calculations presented elsewhere show that a very small degree of increased protection, in the region of 2% of lifetime cancer risk, might be sufficient to fully balance the assumed cancer risk at 100 mGy, based on the LNT hypothesis. Ongoing, and unresolved, controversy in the literature regarding the differential risk of high- and low-dose ionizing radiation (24,25) has given rise to the hypothesis that long-term, low doses of ionizing radiation may actually be beneficial and may amplify or stimulate repair mechanisms that protect against disease (21,23,26).
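The balance described above can be sketched numerically. In the toy model below, the LNT term rises linearly (2% excess risk per 100 mGy, per the calculation cited in the text), and adaptive protection is represented by a function that peaks at about 100 mGy and then decays. Both the functional form and the parameter values are illustrative assumptions, not fitted data.

```python
import math

# Illustrative (not fitted) model contrasting the LNT assumption with the
# adaptive-protection view discussed above. Dose in mGy; risk as excess
# lifetime cancer risk (fraction). All parameter values are assumptions.

LNT_SLOPE = 0.0002       # 2% excess risk per 100 mGy, per the text
PROTECTION_MAX = 0.02    # peak protection, ~2% of lifetime risk
PEAK_DOSE = 100.0        # protection maximal near 100 mGy

def lnt_risk(dose_mgy: float) -> float:
    """Excess risk under the linear no-threshold assumption."""
    return LNT_SLOPE * dose_mgy

def protection(dose_mgy: float) -> float:
    """Adaptive protection: rises to a maximum at PEAK_DOSE, then decays."""
    x = dose_mgy / PEAK_DOSE
    return PROTECTION_MAX * x * math.exp(1.0 - x)

def net_risk(dose_mgy: float) -> float:
    """Net excess risk = LNT risk minus adaptive protection."""
    return lnt_risk(dose_mgy) - protection(dose_mgy)

for d in (10, 50, 100, 500):
    print(f"{d:>4} mGy: LNT {lnt_risk(d):.4f}, net {net_risk(d):+.4f}")
```

At 100 mGy the two terms cancel exactly by construction, mirroring the text's point that roughly 2% protection could fully balance the LNT-assumed risk; below that dose the net term is negative (the hormesis claim), and well above it the linear term dominates.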
Uncertainty in the Measurements Themselves
Starting from classic epidemiologic studies and progressing through extensive experimental data, it is clear that the probability of harm, expressed as ED, is related in a complex fashion to the type of radiation, the value of absorbed dose (and dose rate), and tissue radiosensitivity. Confining the clinician's concerns to the absorbed dose encountered in medical diagnostic imaging simplifies things somewhat but does not avoid dependence on many assumptions. Although exposure (in Roentgen or air kerma) is directly translatable to dose (Gy) for X-rays and gamma rays, the measure of the overall probability of harm is a function of (organ) dose conversion coefficients as well as tissue weighting factors. The former speaks to geometrical considerations and the spatial relationship of the exposed tissue and the primary radiation field, whereas the latter speaks to the radiosensitivity of the irradiated tissue itself. Thus, it is not surprising to see uncertainties in the estimation of individual organ EDs ranging from 20% to 50% and that for a “representative” subject approaching 40%. An in-depth discussion of the assumptions and sources of error in the estimation of ED is available elsewhere (27–29). However, several clinically relevant caveats must be emphasized. Among these are the issues of single exposure versus multiple exposures (with a corollary being the influence of exposure rate), uniform exposure versus more localized (heterogeneous) exposure, and the application of atomic bomb survivor–derived risk data to the individual patient undergoing diagnostic testing.
Coupled with these uncertainties in dose is the relationship of dose to risk. Ignoring for a moment the true shape of the population probability distribution for survival, the LNT relationship posits: 1) no "safe" dose; and 2) a finite, lifelong risk for "low" doses, i.e., <100 mSv. It must be pointed out, however, that the BEIR VII conclusions in this regard may only be meaningful when a normal individual, belonging to a population of 100,000 individuals with an age distribution similar to that of the U.S. population, is exposed to a single 100 mGy dose. Once again, because risk is a probabilistic concept, it must be expressed by a distribution, or density, with a mean value and an expression of uncertainty: the standard deviation. The uncertainty in the estimate of risk, when added to the uncertainty inherent in fitting (stochastic) data even to an assumed linear relationship, properly lends itself to a Bayesian approach. For the clinician, the outcome of such a process is itself uncertain. Thus, it comes as no surprise that the credible intervals for lifetime attributable cancer risk, as presented in the BEIR VII report, are wide (Table 4). Finally, although the LNT relationship posits a linear relationship between risk and dose at low exposure, i.e., <100 mSv, the "true" nature of this relationship in the lower ranges of radiation common for medical uses of ionizing radiation (Table 3, diagnostic or diagnostic/therapeutic exposure) is unknown, and it may well display nonlinearities due to the damage-limiting protective mechanisms discussed earlier.
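The Bayesian point above can be illustrated with a simple Monte Carlo sketch: treating the LNT risk coefficient as a distribution rather than a point value and propagating that uncertainty to the predicted excess risk at a given dose. The coefficient's nominal value echoes the ICRP 5% per sievert figure cited earlier, but its assumed spread, and the Gaussian form, are illustrative choices only.

```python
import random
import statistics

random.seed(1)

# Uncertainty propagation in the spirit of the Bayesian view above:
# the LNT risk coefficient is treated as a distribution, and we observe
# the induced spread in predicted excess lifetime risk at a fixed dose.

DOSE_SV = 0.01        # a 10 mSv exposure
SLOPE_MEAN = 0.05     # excess risk per Sv (ICRP-style nominal value)
SLOPE_SD = 0.025      # assumed uncertainty in the coefficient

def sample_excess_risk() -> float:
    # Risk cannot be negative, so clamp sampled coefficients at zero.
    slope = max(0.0, random.gauss(SLOPE_MEAN, SLOPE_SD))
    return slope * DOSE_SV

risks = sorted(sample_excess_risk() for _ in range(100_000))
mean_risk = statistics.fmean(risks)
lo = risks[int(0.025 * len(risks))]   # 2.5th percentile
hi = risks[int(0.975 * len(risks))]   # 97.5th percentile

print(f"Mean excess risk: {mean_risk:.2e}")
print(f"95% interval: {lo:.2e} to {hi:.2e}")
```

Even with a modest assumed spread in the coefficient, the resulting interval spans a wide range relative to its mean, mirroring the wide credible intervals reported in BEIR VII (Table 4).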
The clinician must understand that the prediction of the risk of a subsequent malignancy for an individual undergoing a medical diagnostic test, or procedure, employing ionizing radiation is a complex, uncertain exercise. As with any risk calculation, "average" risks obtained from population-based studies are of little value for the individual. Population-based data must be weighted, or adjusted, for important clinical variables known to be associated with the risk of malignancy, especially at low dose and dose rate; the risk must be balanced against the benefit of the test, or procedure; and the risk–benefit assessment must be extrapolated over the lifetime of the individual. Few risk assessments in medicine are sufficiently precise in this regard. The application of an extrapolated hypothesis to determine an individual's need for diagnostic testing or therapeutic procedures is no exception. What counts is an acceptable ratio of the expected degree of benefit relative to the magnitude of risk. The latter should be calculated from the best available data and compared with the risks generally accepted in daily life (Table 5).
This paper represents the opinions of the authors. It does not represent the position of the NIH, DHHS, or the U.S. Government.
- Abbreviations and Acronyms
- CT = computed tomography
- ED = effective dose
- LNT = linear no-threshold
- American College of Cardiology Foundation