
Cost-Effectiveness Analysis: What You Always Wanted to Know But Were Afraid to Ask

Richard Norman, BA MSc
Anne Spencer, BA MSc PhD
Gene Feder, BSc MB BS MD

Corresponding author: Dr. Gene Feder, g.s.feder@gmul.ac.uk


Introduction

Across industrialised countries, health care spending as a proportion of gross domestic product (GDP) has increased over the past thirty years. In the United States, this percentage increased from 7.6% in 1972 to 14.0% in 1992, and to around 16.0% in 2004 (McPake, Normand, and Kumaranayake, 2002; Smith, Cowan, Heffler, and Catlin, 2006). This increase is not confined to health care systems with predominantly private funding. Publicly funded systems, as in the United Kingdom, and social insurance-based systems, as in France, have also witnessed large absolute and proportional increases in expenditure. As society's investment in health care grows, appropriate and transparent decision-making by policy makers becomes increasingly important. Cost-effectiveness analysis (CEA), part of the discipline of health economics, plays a crucial role in helping decision-makers allocate scarce funds efficiently, i.e., to the health interventions that yield the most improvement in outcome for the least expenditure. The volume of cost-effectiveness studies has increased dramatically in the past 10 years, and their importance has been acknowledged formally by policy makers across the world, including the Centers for Disease Control and Prevention (CDC) in the United States, the National Institute for Health and Clinical Excellence (NICE) in England and Wales, and the Pharmaceutical Benefits Advisory Committee (PBAC) in Australia. At best, such studies ensure that health care dollars are allocated to interventions that provide the most improvement in outcome per dollar expended. Conversely, there are risks when analyses are badly designed or when well-designed ones are misunderstood.

The aims of this paper are (i) to promote the value of CEA in evaluating intimate partner violence (IPV) programmes in health care settings and (ii) to outline some key points for health care professionals and others when interpreting the results of cost-effectiveness analyses. We start with an outline of the essential purpose and components of a CEA and then discuss some methodological issues that should be considered in the interpretation of cost-effectiveness studies. This is illustrated with examples from a recent CEA of an intervention to improve identification and referral of IPV survivors in a primary care setting (Norman, Spencer, Eldridge, and Feder, unpublished). Finally, we consider why programmes aimed at IPV are under-represented in the CEA literature.

Basic principle of cost-effectiveness analysis in health care

There is a large range of potential health care interventions that may improve the health of populations, from medical technologies and drugs to systems of clinical care. If implemented, each will have an economic impact on society, a health care provider, an individual, a family, or all of these. If the intervention is clinically effective, it will also yield health improvements. CEA is a method that can be used to identify interventions that yield the highest gain in health outcomes per dollar of health expenditure. This is particularly useful for any public or private entity with a fixed budget. However, CEA is not impervious to political pressure; indeed, many distributional issues that CEA does not address must ultimately be resolved in the political arena.

CEA starts with the premise that any health care system, no matter how well funded, will have to limit what it spends. The main objective of CEA is not to identify programmes that will save money for a system, a government, or a provider, although this may be a finding of some studies. Its aim is to get the most outcome from each additional health care dollar expended. The role of CEA is to demonstrate whether a dollar spent on a potential programme A would bring about a greater gain in outcome than the same dollar invested in an existing programme B.

To illustrate the principle of getting the most from each additional dollar expended, Table 1 presents a hypothetical choice facing a decision maker. The cost and outcome associated with each programme under consideration are given. The cost is in dollar terms and the benefit is in a common measure of outcome, which we denote for now as a "QALY" (Quality-Adjusted Life Year). We will explain QALYs in more detail below.

Table 1. A hypothetical choice facing a decision maker (all costs and outcomes are per person)

                                                      No Programme   Programme A   Programme B
Expected total cost ($)                                        500          1500          1000
Expected outcome (QALYs)                                        20            25            15
Incremental cost relative to no programme ($)                    -          1000           500
Incremental outcome relative to no programme (QALYs)             -             5            -5
Cost per QALY relative to no programme ($)                       -           200          -100

Once costs and outcomes are determined for each potential programme, they are compared. Often a baseline comparison is included, specified as "no change" or the "status quo"; it could also be currently accepted clinical practice in the case of a medical treatment. In Table 1 it is "no programme." Programme B costs $1000 per woman, increasing total costs compared with "no programme" by $500. However, it leads to fewer QALYs per woman (15 QALYs compared with 20 under no programme). An alternative that costs more than another alternative (in this case, no programme) and results in worse outcomes is said to be "dominated" by the less expensive, more effective alternative. It is clearly less valuable than no programme at all.

Comparing Programme A to no programme is more difficult. It generates 5 additional QALYs per person, but costs $1000 more than not having any programme. The next step in CEA is to calculate the cost of each additional unit of outcome: $1000 divided by 5 QALYs equals $200 per additional QALY. The question that must now be answered is whether the $200 spent on each additional unit of improved outcome would be better spent on a different programme. In other words, is $200 per additional unit of improved outcome too much to pay? The approach that much of the health CEA literature has taken is to set a dollar threshold above which society is unwilling to pay for the outcome. This is similar to an individual deciding that if the price of an additional pair of shoes rises above a certain amount, such as $50, they will not purchase it, but if it falls below that amount they will. The individual would set the threshold based on income, preferences, and the quality of the shoes. Similar thresholds can be constructed for health outcomes. If all interventions for which the cost of an additional unit of benefit is below this threshold are implemented, and only those, then no alternative way of spending the fixed budget produces more improvement in outcomes for the same money. For more details on the decision rules used in cost-effectiveness studies, see Johannesson and Weinstein (1993).
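
To make the arithmetic concrete, the following sketch (in Python, purely for illustration) reproduces the Table 1 comparison: it computes the incremental cost per QALY of each programme against no programme, flags dominated alternatives, and applies a willingness-to-pay threshold. The $50,000-per-QALY threshold is a hypothetical figure chosen for the example, not a recommendation.

```python
# Illustrative incremental cost-effectiveness calculation using the Table 1 figures.
# The willingness-to-pay threshold is a hypothetical assumption for demonstration only.

def icer(cost, qaly, baseline_cost, baseline_qaly):
    """Incremental cost per additional QALY relative to a baseline."""
    delta_cost = cost - baseline_cost
    delta_qaly = qaly - baseline_qaly
    if delta_qaly <= 0 and delta_cost >= 0:
        return None  # dominated: costs more (or the same) and yields no extra benefit
    return delta_cost / delta_qaly

baseline = {"cost": 500, "qaly": 20}                 # "no programme"
programmes = {"A": {"cost": 1500, "qaly": 25},
              "B": {"cost": 1000, "qaly": 15}}
threshold = 50000                                    # hypothetical willingness to pay per QALY ($)

for name, p in programmes.items():
    ratio = icer(p["cost"], p["qaly"], baseline["cost"], baseline["qaly"])
    if ratio is None:
        print(f"Programme {name}: dominated by no programme")
    elif ratio <= threshold:
        print(f"Programme {name}: ${ratio:.0f} per QALY -- below threshold, fund")
    else:
        print(f"Programme {name}: ${ratio:.0f} per QALY -- above threshold, do not fund")
```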

The measurement of costs

There are a number of important considerations in calculating the costs of an intervention. The first is that data often come from a large number of different sources. Economic data are not routinely collected in clinical trials, especially in IPV research, where intervention trials are relatively recent and have not been accompanied by parallel CEA. Consequently, data are drawn from other studies or from national IPV cost estimates and often relate to different time periods. Because the purchasing power of a dollar differs between years, all costs must be expressed at a common level of purchasing power. A base year is chosen, usually the year in which the study is conducted. At the most general level, this adjustment involves using the general inflation rate within a country or region to restate costs in base-year dollars. However, a more appropriate rate may be available, such as a health care-specific price index.
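
As a simple illustration of this adjustment, the sketch below (Python, illustrative only) re-expresses a cost reported in an earlier study in base-year dollars by compounding annual inflation rates. The rates, years, and the $1,000 cost are made-up placeholders rather than real index values.

```python
# Illustrative sketch of adjusting costs from different years to a common base year.
# The inflation rates below are hypothetical; in practice one would use a published
# general or health care-specific price index.

annual_inflation = {2004: 0.027, 2005: 0.031, 2006: 0.024}  # hypothetical annual rates

def to_base_year(cost, cost_year, base_year=2007):
    """Express a cost incurred in cost_year in base_year dollars."""
    adjusted = cost
    for year in range(cost_year, base_year):
        adjusted *= 1 + annual_inflation[year]
    return adjusted

# A $1,000 cost reported in a 2004 study, expressed in 2007 dollars:
print(round(to_base_year(1000, 2004), 2))
```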

A second consideration in calculating costs is the determination of which costs to include in a CEA. It is standard to include the direct costs of the intervention, such as the salaries and other compensation of any health care professionals or other personnel associated with the intervention, plus any equipment, space, and supplies they use. Programme overhead costs should also be included. If the programme used volunteers or in-kind services, a CEA will impute a value to these under the assumption that from a societal perspective they represent resources that could have been used to produce alternative programmes or interventions. Additionally, just as any future improvements in outcome realized from the intervention are included, costs or cost savings related to future health care must also be included. This is important in situations when an intervention will change future health care use significantly. Screening for IPV is a good example, in that a reduction in violence might be expected to reduce future health care costs and improve future health states.

Some CEAs extend this interpretation of costs and adopt what is known as a societal perspective. That is, they go beyond health care services and address the broader impact of the intervention. This might mean considering the effect of an intervention on the future economic productivity of an individual, out-of-pocket costs, or even children's future productivity losses, to name a few. If a survivor of IPV can enter the labour force after, for example, a successful advocacy intervention, this is valuable from a societal perspective as well as benefiting her as an individual. In our recent CEA of a system-based IPV intervention within primary care (Norman et al., unpublished), we identified some key areas where societal costs could differ if the intervention were implemented. These included productivity costs, housing costs, and criminal justice system costs. The advantage of the societal perspective is that it provides an expanded description of an intervention's effect and captures spill-over effects beyond the immediate changes in health status on which a more clinical orientation might focus. One potential disadvantage is that the choice of components to include in the societal analysis may seem arbitrary. However, economic theory provides a sound framework for making these choices.

The measurement of improved outcomes

Within economics generally, there are a number of approaches to valuing the outcomes of a project. Often the benefit of an intervention, good, or service to a person is measured by their willingness to give something up in order to receive the improvement in outcomes they expect to obtain from it. Economists quantify this in a money metric called willingness to pay when conducting cost-benefit analyses (CBA). However, it has been suggested that this approach is not appropriate for health applications. For CBA to produce valid results, markets for health services would, at a minimum, have to be competitive, which is unlikely in most developed countries, even those with large private sectors. For details on why this might be, see Arrow (1963) and Culyer (1989). For this reason, the willingness-to-pay approach is not used in most of the health care CEA literature. Instead, CEA defines the benefit an individual receives from an intervention by the changes in that individual's level of health that result from it.

Going back to our simple example (Table 1), we used an arbitrary measure of benefit defined solely in terms of health change. Cost-effectiveness studies use a variety of outcome measures. Many studies, particularly those conducted alongside clinical trials, use clinical outcomes in the evaluation. For studies of interventions aimed at reducing or preventing IPV, this might include cost per disclosure of abuse to a health care professional, cost per reduction in further abuse, or costs saved by averting or reducing emergency or unscheduled health care. In situations where one intervention is both more expensive and more effective than another, this kind of approach does not assist the policy maker, because it provides no basis for judging whether the improvement in clinical outcome is worth the additional cost.

The Quality-Adjusted Life Year (QALY) is the most common measure used to judge this value. The QALY is a quantitative measure of the relative value of a given health state, where the value reflects both the quantity and quality of life in that state. One QALY is defined as one year of full health for one person. Consequently, an intervention that resulted in an additional 6 months of full health for 2 people, or 3 additional months of full health for 4 people, would be equivalent to one additional year of full health for one person, or 1 QALY. Similarly, if two people each spend one year in a health state judged to be valued midway between full health and death (death is usually, though not always, valued at 0), this is also equal to one QALY. There is a significant body of literature describing the measurement of utility, which is beyond the scope of this discussion. For further details, see Ryan et al. (2001), Brazier et al. (1999), or Greenberg and Pliskin (2002).
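
The examples above amount to multiplying the time spent in a health state by the value (utility) of that state and summing over people. A minimal sketch of that arithmetic, using the same hypothetical groups, is shown below.

```python
# Minimal sketch of the QALY arithmetic described above.
# Each tuple is (number of people, years in the state, utility of the state),
# where full health = 1.0 and death = 0.0 by convention.

def total_qalys(groups):
    return sum(n * years * utility for n, years, utility in groups)

print(total_qalys([(2, 0.5, 1.0)]))   # 6 months of full health for 2 people -> 1.0 QALY
print(total_qalys([(4, 0.25, 1.0)]))  # 3 months of full health for 4 people -> 1.0 QALY
print(total_qalys([(2, 1.0, 0.5)]))   # 1 year at utility 0.5 for 2 people   -> 1.0 QALY
```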

The advantage of the QALY is that, in principle, it allows comparison between interventions that otherwise would have different outcomes because the diseases being treated and compared differ in their effects. Thus, the suitability of this approach for IPV depends on the ability of quality of life measures to capture the benefit of IPV interventions and translate it accurately into QALYs or some other common quality of life metric. See Williams (1996) for a discussion of the ethics of QALYs.

To return to our example in Table 1, Programme A costs $1000 per person, compared with $500 under no programme, and produces 5 additional QALYs. As shown previously, each additional QALY realized from Programme A therefore costs $200. Whether or not this amount is too much to pay for an additional QALY will differ between countries because of differences in income and in preferences for health relative to other commodities (such as education, transport, etc.). In England and Wales, NICE (2005) has suggested a range of £20,000 to £30,000 ($39,500 to $59,200 as of January 2007). For the United States, the figure is likely to be higher (for further details, see Hirth, Chernew, Miller, Fendrick and Weissert (2000)).

Discounting

Discounting is an additional adjustment made to both costs and improvements in outcomes in CEA. It is used to account for the empirical finding that people are impatient: they prefer to receive improvements in outcomes sooner rather than later. Exactly how people discount is uncertain and will differ between individuals and cultures. One common approach in CEA is to discount both costs and improvements in outcomes at 3.5% per annum (as suggested by NICE in England and Wales). This means that if a cost occurs next year (say, $1000), it is divided by (1 + 0.035) to express it in current-year dollars; in this example, the present value of a $1000 cost next year is $966. Because of this uncertainty about how people discount, there is no agreement about the correct rate, or even about whether discounting should be applied to health outcomes at all. For further information, see Gravelle and Smith (2001). The importance of this issue depends largely on the time horizon of the study, defined as the period of time over which costs and improvements in outcomes are calculated. As the time horizon lengthens, say to twenty years or more, the approach used to discount becomes increasingly important. See McPake et al. (2002) or Cairns (2001) for further details on discounting.
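
A minimal sketch of this calculation is given below. It reproduces the $966 present value of a $1000 cost incurred next year at the 3.5% rate suggested by NICE, and shows how much larger the adjustment becomes over a twenty-year horizon.

```python
# Illustrative present-value calculation using the 3.5% annual discount rate
# described above.

def present_value(amount, years_in_future, rate=0.035):
    return amount / (1 + rate) ** years_in_future

print(round(present_value(1000, 1)))   # a $1000 cost next year is worth about $966 today
print(round(present_value(1000, 20)))  # the same cost 20 years out is worth about $503 today
```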

Modelling in cost-effectiveness analysis

Because of the complexity of interventions, or of their outcomes, it is often necessary for CEA to include modelling of costs and effects. Kuntz and Weinstein (2001) describe the method as an analytical structure in which costs and outcomes from different sources are brought together. It is typical for these models to simulate the lifetime of a particular patient or of an entire population, including the health states they will move into and out of over time because of the intervention or its absence. There are two types of modelling that are often used: Markov modelling and decision tree analysis. If costs and outcomes are likely to span a significant period of time, it is more usual to use a Markov approach.

To create a Markov model, the economist has to first identify all possible states in which a person can find themselves in relation to a health problem or intervention. To illustrate this, our recent work looked at the cost-effectiveness of an educational intervention in primary care to improve identification and referral of women experiencing IPV. We identified seven relevant states in which a woman can find herself relative to IPV as follows:

  • No abuse
  • Abuse unidentified
  • Women who have disclosed abuse to a health care professional and are not seeking any intervention
  • Women who are in an advocate-based intervention
  • Women who are in a psychologist-based intervention
  • Women who are in an advocate and psychologist-based intervention
  • Medium term improvement (moderate improvements in quality of life and costs but women still at risk of further abuse or experiencing sequelae of past abuse)

One criticism of the Markov approach is that the choice of states may be arbitrary and is often data driven. However, one objective of a sound CEA is to ensure that the model describes the experience of a population without relying too heavily on assumptions made simply because clinical evidence to inform the model specification is unavailable.

Once appropriate states have been identified, the likelihood that an individual is in one of these states and the probabilities of moving to each of the other states (transition probabilities) over a certain time period (for example, six months) must be estimated. For instance, using our example, all data sources would be investigated to determine the probability that a woman who is an unidentified survivor will be in an advocate-based intervention six months later. Once we have identified all of these transition probabilities in the literature, the model simulates the progression of a population over time between these states.

The next step is to attribute mean costs and quality of life values to each of the states. Again, these will come from published literature, supplemented with expert opinion where necessary. The measurement of quality of life raises a number of methodological issues (see Wittenberg, Lichter, Ganz, and McCloskey (2006) for details on the construction of such values for IPV). Once the model predicts the numbers of women in each of the identified states, and we know the costs and outcomes associated with being in each of these states, we can aggregate the costs and outcomes. An intervention might aim to change transition probabilities (such as increasing the probability that an unidentified victim will be detected), to improve the quality of life for women in certain states, or to reduce future health care costs for a particular group. Using figures on the effectiveness of an intervention, the Markov model predicts the total costs and outcomes expected both with and without the intervention. This then allows estimation of the difference in cost-effectiveness between the intervention and control groups.
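
To show how these pieces fit together, the sketch below implements a deliberately simplified Markov cohort model. It is not the model from our study: the three states, six-month transition probabilities, per-cycle costs, and utility weights are all invented purely to illustrate how a cohort is moved between states each cycle and how discounted costs and QALYs are accumulated and compared between an intervention and a control scenario.

```python
# A deliberately simplified Markov cohort sketch; all numbers are hypothetical.

states = ["abuse unidentified", "in intervention", "medium-term improvement"]

# transition[i][j]: probability of moving from state i to state j in one 6-month cycle
transition_control = [
    [0.85, 0.10, 0.05],
    [0.05, 0.60, 0.35],
    [0.10, 0.05, 0.85],
]
# The (hypothetical) intervention raises the chance that unidentified abuse is
# identified and referred, i.e. the probability of moving from state 0 to state 1.
transition_intervention = [
    [0.70, 0.25, 0.05],
    [0.05, 0.60, 0.35],
    [0.10, 0.05, 0.85],
]

cycle_cost = [400.0, 900.0, 250.0]   # hypothetical cost per cycle in each state ($)
cycle_utility = [0.60, 0.70, 0.85]   # hypothetical utility weight of each state
CYCLE_YEARS = 0.5
DISCOUNT_RATE = 0.035                # annual discount rate

def run_model(transition, start=(1.0, 0.0, 0.0), n_cycles=20):
    """Return total discounted cost and QALYs per person over n_cycles."""
    dist = list(start)
    total_cost = total_qalys = 0.0
    for cycle in range(n_cycles):
        discount = (1 + DISCOUNT_RATE) ** (-cycle * CYCLE_YEARS)
        total_cost += discount * sum(p * c for p, c in zip(dist, cycle_cost))
        total_qalys += discount * CYCLE_YEARS * sum(p * u for p, u in zip(dist, cycle_utility))
        # move the cohort forward one cycle
        dist = [sum(dist[i] * transition[i][j] for i in range(len(states)))
                for j in range(len(states))]
    return total_cost, total_qalys

control_cost, control_qalys = run_model(transition_control)
int_cost, int_qalys = run_model(transition_intervention)
print(f"incremental cost:  ${int_cost - control_cost:.0f}")
print(f"incremental QALYs: {int_qalys - control_qalys:.3f}")
print(f"cost per additional QALY: ${(int_cost - control_cost) / (int_qalys - control_qalys):.0f}")
```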

The reliability of results: sensitivity analysis and external validity

There are two additional components that should be included in a CEA. The first is a sensitivity analysis. To understand what sensitivity analysis does, consider that CEA is a combination of estimated values of costs and estimated indices of quality of life. Sensitivity analysis assesses the stability of CEA results when these estimates are varied to reflect the uncertainty associated with them (e.g., the sampling variation in mean costs or in the probability of being in a certain state at a certain time). The simplest sensitivity analysis is a univariate (one-way) analysis. In this approach, a range of possible values is specified for each of the model parameters (e.g., "While our estimated cost of nurse time is $50 per hour, a realistic range is $35-80"). The base-case result is then recalculated using the high and the low value of each range, so that the analysis shows the effect of changing each parameter within its specified range. If the conclusion does not change across the feasible range of each parameter estimate, the CEA is considered robust, which means that the results are more likely to be valid. There are a number of extensions to this approach that use more sophisticated statistical methods, such as multivariate or probabilistic sensitivity analysis (see Briggs (2000) for further details). However, within all these approaches the principle remains the same: testing how sensitive CEA results are to changes in the value of each of the estimated parameters.
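
As an illustration, the sketch below performs a one-way sensitivity analysis on the Table 1 comparison of Programme A with no programme, recomputing the cost per QALY at the low and high end of a plausible range for each parameter. The ranges and the willingness-to-pay threshold are hypothetical.

```python
# A minimal one-way (univariate) sensitivity analysis sketch using the Table 1
# comparison of Programme A against no programme. All ranges are hypothetical.

base = {"incremental_cost": 1000.0, "incremental_qalys": 5.0}
ranges = {
    "incremental_cost": (600.0, 1400.0),
    "incremental_qalys": (3.0, 7.0),
}
threshold = 50000.0  # hypothetical willingness to pay per QALY ($)

def icer(params):
    return params["incremental_cost"] / params["incremental_qalys"]

print(f"base case: ${icer(base):.0f} per QALY")
for name, (low, high) in ranges.items():
    for value in (low, high):
        params = dict(base, **{name: value})  # vary one parameter, hold the rest at base
        ratio = icer(params)
        verdict = "below" if ratio <= threshold else "above"
        print(f"{name} = {value}: ${ratio:.0f} per QALY ({verdict} threshold)")
```

If the conclusion (here, that the programme falls below the threshold) holds at every value tested, the result is considered robust in the sense described above.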

The second component of CEA is partially shared by analyses of clinical effectiveness and concerns the external validity of the result. As with clinical trials, one must determine whether the sample investigated in the CEA is representative of the population of interest. However, there is an additional dimension in CEA. While populations across developed countries might be assumed to respond similarly to the same intervention (for example, prescribing aspirin after a myocardial infarction), costs can differ significantly between countries, even after adjusting for currency exchange rates, because of differences in income and in preferences for health improvements.

The absence of cost-effectiveness analyses for IPV interventions

At present, there is very little literature on the cost-effectiveness of IPV interventions in health care settings. One American team examined the cost-effectiveness of an integrated counselling and advocacy service (Domino et al., 2005a; Domino et al., 2005b). Outcome measurement was limited to mental health and trauma scales, making cost-utility analysis impossible. However, the study identified an improvement in outcomes in the intervention group at no extra cost; the intervention therefore dominated the control.

There are likely to be a number of reasons for this relative paucity of cost-effectiveness analyses on this topic. First, IPV interventions are more complex than most clinical treatments. This makes the analyst's task harder. Complexity often gives rise to significant caveats that must qualify conclusions. Second, the relevant population may be difficult to identify. There are several reasons for this, such as variations in severity of IPV; less severe IPV may be more difficult for a clinician to recognize. Another difficulty in identifying IPV survivors is that, unlike the diagnosis of a medical condition, women may be reluctant to disclose their IPV status to health care professionals. If one considers a screening study, identification of women experiencing IPV is more complicated than detection of many clinical conditions such as osteoporosis or depression. A third reason that IPV CEAs are scarce is that health care interventions may have a limited role in preventing IPV. For this reason, it may be difficult to demonstrate clinical effectiveness within the relatively short follow-up period of intervention studies, especially using QALYs to measure outcomes. Finally, there are particular aspects of IPV that make the definition of outcomes and costs difficult. Clinical and behavioural interventions usually aim to improve physical and mental health as well as social well-being. The complexity of these different outcomes makes it difficult to compare interventions.

Measuring costs can also be complex. For example, in our recent CEA (Norman et al., unpublished), we felt that limiting assessments of cost reductions to only those resulting from changes in health care use would significantly under-estimate the true cost savings. Therefore, we also identified costs associated with legal and civil proceedings, victims' labour force participation, housing, and the medical consequences of both the physical and psychological harm of IPV.

The technical challenges in conducting a CEA of IPV interventions are partially responsible for the small literature in this area to date. This in turn delays incorporation of IPV services into health care settings. Health care policy makers, who must consider all areas of health, not just IPV, are deciding how to allocate scarce health care dollars. Allocation decisions, particularly in the United Kingdom and increasingly in other developed countries, will be based on CEA and will target areas with strong supportive evidence. Without sound cost-effectiveness analyses, IPV interventions are likely to be bypassed. Moreover, IPV has a number of features that make CEA essential and beneficial. Its high population prevalence and the magnitude of its personal and economic consequences suggest that interventions following disclosure to health care professionals can yield potentially large improvements in outcomes for individuals and societies.


References

Arrow, K. (1963). Uncertainty and the welfare economics of medical care. American Economic Review, 53(5), 941-973.

Brazier, J., Deverill, M., Green, C. (1999). A review of the use of health status measures in economic evaluation. Journal of Health Services Research and Policy, 4(3), 174-184.

Briggs, A.H. (2000). Handling uncertainty in cost-effectiveness models. Pharmacoeconomics, 17(5), 479-500.

Cairns, J. (2001). Discounting in economic evaluation. In M. Drummond & A. McGuire (Eds.) Economic Evaluation in Health Care (pp. 236-256). Oxford: OHE.

Culyer, A.J. (1989). The normative economics of health care finance and provision. Oxford Review of Economic Policy, 5, 34-58.

Domino, M.E., Morrissey, J.P., Chung, S., Huntingdon, N., Larson, M.J., Russell, L.A. (2005a). Service use and costs for women with co-occurring mental and substance use disorders and a history of violence. Psychiatric Services, 56(10), 1223-1232.

Domino, M., Morrissey, J.P., Nadlicki-Patterson, T., Chung, S. (2005b). Service costs for women with co-occurring disorders and trauma. Journal of Substance Abuse Treatment, 28, 135-143.

Gravelle, H., Smith, D. (2001). Discounting for health effects in cost-benefit and cost-effectiveness analysis. Health Economics, 10(7), 587-599.

Greenberg, D., Pliskin, J.S. (2002). Preference-based outcome measures in cost-utility analyses. International Journal of Technology Assessment in Health Care, 18, 461-466.

Hirth, R.A., Chernew, M.E., Miller, E., Fendrick, A.M., Weissert, W.G. (2000). Willingness to pay for a quality-adjusted life year: In search of a standard. Medical Decision Making, 20, 332-342.

Johannesson, M., Weinstein, M.C. (1993). On the decision rules of cost-effectiveness analysis. Journal of Health Economics, 12, 459-467.

Kuntz, K.M., Weinstein, M.C. (2001). Modelling in economic evaluation. In M. Drummond & A. McGuire (Eds.) Economic Evaluation in Health Care (pp. 141-172). Oxford: OHE.

McPake, B., Normand, C., Kumaranayake, L. (2002). Health Economics: An International Perspective. London: Routledge.

Neumann, P.J. (2005). Using cost-effectiveness analysis to improve health care: Opportunities and barriers. Oxford: OUP.

National Institute for Health and Clinical Excellence. (2005). Social Value Judgements report. Retrieved May 3, 2007, from the NICE Web site.

Norman, R., Spencer, A.E., Eldridge, S., Feder, G.S. (Submitted). Cost-effectiveness of a system level intervention to improve the primary health care response to partner violence.

Ryan, M., Scott, D.A., Reeves, C., Bate, A., van Teijlingen, E.R., Russell, E.M., et al. (2001). Eliciting public preferences for healthcare: a systematic review of techniques. Health Technology Assessment, 5(5), 1-186.

Smith, C., Cowan, C., Heffler, S., Catlin, A. (2006). National health spending in 2004: recent slowdown led by prescription drug spending. Health Affairs (Millwood), 25(1), 186-196.

Williams, A. (1996). QALYs and ethics: a health economist's perspective. Social Science and Medicine, 43(12), 1795-1804.

Wittenberg, E., Lichter, E.L., Ganz, M.L., McCloskey, L.A. (2006). Community preferences for health states associated with intimate partner violence. Medical Care, 44(8), 738-744.

Family Violence Prevention Fund Health eJournal

ISSN 1556-4827
Copyright © 2006 Family Violence Prevention Fund
All rights reserved