OHE has published an Occasional Paper highlighting the significant shortage of data that forced Claxton et al. to rely on a large number of strong assumptions to produce their results.
Last week, the journal Health Technology Assessment published the results of a study exploring methods for the estimation of the NICE cost-effectiveness threshold (Claxton et al., 2015). In this study, previously released in 2013 as CHE Research Paper CHERP81 (Claxton et al., 2013), the authors present their “best” estimate of just under £13,000 for the marginal cost of a quality-adjusted life year (QALY) in the English NHS in 2008/09.
NICE currently uses a cost per QALY threshold of £20,000 to £30,000 – and so recommends many medicines and other technologies that cost more than £13,000 per QALY. Professor Claxton argued in the accompanying press release that this was resulting in “real harm” because these decisions cause more QALYs to be lost than gained.
OHE has published an Occasional Paper by Barnsley et al. highlighting the significant shortage of data that forced the Claxton et al. study to rely on a large number of strong assumptions to produce their results (Barnsley et al., 2013). In this critique, Barnsley et al. argue that the estimate of £13,000 per QALY is highly uncertain and sensitive to the adoption of plausible alternative assumptions. This was echoed in a journal article by Professor Raftery, which compared the Barnsley et al. critique with the Claxton et al. assumptions – NICE’s Cost-Effectiveness Range: Should it be Lowered? His answer was “No”, as “the assumptions required are too many and sweeping to be the basis of a major policy change”.
The key question of interest to Claxton et al. is: “What is the relationship over time between money spent and QALYs gained in Primary Care Trusts (PCTs) in England?” If we can estimate the health gain achieved from additional health spending, we can better understand the potential health consequences of spending money on other things, including new drugs.
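In stylised terms (our notation, not the study’s), the quantity being estimated is the cost at which the NHS generates health at the margin of its existing budget:

\[ k \;=\; \frac{\Delta\,\text{NHS expenditure}}{\Delta\,\text{QALYs gained}} \]

If a new technology costs more than £k per QALY gained, funding it from a fixed budget would be expected to displace more health elsewhere in the NHS than it adds.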
However, the £13,000 estimate is calculated without the use of data on QALYs and without information on how expenditure and outcomes vary within PCTs over time.
Because there are no QALY data available, the authors have to make a number of assumptions in order to adjust mortality data to account for unobserved quality of life. These assumptions, discussed in detail in the Barnsley et al. critique, require a much more complete understanding of PCT behaviour than is demonstrated in the study.
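As a purely hypothetical illustration of the kind of adjustment this requires (the function, weight and multiplier below are illustrative assumptions of ours, not values or methods from the study), turning an estimated mortality effect into QALYs means weighting life-years by an unobserved quality of life and making an assumption about how far spending affects morbidity as well as mortality:

```python
# Hypothetical sketch only: the names and numbers are illustrative assumptions,
# not figures or methods taken from Claxton et al.
def qalys_from_mortality_effect(life_years_gained: float,
                                quality_of_life_weight: float,
                                morbidity_uplift: float) -> float:
    """Weight life-years gained by an assumed quality-of-life weight, then
    scale up to reflect an assumed effect of spending on morbidity too."""
    return life_years_gained * quality_of_life_weight * morbidity_uplift

# e.g. 1,000 life-years, an assumed QoL weight of 0.8 and an assumed 1.5x
# morbidity uplift would imply 1,200 QALYs.
print(qalys_from_mortality_effect(1_000, 0.8, 1.5))
```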
Furthermore, mortality data used for this adjustment are available for only 11 of the 23 Programme Budget Categories (PBCs) in the NHS, and good quality mortality data are available for only four of these. This means that Claxton et al. are forced to carry out a complex extrapolation process involving many assumptions to arrive at the final estimate of the marginal cost of a QALY. The headline figure of £13,000 per QALY is extremely sensitive to variations in these assumptions.
An earlier version of the paper, where a slightly different assumption was made, produced a “best” threshold estimate of £18,317 – 42% higher than the current figure.
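(Taking the “just under £13,000” central estimate as roughly £12,900, the arithmetic is £18,317 / £12,900 ≈ 1.42, i.e. about 42% higher.)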
In addition, the authors could not use time-series data to estimate directly how changes to individual PCT budgets over time affect mortality. Instead, they use snapshots of the differences in spending and mortality between different PCTs at a single point in time (cross-sectional data) together with a number of assumptions.
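A minimal sketch of what such a cross-sectional approach looks like in principle is given below, with invented data and variable names; the study’s actual econometric specification is considerably more sophisticated, not least in how it deals with differences in population need between PCTs:

```python
# Illustrative sketch only: invented data and names, not the study's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pcts = 150  # roughly the number of English PCTs at the time (data invented)

# A single-year "snapshot": per-capita spend, a crude need proxy and mortality
log_spend = rng.normal(7.0, 0.15, n_pcts)
log_need = rng.normal(0.0, 0.30, n_pcts)
log_mortality = 2.0 - 0.8 * log_spend + 0.5 * log_need + rng.normal(0, 0.05, n_pcts)

# Cross-sectional regression: how does mortality differ between PCTs that
# spend more versus less, holding the (crude) need proxy constant?
X = sm.add_constant(np.column_stack([log_spend, log_need]))
fit = sm.OLS(log_mortality, X).fit()
print(fit.params)  # coefficient on log_spend acts as an "outcome elasticity" of spending
```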
Barnsley et al. (2013) argue that, on balance, “an overall downward bias has been introduced … by a number of the assumptions made in its estimation.”
In the press release accompanying publication of the Claxton et al. study, co-author Professor Sculpher states that the research demonstrates that estimating the cost-effectiveness with which existing NHS resources are being used “is a scientific question that can be informed by evidence and analysis.” Barnsley et al. agree with this, noting that the study “is complex and impressive… and an important contribution to the debate surrounding the optimal value of the threshold to be applied by NICE”, but argue that more work is needed to validate (or refute) some of the key assumptions upon which the estimate relies. They note the additional data that the authors of CHERP81 propose to collect, and suggest further approaches of their own.
This echoes the abstract of Claxton et al. (2015), which includes a caveat that was not in CHERP81. It notes that the “central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available … the limitation of currently available data means there is substantial uncertainty associated with the estimate of the overall threshold.” Both Claxton et al. (2015) and the Barnsley et al. critique set out work that can be done to improve the evidence base and inform policy making on this important issue for the NHS and its patients.