Many policy decisions rely on predictions of how technology performance is likely to change as a function of time or of human efforts in research or manufacturing. In climate policy, such predictions are especially important because of the central role that carbon-free technology must play in mitigating climate change, and the need for government policy to accelerate technological development and adoption. However, making bets on which technologies to support, or what carbon price is reasonable for shifting markets toward emissions reductions, requires predictions of technological progress. If these predictions are off, taxpayers’ dollars are wasted and progress on climate change mitigation is slowed. Yet how accurate have past predictions of technological change been? An important new paper in PNAS (1) addresses this topic by comparing the performance of two common approaches to predicting technology cost change.
This prediction problem is difficult because of the limited data available on technology costs, and the gaps in our understanding of why technologies improve at different rates. In the unrealistic scenario where all technologies steadily improved at the same pace, predictions of their future state would be easier. However, rates of improvement differ markedly across technologies: a few clean energy technologies have improved rapidly, while others have improved slowly or not at all. The costs of US nuclear fission plants have actually risen (2, 3), while the costs of solar energy and lithium-ion batteries have each fallen by more than 95% (4, 5). Alongside these cost trends, nuclear installations in the United States have stalled, while renewable energy markets are booming and electric vehicles powered by lithium-ion batteries are becoming competitive with internal combustion engine vehicles.
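To give a rough sense of scale for these trends, the cumulative declines cited above can be converted into an implied constant annual rate. The sketch below is illustrative only: the article does not state a time window, so the ~30-year horizon is an assumption, and real cost trajectories are not constant-rate.

```python
# Illustrative back-of-the-envelope calculation (not from the article):
# if a technology's cost falls by >95% cumulatively, what constant annual
# decline rate does that imply over an assumed ~30-year window?
cumulative_decline = 0.95   # 95% overall cost drop (cited for solar and Li-ion)
years = 30                  # assumed horizon; the article gives no explicit window

# cost_final / cost_initial = (1 - r)**years
#   =>  r = 1 - (1 - cumulative_decline)**(1 / years)
annual_rate = 1 - (1 - cumulative_decline) ** (1 / years)
print(f"implied constant annual cost decline: {annual_rate:.1%}")
```

Under these assumptions the implied rate is roughly 9-10% per year, which underscores how far such technologies outpaced nuclear power, whose costs moved in the opposite direction.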
If back in, say, 1990 forecasts had been more uniformly accurate in predicting the rapid cost improvement observed for some critical energy technologies, would we have made faster …