## Introduction

The Paris Agreement of the 21st UNFCCC Conference of Parties (COP21) in 2015 aims to limit “the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels”. Various measures are specified in support of this, including efforts “to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century”. To provide context, observations1 show that the global mean surface temperature increase above pre-industrial levels, $$\Delta \bar T_{\mathrm{s}}$$, was about 1.1 °C in 2015 and 2016, with El Niño contributing to the warming in these years, and about 1 °C in 2017, the warmest non-El Niño year on record.

Given that long-term global warming is simulated to scale approximately linearly with cumulative CO2 emissions2, this leaves only limited remaining budgets of anthropogenic CO2 emissions until an atmospheric CO2 burden consistent with $$\Delta \bar T_{\mathrm{s}}$$ = 1.5 °C or 2 °C is reached. These budgets are uncertain and have proven challenging to compute3,4,5, as they depend on several complicating factors, such as the climate sensitivity to the radiative forcing by CO2 and the future relative roles of non-CO2 forcers, especially the intermediate and short-lived climate forcers (SLCFs) including greenhouse gases like methane and ozone, and aerosol particles containing soot, sulfate, and other components. Numerous approaches have yielded a wide range of remaining budget values for various temperature thresholds4. The IPCC3 found that $$\Delta \bar T_{\mathrm{s}}$$ remains below 1.5 °C in the 21st century in 66% of Coupled Model Intercomparison Project phase five (CMIP5) simulations with a cumulative CO2 budget of 400 Gt(CO2) from 2011 onwards, which includes the effects of continued emissions of non-CO2 forcers. For a $$\Delta \bar T_{\mathrm{s}}$$ of 2 °C, the corresponding remaining budget is 1000 Gt(CO2)3. At the current global emissions rate of just over 40 Gt(CO2)/yr6, these 1.5 °C and 2 °C budgets would already be exhausted by 2020 and 2035, respectively. In contrast, a recent analysis5 has suggested that the remaining budgets may be much larger—possibly exceeding 880 Gt(CO2) and 1870 Gt(CO2) from 2015 onwards for 1.5 °C and 2 °C, respectively, which would extend the time window to 2037 and 2062 at the current emissions rate. 
However, these higher values involve numerous assumptions, including that $$\Delta \bar T_{\mathrm{s}}$$ is currently only 0.9 °C, implying a remaining margin of 0.6 °C to the 1.5 °C limit, which is at the high end of the range of estimates based on observational evidence7, as well as further assumptions such as extensive additional mitigation of SLCFs.
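As a rough plausibility check, the budget-exhaustion years quoted above follow from dividing each remaining budget by a constant emissions rate. A minimal sketch, assuming a flat ~40 Gt(CO2)/yr (a simplification; the text's slightly earlier years for the 2011-based budgets reflect actual 2011–2015 emissions):

```python
# Year by which a remaining CO2 budget would be exhausted, assuming
# emissions continue at a constant ~40 Gt(CO2)/yr (a simplification).
RATE = 40.0  # Gt(CO2)/yr

def exhaustion_year(budget_gt, start_year):
    """Start year of the budget plus the years needed to emit it."""
    return start_year + budget_gt / RATE

# IPCC budgets from 2011 onwards: 400 Gt (1.5 deg C) and 1000 Gt (2 deg C)
print(round(exhaustion_year(400, 2011)))   # 2021 (text: ~2020, using actual 2011-2015 emissions)
print(round(exhaustion_year(1000, 2011)))  # 2036 (text: ~2035)
# Larger recent estimates from 2015 onwards: 880 Gt and 1870 Gt
print(round(exhaustion_year(880, 2015)))   # 2037
print(round(exhaustion_year(1870, 2015)))  # 2062
```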

In the context of the Paris Agreement, planned mitigation efforts until 2030 are specified by the Nationally Determined Contributions (NDCs), here also including the Intended NDCs (INDCs) for parties which have not yet ratified the agreement. Analyses of the current NDCs indicate that while emissions in some regions of the world are likely to decrease in the coming decade, total global anthropogenic CO2 emissions from 2015 to 2030 are likely to remain constant8, or even increase by ~1%/yr9. Thus, given the estimated remaining budgets discussed above, limiting $$\Delta \bar T_{\mathrm{s}}$$ to 1.5 °C would very likely require much more ambitious and rapid emissions reduction efforts than the current NDCs. For the 2 °C goal, if the current NDCs were to be followed until 2030, then a 66% probability of keeping $$\Delta \bar T_{\mathrm{s}}$$ ≤ 2 °C has been calculated to require a decrease of CO2 emissions of about 5%/yr thereafter9. Such sustained reductions, proposed as a carbon law of halving global CO2 emissions every decade10, would require extensive efforts in the power, transport, agriculture and consumer goods sectors, far exceeding the current and planned efforts reflected in the NDCs. On the other hand, global warming exceeding 1.5 °C, and especially exceeding 2 °C, is expected to have highly detrimental consequences for societies and ecosystems around the world11, requiring extensive and costly adaptation measures, especially if low-probability, high-risk systemic transitions (e.g., collapsing ice sheets) are triggered by the increasing temperatures12,13.

Recognition of this impending challenge has given increased momentum to often controversial discussions about two additional possible approaches to limiting climate change (Fig. 1): removing greenhouse gases from the ambient atmosphere, particularly CO2 as the most important climate forcer; and intentionally modifying the atmosphere-Earth radiative energy budget to partly counteract unintended anthropogenic climate change. These proposed approaches have been referred to collectively under various names, including geoengineering, climate engineering, and climate interventions14,15,16,17; here we use climate geoengineering, i.e., geoengineering being done specifically for climate-related purposes. Although none of the proposed techniques exists yet at scales sufficient to affect the global climate, they have already taken up prominent roles in climate change scenarios and policy discussions. In particular, extensive application of techniques for removing CO2 from the atmosphere is assumed in the widely used low-carbon RCP2.6 scenario18 of the Representative Concentration Pathways used by the Intergovernmental Panel on Climate Change (IPCC). Furthermore, an analysis19 of 116 scenarios which are consistent with limiting $$\Delta \bar T_{\mathrm{s}}$$ to below 2 °C found that 87% of the scenarios require a transition to global net negative emissions, i.e., a CO2 removal rate exceeding gross emissions, during the second half of this century. In light of this situation, we assess the degree to which proposed climate geoengineering techniques could contribute significantly to achieving the Paris Agreement temperature goals during this century, which techniques can be largely disregarded in this context, and what the main open issues and research needs are, including the broader societal and political context.

## Types, metrics and budgets of proposed techniques

Carbon dioxide removal (CDR) techniques (Fig. 1b) are generally considered in terms of the cumulative amount or rate of CO2 removal from the atmosphere (Gt(CO2) or Gt(CO2)/yr), and compared to the current burden, remaining budgets, or global emissions of CO2. Most literature has focused on removal of CO2, rather than SLCFs20, due to its larger burden and longer lifetime, and thus comparatively slower response to mitigation efforts. This focus is further supported by the low-carbon RCP2.6 scenario18, wherein the emissions of the SLCFs methane and black carbon are already assumed to decrease significantly, meaning further measures to reduce their emissions or remove them from the atmosphere would have a limited additional effect21.

Efforts to modify the radiative energy budget of the atmosphere and Earth’s surface (Fig. 1c) are generally discussed in terms of reducing radiative forcing (in units of W/m2), defined as the change in the Earth’s net radiative energy balance at the tropopause that would occur if one climate system variable were changed while all other variables are held constant, while allowing stratospheric temperatures to equilibrate2. Given this focus and metric, we call this approach radiative forcing geoengineering (RFG), which we define as the cooling term, i.e., the magnitude of the negative radiative forcing. RFG and CDR are not entirely independent, since each can have indirect effects on the other, e.g., afforestation changes the surface albedo, while changes in temperature and light due to RFG techniques could affect biophysical processes, and thus CO2 uptake by oceans and ecosystems22,23,24.

To quantitatively assess the potential of CDR and/or RFG to compensate for a shortfall in the reduction of emissions of climate forcers, we start with emissions scenario data9, which are based on the assumption that the current NDCs will be fulfilled by 2030, and build on this with a parametric analysis (similar to ref. 25 but starting with the NDCs rather than the RCP scenarios). Failure to fulfil the NDCs—or mitigation in excess of the commitments—would accordingly either increase or reduce the expected emissions and gaps to specific targets, as illustrated in one parametric scenario with extensive mitigation starting already in 2021. Figure 2 and Supplementary Table 1 show results for a range of annual decrease rates for CO2 emissions (see the Methods section for assumptions and computations).

Following the NDCs from 2015 to 2030 would result in cumulative emissions of 700 ± 37 Gt(CO2). This would already exceed our estimate of the remaining CO2 budget for the 1.5 °C goal, which is 650 ± 130 Gt(CO2) (see Methods). Even if a decrease rate of 3%/yr were to start in 2021, the cumulative emissions by 2030 would be ~600 Gt(CO2), requiring near-zero CO2 emissions thereafter to achieve the 1.5 °C goal without invoking CDR or RFG. Achieving the 2 °C goal by mitigation alone (i.e., requiring no emissions gap in 2100) would also be highly challenging, requiring either fulfilment of the current NDCs by 2030 followed by emissions reductions of over 5%/yr thereafter, or CO2 emissions reductions of more than 3%/yr starting already a decade earlier, in 2021.
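These cumulative totals can be approximated with a simple constant-decline-rate sum. A sketch under our own illustrative assumptions (flat emissions of ~42 Gt(CO2)/yr until the decline begins), not the exact underlying scenario data:

```python
# Cumulative emissions for a pathway starting at e0 Gt(CO2)/yr and
# declining by a constant fraction `rate` each year (illustrative).
def cumulative(e0, rate, years):
    total, e = 0.0, e0
    for _ in range(years):
        total += e
        e *= 1.0 - rate
    return total

# Roughly flat NDC emissions 2015-2030 (16 years at ~42 Gt/yr):
print(round(cumulative(42.0, 0.0, 16)))  # 672, within the 700 +/- 37 Gt quoted
# 3%/yr decline from 2021, after six flat years (2015-2020):
total = cumulative(42.0, 0.0, 6) + cumulative(42.0 * 0.97, 0.03, 10)
print(round(total))  # 609, i.e., the ~600 Gt by 2030 quoted above
```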

Defining clear threshold values for CDR and RFG techniques to be relevant for future climate policy is difficult due to a strong dependence on future emissions pathways. However, in the context of the Paris Agreement, useful reference values can be defined based on the difference between the 2 °C versus the 1.5 °C limits (see Methods): CDRref ≈ 650 Gt(CO2) for the cumulative CO2 budget, and RFGref ≈ 0.6 W/m2 for the equivalent radiative forcing. These reference values help provide orientation for the range of cases considered in Fig. 2 and Supplementary Table 1: they correspond to most of the gap in remaining emissions for the 1.5 °C limit in the case with a 5%/yr emissions reduction after 2030, and likewise for the 2 °C limit in the case of a 3%/yr reduction rate after 2030. In contrast, for the 1%/yr case these reference values only fill 38% of the gap to the 2 °C limit, and only 27% of the gap to the 1.5 °C limit. In these cases, a single technique would need to substantially exceed CDRref or RFGref, or a portfolio of techniques would be needed, each providing CDRref or RFGref or a significant fraction thereof.

Below we discuss the scalability and design challenges for any CDR or RFG technique to reach these values. While technical challenges are hereafter the main focus, we recognize that they cannot be viewed in isolation from the significant ethical, legal, political, and other social aspects that arise when discussing climate geoengineering, and provide an overview of these aspects in Box 1.

## Carbon dioxide removal

Numerous CDR techniques have been proposed (Fig. 1) and the surrounding literature indicates that some CDR techniques could contribute significantly to achieving net zero or net negative CO2 emissions15,16,26,27,28,29. While it is possible that CDR, together with mitigation, could eventually return atmospheric CO2 to previous levels, this would only partially return the climate and other Earth system parameters, such as ocean pH, to the corresponding previous state, due to hysteresis and other effects30,31. Here we examine the potential contributions of CDR towards achieving the Paris Agreement goals, and the challenges that would be faced, complementing previous analyses which have focused on issues like the assumed role of CDR in low-carbon scenarios18,19, or the ability to compensate sectors that are particularly difficult to mitigate (e.g., air travel, agriculture and certain industries).

Several CDR techniques have been developed as prototypes, and afforestation is already in widespread use, as are some of the components involved in other techniques, e.g., bioenergy (in BECCS). However, all of these are far from the scale of CDRref. Attempting to scale up any CDR technique would require addressing many technical and social issues, several of which are common across most or all of the techniques. One of the most important common technical issues is the total CO2 storage capacity (see Box 2). Further issues include limits of required chemical and biological resources, how the techniques would compete with each other and other sectors for resources, the time scales involved, and the economic costs and societal impacts (see Box 1).

### Biomass-based techniques

Numerous biomass-based CDR techniques have been proposed, all removing CO2 from the atmosphere by photosynthesis. Some then use the biomass for primary carbon storage (e.g., in trees, humus, peat, etc.), while others involve combustion and subsequent storage of the products (e.g., compressed CO2 and biochar).

Afforestation (here also including reforestation) involves increasing forest cover and/or density in previously non-forested or deforested areas. In principle, the carbon storage potential is large compared to CDRref, given that historic deforestation released 2400 ± 1000 Gt(CO2)16. However, since much of this deforestation was to make space for current agriculture and livestock, extensive land-use competition could be expected for such a degree of afforestation32. More realistic estimates therefore range from about 0.5–3.5 Gt(CO2)/yr by 2050, increasing to 4–12 Gt(CO2)/yr by 210027,28,33, implying a total removal potential of about 120–450 Gt(CO2) from 2015 to 2100 (assuming linear increases in the CO2 uptake rate, starting at zero in 2015).

Combining biomass energy with carbon capture and storage (BECCS), which can be used for either electricity generation or the production of hydrogen or liquid fuels34, is widely assumed in integrated assessment model scenarios to be able to provide sufficient CDR to keep $$\Delta \bar T_{\mathrm{s}}$$ below 2 °C18,19. The range of estimates of the maximum removal potential of BECCS is large, again partly based on assumptions about land-use competition with agriculture, economic incentives for extensive development and deployment, and other factors, such as nature conservation. High-end estimates for BECCS in the literature involve underlying assumptions such as the use of forestry and agriculture residues35, the transition to lower meat diets, and the diversion of over half the current nitrogen and phosphate fertilizer inputs to BECCS, resulting in an uptake of ~10 Gt(CO2)/yr by 205032,33, with estimates for 2100 being similar or possibly even higher27,36. This would also depend on the development of both bioenergy and carbon capture and storage (CCS) technologies, infrastructures, and governance mechanisms to allow a capacity several orders of magnitude greater than current prototypes37,38,39. Assuming a linear development to 10 Gt(CO2)/yr until 2050 and constant thereafter would imply a cumulative removal potential by 2100 of ~700 Gt(CO2), i.e., exceeding CDRref. Various factors may reduce this, but it could also increase under the high-end assumptions mentioned above.
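The cumulative removal figures for afforestation and BECCS above can be checked by trapezoidal integration of the assumed linear uptake ramps; the 35-year (2015–2050) and 50-year (2050–2100) intervals are our reading of the text:

```python
# Cumulative removal (Gt CO2) for an uptake rate ramping linearly
# from r0 to r1 Gt/yr over a number of years (trapezoid rule).
def ramp_total(r0, r1, years):
    return 0.5 * (r0 + r1) * years

# Afforestation: 0 -> 0.5-3.5 Gt/yr by 2050, then -> 4-12 Gt/yr by 2100
low = ramp_total(0, 0.5, 35) + ramp_total(0.5, 4, 50)
high = ramp_total(0, 3.5, 35) + ramp_total(3.5, 12, 50)
print(round(low), round(high))  # 121 449, i.e., the quoted ~120-450 Gt

# BECCS: 0 -> 10 Gt/yr by 2050, then constant at 10 Gt/yr to 2100
print(ramp_total(0, 10, 35) + 10 * 50)  # 675.0, i.e., ~700 Gt by 2100
```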

Biochar, a stable form of carbon produced by medium temperature pyrolysis (>350 °C) or high temperature gasification (~900 °C) of biomass in a low oxygen environment, can be buried or ploughed into agricultural soils, enriching their carbon content. Various gases or oils can also be produced by the pyrolysis process. While biochar production could principally be applied to a similar amount of biomass as assumed for BECCS (i.e., ~700 Gt(CO2) removal by 2100), many additional factors come into play40,41, including feedstock type and source, labile carbon fraction, char yield, required energy input, the mean soil residence time of the biochar carbon, sink saturation, and priming effects (i.e., accelerated organic matter decomposition). This results in a much lower estimated maximum removal potential for biochar, ~2–2.5 Gt(CO2)/yr28,41, or up to ~200 Gt(CO2) by 2100, although, as with BECCS, this could possibly be enhanced by additional use of residue biomass from agriculture and forestry41.

In addition to mixing biochar into soils, recent studies have focused on replenishing or enhancing organic carbon in cultivated soils through various agricultural practices42, such as limiting tilling, and composting (rather than burning) crop residues. While these ideas are generating considerable interest, as reflected in the COP21 4 per mille initiative43,44, their ability to be scaled up to a level relevant for the Paris Agreement is poorly known, due to saturation and other effects. Earlier studies45 suggested a very limited possible role for soil enrichment; however, more recent analyses suggest a physical removal potential of ~200 Gt(CO2) by 210041, i.e., a significant fraction of CDRref, and this could possibly be increased up to 500 Gt(CO2) by practices such as soil carbon enrichment at greater depths43,44. Soil carbon enrichment may be more closely associated with co-benefits for agriculture than with trade-offs like competition for biomass, so that it might be seen as particularly attractive to pursue in the near term, while trade-offs and similar issues with other techniques are being resolved.

Ocean iron fertilization (OIF) is the proposal to fertilize iron-poor regions of the ocean to spur phytoplankton growth and increase the detritus carbon flux to the deep ocean46. The general conclusion emerging from modelling work, perturbative field studies, and analyses of natural iron enrichments downstream of islands, is that some oceanic carbon uptake could likely be achieved, particularly in the iron-limited Southern Ocean46. However, while early studies indicated that CO2 removal by OIF might be capable of far exceeding CDRref, later studies showed that this neglected many limiting factors, so that the removal capacity is likely less than 400 Gt(CO2) by 210047. Furthermore, this would likely result in significant side effects in the oceans, like disruption of regional nutrient cycling, and on the atmosphere, including production of climate-relevant gases like N2O15. Although there are reasons to encourage further research48, the limited removal potential and significant side effects, along with international legal developments that restrict large-scale deployment (see Box 1), make it unlikely that OIF will be employed to contribute significantly to the Paris Agreement goals. It seems similarly unlikely that related ocean carbon cycle techniques, such as using wave-driven pumps to enhance oceanic upwelling and thus increase the rate of mixing of fresh CO2 into deep-ocean waters, will contribute significantly49.

Many further biomass-based CDR techniques have been proposed, such as accelerating the formation of peatlands, or burying timber biomass in anoxic wetlands. A recent assessment15 has concluded that the expected CO2 removal capacity for each of these would likely be less than 100 Gt(CO2) by 2100, and several would have significant environmental side effects. Further research may reveal greater CO2 removal potentials, but current literature indicates that none would be capable of significantly contributing to achieving the Paris Agreement goals.

The biomass-based techniques share a wide range of research needs (Fig. 3), which are relevant to their possible roles in the Paris Agreement context, and can be grouped under three broad categories: (1) the technical carbon removal potential and how this can be increased; (2) social and environmental impacts and how trade-offs can be minimized while capitalizing on co-benefits and synergies; and (3) development and operational costs. Given the current state of research and development, it is not yet possible to generally prioritize any of these categories above the others, although this may be possible in dedicated studies of individual techniques. Several technique-specific aspects of the first two categories were discussed above.

For the third category, estimating development and operational costs has been particularly challenging, despite their importance in determining whether any technique could viably contribute to climate policy around the Paris Agreement. Published values for all of the techniques discussed above can presently only be taken as broadly indicative, and are typically of the order of $100/t(CO2), with the range of values given in the literature for each technique often spanning a factor of three or more27,28. This uncertainty is due to numerous factors, including extremely limited commercial experience with full-scale operations (e.g., for CCS or biochar), storage site properties and the details of CO2 transport or co-location of infrastructure for BECCS, land-use and resource competition with agriculture, and the compensating revenue from electricity or fuels produced by BECCS and biochar plants36. Complicating things further, land and resource competition might result in operational costs for biomass-based CDR techniques actually increasing as implementation scales grow, in contrast to the typical falling costs for most technologies as they grow in scale.

### Mineralization-based and other abiotic techniques

Abiotic CDR techniques for removing CO2 from the atmosphere can be roughly distinguished into two main approaches: spreading weathering materials over large open spaces (enhanced weathering and ocean alkalinisation/liming); and capturing CO2 in some form of enclosure or on constructed machinery (direct air carbon capture and storage, abbreviated DACCS).
A review of proposals for terrestrial enhanced weathering50 divides these into (1) ex situ techniques, which involve dispersing mined, crushed and ground silicate rocks (e.g., olivine51,52) in order to increase the exposed surface area and thus allow a more rapid uptake of CO2, particularly in warm, humid regions where CO2 removal would be most rapid52, and (2) in situ techniques, which are forms of underground geological/geochemical sequestration (see Box 2). Similarly, ocean alkalinization has been proposed via distribution of crushed rock into coastal surface waters53, as slowly sinking micrometre-sized silicate particles deposited onto the open-ocean sea surface54,55, or via dispersion of limestone powder into upwelling regions56. Ocean alkalinization would contribute to counteracting ocean acidification, in turn allowing more uptake of CO2 from the atmosphere into the ocean surface waters. Terrestrial enhanced weathering could also enhance ocean alkalinity, via either riverine run-off, or mechanized transport and mixing of the alkaline weathering products into the oceans, though both may vary strongly regionally. Further proposals include combining enhanced weathering and ocean alkalinisation using silicates to neutralize hydrochloric acid produced from seawater57, or heating limestone to produce lime (combined with capture and storage of the by-product CO2), which has been a long-standing proposal for dispersal in the oceans to increase ocean alkalinity58, in turn allowing additional CO2 uptake from the atmosphere by the ocean. Due to the abundance of the required raw materials, the physical CO2 removal potential of enhanced weathering is principally much larger than CDRref. However, since the current rate of anthropogenic CO2 emission is ~200 times the rate of CO2 removal by natural weathering59, the surface area available for reactions would need to be increased substantially via grinding and distribution of the weathering materials. 
This would imply large investments, including energy input, for the associated mining, grinding and distribution operations. Given that removing a certain mass of CO2 requires a similar mass of weathering material, the operations would need to be comparable to other current mining and mined-materials-processing industries, which could have significant impacts on sensitive ecosystems, as could the large amounts of alkaline weathering products that would be produced, especially in the runoff regions, about which very little is presently known. DACCS could possibly be designed so that it requires a substantially reduced dedicated land or marine surface area compared to other CDR techniques, and might also allow the environmental impacts to be more limited and quantifiable. However, scaling up from small-scale applications of direct air capture technologies, such as controlling CO2 levels in submarines and spaceships60,61, to removing and storing hundreds of Gt(CO2) would involve substantial costs, especially due to the high energy requirements of three main technology components: (1) sustaining sufficient airflow through the systems to continually expose fresh air for CO2 separation; (2) overcoming the thermodynamic barrier required to capture CO2 at a dilute ambient mixing ratio of 0.04%; and (3) supplying additional energy for the compression of CO2 for underground storage. While components (1) and (3) can be quantified using basic principles, and several studies61,62 indicate that combined they would probably require 300–500 MJ/t(CO2) (or ~80–140 kWh/t(CO2)), the energy and material requirements of the separation technology (2) are much more difficult to estimate. The theoretical thermodynamic minimum for separation of CO2 at current ambient mixing ratios is just under 500 MJ/t(CO2)62. However, thermodynamic minimum values are rarely achievable. 
Current estimates for the efficiency of DACCS are technology-dependent, ranging from at best 3 to likely 20 or more times the theoretical minimum61, or ~1500–10,000 MJ/t(CO2), implying that removing an amount equivalent to CDRref by 2100 would require a continuous power supply of approximately 400–2600 GW. Combined with the energy requirements for (1) and (3) (equivalent to about 100 GW), this represents about 20–100% of the current global electricity generation of ~2700 GW. A wide range of chemical, thermal, and also some biological (algae and enzymes) techniques have been proposed for the separation technology, but the focus of research has been on two main approaches60,62,63,64,65: adsorption onto solids, e.g., amine-based resins that adsorb CO2 when ambient air moves across them, followed by release of concentrated CO2 by hydration of the resins in an otherwise evacuated enclosure; and absorption into high-alkalinity solutions with subsequent heating-induced release of the absorbed CO2. While the environmental and societal impacts of these technologies could likely be much better constrained in comparison to the other CDR techniques, they are still important to consider, and include environmental impacts due to placement of the capture devices and CO2 storage sites, mining and preparation of materials like resins that would be used in the systems, and the possible release of amines and other substances used in the separation process66. The physical CO2 removal potential of DACCS far exceeds CDRref, provided the high energy requirements could be met; there are no significant principal limitations in terms of the material availability or CO2 storage capacity (see Box 2), and even the manufacture of millions of extraction devices annually would not be unfeasible (compared to, e.g., the annual global manufacturing of over 70 million automobiles). 
Large investments in DACCS might, however, be unlikely as long as large point sources (e.g., power or industrial plants) continue to be built and operated, since the same effective reduction of atmospheric CO2 levels via CCS applied to higher-concentration sources will generally be much less energy intensive and thus less expensive than CO2 capture from ambient air61. In general, for any possible longer-term application of CDR in climate policy, a major lynchpin will likely be the development of CCS, both in terms of the carbon capture technologies and the storage infrastructure, since CCS is fundamental to both BECCS and DACCS, and since it is likely to be most economically favourable to first apply CCS to remaining large point sources. The estimated development and operational costs for both enhanced weathering (including ocean alkalinisation) and DACCS at scales comparable to CDRref vary widely, even though the involved processes, especially for enhanced weathering (mining, processing and distribution), are nearly all well-established industrial activities. Published estimates cover a similar range to the biomass-based techniques, from about $20/t(CO2) to over $1000/t(CO2)27,28,60,65. Better estimates of the costs are particularly important for DACCS, since it essentially represents the cost ceiling for viability of any CDR measure, due to its potential scalability and its likely constrainable environmental impacts. These potentially high costs, and the array of other associated challenges for both the abiotic and the biomass-based CDR techniques, provide important context for the discussions around further proposed measures for addressing climate change, namely RFG.

## Radiative forcing geoengineering

Numerous RFG techniques have been proposed, which can fundamentally be divided into three vertical deployment regions (see Fig. 1): space-based (mirrors); atmospheric (stratospheric aerosol injection – SAI; marine sky brightening – MSB; and cirrus cloud thinning – CCT); and surface-based (urban areas, agricultural land, grasslands, deserts, oceans, etc.). A key reason for interest in RFG techniques is that they might technically be able to stabilize or even reduce $$\Delta \bar T_{\mathrm{s}}$$ within a few years, although there would be technique-specific differences in regional cooling (see Box 3). Proposed CDR techniques, on the other hand, would likely physically require much longer (decades) before they could lead to a notable stabilization or decrease in $$\Delta \bar T_{\mathrm{s}}$$, due to limits on the maximum rate of CO2 removal that could be achieved. Furthermore, although the operational costs for all proposed RFG techniques are currently very uncertain, considerable interest has been raised by the possibility67,68,69,70,71 that the operational costs to achieve a certain degree of cooling, e.g., RFGref, might be much lower than the operational costs for a comparable amount of CDR (e.g., achieving CDRref by 2100). However, comparing costs is difficult due to the different time horizons: CDR has no further operational costs once the desired amount of CO2 has been removed, whereas RFG would have ongoing costs to maintain the same cooling as long as the elevated CO2 levels persist (potentially over centuries). RFG has been considered under various complementary framings, including determining the forcing that would be needed to reduce $$\Delta \bar T_{\mathrm{s}}$$ to zero72, and limiting the magnitude of future peaks in $$\Delta \bar T_{\mathrm{s}}$$ while mitigation measures are implemented and CDR capacity is being developed73,74.
In the context of the Paris Agreement, we focus our discussion below on the three atmospheric RFG techniques (SAI, MSB, and CCT), which current literature indicates would have the most significant physical potential to contribute notably over the next few decades towards achieving the 1.5 or 2 °C temperature goals. Space mirror RFG could contribute considerable cooling from a climate physics perspective, based on model simulations using it as a proxy for RFG in general75,76; however, proposals for implementation77,78 rely on extensive future technology developments and a dramatic reduction in material transport costs from ~$10,000/kg79 to less than $100/kg. Furthermore, there are significant, poorly understood risks including impacts from asteroids and space debris, and technical or communications failure. As such, due to these present challenges and the associated time scales, space mirror RFG, while a future possibility, is not considered further here in the context of the Paris Agreement. Similarly, for proposed surface-based RFG techniques, a recent literature assessment15 has shown that their potential maximum cooling effects are either too limited (i.e., well below RFGref), or are associated with substantial side effects, e.g., complete disruption of regional ecosystems such as in the deserts, so that it is also unlikely that any currently proposed surface-based RFG techniques will be employed to contribute significantly to achieving the Paris Agreement goals. All of the proposed RFG techniques generally share several aspects in common in terms of the anticipated climate responses and the uncertainties and risks involved (see Box 3). Furthermore, all three of the atmospheric RFG techniques would require generating an enhanced aerosol layer or modified clouds with geographical, optical, microphysical and chemical characteristics capable of producing the desired radiative forcing.
This in turn requires consideration of the technique-specific issues around how well different particle composition types would work, how much would need to be injected, when and where, and what the expected cooling would be, as discussed in the following sections.

### Stratospheric aerosol injection

Injecting reflecting aerosol particles or gaseous particle precursors into the lower stratosphere could increase the planetary albedo (reflectivity), in turn reducing surface temperatures. Discussions of SAI have a long history26,80,81,82,83,84, with the earliest studies focusing on enhancing the natural stratospheric sulfate aerosol layer. This could be done by injecting sulfate particles directly, sulfuric acid (H2SO4), which condenses into particles, or precursor gases such as sulfur dioxide (SO2), hydrogen sulfide (H2S) or carbonyl sulfide (COS), which would then be oxidized to H2SO4. Numerous other possible particle compositions have been proposed and analyzed85,86,87,88,89,90,91,92, including: calcite (CaCO3, the main component of limestone); crystal forms of titanium dioxide (TiO2), zirconium dioxide (ZrO2), and aluminium oxide (Al2O3); silicon carbide (SiC); synthetic diamond; soot; and self-lofting nanoparticles. Each proposed material has its specific advantages and challenges (see Fig. 4), e.g., calcite particles91 are non-toxic, would not cause significant stratospheric heating, and may counteract stratospheric ozone loss, but their microphysics and chemistry under stratospheric conditions are poorly understood. Based on fundamental physical considerations, the radiative forcing by SAI would be expected to have an asymptotic limit, due to the growth of stratospheric particles to larger radii at greater injection rates, decreasing the residence time (due to increased sedimentation rates) and the optical efficiency. Estimates of this limit vary widely, especially due to differences in the representations of microphysics and dynamics in climate models.
Model studies93,94,95, as well as evidence from past volcanic eruptions2, indicate a maximum potential cooling (negative radiative forcing) ranging from 2 W/m2 to over 5 W/m2, i.e., well above RFGref, though the upper end of the range would require extremely large injection amounts (comparable to the current global anthropogenic sulfur pollutant emissions of about 100 Tg(SO2)/yr). SAI would require regular injections to maintain the aerosol layer, given the stratospheric particle residence time of about 1–3 years96,97,98. The injection amount needed would depend on the desired radiative forcing and the particle composition, size distribution, optical properties, and the vertical and horizontal injection location(s)94,96,98,99,100,101,102. Most model studies (as well as evidence from the volcanic record) show that the radiative forcing efficiency typically increases with the altitude of injection94,96,99, since a smaller fraction of the particle mass is lost due to sedimentation97; however, this is not found in all studies95, and may depend on the injection amount, even reversing sign for very large injection rates93. Geographically, injection in the tropics results in an effective dispersion towards the poles by the stratospheric Brewer–Dobson circulation, producing an aerosol layer with a broad global coverage99, but limited control over its regional distribution. On the other hand, model simulations have shown that high latitude injections aimed specifically at reducing Arctic warming would be relatively ineffective103,104,105,106,107, due to the shorter aerosol residence time and weaker solar radiation compared to the tropics. Proposed injection mechanisms for SAI are via high-flying aircraft, stratospheric balloons, artillery shells, and rockets68,69,108,109, with studies to date indicating the first two are likely the most effective and economically feasible. All are in very early stages of research and development. 
Aircraft injections would require a new fleet of dedicated high-flying aircraft69, since civil aircraft fly too low and mostly too far north to be effective for global cooling109. Tethered balloons would require extensive technology development and testing to determine the feasibility and safety issues involved in annually transporting megatons of particles or precursors through hoses of over 20 km length68. Furthermore, for both platforms, coagulating particles or precursors like H2SO4 would likely require some mechanism to create turbulence in order to have sufficient control over the resulting particle size distribution (see Fig. 4a), and a large number of aircraft or tethered balloons would thus be needed in order to limit the local injection rate and prevent rapid coagulation to oversized particles98.

### Marine sky brightening

MSB would involve seeding low-altitude clouds with cloud condensation nuclei particles to cause condensed water to spread over a greater number of smaller droplets, increasing the optical cross section and thus the cloud’s reflectivity110,111,112,113. This effect has been observed over oceangoing ships due to particles in their pollution plumes114, and in plumes of effusive volcanic eruptions115. Clouds with lower background particle concentrations, such as maritime stratiform clouds, are particularly susceptible to this effect. Modeling studies111,116,117,118 indicate the injected particles would also likely increase the clear-sky reflectivity, by an amount comparable to the marine cloud brightening (MCB), which has led to the combined term MSB. Similar to SAI, the limited knowledge about key microphysical and dynamical processes involved results in a large uncertainty in the maximum cooling that could be achieved via MSB, with estimates111,113,117,119,120,121,122 ranging from 0.8 to 5.4 W/m2, i.e., likely well above RFGref.
Analysis of satellite data113 and model simulations111,113,115 indicate that certain regions are more susceptible to MSB, in particular persistent stratocumulus cloud decks off the continental west coasts, especially South America, North America, southern Africa and Australia. However, there are considerable scientific uncertainties, such as the differences in responses of open and closed cell convection123. The cooling resulting from MSB would be more geographically heterogeneous than from SAI117,124, focused especially on the susceptible oceanic regions, leading to considerably different temperature and precipitation responses in comparison to more globally homogeneous forcing. For implementation, the focus has been on injecting sea salt due to its availability, especially from autonomous ships67. Several challenges would need to be overcome, including: development of spray nozzles to form appropriately sized particles125; compensating for reduced lofting in the marine boundary layer due to cooling following evaporation of injected seawater126,127; and an ability to target suitable meteorological conditions, including low solar zenith angles, unpolluted air, and few or no overlying mid to high altitude clouds112,113,128. Efforts would also be needed to minimize environmental effects (e.g., corrosion and detriment to vegetation129) and chemical and microphysical effects (on ambient gases and particles130) of the injected sea salt.

### Cirrus cloud thinning

Cirrus clouds, in contrast to most other cloud types, on average warm the Earth’s surface more by absorbing and re-radiating terrestrial radiation than they cool it by reflecting solar radiation back to space2. CCT would aim to reduce this net warming by injecting highly effective ice nuclei into cirrus, causing the freezing of supercooled water droplets and inducing growth to large ice particles that sediment rapidly out of the clouds, reducing the mean cirrus cloud thickness and lifetime131,132,133,134.
Since CCT would primarily target terrestrial radiation, in contrast to SAI and MSB, it may more directly counteract radiative forcing by anthropogenic greenhouse gases, though the degree of compensation would be limited by the geographical distribution of susceptible cirrus135. The relatively close balance between a large gross warming and cooling by cirrus clouds, in contrast to the dominance of gross cooling for marine stratocumulus clouds and most aerosol particles under consideration for MSB and SAI, makes estimating a maximum radiative forcing potential even more challenging. A maximum net cooling in the range of 2–3.5 W/m2, considerably exceeding RFGref, has been computed based on model simulations131,132,135,136,137, though the high end of this range is accomplished by modifications in the models that are far removed from what could likely be achieved in reality (e.g., increasing the cirrus particle fall speeds 8–10-fold everywhere). On the other hand, some studies138,139 have found that CCT might not work at all, or might even produce a net warming. In particular, there is a risk of ‘over-seeding’, i.e., forming new cirrus clouds due to seeding material being released in cloud-free regions, which would have a warming effect, working against the desired cooling132,138,139. Furthermore, recent findings of extensive heterogeneous nucleation in tropical anvil cirrus140 possibly rule out tropical cirrus for seeding134, since seeding would only be effective in an environment where a significant fraction of the natural freezing occurs via homogeneous nucleation (i.e., freezing of supercooled droplets without ice nuclei). Thus the focus of CCT studies is on the middle and high latitudes, where model simulations and satellite data indicate it would likely be most effective132,133,134. Like SAI and MSB, CCT would require regular injection of seeding material, which would settle out with the cirrus cloud particles.
Proposed seeding materials include bismuth tri-iodide (BiI3)131, which was historically investigated in weather modification programs and found to be a highly effective ice-nucleating material, though it is toxic. Sea salt may also be a candidate, as it is readily available and non-toxic, and has been found to function as ice nuclei141, though considerably less effectively than BiI3. The particle injections would likely require dedicated aircraft or unmanned drones to provide sufficient control over the seeding locations, which would need to be targeted at existing susceptible cirrus clouds. Due to the likelihood of over-seeding in cloud-free regions132, seeding via commercial aircraft can essentially be ruled out.

### Research needs for RFG

While current scientific knowledge of the three atmospheric RFG techniques discussed above indicates that they might physically be able to contribute significantly towards reducing global mean temperatures, any large-scale implementation would likely require several decades, due to the considerable uncertainties and scientific research and development needs, along with the extensive considerations needed for a range of socio-political issues (see Box 1). Many of the research and development needs are generally in common across the atmospheric RFG techniques, in four broad categories. First, the associated geographical heterogeneities and side effects on various Earth systems (see Box 3) need to be much better characterized. Second, in terms of process understanding, perhaps the most significant general challenge in common to all three techniques is developing a greater understanding of the associated aerosol and cloud microphysics (Fig. 5). Third, a much better understanding is needed of the implementation costs, which have been proposed by some to be a factor of 10–1000 lower than the corresponding annual costs of CDR techniques.
Initial estimates67,68,69,108 for development and installation costs for SAI via aircraft and tethered balloon injection systems and for MSB by unmanned ships are all in the range of $1–100 billion, with annual maintenance costs for SAI estimated at $20 billion or possibly even less. No published estimates are yet available for the operational costs of CCT by aircraft deployment, since the associated physical mechanism is still too poorly understood, pointing to an important future research need. An additional challenge to estimating the operational costs for RFG is the need to account for the long timescale over which it might be applied to uphold the Paris Agreement temperature goals, if not accompanied by simultaneous strong mitigation and CDR. And fourth, establishing a more robust knowledge base for any of the proposed techniques would require eventually moving beyond theoretical, modelling, satellite-based and proxy data studies to also include in situ field experiments. Thus far, only two scientifically rigorous, dedicated, in situ, perturbative field experiments have been conducted related to the atmospheric RFG techniques142,143, addressing marine stratus microphysics, though not explicitly focused on MSB. However, considerable work has been done recently on developing numerous concepts for a variety of field experiments112,144,145. These proposals have been anticipated to raise considerable public concern, and thus have been closely accompanied by governance development efforts (see Box 1).

## Summary and outlook

Among the CDR techniques in Fig. 1, BECCS, DACCS, enhanced weathering and ocean alkalinisation are likely physically capable of removing more than CDRref (650 Gt(CO2)) in this century, while afforestation, biochar production and burial, soil carbon enrichment and OIF all have an upper bound for physical removal capacity that is a significant fraction of CDRref, though all would involve significant implementation costs and in most cases substantial negative side effects. For RFG, in the context and timescales of the Paris Agreement, likely only SAI, CCT and MSB have the technical potential to physically provide a global cooling that significantly exceeds RFGref (0.6 W/m2). Space mirrors and surface-based techniques would be anticipated to face prohibitive constraints including logistics, costs, timescales, and ecosystem side effects. Any climate geoengineering technique would likely require several decades to develop to a scale comparable to CDRref or RFGref. For CDR, extensive global infrastructure development would be needed, along with resolving governance issues, including competition with other sectors like agriculture. For RFG, improving the scientific understanding (e.g., microphysical details) and developing delivery technologies and effective governance mechanisms would all be essential. For both CDR and RFG, these developments would require public and political support, especially for public investments given their technical and economic uncertainty at the scales of CDRref or RFGref. Given the meagre knowledge surrounding technique scalability, at present only indicative orders of magnitude can be given for costs: approximately $100/t(CO2) for CDR techniques (i.e., over $800 billion/yr to achieve CDRref between 2020 and 2100); and possibly as low as $10 billion/yr for the atmospheric RFG techniques to provide RFGref, though such low costs may never be achievable due to technological challenges upon scaling up.
There are of course also numerous social and environmental impacts and associated costs (e.g., Figs. 3 and 4, and Box 1) that are currently only very roughly characterized in the literature.
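The indicative CDR cost figure quoted above follows from simple arithmetic; a minimal sketch in Python, using only the CDRref amount and per-tonne cost stated in the text (the variable names are illustrative):

```python
# Back-of-envelope check of the indicative CDR cost order of magnitude
# quoted above (values taken from the text, not new estimates).
CDR_REF_GT = 650.0       # Gt(CO2): cumulative removal target (CDRref)
COST_PER_TONNE = 100.0   # $/t(CO2): indicative order-of-magnitude cost
YEARS = 2100 - 2020 + 1  # 2020-2100 inclusive

annual_removal_gt = CDR_REF_GT / YEARS                  # ~8 Gt(CO2)/yr
annual_cost = annual_removal_gt * 1e9 * COST_PER_TONNE  # Gt -> tonnes

# Consistent with the "over $800 billion/yr" figure in the text.
assert annual_cost > 800e9
```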

In the context of their role in the Paris Agreement, and more generally in climate policy, climate geoengineering technologies may eventually become part of a significant socio-technical imaginary146,147,148, within which specific visions of the future are made to appear desirable, and which influence present political developments. Climate geoengineering is already entering the collective imagination149, e.g., as portrayed through media reports, and is also entering climate policy discussions, for instance through inclusion in the IPCC assessment reports2, and especially through the extensive reliance on CDR in low-carbon future scenarios analyzed by the IPCC18,19. Relatedly, the concept of the Anthropocene, with its emphasis on the planetary impact of human activities, may further normalize climate geoengineering technologies as potential tools for conscious planetary management. However, none of the proposed CDR or RFG techniques exist yet at a climate-relevant scale, and given the challenges discussed here, it is not yet certain that any of the individual techniques could ever be scaled up to the level of CDRref or RFGref. Avoiding a premature normalization of the hypothetical climate geoengineering techniques in science, society and politics would require actively opening up discussions to critical questioning and reframing.

We highlight three steps regarding future considerations of climate geoengineering in the context of the Paris Agreement. First, early development of effective governance—including for research—could be designed to reduce the likelihood and extent of potential injustices (see Box 1) and allow supporters and critics of climate geoengineering technologies to voice their concerns. Second, further disciplinary and interdisciplinary research could help to reduce the large uncertainties in the anticipated climate effects, side effects, costs, technical implementation, and societal aspects of the individual techniques. Legitimizing such research would require transdisciplinary processes involving stakeholders from the scientific and policy communities, civil society, and the public, especially in making decisions regarding potential large-scale research programs. Ensuring such broad involvement is a major challenge for effective governance. National and international efforts to foster deliberation and coordinate any future large-scale research may help to reduce some of the socio-political risks, especially the moral hazard risk of distracting from or deterring climate mitigation. Such coordination could also serve to reduce redundant work and channel research towards issues that are determined to be priorities for informing current and upcoming decision-making processes.

Finally, based on the current knowledge reviewed here, proposed climate geoengineering techniques cannot be relied on to make significant contributions, e.g., at the levels of CDRref or RFGref, towards counteracting climate change in the context of the Paris Agreement. Even if climate geoengineering techniques were ever actively pursued, and eventually worked as envisioned on global scales, it is very unlikely that they could be implemented before the second half of the century15. Given the rather modest degree of intended global mitigation efforts currently reflected in the NDCs (Fig. 2 and Supplementary Table 1), this would very likely be too late to sufficiently counteract the warming due to increasing levels of CO2 and other climate forcers to stay within the 1.5 °C temperature limit—and probably even the 2 °C limit—especially if mitigation efforts after 2030 do not substantially exceed the planned efforts of the next decade. Thus at present, the only reliable way to attain a high probability of achieving the Paris Agreement goals is to considerably increase mitigation efforts beyond the current plans, including starting extensive emissions reductions much sooner than in the current NDCs.

## Methods

### Parameters in supplementary Table 1 and Fig. 2

Supplementary Table 1 provides values for four key parameters, based on the Paris Agreement Nationally Determined Contributions (NDCs) until 2030 and assumed annual decrease rates of the emissions beyond that (or starting in 2021 for one case): (1) the annual CO2 emissions rates in 2030 and 2100; (2) the cumulative CO2 emissions for 2015–2030, 2031–2100, and 2015–2100; (3) the gaps between the cumulative CO2 emissions and the remaining budgets of cumulative emissions (using 2015 as a reference starting date) that are consistent with the temperature limits of 1.5 and 2 °C; and (4) the approximate equivalent radiative forcing amounts that these emissions gaps represent. Figure 2 provides a graphical depiction of the cumulative CO2 emissions gaps and the equivalent radiative forcing gaps. The computations for these are described here, followed by a few overarching issues.

### Annual CO2 emissions

The annual CO2 emissions rates in 2030 are based on a recent reassessment of the current NDCs9, which takes a more direct approach than several previous studies based on analyses with integrated assessment models8,150. This reassessment arrives at a best estimate of 51 Gt(CO2)/yr, which is 10–20% higher than most previous studies, while its lower bound of 43 Gt(CO2)/yr is similar to the best estimate values of most previous studies. To reflect this range of estimated future emissions, in Supplementary Table 1 we give a ± range spanning these lower bound and best estimate values, based on the data from ref. 9.

The annual CO2 emissions rates, especially in 2100, are relevant for considering the subsidiary Paris Agreement goal of achieving net zero CO2 emissions during the second half of the century. Since the natural sink of CO2 (0.8–1.1 Gt(CO2)/yr52) is small compared to current anthropogenic emissions, and already largely balanced over longer timescales by natural CO2 sources such as volcanic activity2, achieving net zero CO2 emissions would require sufficient CDR to essentially completely compensate the anthropogenic CO2 emissions rate.

If actual CO2 emissions in 2030 are outside the range expected based on the current NDCs (i.e., the NDCs are either not achieved, or efforts exceed current commitments), then for the first four cases in Supplementary Table 1 (with emissions reductions starting in 2031), the subsequent emissions rates and cumulative emissions will scale linearly with the relative difference in 2030 (e.g., 10% lower emissions in 2030 imply 10% lower emissions in 2100 and 10% lower cumulative emissions from 2031–2100).

### Cumulative CO2 emissions

The cumulative emissions for 2015–2030 in Supplementary Table 1 are computed based on the annual emissions data from ref. 9, separately for the pathways based on the lower bound and best estimate values (from which the means and ± ranges are computed). For 2031–2100, for the case with constant annual emissions the cumulative emissions are simply computed as 70 times the lower and upper bound values for the annual emissions. For the cases with an annual decrease from 2031 onwards, individual pathways until 2100, starting from the lower and upper bound values in 2030, are calculated as $$E_y = (1 - r)E_{y-1}$$, where $$E_y$$ is the emissions rate for the current year, $$E_{y-1}$$ that for the previous year, and r is the annual emissions reduction factor (0.01, 0.03, or 0.05). The annual emissions along each pathway are then summed to obtain the cumulative emissions for 2031–2100. The same procedure is also applied to the fifth case in Fig. 2 and Supplementary Table 1, with a 3% annual reduction starting in 2021, for which the 2015–2030 cumulative emissions range is recalculated accordingly. In all cases, emissions reductions and the resultant cumulative emissions and implications for radiative forcing and global mean temperature increase are only considered until 2100.
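The pathway calculation described above can be sketched in a few lines of Python (the function name is illustrative, not from the original analysis):

```python
# Sketch of the emissions-pathway calculation described above, using the
# recurrence E_y = (1 - r) * E_{y-1} from 2031 onwards.
def cumulative_emissions(e_2030, r, start=2031, end=2100):
    """Cumulative emissions (Gt CO2) from `start` to `end`, starting from
    the 2030 annual rate e_2030 and reducing by the factor r each year."""
    total, e = 0.0, e_2030
    for _ in range(start, end + 1):
        e *= (1.0 - r)
        total += e
    return total

# Constant-emissions case (r = 0): simply 70 times the 2030 rate.
assert cumulative_emissions(43.0, 0.0) == 70 * 43.0

# The linear-scaling property noted earlier: a 10% lower 2030 rate gives
# 10% lower cumulative emissions for 2031-2100 along the same pathway.
low = cumulative_emissions(0.9 * 51.0, 0.03)
best = cumulative_emissions(51.0, 0.03)
assert abs(low / best - 0.9) < 1e-12
```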

### Gaps to the remaining CO2 budgets

Various approaches have been applied to determine the remaining budgets of cumulative emissions of CO2 (and non-CO2 forcers) which are consistent with likely limiting global warming to various temperature thresholds. Each of these has various drawbacks. We describe a few of these here, as a background to why we have developed a simple, novel approach that is suited for this analysis. In one approach, the IPCC3 found that a cumulative CO2 budget of 400 Gt(CO2) from 2011 onwards likely keeps 21st century $$\Delta \bar T_{\mathrm{s}}$$ below 1.5 °C, which can be adjusted to ~240 Gt(CO2) for 2015 onwards at the current global emissions rate of just over 40 Gt(CO2)/yr6. This would already be exhausted in 2020, which seems unlikely given that current global warming is approximately 1 °C, and the finding by the IPCC WG12 that if anthropogenic CO2 emissions were abruptly stopped, the global mean temperature would likely remain approximately constant for decades (due to a balancing of opposing factors). Using another approach3, the IPCC concluded that the cumulative emissions budget from 1870 onwards that is consistent with likely keeping $$\Delta \bar T_{\mathrm{s}}$$ below 1.5 °C is 2250 Gt(CO2). Comparing this directly with the Global Carbon Project’s current estimate of historical cumulative emissions, which is 2235 ± 240 Gt(CO2) for 1870–2017, would also imply that the 1.5 °C budget has already been or will very soon be exhausted. This comparison makes one of the main problems with this approach clear: it is based on the small difference between two large and uncertain numbers. Recognizing this problem, the Global Carbon Project concludes6: “…extreme caution is needed if using our updated cumulative emission estimate to determine the ‘remaining carbon budget’ to stay below given temperature limits4. 
We suggest estimating the remaining carbon budget by integrating scenario data from the current time to some time in the future as proposed recently5.” The application of this alternate approach by ref. 5 results in much higher estimates than the IPCC approaches: likely more than 880 Gt(CO2) and 1870 Gt(CO2) (from 2015 onwards) for 1.5 and 2 °C5. However, this is associated with several assumptions, which have been strongly criticized7, as noted in the main text.

For the purpose of our analysis we apply a similar though simpler approach, which is independent of the historical emissions and is straightforward to apply to any moderate temperature difference (e.g., 0.5 or 1 °C), and allows us to apply an uncertainty range to the current value of $$\Delta \bar T_{\mathrm{s}}$$. We first consider the cumulative budgets from 1870 onwards that were found by the IPCC3 to be consistent with likely limiting global warming to three temperature thresholds: 2250 Gt(CO2) for 1.5 °C, 2900 Gt(CO2) for 2 °C, and 4200 Gt(CO2) for 3 °C, where the simulated warming includes effects of co-emitted non-CO2 forcers. These three cumulative budget values make the quasi-linear response of simulated temperature to cumulative CO2 emissions very clear, with a slope of 1 °C for every 1300 Gt(CO2) between any pair of these temperature thresholds.
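The quasi-linearity of these three budget values can be verified directly; a minimal Python sketch:

```python
# The quasi-linear slope quoted above, recovered from the three IPCC
# cumulative budgets (Gt CO2 from 1870) and their temperature thresholds.
budgets = {1.5: 2250.0, 2.0: 2900.0, 3.0: 4200.0}

slopes = [
    (budgets[upper] - budgets[lower]) / (upper - lower)
    for lower, upper in [(1.5, 2.0), (2.0, 3.0), (1.5, 3.0)]
]
# Every pair of thresholds gives the same slope: 1300 Gt(CO2) per deg C.
assert all(s == 1300.0 for s in slopes)
```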

This slope is then applied to determine the remaining CO2 budget between any two values of $$\Delta \bar T_{\mathrm{s}}$$. There is a notable uncertainty in the current value of $$\Delta \bar T_{\mathrm{s}}$$7, given interannual and interdecadal variability, as well as the uneven geographical coverage of the global observations network, and other factors such as the reference starting date (i.e., what counts as pre-industrial). Here we make use of the analysis in ref. 7 and apply a current value of $$\Delta \bar T_{\mathrm{s}}$$ = 1.0 ± 0.1 °C, which accounts for the biased geographical coverage of the measurements network, especially the relatively few long-term temperature observations in the rapidly warming Arctic, and is thus higher than the value of $$\Delta \bar T_{\mathrm{s}}$$ = 0.9 °C applied by ref. 5. Note that a small additional uncertainty is present in further factors, such as using the mid-1700s rather than the late 1800s as a reference period for pre-industrial temperatures. We do not take these additional factors into account, in order to remain comparable to the IPCC and other analyses that apply the late 1800s reference period; however, we note that this and other unaccounted factors could increase the current value of $$\Delta \bar T_{\mathrm{s}}$$ by up to ~0.15 °C, reducing the remaining budgets by up to ~200 Gt(CO2), which is a comparatively small uncertainty in light of the broad ranges of values in Fig. 2 and Supplementary Table 1.

Applying a current value of $$\Delta \bar T_{\mathrm{s}}$$ = 1.0 ± 0.1 °C and the slope of 1 °C per 1300 Gt(CO2) cumulative emissions yields a value of 650 ± 130 Gt(CO2) from 2015 onwards for the remaining budget consistent with likely limiting $$\Delta \bar T_{\mathrm{s}}$$ to 1.5 °C, and 1300 ± 130 Gt(CO2) for 2 °C. These remaining budget values are then subtracted from the projected emissions for the different cases to determine the gaps in the cumulative emissions budgets, giving an indication of how much CDR might be invoked in an attempt to compensate the emissions gaps in order to still achieve the Paris Agreement temperature goals. These resulting values are depicted in Fig. 2 and listed in Supplementary Table 1.
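This remaining-budget step can be sketched compactly (Python; the function name and constants reflect the values in the text):

```python
# Remaining-budget calculation used above: the gap between current warming
# and a temperature limit, converted at 1300 Gt(CO2) per degree C.
SLOPE = 1300.0           # Gt(CO2) per degree C
T_NOW, T_ERR = 1.0, 0.1  # current warming and its uncertainty (deg C)

def remaining_budget(t_limit):
    """Central estimate and +/- range (Gt CO2) of the remaining budget,
    from 2015 onwards, for a warming limit t_limit (deg C)."""
    return (t_limit - T_NOW) * SLOPE, T_ERR * SLOPE

assert remaining_budget(1.5) == (650.0, 130.0)   # 1.5 deg C limit
assert remaining_budget(2.0) == (1300.0, 130.0)  # 2 deg C limit
```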

Finally, in order to derive an indication of what these cumulative emissions gaps would imply for the amount of negative radiative forcing that would be needed to limit $$\Delta \bar T_{\mathrm{s}}$$ to a given threshold, we can make use of the climate sensitivity simulated by model ensembles to convert from the CO2 emissions budget gaps (in Gt(CO2)) to equivalent radiative forcing in W/m2. For this, we use the slope noted above of 1 °C for every 1300 Gt(CO2), or $$7.7 \times 10^{-4}$$ °C/Gt(CO2), and combine this with the best estimate value from the IPCC2 for the equilibrium climate sensitivity of λ ≈ 0.8 °C/(W/m2) (corresponding to a mean equilibrium warming of 3 °C for a radiative forcing of 3.7 W/m2 from a doubling of CO2 since preindustrial times), or inverted, 1.25 (W/m2)/°C. Together these give $$9.6 \times 10^{-4}$$ (W/m2)/Gt(CO2) (or approximately 1 W/m2 for every thousand Gt(CO2)), which we apply to obtain the values listed in the final row of Supplementary Table 1. We note that this is only an approximate conversion, since the individual climate sensitivity components were derived from different model ensemble simulations designed for different purposes, but it is adequate for the purpose of providing orientation values for the radiative forcing that may be called for from proposed RFG techniques in the context of achieving the Paris Agreement goals, particularly in comparison to the possible equivalent CO2 budget contributions from CDR.
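The conversion just described amounts to multiplying two sensitivities; a minimal Python sketch with the values from the text:

```python
# Emissions-gap-to-forcing conversion described above, combining the
# cumulative-emissions slope with the inverse equilibrium climate sensitivity.
T_PER_GT = 1.0 / 1300.0  # deg C per Gt(CO2), ~7.7e-4
F_PER_T = 1.0 / 0.8      # (W/m2) per deg C, inverse of lambda = 0.8

F_PER_GT = T_PER_GT * F_PER_T
assert abs(F_PER_GT - 9.6e-4) < 1e-5        # ~9.6e-4 (W/m2)/Gt(CO2)
assert abs(1000.0 * F_PER_GT - 1.0) < 0.05  # ~1 W/m2 per 1000 Gt(CO2)
```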