Breaching 1.5°C: Give me the odds

Authors

J. Eduardo Vera-Valdés, Aalborg University
Olivia Kvist, Aalborg University

Published

December 17, 2024

Abstract
Climate change communication is crucial to raising awareness and motivating action. In the context of breaching the limits set out in the Paris Agreement, we argue that climate scientists should move away from point estimates and towards reporting probabilities. Reporting probabilities will provide policymakers with a range of possible outcomes, enabling them to make informed and timely decisions. To support this shift, we propose a method for calculating the probability of breaching the limits set out in the Paris Agreement. The method can be summarized as predicting future temperatures under different scenarios and calculating the number of possible outcomes that breach the limits as a proportion of the total number of outcomes. The probabilities can be computed for different time horizons and can be updated as new data become available. To illustrate the method, we performed a simulation study. Our results show that the probability of breaching the 1.5°C limit is already greater than zero for 2024. Furthermore, if no action is taken to reduce greenhouse gas emissions, the probability of breaching the limit exceeds 99% by 2041. Our methodology is simple to implement and can be easily adapted to more complex climate models. We encourage climate model developers to include probabilities of breaching these limits in their reports.
Keywords

Paris Agreement, global warming, climate change, 1.5°C, probability

The 1.5°C limit

The goals of the Paris Agreement (PA) have recently gained renewed media attention due to observed temperature anomalies that exceeded 1.5°C above pre-industrial levels for 12 consecutive months1,2. The importance of the 1.5°C threshold is that it was established in the PA as a limit to avoid the most severe consequences of climate change. Formally, the PA aims to limit global warming to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C.

An obstacle in assessing the success or failure of the PA is the lack of a clear definition of when temperature limits are breached. The definition is crucial for both scientific and political reasons.

If we define breaching the 1.5°C limit as a single year's mean temperature exceeding that level, then the limit has already been breached.

However, to avoid short-term fluctuations misleading the assessment of whether the limit has been breached, the Intergovernmental Panel on Climate Change (IPCC) proposes using a 20-year average to determine when the limit is exceeded3. The question remains of when, within that 20-year period, the limit should be considered breached.

A recent Nature article argues that defining the breach of the 1.5°C limit as the last year of a 20-year period whose average global mean temperature is above that limit delays the conclusion of a breach by a decade4. The article instead advocates using the midpoint of the 20-year period as the year in which the limit is breached. Computing when the threshold will be breached therefore entails averaging several years of observed temperature rise with a forecast of the remaining years of the 20-year period. We extend this methodology to provide the probability of breaching the 1.5°C and 2°C limits.

Improving communication of climate change

One of the main challenges in communicating climate change is the complexity of the topic. This complexity makes it difficult to communicate the issue in a way that is easily understandable to the general public. In the context of breaching the limits set out by the PA, communication is crucial. The issue can become highly politicized if not communicated effectively. The public and policymakers need timely information about the urgency of the situation and the consequences of inaction.

One of the first steps in improving communication is to provide data in a clear and understandable way. Datasets report temperature anomalies as the difference between the observed temperature and the average temperature for a reference period5–8. Even though the PA states that the reference period should be pre-industrial levels, the datasets typically use a more recent reference period. For example, the HadCRUT5 dataset6, computed by the Met Office Hadley Centre, uses the 1961-1990 average temperature as the reference period.

The mismatch between the reference period used in the datasets and the reference period in the PA can lead to misunderstandings and misinterpretations. If we use the 1961-1990 average temperature as the reference period, the temperature anomalies reported in HadCRUT5 have not breached the 1.5°C limit yet. However, if we use the pre-industrial levels as the reference period, as indicated in the PA, the limit has already been breached several times. A skeptic reading a news article reporting temperature anomalies that exceed the 1.5°C limit can easily download and plot the data, potentially concluding that the headline is an exaggeration, unless they are aware of the reference period used.

All datasets should use the same reference period based on the pre-industrial levels. Agreement on using the pre-industrial levels will help avoid confusion and make it easier to compare the data. However, for historical reasons, data providers should also report temperature anomalies relative to their original reference period. This will help maintain compatibility with previous reports and models trained on the original data.

Predictions for the breaching of the PA limits

It should be stressed in any report that determining when the 1.5°C limit will be breached requires forecasting future temperatures. Forecasts can take many forms. The most common are physical models that simulate the climate system9–13. Physics-based models are computationally expensive and require high-performance computing. Hence, reduced-complexity models have been developed. These models are based on statistical methods and are trained on historical data of different climate variables14–16.

Regardless of the method used to predict future temperatures, forecasts are uncertain. The climate system is complex and chaotic. This complexity is reflected in the confidence intervals associated with the forecasts. For example, the IPCC provides a range of possible outcomes for future temperatures. However, the uncertainty in the forecasts is not communicated effectively when discussing breaching the limits set out by the PA.

The media has recently reported new estimates on when the 1.5°C limit will be breached17,18. However, these estimates are often presented as point estimates without confidence intervals or without a clear description of the methodology used to make the predictions. In the current political environment, it is crucial to communicate the uncertainty in the predictions.

Recent point estimates of when the 1.5°C limit will be breached can be counterproductive if not accompanied by probability estimates. If the limit is not breached in precisely the year predicted, climate change deniers may use the miss to dismiss the scientific evidence. In the past, extreme winters have been used as an argument against global warming, owing to a misunderstanding of the difference between weather and climate.

A new methodology to communicate when we will breach the Paris Agreement limits

We propose a novel way to communicate the uncertainty in predictions of when the limits set out in the PA will be breached. The methodology builds on the proposal in Nature to use a 20-year average temperature rise centered on a particular year4. We extend this methodology to provide the probability of breaching the limits set out by the PA. We use models to produce multiple scenarios of future temperature rise and compute the number of scenarios that breach the limits as a proportion of the total number of scenarios. The probabilities can be computed for different time horizons and can be updated as new data become available. Moreover, the methodology can easily be applied to different climate models and datasets.

There are already several examples of how probabilities can be used to communicate climate change effectively3,19–22. By reporting probabilities, we can communicate uncertainty in the predictions and provide policymakers with a range of possible outcomes. This will allow policymakers to make more informed decisions on taking action to reduce greenhouse gas emissions. Reporting in 2024 a probability of 50% that the limit will be breached in 2030 will give an indication of the urgency of the situation. The probability distribution will also reflect how the odds of avoiding the breach decrease over time if no action is taken. This will provide a clear picture of the consequences of delaying action.

Odds of breaching the Paris Agreement limits

We develop a statistical model to predict future temperatures and calculate the probability of breaching the limits set out by the PA (Methods). The probabilities are calculated by simulating different scenarios of future temperatures and computing the proportion of scenarios that breach the limits. Monte Carlo methods, such as the one used in this study, are a common approach to estimating probabilities in complex systems and are also employed by the IPCC23. The simulation study has two main steps.

First, we forecast the global mean temperature anomaly and calculate the 20-year moving average centered around a particular month for each simulated path (Figure 1). We repeat this process to simulate multiple paths of future temperatures (Figure 2).
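As an illustration of the first step, the following minimal sketch in Julia (the language of the accompanying notebooks; see Code availability) computes a 240-month moving average approximately centered on each month for a single simulated path and returns the first month at which a given limit is breached. The vectors path and dates, the function names, and the window handling are illustrative and not taken from our released code.

using Dates

# 20-year (240-month) moving average, approximately centered on month t.
function centered_moving_average(x::Vector{Float64}, window::Int)
    half = window ÷ 2
    n = length(x)
    ma = fill(NaN, n)
    for t in (half + 1):(n - half + 1)
        ma[t] = sum(x[(t - half):(t + half - 1)]) / window
    end
    return ma
end

# First month whose centered average exceeds the limit, or nothing if none does.
function first_breach(dates::Vector{Date}, ma::Vector{Float64}, limit::Float64)
    for t in eachindex(ma)
        if !isnan(ma[t]) && ma[t] > limit
            return dates[t]
        end
    end
    return nothing
end

# Example use, assuming path holds monthly anomalies (observed history followed
# by one simulated future) and dates the matching Date vector:
# ma20 = centered_moving_average(path, 240)
# first_breach(dates, ma20, 1.5)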

Figure 1: Breaching of the 1.5°C threshold for realization 10 of HadCRUT5 (Methods).

 

Figure 2: Simulated forecast paths for HadCRUT5 (Methods).
Figure 3: Probability of breaching the 1.5°C and 2°C thresholds for HadCRUT5 (Methods).

In the second step, we compute, for each month, the ratio of paths that breach the 1.5°C and 2°C limits to the total number of paths. Plotting this proportion gives an estimate of the probability of breaching the PA limits over time. The results are presented in Figure 3, which shows the full evolution of the breach probabilities. This allows for a clear understanding of the uncertainty in the predictions and the dynamics of the probabilities over time.
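The second step then reduces to counting, for each month, the share of paths whose centered average exceeds the limit. A minimal sketch, assuming a matrix mas with one column per simulated path of the centered moving average (names illustrative):

# Monthly breach probability: share of paths whose centered average exceeds
# the limit in a given month.
function breach_probability(mas::Matrix{Float64}, limit::Float64)
    n_months, n_paths = size(mas)
    return [count(p -> !isnan(mas[t, p]) && mas[t, p] > limit, 1:n_paths) / n_paths
            for t in 1:n_months]
end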

Furthermore, some key results are presented in Table 1. The table shows the first month the 1.5°C and 2°C limits are breached at a given probability level.

Table 1: First month in which the 1.5°C and 2°C thresholds are breached at a given probability level, for the HadCRUT5 temperature anomalies. Both 20-year and 30-year average temperatures are considered. Dates are in the format YYYY-MM.

Probability level and period     1.5°C threshold    2°C threshold
Above 1%, 20-year avg.           2024-10            2042-06
Above 50%, 20-year avg.          2030-06            2055-08
Above 99%, 20-year avg.          2040-11            2067-10
Above 1%, 30-year avg.           2030-09            2047-03
Above 50%, 30-year avg.          2035-07            2060-07
Above 99%, 30-year avg.          2045-07            2072-06

The simulation study considered here shows that the probability of breaching the 1.5°C limit is already greater than zero for 2024.

This means that in at least one percent of the scenarios, the 20-year average temperature rise breaches the 1.5°C limit already in 2024. Even though this probability falls into the exceptionally unlikely category of the IPCC likelihood scale24, it indicates that the limit is at risk of being breached in the near future under what can be considered a worst-case scenario. Moreover, note that the probability of breaching the 1.5°C limit increases rapidly after 2030, exceeding 50% by mid-2030 (Table 1). This is in line with recent predictions that the goal will likely be breached in the second half of the 2030s17,18. Our simulation study provides an estimate of the monthly probabilities of breaching the goals. In addition, the results show that the probability of breaching the 1.5°C limit exceeds 99% by 2041 if no action is taken to reduce greenhouse gas emissions.

Regarding the 2°C limit, our simulation study finds that the probability of a breach rises above 1% in the early 2040s. More generally, the simulation study shows the cost of delaying action to reduce greenhouse gas emissions. The results suggest that climate change mitigation policies should be implemented as soon as possible to avoid breaching the limits set by the PA.

How have the probabilities changed since the Paris Agreement?

Figure 3 also presents the probabilities of breaching the 1.5°C and 2°C limits as estimated at the start of the PA. The comparison shows the effect that recent high temperatures have had on the probabilities of breaching the limits. At the start of the PA, the probability of breaching the 1.5°C limit did not reach 99% until the second half of the 2040s. These results are in line with the exercise by the Copernicus Climate Change Service on the time lost since the PA, which considered a point estimate25; we instead provide the full breach probabilities. The probabilities have increased significantly since the PA entered into force, highlighting that the urgency of the situation has grown.

Alternative data sources

The simulation study presented above is based on the HadCRUT5 dataset. However, the methodology is easily extended to other datasets (Methods). Regardless of how pre-industrial levels are measured, the simulation study shows that the probability of breaching the 1.5°C limit exceeds 50% by the mid-2030s. Moreover, the probability of breaching the 1.5°C limit is greater than 99% by 2050 for all datasets.

Discussion and further work

We have presented a new way to communicate when we will breach the temperature limits set out by the PA. Our methodology is simple to implement. It requires predicting future temperatures under different scenarios and calculating the number of possible outcomes that breach the limits as a proportion of the total number of outcomes.

Our simulation study shows that a breach of the 1.5°C limit is more than 50% likely by the mid-2030s, and this result is robust across datasets. Moreover, our results show that the probabilities of breaching the limits have increased significantly since the PA, highlighting that the actions taken so far have not been sufficient to avoid breaching the limits. Our results are based on the assumption that no structural changes will occur in the future. In that sense, they can be interpreted as a scenario in which no action is taken to reduce greenhouse gas emissions.

Table 1 also shows the breach probabilities when the midpoint of a 30-year average is used. The motivation for considering the 30-year average temperature is that baseline periods for climate data are often defined as 30-year averages5–7. Moreover, some studies use the 30-year average temperature to determine when the limits are breached17. The results are broadly in line with those for the 20-year average: the breach dates under the 30-year average are shifted later by roughly five years on average, reflecting the longer wait required before a breach can be determined. Furthermore, note that we use the midpoint of the 20-year or 30-year period to date the breach; the dates shift deterministically 10 or 15 years into the future, respectively, if the last month of the period is used instead.

The methodology can be easily extended to include different scenarios of future emissions and more complex models of the climate system. Climate models already provide a range of possible outcomes for future temperatures; our methodology can be easily applied to these models. We encourage model developers to include the probabilities of breaching the limits in their reports.

References

1.
Copernicus Climate Change Service. Surface air temperature for January 2024. (2024).
2.
Thompson, A. Earth will exceed 1.5 degrees celsius of warming this year. Scientific American (2024).
3.
IPCC. Summary For Policymakers. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. 3–32 (Cambridge University Press, Cambridge, United Kingdom; New York, NY, USA, 2021). doi:10.1017/9781009157896.001.
4.
5.
6.
Morice, C. P. et al. An updated assessment of near-surface temperature change from 1850: The HadCRUT5 data set. Journal of Geophysical Research: Atmospheres 126, e2019JD032361 (2021).
7.
Rohde, R. A. & Hausfather, Z. The Berkeley Earth land/ocean temperature record. Earth System Science Data 12, 3469–3479 (2020).
8.
Huang, B., Yin, X., Menne, M. J., Vose, R. S. & Zhang, H.-M. NOAA global surface temperature dataset (NOAAGlobalTemp). (2024) doi:10.25921/rzxg-p717.
9.
Nath, S., Lejeune, Q., Beusch, L., Seneviratne, S. I. & Schleussner, C.-F. MESMER-m: An earth system model emulator for spatially resolved monthly temperature. Earth System Dynamics 13, 851–877 (2022).
10.
Eyring, V. et al. Overview of the coupled model intercomparison project phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development 9, 1937–1958 (2016).
11.
Held, I. M. et al. Structure and performance of GFDL’s CM4.0 climate model. Journal of Advances in Modeling Earth Systems 11, 3691–3727 (2019).
12.
Collins, M., Tett, S. F. B. & Cooper, C. The internal climate variability of HadCM3, a version of the Hadley Centre coupled model without flux adjustments. Climate Dynamics 17, 61–81 (2001).
13.
14.
Meinshausen, M., Raper, S. C. B. & Wigley, T. M. L. Emulating coupled atmosphere-ocean and carbon cycle models with a simpler model, MAGICC6 – part 1: Model description and calibration. Atmospheric Chemistry and Physics 11, 1417–1456 (2011).
15.
16.
Bennedsen, M., Hillebrand, E. & Koopman, S. J. A Statistical Reduced Complexity Climate Model for Probabilistic Analyses and Projections. arXiv (2024).
17.
Copernicus Climate Change Service, C. C3S global temperature trend monitor. European Centre for Medium-Range Weather Forecasts (2024).
18.
Rohde, R. October 2024 temperature update. Berkeley Earth (2024).
19.
Wigley, T. M. L. & Raper, S. C. B. Interpretation of high projections for global-mean warming. Science 293, 451–454 (2001).
20.
Schneider, S. H. What is ’dangerous’ climate change? Nature 411, 17–19 (2001).
21.
Schneider, S. H. & Mastrandrea, M. D. Probabilistic assessment of “dangerous” climate change and emissions pathways. Proceedings of the National Academy of Sciences 102, 15728–15735 (2005).
22.
Schneider, T. et al. Harnessing AI and computing to advance climate modelling and prediction. Nature Climate Change 13, 887–889 (2023).
23.
Abel, K., Eggleston, S. & Pullus, T. Quantifying uncertainties in practice. IPCC good practice guidance and uncertainty management in national greenhouse gas inventories. (2002).
24.
Mastrandrea, M. D. et al. Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. (2010).
25.
Copernicus Climate Change Service, C. We’ve “lost” 19 years in the battle against global warming since the paris agreement. European Centre for Medium-Range Weather Forecasts (2023).
26.
Bezanson, J., Edelman, A., Karpinski, S. & Shah, V. B. Julia: A fresh approach to numerical computing. SIAM review 59, 65–98 (2017).
27.
Bouchet-Valat, M. & Kamiński, B. DataFrames.jl: Flexible and fast tabular data in Julia. Journal of Statistical Software 107, 1–32 (2023).
28.
Dadej, M. MarSwitching.jl: A Julia package for Markov switching dynamic models. Journal of Open Source Software 9, 6441 (2024).
29.
Vera-Valdés, J. E. LongMemory.jl: Generating, estimating, and forecasting long memory models in Julia. arXiv preprint arXiv:2401.14077 (2024).
30.
Quinn, J. et al. JuliaData/CSV.jl: v0.10.15. Zenodo https://doi.org/10.5281/zenodo.13955982 (2024).
31.
Breloff, T. Plots.jl. Zenodo https://doi.org/10.5281/zenodo.14094364 (2024).
32.
Huang, B. et al. NOAA extended reconstructed sea surface temperature (ERSST), version 5. (2017) doi:10.7289/V5T72FNM.
33.
Lee, J.-Y. et al. Future global climate: Scenario-based projections and near-term information. in Climate change 2021: The physical science basis. Contribution of working group i to the sixth assessment report of the intergovernmental panel on climate change 553–672 (Cambridge University Press, 2021).
34.
Jarvis, A. & Forster, P. M. Estimated human-induced warming from a linear temperature and atmospheric CO2 relationship. Nature Geoscience 1–3 (2024).
35.
Akaike, H. A new look at the statistical model identification. IEEE Transactions on Automatic Control 19, 716–723 (1974).
36.
Schwarz, G. Estimating the dimension of a model. The Annals of Statistics 6, 461–464 (1978).
37.
NOAA National Centers for Environmental Information. Monthly global climate report for annual 2015. (2016).
38.
Wooldridge, J. M. Econometric Analysis of Cross Section and Panel Data. (MIT press, 2010).
39.
Thirumalai, K., DiNezio, P. N., Okumura, Y. & Deser, C. Extreme temperatures in Southeast Asia caused by El Niño and worsened by global warming. Nature Communications 8, 15531 (2017).
40.
41.
Thirumalai, K. et al. Future increase in extreme El Niño supported by past glacial changes. Nature 634, 374–380 (2024).
42.
Ham, Y.-G., Kim, J.-H. & Luo, J.-J. Deep learning for multi-year ENSO forecasts. Nature 573, 568–572 (2019).
43.
L’Heureux, M. L. et al. ENSO prediction. in El Niño Southern Oscillation in a Changing Climate 227–246 (American Geophysical Union (AGU), 2020). doi:10.1002/9781119548164.ch10.
44.
Hassanibesheli, F., Kurths, J. & Boers, N. Long-term ENSO prediction with echo-state networks. Environmental Research: Climate 1, 011002 (2022).
45.
Hamilton, J. D. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica: Journal of the econometric society 357–384 (1989).
46.
Hamilton, J. D. Analysis of time series subject to changes in regime. Journal of Econometrics 45, 39–70 (1990).
47.
Hurst, H. E. The problem of long-term storage in reservoirs. International Association of Scientific Hydrology. Bulletin 1, 13–27 (1956).
48.
Bloomfield, P. & Nychka, D. Climate spectra and detecting climate change. Climatic Change 21, 275–287 (1992).
49.
Bloomfield, P. Trends in global temperature. Climatic Change 21, 1–16 (1992).
50.
Vera-Valdés, J. E. Temperature anomalies, long memory, and aggregation. Econometrics 9, 1–22 (2021).
51.
Granger, C. W. J. Long memory relationships and the aggregation of dynamic models. Journal of Econometrics 14, 227–238 (1980).
52.
Zaffaroni, P. Contemporaneous aggregation of linear dynamic models in large economies. Journal of Econometrics 120, 75–102 (2004).
53.
Haldrup, N. & Vera-Valdés, J. E. Long memory, fractional integration, and cross-sectional aggregation. Journal of Econometrics 199, 1–11 (2017).
54.
Shimotsu, K. & Phillips, P. C. Exact local whittle estimation of fractional integration. The Annals of Statistics 33, 1890–1933 (2005).
55.
Granger, C. W. J. & Joyeux, R. An introduction to long‐memory time series models and fractional differencing. Journal of Time Series Analysis 1, 15–29 (1980).
56.
Hosking, J. R. M. Fractional differencing. Biometrika 68, 165–176 (1981).
57.
Allen, M. et al. Special report: Global warming of 1.5°C. Intergovernmental Panel on Climate Change (IPCC) 27, 677 (2018).

Methods

Data

The temperature data used in this paper are the global mean temperature anomalies from the HadCRUT5 dataset computed by the Met Office Hadley Centre6. The data are available from 1850, and the last observation in the dataset at the time of writing was September 2024. As noted before, the dataset reports temperature anomalies as the difference between the observed temperature and the 1961-1990 average temperature. Hence, we first converted the data to anomalies relative to pre-industrial levels, defined as the average temperature from the earliest available data up to 1900.
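The conversion amounts to subtracting the mean pre-1900 anomaly from the whole series. A minimal sketch, assuming a DataFrame with date and anomaly columns (column names illustrative):

using DataFrames, Dates, Statistics

# Shift anomalies from the 1961-1990 baseline to a pre-industrial baseline,
# defined here as the mean anomaly over the period up to 1900.
function to_preindustrial(df::DataFrame)
    offset = mean(df.anomaly[df.date .<= Date(1900, 12, 31)])
    out = copy(df)
    out.anomaly = df.anomaly .- offset
    return out
end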

HadCRUT5 provides 200 realizations to account for the uncertainty in the data. We use all realizations to fit the models and produce multiple scenarios of future temperature in the simulation study. This allows us to account for the uncertainty in the data and to provide a range of possible outcomes.

Data for the El Niño effect are obtained from the Oceanic Niño Index (ONI) dataset provided by the National Oceanic and Atmospheric Administration (NOAA)32. The ONI data are available from 1854, and the last observation at the time of writing was September 2024. The ONI is used as a covariate in the models to control for the El Niño effect, as discussed below. Hence, the models are fitted to the temperature data and the ONI data from 1854 to September 2024.

Modeling scheme

Our modeling scheme consists of three components: a trend specification, an El Niño Southern Oscillation (ENSO) model, and a long-range dependent error term. We provide a detailed description of each component below.

Trend models

We consider three trend specifications for modeling the global mean temperature anomaly: a linear trend model, a quadratic trend model, and a linear trend that allows for a break. The models are given by:

  • Linear trend: y_t = \beta_0 + \beta_1 t + \gamma ONI_t + \epsilon_t,

  • Quadratic trend: y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \gamma ONI_t + \epsilon_t,

  • Trend with break: y_t = \beta_0 + \beta_1 t + \beta_2 I_{t > t_0} + \gamma ONI_t + \epsilon_t.

Above, y_t is the global mean temperature anomaly at time t, \beta_0, \beta_1, and \beta_2 are the trend coefficients, \gamma is the coefficient of the El Niño effect, ONI_t is the variable that models the El Niño events (see below), and \epsilon_t is the error term. As described in the following, the error term is assumed to have long-range dependence. The variable I_{t > t_0} is an indicator variable that takes the value 1 if t > t_0 and 0 otherwise. The break point t_0 is estimated from the data.

Although trend models can be too simplistic to capture the complexity of the climate system in the long term, the IPCC justifies the use of trend models by recognizing that the parts of the climate system that have shown clear increasing or decreasing trends in recent decades will continue these trends for at least the next twenty years33. Furthermore, linear models have the additional advantage of providing robustness, reduced uncertainty, and transparency34.

The models are estimated on the historical temperature data. The best model is selected according to the Akaike Information Criterion (AIC)35, and the Bayesian Information Criterion (BIC)36. For each realization, the model with the lowest AIC and BIC is considered the best model and is used to predict future temperatures.
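A minimal sketch of this step, assuming vectors y (anomalies) and oni (the ONI covariate) and a candidate break index t0 are available; in practice t0 is chosen by searching over candidate dates, and the criteria here use Gaussian likelihoods, so the sketch is illustrative rather than a reproduction of our code:

# Ordinary least squares fit with Gaussian AIC and BIC.
function ols_ic(X::Matrix{Float64}, y::Vector{Float64})
    T = length(y)
    beta = X \ y
    resid = y - X * beta
    s2 = sum(abs2, resid) / T
    loglik = -T / 2 * (log(2π) + log(s2) + 1)
    k = size(X, 2) + 1                      # coefficients plus the error variance
    return (beta = beta, aic = 2 * k - 2 * loglik, bic = k * log(T) - 2 * loglik)
end

T = length(y)
trend = collect(1.0:T)
X_linear = hcat(ones(T), trend, oni)
X_quad   = hcat(ones(T), trend, trend .^ 2, oni)
X_break  = hcat(ones(T), trend, Float64.(trend .> t0), oni)

fits = Dict(
    "linear"    => ols_ic(X_linear, y),
    "quadratic" => ols_ic(X_quad, y),
    "break"     => ols_ic(X_break, y),
)
best = argmin(name -> fits[name].aic, collect(keys(fits)))   # model with the lowest AIC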

For example, the AIC and BIC for the trend models fitted to realization 10 of the HadCRUT5 dataset are presented in Table 2.

Table 2: Information criteria for model selection, for models fitted to realization 10 of the HadCRUT5 dataset.

Model               AIC         BIC
Linear Trend        -5613.20    -5596.64
Quadratic Trend     -6551.17    -6529.09
Trend with Break    -6627.33    -6605.25

Hence, the trend with break model is selected as the best model for realization 10. The break point is estimated to be in 1976-08.

The trend with break model is selected for 190 of the 200 realizations; the quadratic trend model is selected for the remaining 10. The estimated break dates are centered on the 1970s, which aligns with reports of a change in the climate system around that decade37. The dates are reported in the online notebook.

The confidence intervals for the estimated coefficients are used to simulate future values of the temperature anomaly. Under mild conditions on the error term, the coefficient estimators are asymptotically normally distributed38.
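For illustration, coefficient uncertainty can be propagated by drawing from this asymptotic distribution. The sketch below uses the textbook OLS covariance \hat{\sigma}^2 (X'X)^{-1}, which assumes i.i.d. errors; with long-range dependent errors a robust covariance estimator would be needed. Distributions.jl is assumed available and is not among the packages listed in the Code availability section:

using LinearAlgebra, Distributions

# Draw ndraws coefficient vectors from the estimated asymptotic normal
# distribution of the OLS estimator.
function simulate_coefficients(X::Matrix{Float64}, y::Vector{Float64}, ndraws::Int)
    T, k = size(X)
    bhat = X \ y
    s2 = sum(abs2, y - X * bhat) / (T - k)
    Sigma = Symmetric(s2 * inv(X' * X))
    return rand(MvNormal(bhat, Matrix(Sigma)), ndraws)   # k × ndraws matrix of draws
end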

El Niño effect

We control for the El Niño effect as it is known to have an effect on the global mean temperature anomaly39,40. El Niño Southern Oscillation (ENSO) is a natural climate phenomenon characterized by periodic warming of sea surface temperatures in the central and eastern equatorial Pacific Ocean.

Modeling the El Niño effect is crucial for predicting future temperatures. To control for it, we include the Oceanic Niño Index (ONI) as a covariate in the models, as described above. The ONI is an indicator for monitoring ENSO: El Niño conditions are present when the ONI is +0.5 or higher, and La Niña conditions when it is -0.5 or lower.
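For reference, the phase classification implied by these cut-offs can be written as a one-line helper (illustrative only; the finer intensity bins behind the regimes estimated below come from the data rather than from fixed thresholds):

# Classify a month by ENSO phase from its ONI value.
enso_phase(oni) = oni >= 0.5 ? "El Niño" : (oni <= -0.5 ? "La Niña" : "Neutral")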

One complication is that the El Niño effect is difficult to predict. El Niño events are highly variable and can differ in intensity. ENSO can also interact with other climate phenomena, such as the Indian Ocean Dipole and the Madden-Julian Oscillation, which makes it challenging to model accurately41–44. In this study, we use a simple model to capture the El Niño effect.

The dynamics of the ONI are modeled using a Markov-switching model45. The Markov-switching model is a regime-switching model that allows for the presence of different regimes in the data. The model is given by: ONI_t = \beta_{j} + \epsilon_{j,t},

where \beta_{j} is the coefficient for the j-th regime, and \epsilon_{j,t} is the error term with variance \sigma^2_j. A latent state at time t, s_t, indicates the regime. The dynamics of s_t are governed by a Markov process: Pr(s_t = j | s_{t-1} = i, s_{t-2}, \cdots, s_1) = Pr(s_t = j | s_{t-1} = i) = p_{ij}, where p_{ij} is the transition probability from state i to j.

Note that the probability distribution of s_t given the entire path \left\{s_{t−1}, s_{t−2}, \cdots, s_1\right\} depends only on the most recent state s_{t−1}.

On historical data, the model can be estimated by maximum likelihood using the expectation-maximization algorithm46. For forecasting, ONI paths are simulated from the estimated regime process, taking into account the transition probabilities between regimes.
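A minimal sketch of the forecasting step, assuming estimated regime means mu, regime standard deviations sigma, a transition matrix P, and an initial state s0; this illustrates the simulation logic rather than reproducing the MarSwitching.jl interface:

using Random

# Simulate a future ONI path by drawing regimes from the Markov chain and
# adding regime-specific Gaussian noise.
function simulate_oni(mu::Vector{Float64}, sigma::Vector{Float64},
                      P::Matrix{Float64}, s0::Int, horizon::Int;
                      rng = Random.default_rng())
    oni = zeros(horizon)
    s = s0
    for t in 1:horizon
        u = rand(rng)
        s = something(findfirst(cumsum(P[s, :]) .>= u), size(P, 2))
        oni[t] = mu[s] + sigma[s] * randn(rng)
    end
    return oni
end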

We use the AIC and BIC to determine the number of regimes. We consider a range of possible regime counts and select the number that minimizes the AIC and BIC. Table 3 shows the AIC and BIC for the ONI data. Only odd numbers of regimes are considered so that the model includes El Niño and La Niña intensities as well as neutral conditions.

Table 3: Information criteria for selecting the number of regimes in the Markov-switching model for the ONI data.

Regimes     AIC     BIC
3 regimes   2438    2504
5 regimes   2342    2507
7 regimes   1394    1703

Hence, the number of states in the Markov-switching model is seven. The seven states are chosen to correspond to the different phases of the ENSO cycle ranging from very strong El Niño, strong El Niño, moderate El Niño, neutral, moderate La Niña, strong La Niña, to very strong La Niña.

Long-range dependent error term

Finally, our modeling scheme allows the error term to have long-range dependence. Long-range dependence means that past values of the series have a long-lasting effect on its current value: successive values tend to remain close to each other, or dependent, over long horizons. Interestingly, the notion of long-range dependence originated in the analysis of climate-related data47. A long-range dependent model can be written as: y_t = \sum_{j=1}^\infty \phi_j y_{t-j} + \epsilon_t, where \epsilon_t is an i.i.d. process. The coefficients \phi_j decay hyperbolically (slowly) to zero as j increases. In contrast, the coefficients of standard short-memory models decay exponentially to zero.

Temperature data are known to have long-range dependence, which means that the error terms are correlated over long periods48–50. One likely explanation behind the presence of long-range dependence in the data is aggregation51–53. The global mean temperature anomaly is an aggregate of temperature data from different regions, which can lead to long-range dependence in the data. In the context of breaching the limits set out by the PA, the long-range dependence in the data is crucial since it affects the predicted temperature rise.

We used the exact local Whittle estimator to estimate the long-range dependence in the data54. The exact local Whittle estimator minimizes the function given by: R(d) = \log\left(\frac{1}{m}\sum_{k=1}^{m}I_{\Delta^d}(\lambda_k)\right)-\frac{2d}{m}\sum_{k=1}^{m}\log(\lambda_k),

where I_{\Delta^d}(\lambda_k) is the periodogram of (1-L)^d x_t, with (1-L)^d the fractional difference operator55,56, \lambda_k = 2\pi k/T are the Fourier frequencies, and m is the bandwidth parameter.
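A minimal sketch of the estimator, assuming FFTW.jl for the fast Fourier transform (not among the packages listed in the Code availability section); the fractional difference is computed with the recursive expansion of (1-L)^d, and d is found by a simple grid search rather than numerical optimization:

using FFTW

# Apply the fractional difference (1-L)^d to a series x.
function fracdiff(x::Vector{Float64}, d::Float64)
    T = length(x)
    coef = ones(T)                      # coefficients of the expansion of (1-L)^d
    for j in 2:T
        coef[j] = coef[j-1] * (j - 2 - d) / (j - 1)
    end
    return [sum(coef[1:t] .* x[t:-1:1]) for t in 1:T]
end

# Exact local Whittle objective R(d) with bandwidth m.
function elw_objective(x::Vector{Float64}, d::Float64, m::Int)
    T = length(x)
    dx = fracdiff(x, d)
    pgram = abs2.(fft(dx)) ./ (2π * T)  # periodogram at the Fourier frequencies
    lam = 2π .* (1:m) ./ T
    return log(sum(pgram[2:m+1]) / m) - (2 * d / m) * sum(log.(lam))
end

# Grid search for the memory parameter d.
elw_estimate(x, m) = argmin(d -> elw_objective(x, d, m), 0.0:0.01:0.99)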

The exact local Whittle estimator is consistent and asymptotically normal. The long-range dependence parameter is estimated for each realization separately. The estimated parameter is then used to simulate the error term in the models.
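A minimal sketch of the simulation step, generating a fractionally integrated error term with memory parameter d through a truncated MA(∞) expansion; LongMemory.jl provides ready-made generators, so this is illustrative only:

using Random

# Simulate T observations of a fractionally integrated error term with
# innovation standard deviation sigma.
function simulate_long_memory(T::Int, d::Float64, sigma::Float64;
                              rng = Random.default_rng())
    psi = ones(T)                       # MA weights of (1-L)^(-d)
    for j in 2:T
        psi[j] = psi[j-1] * (j - 2 + d) / (j - 1)
    end
    innov = sigma .* randn(rng, T)
    return [sum(psi[1:t] .* innov[t:-1:1]) for t in 1:T]
end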

Model validation

For model validation, we obtain the prediction intervals for temperature anomalies using our modeling scheme fitted to data up to November 2016, the month when the PA entered into force. All HadCRUT5 realizations are considered. The results, presented in Figure 4 and available in the online notebook, show that our models provide adequate coverage of the observed temperature anomalies up to the present day. We take this as validation of our modeling scheme. For comparison, the figure also presents the temperature projections from the Summary for Policymakers of the IPCC Special Report: Global Warming of 1.5°C57. These paths show the projected temperature evolution according to the IPCC if CO_2 emissions gradually decrease to zero by 2055 while other greenhouse gas levels stop changing after 2030. Recent temperatures lie outside the IPCC projections; hence, the coverage of the IPCC projections is lacking, and the projections are likely too optimistic.
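The coverage check itself is straightforward: for each validation month, we ask whether the observed anomaly falls inside the interval spanned by the simulated paths. A minimal sketch, assuming a matrix sims with one column per simulated path over the validation months and a vector obs of observed anomalies (names illustrative):

using Statistics

# Share of validation months in which the observation lies inside the
# simulated prediction interval.
function interval_coverage(sims::Matrix{Float64}, obs::Vector{Float64};
                           lower = 0.025, upper = 0.975)
    covered = count(t -> quantile(sims[t, :], lower) <= obs[t] <= quantile(sims[t, :], upper),
                    eachindex(obs))
    return covered / length(obs)
end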

Figure 4: Coverage of the prediction intervals for the HadCRUT5 dataset. The IPCC projections for the minimum and maximum temperature anomalies are shown as dashed lines.

Number of simulated paths

To ease the computational burden, we fit the models to each realization separately and simulate five different scenarios of future temperature per realization by simulating different paths for the El Niño effect. This gives a total of 1,000 scenarios of future temperatures. The methodology can be easily extended to include more realizations and scenarios.

Alternative data sources

The simulation study presented in the manuscript is based on the HadCRUT5 dataset. In the following, we present the results of the simulation study using alternative temperature anomaly datasets.

Figure 5 presents the results of the simulation study for the 1.5°C limit using GISTEMP5, Berkeley Earth7, and NOAAGlobalTemp8, together with the HadCRUT5 dataset. GISTEMP is produced by the NASA Goddard Institute for Space Studies, Berkeley Earth is produced by the Berkeley Earth project, and NOAAGlobalTemp is produced by the National Oceanic and Atmospheric Administration. The information criteria for GISTEMP and Berkeley Earth are minimized by the trend with break model, while the information criteria for NOAAGlobalTemp are minimized by the quadratic trend model. To ease comparison, the results shown use the trend with break model for all datasets. All computations are based on the simulation studies presented in the online notebooks, which can also be used to recover the results for the quadratic trend model for NOAAGlobalTemp.

Figure 5: Probability of breaching the 1.5°C limit for different datasets.

The datasets use different methodologies to produce the temperature anomaly data. These methodological differences lead to different pre-industrial temperature estimates and hence to different estimates of warming since pre-industrial times. Berkeley Earth shows more warming since pre-industrial times than HadCRUT5, while NOAAGlobalTemp shows less.

Data availability

All data considered in this paper are freely available in the repositories of the respective datasets cited in the manuscript.

Code availability

The code used to perform the simulation study is available in online notebooks. The code is written in Julia26. Additional packages used in the simulation study are the DataFrames.jl package for data manipulation27, the MarSwitching.jl package for Markov-switching models28, the LongMemory.jl package for long-range dependent models29, the CSV.jl package to read and write CSV files30, and the Plots.jl package for plotting31.

The code is open-source and can be freely used and modified. We encourage other researchers to use the code to reproduce our results and to extend the methodology to other datasets and models.

Author contributions

J.E.V.-V. conceived the analysis and wrote the original draft. J.E.V.-V. and O.K. co-developed the analysis and co-edited subsequent drafts.

Competing interests

The authors declare no competing interests.

Citation

BibTeX citation:
@article{vera-valdés2024,
  author = {Vera-Valdés, J. Eduardo and Kvist, Olivia},
  title = {Breaching {1.5°C:} {Give} Me the Odds},
  journal = {arXiv},
  date = {2024-12-17},
  url = {https://arxiv.org/abs/2412.13855},
  doi = {10.48550/arXiv.2412.13855},
  langid = {en},
  abstract = {Climate change communication is crucial to raising
    awareness and motivating action. In the context of breaching the
    limits set out in the Paris Agreement, we argue that climate
    scientists should move away from point estimates and towards
    reporting probabilities. Reporting probabilities will provide
    policymakers with a range of possible outcomes, enabling them to
    make informed and timely decisions. To support this shift, we
    propose a method for calculating the probability of breaching the
    limits set out in the Paris Agreement. The method can be summarized
    as predicting future temperatures under different scenarios and
    calculating the number of possible outcomes that breach the limits
    as a proportion of the total number of outcomes. The probabilities
    can be computed for different time horizons and can be updated as
    new data become available. To illustrate the method, we performed a
    simulation study. Our results show that the probability of breaching
    the 1.5°C limit is already greater than zero for 2024. Furthermore,
    if no action is taken to reduce greenhouse gas emissions, the
    probability of breaching the limit exceeds 99\% by 2041. Our
    methodology is simple to implement and can be easily adapted to more
    complex climate models. We encourage climate model developers to
    include probabilities of breaching these limits in their reports.}
}
For attribution, please cite this work as:
Vera-Valdés, J. E. & Kvist, O. Breaching 1.5°C: Give me the odds. arXiv (2024) doi:10.48550/arXiv.2412.13855.