The 1.5°C limit
The goals of the Paris Agreement (PA) have recently gained renewed media attention due to observed temperature anomalies that exceeded 1.5°C above pre-industrial levels for 12 consecutive months according to Copernicus Climate Change Service (2024a). The importance of the 1.5°C threshold is that it was established in the PA as a limit to avoid the most severe consequences of climate change. Formally, the PA aims to limit global warming to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C.
An obstacle in assessing the success or failure of the PA is the lack of a clear definition of when temperature limits are breached (Betts et al. 2023). The definition of when the limits are breached is crucial for both scientific and political reasons.
If we define breaching the 1.5°C limit as a single year's mean temperature exceeding that value, it has already been breached.
However, to avoid short-term fluctuations, the Sixth Assessment Report of Working Group I of the Intergovernmental Panel on Climate Change (IPCC) proposes using a 20-year average temperature rise to determine when the limit is exceeded (IPCC 2021). The question remains of when, within that 20-year period, the limit should be considered breached.
Betts et al. (2023) argue that defining the breach of the 1.5°C limit as the last year in a 20-year period where the global mean temperature is above that limit delays the conclusion of a breach by a decade. They propose using the midpoint of the 20-year period as the year when the limit is breached. Thus, computing when the threshold will be breached entails averaging several years of observed temperature rise with a forecast of the following years up to the 20-year period. We extend this methodology to provide the probability of breaching the 1.5°C and 2°C limits with the aim of improving the communication of climate change.
Improving communication of climate change
One of the main challenges in communicating climate change is the complexity of the topic. This complexity makes it difficult to communicate the issue in a way that is easily understandable to the general public. In the context of breaching the limits set out by the PA, communication is crucial. The issue can become highly politicized if not communicated effectively. The public and policymakers need timely information about the urgency of the situation and the consequences of inaction.
One of the first steps in improving communication is to provide data in a clear and understandable way. Datasets report temperature anomalies as the difference between the observed temperature and the average temperature for a reference period (GISTEMP 2020; Morice et al. 2021; R. A. Rohde and Hausfather 2020). Even though the PA states that the reference period should be pre-industrial levels, the datasets typically use a more recent reference period. For example, the HadCRUT5 dataset uses the 1961-1990 average temperature as the reference period.
Figure 1 shows temperature anomalies as reported by the HadCRUT5 dataset. The figure shows that if we use the 1961-1990 average temperature as the reference period, as presented in the dataset, the temperature anomalies have not breached the 1.5°C limit yet. However, if we use pre-industrial levels as the reference period, as indicated in the PA, the limit has already been breached several times. This mismatch between the reference period used in the datasets and the reference period in the PA can lead to misunderstandings and misinterpretations. A sceptic reading a news article that reports temperature anomalies breaching the 1.5°C limit above pre-industrial levels can easily download and plot the data and, unaware of the reference period used, come away with the impression that the headline is an exaggeration.
All datasets should use the same reference period based on the pre-industrial levels. This will help to avoid confusion and to make it easier to compare the data. However, for historical reasons, data providers should also report temperature anomalies relative to their original reference period. This will help maintain compatibility with previous reports and models trained on the original data.
Predictions for the breaching of the PA limits
It should be stressed in any report that determining when the 1.5°C limit will be breached requires forecasting future temperatures. Forecasts can take many forms. The most common are physical models that simulate the climate system [see, e.g., Nath et al. (2022); Eyring et al. (2016); Held et al. (2019); Collins, Tett, and Cooper (2001); Orbe et al. (2020)]. Physics-based models are computationally expensive and require high-performance computing. Hence, reduced-complexity models have been developed. These models are based on statistical methods and are trained on historical data of different climate variables [see, e.g., Meinshausen, Raper, and Wigley (2011); Smith et al. (2024); Bennedsen, Hillebrand, and Koopman (2024)].
Regardless of the method used to predict future temperatures, forecasts are uncertain. The climate system is complex and chaotic. This complexity is reflected in the confidence intervals associated with the forecasts. For example, the IPCC provides a range of possible outcomes for future temperatures. However, the uncertainty in the forecasts is not communicated effectively when discussing breaching the limits set out by the PA.
The media has recently reported new estimates on when the 1.5°C limit will be breached (Copernicus Climate Change Service 2024b; R. Rohde 2024). However, these estimates are often presented as point estimates without confidence intervals or without a clear description of the methodology used to make the predictions. In the current political environment, it is crucial to communicate the uncertainty in the predictions.
Recent point estimates of when the 1.5°C limit will be breached can be counterproductive if not accompanied by probability estimates. If the limit is not breached in precisely the year predicted, it can give climate change deniers an argument to dismiss scientific evidence. In the past, extreme winters have been used as an argument against global warming due to a misunderstanding of the difference between weather and climate. Whereas weather refers to local conditions observed over short periods, climate describes long-term patterns. The distinction between weather and climate must be clear in any communication to avoid misrepresentation of the results.
A new methodology to measure when we will breach the limit of 1.5°C
We propose a way to communicate the uncertainty in the predictions of when the limits set out by the PA will be breached. The methodology builds on the proposal by Betts et al. (2023) to use a 20-year average temperature rise centered around a particular year. The 20-year average is then compared with the 1.5°C and 2°C limits. We use models to produce multiple scenarios of future temperature rise and compute the number of scenarios that breach the limits as a proportion of the total number of scenarios. The probabilities can be computed for different time horizons and can be updated as new data become available. Moreover, the methodology can be easily applied to different climate models and datasets.
There are already several examples of how probabilities can be used to communicate climate change effectively [see, e.g., IPCC (2021); Wigley and Raper (2001); S. H. Schneider (2001); S. H. Schneider and Mastrandrea (2005); T. Schneider et al. (2023)]. By reporting probabilities, we can communicate the uncertainty in the predictions and provide policymakers with a range of possible outcomes. This will allow policymakers to make more informed decisions on taking action to reduce greenhouse gas emissions. Reporting in 2024 a probability of 50% that the limit will be breached in 2030 will give an indication of the urgency of the situation. The probability distribution will also reflect how the odds of avoiding the breach decrease over time if no action is taken. This will provide a clear picture of the consequences of delaying action.
To illustrate our methodology, we developed a simulation study. We simulate multiple scenarios of future temperature rise and calculate the probability of breaching the 1.5°C and 2°C limits. The simulation study is presented next.
A statistical model to predict future temperatures
Data. The data used in this paper are the global mean temperature anomalies of the HadCRUT5 dataset computed by the Met Office Hadley Centre (Morice et al. 2021). The data are reported as the difference between the observed temperature and the 1961-1990 average temperature and are available from 1850. We first convert the data to anomalies relative to pre-industrial levels. The pre-industrial levels are defined as the average temperature from the earliest available data up to 1900. The data are presented in Figure 1.
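As a minimal sketch of this re-baselining step (the file and column names are hypothetical; the actual HadCRUT5 files are structured differently), shifting to pre-industrial levels amounts to subtracting the mean of the reported anomalies up to 1900:

```julia
using CSV, DataFrames, Dates, Statistics

# Hypothetical file and column names, for illustration only.
df = CSV.read("hadcrut5_monthly.csv", DataFrame)      # columns: date, anomaly

# Pre-industrial baseline: mean anomaly from the start of the record to 1900.
preindustrial = mean(df.anomaly[year.(df.date) .<= 1900])

# Anomalies relative to pre-industrial levels.
df.anomaly_pi = df.anomaly .- preindustrial
```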
HadCRUT5 provides 200 realizations to account for the uncertainty in the data. We use all realizations to fit the models and produce multiple scenarios of future temperature rise. This allows us to account for the uncertainty in the data and to provide a range of possible outcomes. We fit the models to each realization separately and produce five different scenarios of future temperatures for each realization. This gives us a total of 1000 scenarios of future temperatures. The methodology can be easily extended to include more realizations and scenarios.
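The layout of the exercise can be sketched as follows; `fit_model` and `simulate_path` are trivial stand-ins for the trend, ONI, and long-memory steps described in the following sections, and the dummy data only make the snippet self-contained:

```julia
using Random

# Structural sketch: 200 HadCRUT5 realizations x 5 scenarios each = 1000 paths.
n_realizations, n_scenarios = 200, 5
horizon = 12 * 60                                  # monthly steps, 60 years ahead

realizations = [randn(12 * 175) for _ in 1:n_realizations]   # dummy historical data
fit_model(y) = (trend = 0.02 / 12, last = y[end])            # placeholder "fit"
simulate_path(m, h) = m.last .+ m.trend .* (1:h) .+ 0.1 .* randn(h)

paths = Matrix{Float64}(undef, horizon, n_realizations * n_scenarios)
for r in 1:n_realizations
    model = fit_model(realizations[r])
    for s in 1:n_scenarios
        paths[:, (r - 1) * n_scenarios + s] = simulate_path(model, horizon)
    end
end
```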
Modeling scheme. Our modeling scheme consists of three components: a trend specification, an El Niño Southern Oscillation (ENSO) model, and a long-range dependent error term. We provide a brief overview of the models. Further technical details on the models are presented in the supplementary material in the appendix, and the code used to perform the simulation study is available in a Jupyter notebook in the supplementary material.
We consider three trend specifications for modeling the global mean temperature anomaly: a linear trend model, a quadratic trend model, and a linear trend that allows for a break. The models are estimated on the historical temperature data. The best model is selected on the basis of the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) (Akaike 1974; Schwarz 1978). For each realization, the model with the lowest AIC and BIC is considered the best model and is used to predict future temperatures.
Furthermore, we control for the El Niño effect as it is known to have an effect on the global mean temperature anomaly (Thirumalai et al. 2017; Jiang et al. 2024). To control for the El Niño effect we include the Oceanic Niño Index (ONI) as a covariate in the models. The ONI is an indicator for monitoring the ENSO. El Niño conditions are present when the ONI is +0.5 or higher, and La Niña conditions exist when the ONI is -0.5 or lower.
For forecasting purposes, we fit a Markov-switching model to the ONI data to predict future values (Hamilton 1989, 1990). The motivation for using a Markov-switching model is that the ONI data naturally exhibit regime changes over time. The number of states in the Markov-switching model is 7, which is selected on the basis of the AIC and BIC. The seven states correspond to the different phases of the ENSO cycle, ranging from very strong El Niño, strong El Niño, moderate El Niño, neutral, moderate La Niña, strong La Niña, to very strong La Niña.
Finally, our modeling scheme allows for the error term to have long-range dependence. Long-range dependence has its origin in the analysis of climate data (Hurst 1956). Temperature data are known to have long-range dependence, which means that the error terms are correlated over long periods (Bloomfield and Nychka 1992; Bloomfield 1992; J. Eduardo Vera-Valdés 2021). The long-range dependence parameter is estimated using the exact local Whittle method (Shimotsu and Phillips 2005).
Model validation. We obtain the prediction intervals for temperature anomalies using our modeling scheme fitted to data up to November 2016, the month when the PA entered into force. All HadCRUT5 realizations are considered. The results, presented in the supplementary material, show that our models provide adequate coverage of the observed temperature anomalies up to the present day. We take this as validation of our modeling scheme.
Model fitting. As an illustration, we present a fitted model and its forecast for realization 10 of the HadCRUT5 dataset. Realization 10 is chosen arbitrarily. The model is fitted to the data up to the last observation. The model is then used to forecast future temperatures. The results are presented in Figure 2.
Figure 2 highlights the different components of the model: the trend, the long-range dependence, and the El Niño effect.
The trend component captures the long-term increase in the temperature anomaly, all other things being equal. The long-range dependence captures the persistence of the temperature anomaly over time. Given that recent temperatures are high, the long-range dependence in the data implies that future temperatures are likely to remain high. This directly affects the forecasted temperature and the probability of breaching the limits. Finally, the El Niño effect captures the short-term fluctuations in the temperature anomaly. The forecasted temperature anomaly is the sum of the trend, the long-range dependence, and the El Niño effect.
Breaching the limits. For each simulated path, we calculate the average temperature over 20 years using a moving average. We begin the process in 2004 to obtain a 20-year average temperature rise centered around 2014, with an end point in the current year. The moving average is then calculated for each month. We repeat this process until the end of the forecasted period. We then find the first month in which the 20-year average temperature rise breaches the 1.5°C and 2°C limits.
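A minimal sketch of this step, assuming `dates` and `anomalies` hold one monthly observed-plus-simulated path relative to pre-industrial levels, is:

```julia
using Statistics, Dates

# First month whose centered 20-year (240-month) window average exceeds a limit.
function first_breach(dates::Vector{Date}, anomalies::Vector{Float64},
                      limit::Float64; window::Int = 240)
    half = window ÷ 2
    for i in (half + 1):(length(anomalies) - half + 1)
        # average over the 240-month window centered (approximately) on month i
        if mean(@view(anomalies[(i - half):(i + half - 1)])) > limit
            return dates[i]
        end
    end
    return nothing            # limit not breached within the simulated path
end
```

For example, `first_breach(dates, anomalies, 1.5)` returns the first month in which the centered 20-year mean exceeds 1.5°C, or `nothing` if the limit is never breached within the path.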
Figure 3 shows that the 20-year average temperature for the simulated path of realization 10 first breaches the 1.5°C limit in July 2031. The gray box indicates the 20-year period used to calculate the average temperature rise, while the black dashed line indicates the average temperature over 20 years.
The month in which the limit is breached for this path is highly dependent on the El Niño effect. Hence, we conduct a simulation study to estimate the probability of breaching the limits.
Simulation study
Using the modeling scheme described above, we detail a way to compute the probability of breaching the limits set out by the PA using a simulation study. The use of Monte Carlo methods, such as the one used in this simulation study, is a common approach to estimating probabilities in complex systems, and it is pursued by the IPCC (Abel, Eggleston, and Pullus 2002). The simulation study has two main steps.
First, we forecast the global mean temperature anomaly using the best model selected using the information criteria. For each realization of the HadCRUT5 dataset, we simulate 5 different scenarios of future temperature rise by simulating different paths for the El Niño effect. This gives us a total of 1000 scenarios of future temperatures. Figure 4 shows the simulated temperature anomalies for a subset of the realizations to simplify visualization and plot rendering.
In a second step, we calculate the 20-year moving average centered around a particular month for each simulated path. We repeat this process for all simulated paths and recover the ratio of paths that breach the 1.5°C and 2°C limits each month to the total number of paths. We then plot this proportion of paths that crossed either threshold to obtain an estimate of the probability of breaching the limits. Figure 5 presents the results of the simulation study.
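The proportion can be computed with a short function along the following lines, where `paths` is assumed to be a months-by-paths matrix of anomalies relative to pre-industrial levels, as in the earlier sketch:

```julia
using Statistics

# Probability of breaching a limit in each month, estimated as the share of
# simulated paths whose centered 20-year (240-month) average exceeds the limit.
function breach_probability(paths::Matrix{Float64}, limit::Float64;
                            window::Int = 240)
    half = window ÷ 2
    nmonths, npaths = size(paths)
    probs = fill(NaN, nmonths)          # NaN where the window is incomplete
    for i in (half + 1):(nmonths - half + 1)
        avg = vec(mean(@view(paths[(i - half):(i + half - 1), :]), dims = 1))
        probs[i] = count(>(limit), avg) / npaths
    end
    return probs
end
```

Applying the function with `limit = 1.5` and `limit = 2.0` would yield probability curves analogous to those plotted in Figure 5.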
Some key results from the simulation study are presented in Table 1. The table shows the first month in which the probability of breaching the 1.5°C and 2°C limits exceeds a given level, considering both the 20-year and the 30-year average temperature.
Probability level and period | 1.5°C threshold | 2°C threshold |
---|---|---|
Above 0%, 20-year avg. | 2024-09-01 | 2033-11-01 |
Above 50%, 20-year avg. | 2030-07-01 | 2055-11-01 |
Above 99%, 20-year avg. | 2042-02-01 | 2068-04-01 |
Above 0%, 30-year avg. | 2029-09-01 | 2040-04-01 |
Above 50%, 30-year avg. | 2035-08-01 | 2060-11-01 |
Above 99%, 30-year avg. | 2046-12-01 | 2072-12-01 |
The simulation study considered here shows that the probability of breaching the 1.5°C limit is already greater than zero for 2024.
This means that there is at least one scenario in which the 20-year average temperature rise breaches the 1.5°C limit in September 2024. Moreover, note that there is a rapid increase in the probability of breaching the 1.5°C limit after 2030. The probability of breaching the limit is already greater than 50% by July 2030. This is in line with recent predictions that the goal will likely be breached in the second half of the 2030s (Copernicus Climate Change Service 2024b; R. Rohde 2024). Our simulation study provides estimates of the monthly probabilities of breaching the goals. These show that the probability of breaching the 1.5°C limit is greater than 99% by 2042 if no action is taken to reduce greenhouse gas emissions.
Regarding the 2°C limit, our simulation study finds that the probability of a breach already rises above zero in the 2030s. In general, the simulation study highlights that climate change mitigation policies should be implemented as soon as possible to avoid breaching the limits set by the PA.
Furthermore, Table 1 shows the breaching probabilities considering a 30-year average. The motivation for considering the 30-year average temperature is that baseline periods for climate data are often defined as 30-year averages (Morice et al. 2021; GISTEMP 2020; R. A. Rohde and Hausfather 2020). Moreover, some studies use the 30-year average temperature to determine when the limits are breached (Copernicus Climate Change Service 2024b). The results show that using the 30-year average delays the estimated breaching dates by roughly five years relative to the 20-year average.
How have the probabilities changed since the Paris Agreement
As model validation, Figure 6 presents the prediction intervals for temperature anomalies from the modeling scheme described above, starting in November 2016, the month when the PA entered into force. The results using the data up to the PA are presented in the supplementary Jupyter notebook.
The prediction intervals are based on the historical data up to the start of the PA and the models fitted to the data. The prediction intervals are used to assess the uncertainty in the forecasts. In general, the prediction intervals provide adequate coverage of the observed temperature anomalies. However, note that recent high temperatures fall outside the 99% prediction intervals. This further signals the abnormality of the recent temperature observations. Several theories have been proposed to explain recent high temperatures, including decreased cloud coverage and international shipping regulatory changes (Goessling, Rackow, and Jung; Quaglia and Visioni 2024). Regardless of the cause, the high temperatures highlight the urgency of the situation.
For comparison, the figure also presents the temperature projections from the summary for policymakers of the IPCC Special Report: Global Warming of 1.5°C (Allen et al. 2018). The paths show the projected temperature evolution according to the IPCC if CO_2 emissions gradually decline to zero by 2055 while other greenhouse gas levels stop changing after 2030. The figure shows that recent temperatures are outside the IPCC projections. Hence, the coverage of the IPCC projections is lacking, and the projections are likely too optimistic.
Furthermore, Figure 7 presents the probabilities of breaching the 1.5°C and 2°C limits at the start of the PA.
The figure allows us to assess how the probability of breaching the limits has changed since the PA. At the start of the PA, the probability of breaching the 1.5°C limit did not reach 99% until 2051, and the probability of breaching the 2°C limit did not reach 99% within the forecast period, which ends in 2083. The results relate to the exercise of Copernicus Climate Change Service (2023) on the time lost since the PA, which considers a point estimate, whereas we provide the probabilities of breaching the limits. The probabilities have increased significantly since the PA, which highlights that the urgency of the situation has increased.
Discussion and further work
We have presented a new way to communicate when we will breach the temperature limits set out by the PA. Our methodology is simple to implement. It requires predicting future temperatures under different scenarios and calculating the number of possible outcomes that breach the limits as a proportion of the total number of outcomes. The probabilities can be computed for different time horizons and datasets and can be updated as new data become available. Additional simulation exercises considering alternative datasets and sub-samples of realizations are presented in the supplementary material. They show that the breaching dates are robust to the choice of dataset. Moreover, an additional analysis of the probabilities of breaching the limits since the PA is presented in the supplementary material. It shows that the probabilities have increased significantly since the PA, highlighting that the actions taken so far have not been sufficient to avoid breaching the limits.
We have illustrated the methodology in a simulation study. The simulation study is based on statistical models trained on historical temperature data to predict future temperatures. Our results are based on the assumption that no structural changes will occur in the future. In that sense, our results could be interpreted as a scenario in which no action is taken to reduce greenhouse gas emissions from the current levels.
The methodology can be easily extended to include different scenarios of future emissions and more complex models of the climate system. Climate models such as MAGICC already provide a range of possible outcomes for future temperatures; our methodology can be easily applied to these models. We encourage climate model developers to include the probabilities of breaching the limits in their reports.
References
Smith, Chris, et al. 2024. "fair-calibrate v1.4.1: Calibration, Constraining, and Validation of the FaIR Simple Climate Model for Reliable Future Climate Projections." Geoscientific Model Development 17 (23): 8569–92. https://doi.org/10.5194/gmd-17-8569-2024.
Supplementary material
The supplementary material contains additional information on the models used in the simulation study. The components of the models are described in detail.
Trend models
We consider three trend specifications for modeling the global mean temperature anomaly: a linear trend model, a quadratic trend model, and a linear trend allowing for a break. The models are given by:
Linear Trend: y_t = \beta_0 + \beta_1 t + \gamma ONI_t + \epsilon_t,
Quadratic Trend: y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \gamma ONI_t + \epsilon_t,
Trend with Break: y_t = \beta_0 + \beta_1 t + \beta_2 I_{t > t_0} + \gamma ONI_t + \epsilon_t.
Above, y_t is the global mean temperature anomaly at time t, \beta_0, \beta_1, and \beta_2 are the trend coefficients, \gamma is the coefficient of the El Niño effect, ONI_t is the variable that models the El Niño events, and \epsilon_t is the error term. As described in the following, the error term is assumed to have long-range dependence. The variable I_{t > t_0} is an indicator variable that takes the value 1 if t > t_0 and 0 otherwise. The break point t_0 is estimated from the data.
The models are estimated on the historical temperature data. The best model is selected based on the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) (Akaike 1974; Schwarz 1978). For each realization, the model with the lowest AIC and BIC is considered the best model and is used to predict future temperatures.
For example, the AIC and BIC for the trend models fitted to realization 10 are presented in Table 2.
Model | AIC | BIC |
---|---|---|
Linear Trend | -5613.2 | -5596.64 |
Quadratic Trend | -6551.17 | -6529.09 |
Trend with Break | -6627.33 | -6605.25 |
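As an illustration of this selection step, the following sketch fits the three specifications by ordinary least squares and compares their information criteria. It uses GLM.jl, which is not among the packages listed in the reproducibility section, ignores the long-memory error structure for brevity, and fixes the break point at an arbitrary index instead of estimating it; the dummy data stand in for a HadCRUT5 realization.

```julia
using DataFrames, GLM, StatsBase

# Dummy data in place of a HadCRUT5 realization (anomaly, time index, ONI).
T  = 2_000
df = DataFrame(t = 1:T, oni = randn(T))
df.anomaly = 0.0005 .* df.t .+ 0.05 .* df.oni .+ 0.1 .* randn(T)

df.t2  = df.t .^ 2
t0     = 1_200                                   # hypothetical break point (index)
df.brk = Float64.(df.t .> t0)

models = Dict(
    "linear"    => lm(@formula(anomaly ~ t + oni), df),
    "quadratic" => lm(@formula(anomaly ~ t + t2 + oni), df),
    "break"     => lm(@formula(anomaly ~ t + brk + oni), df),
)

crit = Dict(name => (aic(m), bic(m)) for (name, m) in models)
best = argmin(name -> crit[name][1], collect(keys(models)))   # lowest AIC
```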
The estimated coefficient confidence intervals are used to simulate future values of the temperature anomaly. The confidence intervals are obtained from the coefficients' (asymptotic) distribution. Under a normally distributed error term, the coefficient estimators are normally distributed with mean and variance given by:
\hat{\beta} \sim N(\beta, \sigma^2(X'X)^{-1}),
where \hat{\beta} are the estimates, \beta are the true coefficients, \sigma^2 is the variance of the error term, and X is the design matrix. In the case of a non-normal error term, the coefficient estimators are asymptotically normal by the central limit theorem under mild conditions (Wooldridge 2010).
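In the simulations, parameter uncertainty can then be propagated by drawing coefficient vectors from this distribution. A minimal sketch follows; `X`, `beta_hat`, and `sigma2` would come from the fitted trend model, and the dummy values below only make the snippet self-contained:

```julia
using Distributions, LinearAlgebra

# Draw coefficient vectors from beta_hat ~ N(beta, sigma^2 (X'X)^{-1}).
T = 2_000
X        = hcat(ones(T), 1:T, randn(T))        # intercept, trend, ONI (dummy)
beta_hat = [0.0, 0.001, 0.05]
sigma2   = 0.02

cov_beta  = sigma2 * inv(X' * X)
coef_dist = MvNormal(beta_hat, Symmetric(cov_beta))   # Symmetric: exact symmetry

beta_draw = rand(coef_dist)            # one coefficient draw per simulated path
```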
El Niño Southern Oscillation (ENSO) model
El Niño Southern Oscillation (ENSO) is a natural climate phenomenon that influences global temperature. It is characterized by periodic warming of sea surface temperatures in the central and eastern equatorial Pacific Ocean. It is observed every 2-7 years and can last from 9 months to 2 years.
Modeling the El Niño effect is crucial for predicting future temperatures. To control for the El Niño effect, we include the Oceanic Niño Index (ONI) as a covariate in the models as described above. The ONI is an indicator for monitoring the ENSO. El Niño conditions are present when the ONI is +0.5 or higher, and La Niña conditions exist when the ONI is -0.5 or lower.
One complication with the El Niño effect is that it is difficult to predict. El Niño events are highly variable and can have different intensities. The El Niño effect can also interact with other climate phenomena, such as the Indian Ocean Dipole and the Madden-Julian Oscillation. This makes it challenging to model the El Niño effect accurately [see, e.g., Thirumalai et al. (2024); Ham, Kim, and Luo (2019); L'Heureux et al. (2020); Hassanibesheli, Kurths, and Boers (2022)]. In this study, we use a simple model to capture the El Niño effect. The model is based on the historical ONI data and is used to simulate future ONI values.
The dynamics of the ONI are modeled using a Markov-switching model (Hamilton 1989). The Markov-switching model is a regime-switching model that allows for the presence of different regimes in the data. The model is given by:
ONI_t = \beta_{j} + \epsilon_{j,t},
where \beta_{j} is the coefficient for the j-th regime, and \epsilon_{j,t} is the error term with variance \sigma^2_j. A latent state at time t, s_t, indicates the regime. The dynamics of s_t are governed by a Markov process: Pr(s_t = j | s_{t−1} = i, s_{t−2}, \cdots, s_1) = Pr(s_t = j | s_{t−1} = i) = p_{ij}, where p_{ij} is the transition probability from state i to j.
Note that the probability distribution of s_t given the entire path \left\{s_{t−1}, s_{t−2}, \cdots, s_1\right\} depends only on the most recent state s_{t−1}.
In historical data, the effect can be estimated using maximum likelihood estimation and the expectation-maximization algorithm (Hamilton 1990). For forecasting, the effect is simulated using a stochastic process taking into account the probability of each regime.
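A stand-in for this simulation step (not the MarSwitching.jl API) can be written as follows, assuming the fitted 7-by-7 transition matrix `P` and the per-regime means `mu` and standard deviations `sigma` come from the estimated model:

```julia
using Distributions, Random

# Simulate future ONI values from a fitted Markov-switching model.
function simulate_oni(P::Matrix{Float64}, mu::Vector{Float64},
                      sigma::Vector{Float64}, s0::Int, horizon::Int)
    oni = Vector{Float64}(undef, horizon)
    s = s0
    for t in 1:horizon
        s = rand(Categorical(P[s, :]))        # next regime drawn from row s of P
        oni[t] = mu[s] + sigma[s] * randn()   # ONI_t = beta_j + eps_{j,t}
    end
    return oni
end
```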
To determine the number of regimes, we use the AIC and BIC. We consider a range of possible regimes and select the number of regimes that minimize the AIC and BIC. Table 3 shows the AIC and BIC for the ONI data. Only odd numbers of regimes are considered to ensure that the model includes both El Niño and La Niña events and neutral conditions.
Regimes | AIC | BIC |
---|---|---|
3-regimes | 2438 | 2504 |
5-regimes | 2342 | 2507 |
7-regimes | 1394 | 1703 |
Hence, the number of states in the Markov-switching model is seven. The seven states are chosen to correspond to the different phases of the ENSO cycle ranging from very strong El Niño, strong El Niño, moderate El Niño, neutral, moderate La Niña, strong La Niña, to very strong La Niña.
Long-range dependent error term
Long-range dependent models imply that past values of the series have a long-lasting effect on the current value; successive values tend to remain close to each other, and even distant observations remain dependent. Interestingly, the notion of long-range dependence originated in the analysis of climate-related data in the pioneering work of Hurst (1956) on the Nile River minima. Hurst determined that a dam built to control river flow should be designed to withstand the worst-case scenario, which was in turn determined by the long-range dependence in the data: years with high minima were likely to be followed by years with high minima. This phenomenon is known as the Joseph effect, after Joseph's interpretation of Pharaoh's dream in the Old Testament, which predicted that seven years of plenty would be followed by seven years of famine.
A long-range dependent model can be written as: y_t = \sum_{j=1}^\infty \phi_j y_{t-j} + \epsilon_t, where \epsilon_t is an i.i.d. process. The coefficients \phi_j decay hyperbolically (slowly) to zero as j increases. In contrast, the coefficients of standard models decay exponentially to zero.
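For instance, in the purely fractionally integrated case (1-L)^d y_t = \epsilon_t with 0 < d < 1/2, the autoregressive coefficients are \phi_j = -\Gamma(j-d)/\left(\Gamma(-d)\Gamma(j+1)\right), which decay at the hyperbolic rate j^{-1-d}.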
The temperature series exhibit long-range dependence. In the context of breaching the limits set out by the PA, the long-range dependence in the data is crucial since it affects the forecasted temperature rise.
One likely explanation behind the presence of long-range dependence in the data is aggregation (Clive W. J. Granger 1980; Zaffaroni 2004; Haldrup and Vera-Valdés 2017). The global mean temperature anomaly is an aggregate of temperature data from different regions. The aggregation process can lead to long-range dependence in the data. To account for this property, we model the error term in the trend models as a long-range dependent process.
We use the exact local Whittle estimator to estimate the long-range dependence in the data (Shimotsu and Phillips 2005). The exact local Whittle estimator is a semi-parametric estimator based on a local approximation to the Whittle likelihood originally proposed by Künsch (1987).
The exact local Whittle estimator minimizes the function given by: R(d) = \log\left(\frac{1}{m}\sum_{k=1}^{m}I_{\Delta^d}(\lambda_k)\right)-\frac{2d}{m}\sum_{k=1}^{m}\log(\lambda_k),
where I_{\Delta^d}(\lambda_k) is the periodogram of (1-L)^d x_t, with (1-L)^d the fractional difference operator (C. W. J. Granger and Joyeux 1980; Hosking 1981), \lambda_{k} = 2\pi k /T are the Fourier frequencies, and m is the bandwidth parameter.
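For illustration, the estimator can be implemented from scratch along the following lines. The paper's code relies on LongMemory.jl instead; the bandwidth m = ⌊T^{0.65}⌋, the grid restricted to the stationary range of d, and the naive O(T^2) fractional differencing are all simplifying choices made for this sketch.

```julia
using FFTW, Statistics

# (1 - L)^d x_t via the expansion coefficients of the fractional difference.
function fracdiff(x::Vector{Float64}, d::Float64)
    n = length(x)
    w = ones(n)                                   # w_0 = 1
    for k in 2:n
        w[k] = w[k - 1] * (k - 2 - d) / (k - 1)   # w_k = w_{k-1} (k-1-d)/k
    end
    return [sum(w[1:t] .* x[t:-1:1]) for t in 1:n]   # O(T^2), fine for illustration
end

# Exact local Whittle objective R(d) at the first m Fourier frequencies.
function elw_objective(x::Vector{Float64}, d::Float64, m::Int)
    n  = length(x)
    dx = fracdiff(x, d)
    F  = fft(dx)
    I  = abs2.(F[2:(m + 1)]) ./ (2π * n)          # periodogram at lambda_k = 2*pi*k/n
    lambda = 2π .* (1:m) ./ n
    return log(mean(I)) - 2 * d * mean(log.(lambda))
end

# Grid-search estimate of d over the stationary long-memory range.
function elw_estimate(x::Vector{Float64}; m::Int = floor(Int, length(x)^0.65))
    return argmin(d -> elw_objective(x, d, m), 0.0:0.01:0.49)
end
```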
The exact local Whittle estimator is consistent and asymptotically normal. The long-range dependence parameter is estimated for each realization separately. The estimated parameter is then used to simulate the error term in the models.
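A corresponding sketch of the simulation step generates the long-range dependent error term as a truncated MA(∞) expansion of (1-L)^{-d}\epsilon_t, with d set to the estimate; the truncation length is a rough approximation, especially for d close to 0.5, and the paper's code relies on LongMemory.jl instead.

```julia
using Random

# Simulate a long-range dependent error term of length `horizon`.
function simulate_lrd(d::Float64, horizon::Int; sigma::Float64 = 1.0,
                      ntrunc::Int = 1_000)
    psi = ones(ntrunc)                                  # psi_0 = 1
    for k in 2:ntrunc
        psi[k] = psi[k - 1] * (k - 2 + d) / (k - 1)     # psi_k = psi_{k-1} (k-1+d)/k
    end
    eps = sigma .* randn(horizon + ntrunc)              # pre-sample + in-sample shocks
    # y_t = sum_{j=0}^{ntrunc-1} psi_j eps_{t-j}, truncated at ntrunc lags
    return [sum(psi .* @view(eps[(t + ntrunc - 1):-1:t])) for t in 1:horizon]
end
```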
Alternative data sources
The simulation study is based on the HadCRUT5 dataset. However, the methodology can be easily extended to include other datasets. For example, the GISTEMP and Berkeley Earth datasets (GISTEMP 2020; R. A. Rohde and Hausfather 2020) provide alternative temperature anomalies data.
The GISTEMP dataset is produced by the NASA Goddard Institute for Space Studies and provides global temperature anomalies data from 1880. The results using the GISTEMP dataset are presented in Figure 8 and are summarized in Table 4 (a). The results are based on the simulation study presented in the supplementary Jupyter notebook.
The results for the GISTEMP dataset show that the probability of breaching the 1.5°C limit becomes greater than zero in May 2027. Moreover, the probability of breaching it is greater than 99% by 2043. The results are in line with those obtained using the HadCRUT5 dataset.
The Berkeley Earth dataset is produced by the Berkeley Earth project and provides global temperature anomalies data from 1850. The results using the Berkeley Earth dataset are presented in Figure 9 and are summarized in Table 4 (b). The results are based on the simulation study presented in the supplementary Jupyter notebook.
The results for the Berkeley Earth dataset show that the probability of breaching the 1.5°C limit becomes greater than zero in September 2024. Moreover, the probability of breaching it is greater than 99% by 2036. The results show a more rapid increase in the probability of breaching the 1.5°C limit compared to the HadCRUT5 dataset.
(a) GISTEMP dataset
Probability level and period | 1.5°C threshold | 2°C threshold |
---|---|---|
Above 0%, 20-year avg. | 2027-05-01 | 2047-09-01 |
Above 50%, 20-year avg. | 2033-06-01 | 2060-01-01 |
Above 99%, 20-year avg. | 2043-12-01 | 2071-02-01 |
Above 0%, 30-year avg. | 2033-02-01 | 2052-09-01 |
Above 50%, 30-year avg. | 2039-06-01 | 2065-01-01 |
Above 99%, 30-year avg. | 2048-10-01 | 2075-12-01 |
(b) Berkeley Earth dataset
Probability level and period | 1.5°C threshold | 2°C threshold |
---|---|---|
Above 0%, 20-year avg. | 2024-09-01 | 2041-10-01 |
Above 50%, 20-year avg. | 2028-06-01 | 2053-08-01 |
Above 99%, 20-year avg. | 2036-01-01 | 2063-08-01 |
Above 0%, 30-year avg. | 2029-09-01 | 2046-05-01 |
Above 50%, 30-year avg. | 2033-12-01 | 2058-10-01 |
Above 99%, 30-year avg. | 2040-01-01 | 2068-11-01 |
Reproducibility
The code used to perform the simulation study is available in a Jupyter notebook in the supplementary material. The code is written in Julia (Bezanson et al. 2017). The Julia programming language is a high-level and high-performance language for technical computing. Additional packages used in the simulation study are the `DataFrames.jl` package for data manipulation (Bouchet-Valat and Kamiński 2023), the `MarSwitching.jl` package for Markov-switching models (Dadej 2024), the `LongMemory.jl` package for long-range dependent models (J. E. Vera-Valdés 2024), the `CSV.jl` package to read and write CSV files (Quinn et al. 2024), and the `Plots.jl` package for plotting (Breloff 2024).
The code is well documented and includes comments to explain the different steps of the simulation study. The code is open-source and can be freely used and modified. We encourage other researchers to use the code to reproduce our results and to extend the methodology to other datasets and models.
Citation
@article{vera-valdés2024,
author = {Vera-Valdés, J. Eduardo and Kvist, Olivia},
title = {Breaching {1.5°C:} {Give} Me the Odds},
journal = {arXiv},
date = {2024-12-17},
url = {https://arxiv.org/abs/2412.13855},
doi = {10.48550/arXiv.2412.13855},
langid = {en},
abstract = {Climate change communication is crucial to raising
awareness and motivating action. In the context of breaching the
limits set out by the Paris Agreement, we argue that climate
scientists should move away from point estimates and towards
reporting probabilities. Reporting probabilities will provide
policymakers with a range of possible outcomes and will allow them
to make informed timely decisions. To achieve this goal, we propose
a method to calculate the probability of breaching the limits set
out by the Paris Agreement. The method can be summarized as
predicting future temperatures under different scenarios and
calculating the number of possible outcomes that breach the limits
as a proportion of the total number of outcomes. The probabilities
can be computed for different time horizons and can be updated as
new data become available. As an illustration, we performed a
simulation study to investigate the probability of breaching the
limits in a statistical model. Our results show that the probability
of breaching the 1.5°C limit is already greater than zero for 2024.
Moreover, the probability of breaching the limit is greater than
99\% by 2042 if no action is taken to reduce greenhouse gas
emissions. Our methodology is simple to implement and can easily be
extended to more complex models of the climate system. We encourage
climate model developers to include the probabilities of breaching
the limits in their reports.}
}