Even before the study of human-induced global warming became fashionable, tax dollars funded a major portion of that research. Government organizations continue to supply the vast majority of the money for those research efforts. Yet despite the tens of billions of dollars expended over the past couple of decades, there has been little increase in our understanding of what the future might bring.
The recent 5th Assessment Report from the Intergovernmental Panel on Climate Change (IPCC) proclaims that global surface temperatures are projected to increase through the year 2100, that sea levels will continue to rise, that rainfall might increase in some regions and decrease in others, etc. But those were the same basic messages of the 4th Assessment Report in 2007, the 3rd Assessment Report in 2001, and the 2nd Assessment Report in 1995. So we’ve received little benefit from all of those tax dollars spent over the past few decades.
Those predictions of the future are based on simulations of climate using numerical computer programs known as climate models. Past and projected factors that are alleged to impact climate on Earth (known as forcings) serve as inputs to the models. Then the models, based on more assumptions made by programmers, crunch a lot of numbers and regurgitate outputs that are representations of what the future might hold in store, with the monumental supposition that the models properly simulate climate.
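To make that forcing-in, temperature-out idea concrete, here is a minimal sketch of a zero-dimensional energy-balance model. It illustrates only the general input/output relationship described above, not any actual IPCC model; the heat capacity, feedback parameter, and the linearly ramping forcing are round-number assumptions chosen for the example.

```python
# A minimal, illustrative zero-dimensional energy-balance model: a prescribed
# forcing series goes in, a global-mean temperature response comes out.
# This is NOT any IPCC model; the heat capacity, feedback parameter, and the
# linearly ramping forcing are round-number assumptions for the example.
import numpy as np

def energy_balance(forcing_w_m2, heat_capacity=8.0e8, feedback=1.2, dt=3.15e7):
    """Euler-step dT/dt = (F - feedback*T) / heat_capacity; returns T anomaly in K."""
    temps = np.zeros(len(forcing_w_m2))
    for i in range(1, len(forcing_w_m2)):
        dT_dt = (forcing_w_m2[i - 1] - feedback * temps[i - 1]) / heat_capacity
        temps[i] = temps[i - 1] + dT_dt * dt
    return temps

years = np.arange(1900, 2101)
forcing = 0.03 * (years - 1900)               # hypothetical forcing, ramping to ~6 W/m^2 by 2100
print(round(energy_balance(forcing)[-1], 2))  # projected anomaly in 2100 under these assumptions
```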
But it is well known that climate models are flawed, that they do not properly simulate the climate metrics that are of interest to policymakers and the public—surface temperatures, precipitation, and sea ice area, for example. And in at least one respect the current generation of climate models performs more poorly than the earlier generation. That is, climate models are getting worse, not better, at simulating Earth’s climate.
With that in mind, the following are sample questions that policymakers should be asking the climate scientists and agencies that receive government funding for research into human-induced global warming—along with information to support the questions.
Much of the text in the following is taken from my book Climate Models Fail. I have expanded on many of the discussions here.
1. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY DOES THE CURRENT GENERATION OF CLIMATE MODELS SIMULATE GLOBAL SURFACE TEMPERATURES MORE POORLY THAN THE PRIOR GENERATION?
Background Information: In the following peer-reviewed papers, CMIP stands for the Coupled Model Intercomparison Project, whose archives of climate model outputs are used by the Intergovernmental Panel on Climate Change (IPCC). The CMIP5 archive was used for the recent IPCC 5th Assessment Report (AR5), while the CMIP3 archive was used for the 4th Assessment Report (AR4) in 2007.
# # #
The reference for this question is the Swanson (2013) paper “Emerging Selection Bias in Large-scale Climate Change Simulations.” The preprint version of the paper is here. It is a remarkable paper inasmuch as Swanson explains why the current generation of climate models (CMIP5) agrees better among themselves than the previous generation (CMIP3) but, as a result, performs worse. In other words, the models are growing closer to a consensus answer, but, in doing so, they do not simulate global surface temperatures as well outside of the Arctic.
In the Introduction, Swanson writes (my boldface):
Here we suggest the possibility that a selection bias based upon warming rate is emerging in the enterprise of large-scale climate change simulation. Instead of involving a choice of whether to keep or discard an observation based upon a prior expectation, we hypothesize that this selection bias involves the ‘survival’ of climate models from generation to generation, based upon their warming rate. One plausible explanation suggests this bias originates in the desirable goal to more accurately capture the most spectacular observed manifestation of recent warming, namely the ongoing Arctic amplification of warming and accompanying collapse in Arctic sea ice. However, fidelity to the observed Arctic warming is not equivalent to fidelity in capturing the overall pattern of climate warming. As a result, the current generation (CMIP5) model ensemble mean performs worse at capturing the observed latitudinal structure of warming than the earlier generation (CMIP3) model ensemble. This is despite a marked reduction in the inter-ensemble spread going from CMIP3 to CMIP5, which by itself indicates higher confidence in the consensus solution. In other words, CMIP5 simulations viewed in aggregate appear to provide a more precise, but less accurate picture of actual climate warming compared to CMIP3.
In other words, in an effort to better capture the polar amplification taking place in the Arctic, the current generation of climate models (CMIP5) agrees better among themselves than the prior generation (CMIP3); that is, they are coming together toward the same results so there is less of a spread between climate model outputs. Overall, unfortunately, the CMIP5 models perform worse than the CMIP3 models at simulating global surface temperatures outside of the Arctic.
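To illustrate the distinction Swanson draws between precision and accuracy, here is a toy calculation with invented ensemble numbers (not actual CMIP trends): the second, tighter ensemble has a smaller spread, yet its mean sits farther from the hypothetical observed value.

```python
# A toy illustration (synthetic numbers, not actual CMIP output) of Swanson's
# point: an ensemble can become more precise (smaller spread) while becoming
# less accurate (its mean drifts farther from the observed value).
import numpy as np

observed_trend = 0.60                                    # hypothetical observed warming rate
cmip3_like = np.array([0.40, 0.55, 0.70, 0.85, 1.00])    # wide spread, mean 0.70
cmip5_like = np.array([0.78, 0.80, 0.82, 0.84, 0.86])    # tight spread, mean 0.82

for name, ensemble in [("CMIP3-like", cmip3_like), ("CMIP5-like", cmip5_like)]:
    spread = ensemble.std(ddof=1)                        # inter-ensemble spread ("precision")
    error = abs(ensemble.mean() - observed_trend)        # ensemble-mean error ("accuracy")
    print(f"{name}: spread = {spread:.2f}, ensemble-mean error = {error:.2f}")
```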
I’ve read that quote from Swanson (2013) a number of times, because I’ve included it in my book Climate Models Fail and in at least one other blog post. The portion that reads “by itself indicates higher confidence in the consensus solution” stood out for me this time. Is this “marked reduction in the inter-ensemble spread going from CMIP3 to CMIP5” one of the bases for the increased confidence exhibited by the IPCC in their 5th Assessment Report? If so, then the climate scientists associated with the IPCC are fooling themselves, because the current generation of models performs worse than the prior generation.
It’s also remarkable that Swanson (2013) presented why, not if, model performance had grown worse. Apparently, it is common knowledge among the climate science community that CMIP5 models perform worse than the prior generation.
2. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS PROPERLY SIMULATE SEA ICE LOSSES IN THE ARCTIC OCEAN OR SEA ICE GAINS IN THE SOUTHERN OCEAN SURROUNDING ANTARCTICA?
Arctic sea ice loss outpaced the predictions of an earlier generation of climate models (those stored in the CMIP3 archive). Even though that was a very obvious failing of the models, alarmists broadcast that failure at every opportunity, claiming global warming was worse than anticipated. And even with the bias toward Arctic warming noted above, the latest generation of models (CMIP5) still has difficulties simulating Arctic sea ice loss. These difficulties are discussed in Stroeve, et al. (2012) “Trends in Arctic sea ice extent from CMIP5, CMIP3 and Observations” [paywalled]. The abstract reads (my boldface):
The rapid retreat and thinning of the Arctic sea ice cover over the past several decades is one of the most striking manifestations of global climate change. Previous research revealed that the observed downward trend in September ice extent exceeded simulated trends from most models participating in the World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (CMIP3). We show here that as a group, simulated trends from the models contributing to CMIP5 are more consistent with observations over the satellite era (1979–2011). Trends from most ensemble members and models nevertheless remain smaller than the observed value. Pointing to strong impacts of internal climate variability, 16% of the ensemble member trends over the satellite era are statistically indistinguishable from zero. Results from the CMIP5 models do not appear to have appreciably reduced uncertainty as to when a seasonally ice-free Arctic Ocean will be realized.
The press and global warming enthusiasts have been hyping the loss of Arctic sea ice, yet the models simulate it so poorly that the authors point to strong impacts of internal (natural) climate variability on the observed trends.
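For readers curious what “statistically indistinguishable from zero” means in practice, the sketch below fits a linear trend to a synthetic September sea ice extent series and checks whether zero falls within an approximate 95% confidence interval on the slope. The series is invented; only the method is of interest.

```python
# Fit a linear trend to a synthetic September sea ice extent series and ask
# whether zero lies inside an approximate 95% confidence interval on the
# slope; that is the sense in which a trend is "statistically
# indistinguishable from zero." The data are invented; only the method matters.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
years = np.arange(1979, 2012)
extent = 7.0 - 0.02 * (years - 1979) + rng.normal(0.0, 0.5, years.size)  # hypothetical, 10^6 km^2

fit = linregress(years, extent)
half_width = 1.96 * fit.stderr
print(f"trend = {fit.slope:+.3f} +/- {half_width:.3f} (10^6 km^2 per year)")
print("indistinguishable from zero" if abs(fit.slope) < half_width else "statistically significant")
```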
And at the other end of the globe: It is well known that Southern Hemisphere sea ice extent has grown since satellite-based measurements began in 1978. Yet climate models simulate the opposite, a loss of sea ice there. For the model failings at simulating sea ice extent in the Southern Ocean surrounding Antarctica, we’ll refer to Turner et al. (2013) “An Initial Assessment of Antarctic Sea Ice Extent in the CMIP5 Models.” Again, the CMIP5 archive is the latest generation of climate models. [Full paper is paywalled.] The Turner et al. abstract reads (my boldface and brackets):
We examine the annual cycle and trends in Antarctic sea ice extent (SIE) for 18 Coupled Model Intercomparison Project 5 models that were run with historical forcing for the 1850s to 2005. Many of the models have an annual SIE [sea ice extent] cycle that differs markedly from that observed over the last 30 years. The majority of models have too small a SIE at the minimum in February, while several of the models have less than two thirds of the observed SIE at the September maximum. In contrast to the satellite data, which exhibits a slight increase in SIE, the mean SIE of the models over 1979 – 2005 shows a decrease in each month, with the greatest multi-model mean percentage monthly decline of 13.6% dec⁻¹ in February and the greatest absolute loss of ice of −0.40 × 10⁶ km² dec⁻¹ in September. The models have very large differences in SIE over 1860 – 2005. Most of the control runs have statistically significant trends in SIE over their full time span and all the models have a negative trend in SIE since the mid-Nineteenth Century. The negative SIE trends in most of the model runs over 1979 – 2005 are a continuation of an earlier decline, suggesting that the processes responsible for the observed increase over the last 30 years are not being simulated correctly.
Basically, according to Turner et al. (2013), the current generation of climate models cannot simulate the annual seasonal cycle in Antarctic sea ice extent, and the climate models show a decrease in Antarctic sea ice extent since 1979, while satellite-based observations show an increase in sea ice extent there.
The closing clause of Turner et al. (2013) is worth repeating and expanding: “…the processes responsible for the observed increase [in Antarctic sea ice extent] over the last 30 years are not being simulated correctly [by the current generation of climate models].”
Obviously, all of the model-based predictions of gloom and doom about sea ice have no basis in the real world.
3. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS PROPERLY SIMULATE ATMOSPHERIC RESPONSES TO EXPLOSIVE VOLCANIC ERUPTIONS?
Background Information: The paper presented in this section refers to the North Atlantic Oscillation, which is a much-studied natural variation in sea level pressure (and interdependent wind patterns) that impacts climate in the Northern Hemisphere. You’ll often hear your local weather forecaster referring to the North Atlantic Oscillation…or its sibling the Arctic Oscillation.
# # #
The atmospheric responses to aerosols ejected by explosive volcanic eruptions have been studied for decades. Many people are aware that lower troposphere temperatures and surface temperatures cool temporarily following an explosive eruption. This cooling is caused by a kind of short-term umbrella effect, while the volcanic aerosols are blocking sunlight. But there are other well-studied atmospheric responses. Not too surprisingly, climate models poorly simulate these atmospheric responses to volcanic eruptions.
These failures were discussed in Driscoll, et al. (2012) “Coupled Model Intercomparison Project Phase 5 (CMIP5) Simulations of Climate Following Volcanic Eruptions”. They wrote in the abstract (my boldface):
The ability of the climate models submitted to the Coupled Model Intercomparison Project 5 (CMIP5) database to simulate the Northern Hemisphere winter climate following a large tropical volcanic eruption is assessed. When sulfate aerosols are produced by volcanic injections into the tropical stratosphere and spread by the stratospheric circulation, it not only causes globally averaged tropospheric cooling but also a localized heating in the lower stratosphere, which can cause major dynamical feedbacks. Observations show a lower stratospheric and surface response during the following one or two Northern Hemisphere (NH) winters, that resembles the positive phase of the North Atlantic Oscillation (NAO). Simulations from 13 CMIP5 models that represent tropical eruptions in the 19th and 20th century are examined, focusing on the large-scale regional impacts associated with the large-scale circulation during the NH winter season. The models generally fail to capture the NH dynamical response following eruptions. They do not sufficiently simulate the observed post-volcanic strengthened NH polar vortex, positive NAO, or NH Eurasian warming pattern, and they tend to overestimate the cooling in the tropical troposphere. The findings are confirmed by a superposed epoch analysis of the NAO index for each model. The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings. This is also of concern for the accuracy of geoengineering modeling studies that assess the atmospheric response to stratosphere-injected particles.
In other words, according to Driscoll, et al. (2012), climate models simulate too much temporary cooling in response to volcanic aerosols (that is, they’re too sensitive) and they fail to produce the warming that takes place in the Northern Hemisphere for the first few winters after the eruption.
The final sentence in the abstract of Driscoll, et al. (2012) is also important. Basically, they’re saying that climate models perform so poorly at simulating the atmospheric response to volcanic aerosols that they question the accuracy of climate model studies of geoengineering proposals that shade the Earth by injecting aerosols into the stratosphere.
4. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY DO CLIMATE MODELS CONTINUE TO POORLY SIMULATE PRECIPITATION AND DROUGHT?
Background Information: The term downscaling is used in a quote from one of the papers that serve as references for this question. The UNFCCC (United Nations Framework Convention on Climate Change) here defines downscaling as:
…a method for obtaining high-resolution climate or climate change information from relatively coarse-resolution global climate models (GCMs).
In simpler terms, downscaling is a method that theoretically allows “coarse-resolution” global climate models to be used to simulate regional climate at finer, “high-resolution” scales.
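As a rough illustration of the idea (and only the idea), the sketch below shows one of the simplest flavors of statistical downscaling: fit a linear transfer function between coarse-grid model values and co-located station observations over a calibration period, then apply it to model projections. The arrays are placeholders; real downscaling methods are far more elaborate.

```python
# A highly simplified sketch of one flavor of statistical downscaling: fit a
# linear transfer function between coarse model grid-cell values and
# co-located station observations over a calibration period, then apply it
# to model projections. Arrays are placeholders; real downscaling uses many
# predictors, bias corrections, and careful validation.
import numpy as np

coarse_hist = np.array([2.1, 2.4, 1.9, 2.7, 2.3])    # coarse-grid values, calibration period
station_hist = np.array([3.0, 3.6, 2.7, 4.1, 3.4])   # co-located station observations

slope, intercept = np.polyfit(coarse_hist, station_hist, 1)   # simple linear transfer function

coarse_future = np.array([2.8, 3.0])                 # coarse-grid projections
print(intercept + slope * coarse_future)             # "downscaled" local estimates
```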
# # #
We’ll provide two papers to serve as references for how poorly climate models simulate precipitation and drought.
The first is Stephens, et al. (2010), “Dreary State of Precipitation in Climate Models.” The title definitely indicates that the paper does not praise the models. The abstract reads:
New, definitive measures of precipitation frequency provided by CloudSat are used to assess the realism of global model precipitation. The character of liquid precipitation (defined as a combination of accumulation, frequency, and intensity) over the global oceans is significantly different from the character of liquid precipitation produced by global weather and climate models. Five different models are used in this comparison representing state-of-the-art weather prediction models, state-of-the-art climate models, and the emerging high-resolution global cloud “resolving” models. The differences between observed and modeled precipitation are larger than can be explained by observational retrieval errors or by the inherent sampling differences between observations and models. We show that the time integrated accumulations of precipitation produced by models closely match observations when globally composited. However, these models produce precipitation approximately twice as often as that observed and make rainfall far too lightly. This finding reinforces similar findings from other studies based on surface accumulated rainfall measurements. The implications of this dreary state of model depiction of the real world are discussed.
In other words, the models do a relatively good job of matching the time-integrated precipitation accumulations when composited globally, but they get the character of that precipitation wrong—producing precipitation “approximately twice as often as that observed” and making “rainfall far too lightly.”
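The frequency-versus-intensity mismatch is easy to picture with synthetic numbers: two rainfall series can have roughly the same total accumulation while one rains twice as often at half the intensity. The sketch below uses invented values purely to illustrate that point.

```python
# A toy demonstration (synthetic data) that two rainfall series can have
# roughly the same total accumulation while one rains twice as often at
# half the intensity, which is the kind of mismatch Stephens et al. describe.
import numpy as np

rng = np.random.default_rng(1)
obs_like = np.where(rng.random(1000) < 0.10, 10.0, 0.0)    # rains 10% of the time at 10 mm
model_like = np.where(rng.random(1000) < 0.20, 5.0, 0.0)   # rains 20% of the time at 5 mm

for name, series in [("observed-like", obs_like), ("model-like", model_like)]:
    wet = series > 0
    print(f"{name}: total = {series.sum():.0f} mm, "
          f"frequency = {wet.mean():.2f}, mean intensity = {series[wet].mean():.1f} mm")
```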
In their closing paragraph, Stephens, et al. (2010) continued with:
The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system…
And:
This implies little skill in precipitation calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer-scale resolution has little foundation and relevance to the real Earth system.
In other words, climate model-based predictions of regional changes in precipitation have little basis in reality.
Drought is the topic of the second paper: Taylor, et al. (2012) “Afternoon rain more likely over drier soils”. [Paywalled.] The abstract reads:
Land surface properties, such as vegetation cover and soil moisture, influence the partitioning of radiative energy between latent and sensible heat fluxes in daytime hours. During dry periods, soil-water deficit can limit evapotranspiration, leading to warmer and drier conditions in the lower atmosphere. Soil moisture can influence the development of convective storms through such modifications of low-level atmospheric temperature and humidity, which in turn feeds back on soil moisture. Yet there is considerable uncertainty in how soil moisture affects convective storms across the world, owing to a lack of observational evidence and uncertainty in large-scale models. Here we present a global-scale observational analysis of the coupling between soil moisture and precipitation. We show that across all six continents studied, afternoon rain falls preferentially over soils that are relatively dry compared to the surrounding area. The signal emerges most clearly in the observations over semi-arid regions, where surface fluxes are sensitive to soil moisture, and convective events are frequent. Mechanistically, our results are consistent with enhanced afternoon moist convection driven by increased sensible heat flux over drier soils, and/or mesoscale variability in soil moisture. We find no evidence in our analysis of a positive feedback—that is, a preference for rain over wetter soils—at the spatial scale (50–100 kilometres) studied. In contrast, we find that a positive feedback of soil moisture on simulated precipitation does dominate in six state-of-the-art global weather and climate models—a difference that may contribute to excessive simulated droughts in large-scale models.
That’s a very interesting data-based observation. Taylor, et al. (2012) found that afternoon rains in the real world tend to fall where soils are dry, which would tend to suppress drought. But they also found that the opposite occurs in climate models: the models simulate a preference for rain over wetter soils, which would tend to exaggerate droughts.
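The sign test at the heart of that comparison can be illustrated with fabricated data: if afternoon rain preferentially falls where soil moisture anomalies are negative, the correlation between soil moisture and rain occurrence is negative; the modeled behavior described above would give a positive correlation. The sketch below is schematic only.

```python
# A schematic sign check with fabricated values: a negative correlation
# between the local soil moisture anomaly and afternoon rain occurrence
# means rain prefers drier-than-average soils (the observed behavior);
# a positive correlation means rain prefers wetter soils (the modeled
# behavior described above).
import numpy as np

rng = np.random.default_rng(2)
soil_anomaly = rng.normal(0.0, 1.0, 500)                        # local soil moisture anomaly
rain_obs_like = (soil_anomaly + rng.normal(0, 1, 500)) < 0.0    # rain favors drier soils
rain_model_like = (soil_anomaly + rng.normal(0, 1, 500)) > 0.0  # rain favors wetter soils

print(f"obs-like correlation:   {np.corrcoef(soil_anomaly, rain_obs_like)[0, 1]:+.2f}")
print(f"model-like correlation: {np.corrcoef(soil_anomaly, rain_model_like)[0, 1]:+.2f}")
```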
A presentation of Taylor, et al. (2012) is available here. The summary reads:
- Afternoon rain favoured over drier soils across globe
- Large-scale models exhibit opposite behaviour
- Erroneous depiction of feedback may “lock-in” drought conditions in climate simulations
Roger Pielke Sr. quoted the final sentence of Taylor, et al. (2012) in his post here:
…the erroneous sensitivity of convection schemes demonstrated here is likely to contribute to a tendency for large-scale models to ‘lock-in’ dry conditions, extending droughts unrealistically, and potentially exaggerating the role of soil moisture feedbacks in the climate system.
Bottom line of both papers: Climate models simulate precipitation poorly in a number of ways and, as a result, they exaggerate the length of droughts.
5. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS SIMULATE MULTIDECADAL VARIATIONS IN SEA SURFACE TEMPERATURES?
Background Information: We have discussed the naturally occurring multidecadal variations in the sea surface temperatures of the North Atlantic and North Pacific in a number of blog posts over the years—most recently in the post Multidecadal Variations and Sea Surface Temperature Reconstructions. For further information about the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation and about how climate models fail to simulate them properly, see the post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming, under the heading of “After decades of efforts, why can’t the climate models used by the IPCC simulate coupled ocean-atmosphere processes that cause multidecadal variations in sea surface temperatures and, in turn, land surface air temperatures?”
Further, the combined impacts of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation on regional United States climate have been identified through data analysis. McCabe, et al. (2004) “Pacific and Atlantic Ocean influences on multidecadal drought frequency in the United States” examines the impacts of those two modes on drought in the United States. Full paper is here. Another paper that describes the influence of the Atlantic Multidecadal Oscillation on precipitation in the United States is Enfield, et al. (2001) “The Atlantic multidecadal oscillation and its relation to rainfall and river flows in the continental U.S.”
# # #
The first paper that serves as reference for this question is Ruiz-Barradas, et al. (2013) “The Atlantic Multidecadal Oscillation in Twentieth Century Climate Simulations: Uneven Progress from CMIP3 to CMIP5.” The full paper is here. After explaining the sample of climate models used for their study, Ruiz-Barradas, et al. (2013) state in the abstract (my boldface and brackets):
The structure and evolution of the SST [sea surface temperature] anomalies of the AMO [Atlantic Multidecadal Oscillation] have not progressed consistently from the CMIP3 to the CMIP5 models. While the characteristic period of the AMO (smoothed with a binomial filter applied fifty times) is underestimated by the three of the models, the e-folding time of the autocorrelations shows that all models underestimate the 44-year value from observations by almost 50%. Variability of the AMO in the 10–20/70–80 year ranges is overestimated/underestimated in the models and the variability in the 10–20 year range increases in three of the models from the CMIP3 to the CMIP5 versions. Spatial variability and correlation of the AMO regressed precipitation and SST anomalies in summer and fall indicate that models are not up to the task of simulating the AMO impact on the hydroclimate over the neighboring continents. This is in spite of the fact that the spatial variability and correlations in the SST anomalies improve from CMIP3 to CMIP5 versions in two of the models. However, a multi-model mean from a sample of 14 models whose first ensemble was analyzed indicated there were no improvements in the structure of the SST anomalies of the AMO or associated regional precipitation anomalies in summer and fall from CMIP3 to CMIP5 projects.
In other words, climate models do not properly simulate the Atlantic Multidecadal Oscillation in any timeframe, and as a result, they fail to capture its impact on precipitation and drought over the adjacent continents.
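For readers unfamiliar with the “e-folding time of the autocorrelations” mentioned in the abstract, the sketch below computes that diagnostic for a synthetic red-noise stand-in for an AMO index: it is simply the lag at which the autocorrelation function first drops below 1/e.

```python
# The "e-folding time of the autocorrelations" is simply the lag at which an
# index's autocorrelation function first drops below 1/e. The red-noise
# series below is a synthetic stand-in for a smoothed AMO index, generated
# only to show the calculation.
import numpy as np

rng = np.random.default_rng(3)
n, phi = 1500, 0.98                         # monthly steps; phi sets the memory of the toy index
index = np.zeros(n)
for t in range(1, n):
    index[t] = phi * index[t - 1] + rng.normal()   # AR(1) stand-in for a multidecadal index

def autocorr(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

lags = np.arange(1, 600)
acf = np.array([autocorr(index, k) for k in lags])
efold = lags[np.argmax(acf < 1.0 / np.e)]          # first lag with autocorrelation below 1/e
print(f"e-folding time of the autocorrelations: about {efold} months ({efold / 12:.1f} years)")
```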
At the beginning of their “Concluding Remarks”, Ruiz-Barradas, et al. (2013) explain why it’s important for climate models to be able to accurately simulate the Atlantic Multidecadal Oscillation (my boldface):
Decadal variability in the climate system from the AMO is one of the major sources of variability at this temporal scale that climate models must aim to properly incorporate because its surface climate impact on the neighboring continents. This issue has particular relevance for the current effort on decadal climate prediction experiments been analyzed for the IPCC in preparation for the fifth assessment report. The current analysis does not pretend to investigate into the mechanisms behind the generation of the AMO in model simulations, but to provide evidence of improvements, or lack of them, in the portrayal of spatiotemporal features of the AMO from the previous to the current models participating in the IPCC. If climate models do not incorporate the mechanisms associated to the generation of the AMO (or any other source of decadal variability like the PDO) and in turn incorporate or enhance variability at other frequencies, then the models ability to simulate and predict at decadal time scales will be compromised and so the way they transmit this variability to the surface climate affecting human societies.
I don’t believe I need to translate the final sentence of that quote.
The second paper is Van Haren, et al. (2012) “SST and Circulation Trend Biases Cause an Underestimation of European Precipitation Trends.” (“SST” stands for sea surface temperature.) The authors write (my boldface):
To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones. These mismatches are responsible for a large part of the misrepresentation of precipitation trends in climate models. The causes of the large trends in atmospheric circulation and summer SST are not known.
For further information about how poorly models simulate sea surface temperatures, see the posts:
- CMIP5 Model-Data Comparison: Satellite-Era Sea Surface Temperature Anomalies
- IPCC Still Delusional about Carbon Dioxide
The third paper is Tung and Zhou (2012) “Using Data to Attribute Episodes of Warming and Cooling in Instrumental Records.” They studied the longest surface air temperature record, Central England Temperature, and also the HadCRUT4 land-plus-ocean surface temperature record. Both contained an Atlantic Multidecadal Oscillation signal. The last sentence of the abstract of Tung and Zhou (2012) reads:
Quantitatively, the recurrent multidecadal internal variability, often underestimated in attribution studies, accounts for 40% of the observed recent 50-y warming trend.
40% is a sizable contribution to global warming from the Atlantic Multidecadal Oscillation—a contribution that is ignored by the models prepared for the IPCC.
The climate science community typically presents the variability of North Pacific sea surface temperatures in a very abstract form known as the Pacific Decadal Oscillation. By doing so they overlook the naturally occurring multidecadal variations in the sea surface temperatures of the North Pacific that are of similar magnitude, but of a slightly different frequency, to those in the North Atlantic. Refer again to the post Multidecadal Variations and Sea Surface Temperature Reconstructions.
Summary for this heading: Because climate models cannot simulate the mechanisms associated with the Atlantic Multidecadal Oscillation (or the multidecadal variations in the sea surface temperatures of the North Pacific), or the frequencies and magnitudes at which those variations occur, the models have little value for simulating and predicting future climate in the Northern Hemisphere on decadal, multidecadal, or longer timeframes.
And last for the discussion of multidecadal variations in surface temperatures, refer to the blog post Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?
6. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS SIMULATE THE BASIC OCEAN-ATMOSPHERE PROCESSES THAT DRIVE EL NIÑO AND LA NIÑA EVENTS?
Background Information: “Phaselock” in the following paper refers to the fact that El Niño and La Niña events are tied to the seasonal cycle.
“Bjerknes feedback,” very basically, means how the tropical Pacific and the atmosphere above it are coupled; i.e., they are interdependent, a change in one causes a change in the other and they provide positive feedback to one another. The existence of this positive “Bjerknes feedback” suggests that El Niño and La Niña events will remain in one mode until something interrupts the positive feedback.
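A deliberately abstract sketch of such a positive coupled feedback follows: a small sea surface temperature anomaly drives a wind anomaly, which reinforces the temperature anomaly, so the perturbation grows until something external interrupts it. The coefficients are arbitrary and carry no physical meaning.

```python
# A deliberately abstract sketch of a positive coupled feedback: a small sea
# surface temperature anomaly drives a wind anomaly, which reinforces the
# temperature anomaly, so the perturbation grows until something external
# interrupts it. The coefficients are arbitrary and carry no physical meaning.
import numpy as np

sst, history = 0.1, []          # a small initial sea surface temperature anomaly
for step in range(20):
    wind = 0.8 * sst            # the ocean anomaly drives an atmospheric (wind) response
    sst = sst + 0.3 * wind      # the wind response feeds back and amplifies the ocean anomaly
    history.append(sst)

print(np.round(history, 2))     # the anomaly grows step by step until something interrupts it
```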
# # #
El Niño and La Niña events are the dominant mode of natural ocean-atmosphere variability on Earth. They have long-term impacts on temperature and precipitation patterns globally. Those long-term impacts have been one of the primary focuses of my research over the past five years. An introduction to those findings is presented in the illustrated essay “The Manmade Global Warming Challenge,” and they are discussed in minute detail in my ebook Who Turned on the Heat?
That aside, climate models fail to properly simulate the most basic processes that drive El Niño and La Niña events. These basic failings were presented in Bellenger, et al. (2013): “ENSO Representation in Climate Models: From CMIP3 to CMIP5.” Preprint copy is here. The section titled “Discussion and Perspectives” begins:
Much development work for modeling group is still needed in order to correctly represent ENSO, its basic characteristics (amplitude, evolution, timescale, seasonal phaselock…) and fundamental processes such as the Bjerknes and surface fluxes feedbacks.
Bellenger, et al. (2013) was, in many respects, a follow-up paper to Guilyardi, et al. (2009) “Understanding El Niño in Ocean-Atmosphere General Circulation Models: Progress and Challenges.” Guilyardi, et al. (2009) is a detailed overview of the many problems climate models have in their attempts to simulate El Niños and La Niñas. The authors of that study cite more than 100 other papers. The following is the most revealing statement in Guilyardi, et al. (2009):
Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power, et al. 2006).
In other words, because climate models cannot accurately simulate El Niño and La Niña processes, the authors of that paper have little confidence in climate model projections of regional climate or of extreme events.
7. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY DO CLIMATE MODELS SIMULATE GLOBAL SURFACE TEMPERATURES SO POORLY THAT THEY FAILED TO ANTICIPATE AND PREDICT THE HALT IN GLOBAL WARMING?
Background Information: There are obvious answers to this question. The first, already discussed, is that climate models cannot simulate the multidecadal variations in sea surface temperatures or the coupled ocean-atmosphere processes that drive them (the Atlantic Multidecadal Oscillation and the decadal and multidecadal variations in the strength, frequency and duration of El Niño and La Niña events). If climate models had included those multidecadal variations in the ocean processes that contribute to, or halt, the warming of global surface temperatures, then the projected warming would have to be reduced, thereby minimizing any urgency to respond to the mostly naturally occurring global warming. We presented and discussed this in the post Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?
There is another major flaw in the climate models that the climate science community has avoided discussing: the modelers have to double the observed rate of warming of the global sea surfaces over the past three-plus decades in order to have the modeled warming of global land surfaces fall into line with observations. This was presented and discussed in Models Fail: Land versus Sea Surface Warming Rates. Also see the graphs here.
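The land-versus-sea comparison referred to above boils down to computing land and sea surface warming trends and their ratio, for models and for observations. The sketch below shows that arithmetic with synthetic placeholder series whose round-number trends are assumptions for illustration only, not the actual observed or modeled values.

```python
# The land-versus-sea comparison boils down to computing linear warming
# trends for land and sea surface series and taking their ratio, for
# observations and for models. The four series here are synthetic
# placeholders with round-number trends assumed only to show the arithmetic.
import numpy as np

years = np.arange(1982, 2014)

def trend_per_decade(series):
    return np.polyfit(years, series, 1)[0] * 10.0    # slope converted to degrees per decade

obs_sst = 0.010 * (years - 1982)     # hypothetical observed sea surface warming
obs_land = 0.025 * (years - 1982)    # hypothetical observed land surface warming
mod_sst = 0.020 * (years - 1982)     # hypothetical modeled sea surface warming (twice observed)
mod_land = 0.026 * (years - 1982)    # hypothetical modeled land surface warming

print("observed land/sea trend ratio:", round(trend_per_decade(obs_land) / trend_per_decade(obs_sst), 2))
print("modeled land/sea trend ratio: ", round(trend_per_decade(mod_land) / trend_per_decade(mod_sst), 2))
```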
# # #
There are two papers that serve as references for the failure of climate models to simulate the halt in the warming of global surface temperatures. The first is Von Storch, et al. (2013) “Can Climate Models Explain the Recent Stagnation in Global Warming?” The one-word answer to the title question of their paper is, “No.” They stated:
However, for the 15-year trend interval corresponding to the latest observation period 1998-2012, only 2% of the 62 CMIP5 and less than 1% of the 189 CMIP3 trend computations are as low as or lower than the observed trend. Applying the standard 5% statistical critical value, we conclude that the model projections are inconsistent with the recent observed global warming over the period 1998-2012.
According to Von Storch, et al. (2013), both generations of models (CMIP3 and CMIP5) cannot explain the recent slowdown in surface warming. The models show continued surface warming, while observations do not.
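The comparison behind Von Storch, et al.'s “only 2%” figure amounts to asking what fraction of an ensemble of simulated 15-year trends is as low as, or lower than, the observed trend. The sketch below shows that calculation with invented numbers standing in for the observed and simulated trends.

```python
# The comparison behind the "only 2%" figure: what fraction of an ensemble of
# simulated 15-year trends is as low as, or lower than, the observed trend?
# All numbers below are invented stand-ins, not the actual CMIP trends.
import numpy as np

observed_trend = 0.04                              # K per decade over 1998-2012, hypothetical
rng = np.random.default_rng(4)
simulated_trends = rng.normal(0.21, 0.10, 62)      # 62 hypothetical CMIP5-style trend computations

fraction = np.mean(simulated_trends <= observed_trend)
print(f"{100 * fraction:.1f}% of simulated trends are as low as or lower than the observed trend")
```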
The second paper is Fyfe et al. (2013) “Overestimated global warming over the past 20 years.” Fyfe et al. (2013) write:
The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.
Bottom line: If the climate models cannot be used to explain the current halt in the warming of global surfaces, then they cannot be used to explain the warming that occurred from the mid-1970s to the late 1990s.
CLOSING
Climate models are portrayed by the media and by political entities like the IPCC as splendid tools for forecasting future climate. The climate science community, however, is well aware that climate models are deeply flawed. Rarely, if ever, are the models’ chronic problems presented to the public and policymakers. In this post, I cited scientific studies that showed that the models are flawed at simulating, for example:
- The coupled ocean-atmosphere processes of El Niño and La Niña, the world’s largest drivers of global temperature and precipitation.
- Responses to volcanic eruptions, sometimes powerful enough to counteract the effects of even strong El Niño events.
- Sea surface temperatures
- Precipitation—globally or regionally
- Influence of El Niño events on hurricanes
- The coupled ocean-atmosphere processes associated with decadal and multidecadal variations in sea surface temperatures, which strongly impact land surface temperatures and precipitation on those timescales.
I’ll close with a quote from Dr. Judith Curry, who is the chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. Dr. Curry is also the proprietor of the very popular blog Climate Etc. Her recent post, Climate Model Simulations of the AMO, discusses two papers, both of which were also discussed above: Ruiz-Barradas, et al. (2013) “The Atlantic Multidecadal Oscillation in Twentieth Century Climate Simulations: Uneven Progress from CMIP3 to CMIP5” and Von Storch, et al. (2013) “Can Climate Models Explain the Recent Stagnation in Global Warming?” Dr. Curry concludes her blog post with the following:
Fitness for purpose?
While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multidecadal timescales. In view of the climate model underestimation of natural internal variability on multidecadal time scales and failure to simulate the recent 15+ years ‘pause’, the issue of fitness for purpose of climate models for detection and attribution on these time scales should be seriously questioned. And these deficiencies should be included in the ‘expert judgment’ on the confidence levels associated with the IPCC’s statements on attribution.
That is, to paraphrase Dr. Curry, it is highly questionable whether climate models are able to tell whether any given indicator of climate change is due to natural or to human causes on decadal to multidecadal timescales.
Apparently, the climate science community, with its much-trumpeted numerical models, is no closer than it was decades ago to being able to detect any human fingerprint in global warming or climate change.
A FINAL QUESTION
In light of all the climate model failings outlined above, there is a final question that policymakers should be asking. But this is a question they need to ask themselves about the climate scientists and agencies they fund. It’s pretty obvious. Should they continue to throw good money at an effort that has provided zero results?