The history of climate models

This long read was originally published in the March 2024 Cosmos print magazine.

From modest beginnings, weather and climate models have come to dominate our days. Drew Rooke takes a tour across nearly two centuries to report on the scientists from a range of disciplines who have brought us to an extraordinary modelling moment.

The computer screen radiates line after line of strange text: black letters, numbers and symbols on a light grey background. Scrolling through, it seems to be never-ending and – at least for me – mostly unintelligible; I’m not fluent in Fortran, the programming language it’s written in. But there’s a plain English preface to this gobbledegook text.

“Many people have contributed to the development of this code,” it reads. “This is a collective effort and although individual contributions are appreciated, individual authorship is not indicated.”

I’m visiting the Climate Change Research Centre (CCRC) at the University of New South Wales (UNSW), Sydney, and what I’m looking at is the backend of a modern-day climate model – specifically, the University of Victoria Earth System Climate Model (UVic ESCM).

This is just one of many models that scientists use nowadays to peer into Earth’s past and make projections about its future. For example: that our planet is likely to warm 1.5°C above pre-industrial levels in the “near-term” and that “every increment of global warming will intensify multiple and concurrent hazards”, as the IPCC Sixth Assessment report says.

Like many people, I strongly believe these projections. But my belief is based on little more than blind faith in the models that produce them. In fact, I know so little of the inner workings of climate modelling that it seems to me like something of a dark art – a form of planetary-scale fortune telling. The reason I’m here is to break the spell; to see for myself exactly what is behind the curtain of a contemporary climate model.

My guide is Katrin Meissner, director of the CCRC and a professor at UNSW, who melds a brown bob, warm smile and more than 25 years’ experience in climate modelling. She first started developing models while completing her PhD in the mid-1990s at the Alfred Wegener Institute for Polar and Marine Research in Germany, and initially “hated” the computer coding involved. Over time, it grew on her and she came to appreciate its beauty.

“You have to use your brain to try to work out why things don’t work; it’s like a puzzle you have to solve,” she says.

In 2000, Meissner accepted a postdoc position in the School of Earth and Ocean Sciences at the University of Victoria, Canada, where her coding experience and knowledge of the Earth’s climate system combined to help her build the very model I’m looking at now.


In climatologists’ parlance, the UVic ESCM is a model of “intermediate complexity”: it couples key components of Earth’s climate system – the atmosphere, ocean, sea ice and land surface – but at a deliberately low resolution, so that long-term simulations of large-scale processes can run efficiently on the computer hardware that drives it.

Other types of climate models include more basic zero-dimensional “energy balance models”, which have a very coarse resolution and only simulate incoming and outgoing radiation at the Earth’s surface. Much more sophisticated “general circulation models” have a much finer resolution and simulate the whole climate system on a global scale in three dimensions.
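To get a feel for the simplest end of that spectrum, here is a minimal sketch of a zero-dimensional energy balance model, written in Python for readability (the real models are written in languages such as Fortran). It balances absorbed sunlight against emitted thermal radiation to produce a single global-mean temperature; the constants are standard textbook values, and the crude “greenhouse factor” is an illustrative assumption, not how any real model treats the atmosphere.

```python
# Zero-dimensional energy balance model: one temperature for the whole planet.
# Absorbed sunlight is balanced against emitted thermal (longwave) radiation.

SOLAR_CONSTANT = 1361.0   # incoming solar radiation at the top of the atmosphere, W/m^2
ALBEDO = 0.3              # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(greenhouse_factor=1.0):
    """Temperature (K) at which absorbed sunlight equals emitted radiation.

    greenhouse_factor > 1 crudely mimics greenhouse gases trapping part of the
    outgoing radiation; it is an illustrative assumption, not a real scheme.
    """
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4   # averaged over the whole sphere
    return (greenhouse_factor * absorbed / SIGMA) ** 0.25

print(f"Without a greenhouse effect: {equilibrium_temperature():.0f} K")         # ~255 K
print(f"With a crude greenhouse factor: {equilibrium_temperature(1.64):.0f} K")  # ~288 K, near the observed mean
```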

Meissner describes the use of any one of these models as “a very dry scientific approach” to studying the Earth’s climate, which mirrors the mindset of their makers – a point often overlooked by their detractors. “We are actual physicists and mathematicians who just want to understand the science.”

Earth Capture

Earth’s climate is best described as chaos: a system that’s highly responsive to the smallest perturbations – or, to speak more scientifically, one that exhibits “a sensitive dependence on initial conditions”. As pioneering meteorologist Edward Lorenz, who made this profound phenomenon famous, said in 1972, it’s possible that even the flap of a single butterfly’s wings in one location can set off a cascade of amplifying effects that ultimately leads to a tornado forming far away in the future – or “can equally well be instrumental in preventing a tornado”.
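Lorenz’s point is easy to reproduce numerically. The sketch below, illustrative only, integrates his famous 1963 three-variable system twice, with starting points that differ by one part in a million, and watches the two runs drift apart: the “sensitive dependence on initial conditions” he described.

```python
# Lorenz (1963) system: a toy model of atmospheric convection that exhibits chaos.
# Two runs with nearly identical starting points eventually diverge completely.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step using simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

run_a = (1.0, 1.0, 1.0)
run_b = (1.000001, 1.0, 1.0)   # a one-in-a-million nudge: the "butterfly"

for step in range(1, 3001):
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
    if step % 1000 == 0:
        gap = abs(run_a[0] - run_b[0])
        print(f"t = {step * 0.01:4.0f}: difference in x = {gap:.6f}")
```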

But even though this system is inherently distempered, it is also governed by fundamental physical laws, such as the conservation of mass, energy and momentum – which is why Gavin Schmidt, director of the NASA Goddard Institute for Space Studies (GISS), says that developing a virtual representation of it is: “Quite straightforward. Well, compared to general relativity, it’s straightforward.”

Long before he became a world-leading climate scientist, Schmidt “bummed around” in Australia for a year, “working in restaurants, picking grapes, running a youth hostel in Perth. It was a lot of fun, but I wasn’t doing anything terribly intellectual and I got a little bored.”

He cut his modelling teeth shortly after he completed his PhD in applied mathematics in 1994 at University College London. “A friend of my supervisor was looking for people researching climate, constructing models of the thermohaline circulation in the ocean. And I said, ‘Oh, that’s absolutely fascinating.’ But I had no clue what that meant.

“I remember putting the phone down and then asking my supervisor, ‘What the hell is the thermohaline circulation?’ ”

That gap in knowledge notwithstanding, Schmidt took the job at McGill University in Canada, which turned into “a big crash course in climate writ large”. Then, in 1996, he moved to New York – “because I had a girlfriend there” – and began at GISS, developing climate models. These models are based on the same laws of physics that govern the real Earth. In the models, those laws are represented mathematically by the “primitive equations” first formulated by Norwegian physicist and meteorologist Vilhelm Bjerknes at the turn of the 20th century, and later encapsulated in code.

Typically containing enough code to fill 18,000 pages, modern global climate models are actually a composite of separate models of individual components of Earth’s climate, such as the atmosphere, ocean, land surface and sea ice. These are carefully combined – an intricate process that Simon Marsland, the leader of the National Environmental Science Program Climate Systems Hub, likens to “doing brain surgery on many different brains at the same time and connecting wires in between them”.

They work by dividing the entire globe into a three-dimensional grid of thousands of interconnected cells. “Kind of like a chess board,” says Marsland, who is based at the CSIRO.

Clockwise from top: Vilhelm Bjerknes, Eunice Foote, Svante Arrhenius, Lewis Fry Richardson and Edward Lorenz. Credit: Clockwise from top: University of Bergen. Wikipedia. Hulton Archive / Getty Images. Kurt Hutton / Getty Images. UCAR.

This gigantic grid’s individual cells represent segments of the atmosphere, ocean, land surface or cryosphere. Their size defines a model’s spatial resolution and therefore its computational cost. Nowadays, cells typically span 50–200 kilometres horizontally, stacked in up to 90 layers in the atmosphere and 60 in the ocean.
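Some back-of-the-envelope arithmetic shows why that resolution drives computational cost. The sketch below is a rough estimate only, and it assumes square cells and the layer counts quoted above; real model grids are more sophisticated. Halving the horizontal spacing roughly quadruples the number of columns, before you even account for the shorter time steps a finer grid demands.

```python
# Rough count of grid cells in a global model at a given horizontal spacing.
# Illustrative only: real models do not use perfectly square cells.

EARTH_SURFACE_AREA_KM2 = 5.1e8   # total surface area of the Earth, km^2

def cell_count(spacing_km, atmosphere_levels=90, ocean_levels=60):
    """Approximate number of 3-D cells for a given horizontal grid spacing."""
    columns = EARTH_SURFACE_AREA_KM2 / spacing_km**2
    ocean_columns = columns * 0.7              # roughly 70% of the surface is ocean
    return columns * atmosphere_levels + ocean_columns * ocean_levels

for spacing in (200, 100, 50):
    print(f"{spacing} km spacing: roughly {cell_count(spacing):,.0f} cells")
```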

Scientists “initialise” each cell – that is, they encode its starting temperature, air pressure, humidity and other important physical properties, using the trove of observational climate data gathered over the years by ships, satellites and atmospheric balloons. But this is an imperfect art; many key processes are either not fully understood or act on sub-grid scales, such as the formation of clouds, and these therefore have to be approximated.

After “external forcings” – like solar radiation and greenhouse gas emissions – have been included, the model can then begin the protracted process of solving the huge number of equations coded into every cell at discrete time intervals, typically one hour.

Inevitably, errors appear. But once the model has been carefully refined and “spun up” – that is, run over an extended period of time – something remarkable happens. A realistic, albeit simplified, simulation of the Earth emerges, complete with what Schmidt calls “emergent properties” that haven’t even been explicitly coded, such as the Hadley circulation (air rising near the equator and flowing polewards) or large storms in the mid-latitudes.

“We’re not coding those properties; they just emerge out of the interaction of the individual cells,” says Schmidt. “Yet, they match what the real world does. And you go, ‘Well, that’s incredible! Who would have guessed that this would have worked?’ ”

A model needs to reach this state of equilibrium for scientists to have confidence to start using it to simulate, for example, the long-term consequences of a huge volcanic eruption, or a dramatic spike in the level of carbon dioxide in the atmosphere.
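The sequence just described – initialise, apply the forcings, step forward hour by hour, and spin up towards equilibrium – can be caricatured with the simplest possible “model”: a single global temperature nudged forward in one-hour steps until its energy budget balances. Every number below is an illustrative assumption (the heat capacity, in particular, is a stand-in for an ocean mixed layer); it is a sketch of the idea, not of how a general circulation model is coded.

```python
# Toy "spin-up": step a single global temperature forward in one-hour
# increments until absorbed and emitted energy come into balance.

SOLAR_CONSTANT = 1361.0   # W/m^2
ALBEDO = 0.3
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.61         # crude stand-in for the greenhouse effect (assumed)
HEAT_CAPACITY = 4.0e8     # J/m^2/K, roughly an ocean mixed layer (assumed)
DT = 3600.0               # one-hour time step, in seconds

temperature = 230.0       # deliberately poor initial guess, in kelvin
steps = 0
while True:
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4          # external forcing
    emitted = EMISSIVITY * SIGMA * temperature**4
    imbalance = absorbed - emitted                        # W/m^2
    temperature += imbalance * DT / HEAT_CAPACITY         # hourly update
    steps += 1
    if abs(imbalance) < 0.01:                             # effectively spun up
        break

print(f"Spun up after {steps:,} hourly steps ({steps / (24 * 365):.0f} model years)")
print(f"Equilibrium temperature: {temperature:.1f} K")
```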

But in order to account for the inherent chaos in Earth’s climate system, they run multiple simulations with slight changes in the initial conditions to get a comprehensive understanding of the probability of different outcomes occurring. This process, in the case of the most sophisticated models looking far into the future, can take up to a year to complete on the most powerful supercomputer. Meissner best sums up these models: “They’re beasts.”
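The logic of these ensembles can be illustrated with any chaotic system. The sketch below is an analogy rather than a climate model: it runs many copies of the chaotic logistic map from minutely different starting points. Individual runs end up wildly different, but the statistics across the ensemble are stable, which is why projections are framed as probabilities rather than as single trajectories.

```python
# The ensemble idea in miniature: many runs of a chaotic system started from
# slightly perturbed initial conditions. Outcomes differ; statistics are stable.
import statistics

def chaotic_run(x0, steps=200, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

ensemble = [chaotic_run(0.3 + i * 1e-9) for i in range(100)]   # tiny perturbations

print("First three members:", [round(x, 3) for x in ensemble[:3]])
print(f"Ensemble mean:        {statistics.mean(ensemble):.2f}")
print(f"Ensemble spread (sd): {statistics.stdev(ensemble):.2f}")
```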

Laying the foundations

Climate-modelling beasts are the culmination of more than 100 years of evolution, the roots of which tap into the work of Swedish chemist Svante Arrhenius.

In 1896 – seven years before he won the Nobel Prize in Chemistry for his work on the conductivities of electrolytes – Arrhenius published the seminal climate science paper “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground”.

It was well known by then that the carbon dioxide – or, as it was more commonly known at the time, carbonic acid – in Earth’s atmosphere trapped heat radiating from the planet’s surface and thus had a potent warming effect on its temperature.


This had first been demonstrated by an unpaid American scientist named Eunice Foote in 1856; it took more than 150 years for the scientific community to recognise her achievement.

In a series of simple experiments at her home in Seneca Falls, New York, Foote measured the temperature of small glass cylinders filled with different gases and placed in the sun and shade. Most notably, she found that the cylinders containing carbon dioxide not only warmed the most in sunlight, but also took the longest to cool on being removed from it.

As she wrote in a short paper titled “Circumstances Affecting the Heat of the Sun’s Rays”: “An atmosphere of that gas would give to our Earth a high temperature; and if as some suppose, at one period of its history the air had mixed with it a larger proportion than at present, an increased temperature … must have necessarily resulted.”

Arrhenius wanted to find out just how much warmer – or colder – the Earth would be if the proportion of carbon dioxide in its atmosphere drastically changed. In his paper he presented the results of a very simple theoretical model he had used to come up with an answer.

His model divided the Earth into 13 latitudinal sections from 70°N to 60°S. Armed with only pencil and paper, and grieving the end of a brief marriage to one of his former students, Arrhenius then calculated the temperature of each section under six different carbon dioxide scenarios, using the best available measurements of how much heat radiation was absorbed by atmospheric carbon dioxide and water vapour.


These “tedious calculations”, as he described them, took him the better part of a year to complete. As well as demonstrating that a significant reduction in atmospheric carbon dioxide could trigger an ice age, they showed that a doubling of this potent greenhouse gas would raise Earth’s average temperature by 5–6°C – an outcome he hoped would materialise. As he told the audience at a lecture at Stockholms Högskola (now Stockholm University) in 1896, global warming would permit “our descendants … to live under a warmer sky and in a less harsh environment than we were granted”.
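Arrhenius ground through his 13 latitude bands by hand; a modern back-of-the-envelope version of the same question fits in a few lines. The sketch below uses the widely cited simplified expression for carbon dioxide’s radiative forcing (roughly 5.35 times the natural logarithm of the concentration ratio, in watts per square metre) together with an assumed climate sensitivity parameter – both conveniences Arrhenius did not have, and not his method – to estimate the warming from a doubling of CO2.

```python
# Modern back-of-the-envelope estimate of the warming from a change in CO2.
# Uses the simplified logarithmic forcing expression (~5.35 * ln(C/C0) W/m^2)
# and an assumed climate sensitivity parameter; illustrative, not Arrhenius's method.
import math

FORCING_COEFFICIENT = 5.35   # W/m^2 per natural log of the CO2 concentration ratio
SENSITIVITY = 0.8            # K of equilibrium warming per W/m^2 of forcing (assumed)

def warming(co2_ratio):
    """Rough equilibrium temperature change (deg C) for a given CO2 concentration ratio."""
    forcing = FORCING_COEFFICIENT * math.log(co2_ratio)
    return SENSITIVITY * forcing

print(f"CO2 doubled: about {warming(2.0):+.1f} C")   # roughly +3 C
print(f"CO2 halved:  about {warming(0.5):+.1f} C")   # a comparable cooling
```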

Arrhenius’s paper was the first to quantify the contribution of carbon dioxide to the greenhouse effect – and even though it was crude, it laid the foundations for much of the climate modelling work that followed. As NASA’s Gavin Schmidt says: “Our understanding of the climate is subtler now. We have more data. But scientists at the turn of the 20th century – they were mostly correct about how things worked and how things would change. They were extremely insightful. They had it all worked out.”

Time to complexify

A key step in the evolution of climate models occurred in 1922, when British mathematician and meteorologist Lewis Fry Richardson published Weather Prediction by Numerical Process. In this now-classic book, Richardson described a radical new scheme for weather forecasting: divide the globe into a three-dimensional grid and for each grid cell, apply simplified versions of the primitive equations to deduce its weather.


This was a far more scientific method of forecasting the weather than looking for similar patterns in records and extrapolating forwards, as was conventionally done, but it also consumed much more time. In fact, it took Richardson six weeks performing calculations by hand just to produce an eight-hour forecast for one location (which, due to issues with his data, was wildly inaccurate).

Richardson imagined a fantastical solution to this problem: a “forecast factory”. A large circular theatre, it would have a gridded map of the world painted on its wall. Inside, 64,000 mathematicians – or human computers – each armed with a calculator would solve the equations in their assigned cell and develop a weather forecast in real time. In the centre of the factory, the man in charge of the whole operation would sit on a large pulpit, coordinating the human computers like an orchestra conductor with coloured signal lights.

Richardson’s fantasy became something of a reality in 1950, when a team at Princeton University in the US adopted his grid-cell scheme and produced the world’s first computerised, regional weather forecast using the first general-purpose electronic computer, known as ENIAC. But scientists knew that the same scheme could be used to construct a three-dimensional model that simulated not just regional weather, but the global climate as well.

With this goal in mind, the United States Weather Bureau established the General Circulation Research Section in 1955, under the direction of US meteorologist Joseph Smagorinsky. Having worked as a weather observer during World War II, Smagorinsky was eager to recruit the finest scientific minds from around the world.

In his 1922 book, Lewis Fry Richardson described his fantastical vision for a “forecast factory” to predict the weather, filled with 64,000 mathematicians acting as human computers. This 1986 rendition by Irish artist Stephen Conlin shows a director of operations standing in the central tower, a pneumatic messaging system through the factory and even a large hemispheric bowl on a turntable for geophysical experiments. Credit: Stephen Conlin, Courtesy University College Dublin

There was one young scientist from Japan in whom he sensed prodigious potential: Syukuro Manabe. Born in 1931 on the island of Shikoku, Manabe had survived his wartime upbringing to become one of the leading students of meteorological physics at the University of Tokyo – and upon joining Smagorinsky’s team in 1958, he was tasked with delivering on the goals the section had been established to achieve.

Manabe knew this wouldn’t be easy, but he had a powerful new tool in his kit: an IBM 7030, also known as ‘Stretch’ – the world’s first supercomputer. Larger than a small house and weighing roughly 35 tonnes, it had initially been developed (at the Pentagon’s request) for modelling the effects of nuclear weaponry. It could perform more than 100 billion calculations in about a day, but this wasn’t enough computing power to run a three-dimensional climate model.

So with his colleague Richard Wetherald, Manabe built a simplified one-dimensional version that incorporated key physical processes such as convection and radiation to simulate the transfer of heat throughout a single column of air 40 kilometres high in a number of different carbon dioxide scenarios.

In May 1967 – by which time the General Circulation Research Section had been renamed the Geophysical Fluid Dynamics Laboratory (GFDL) – Manabe and Wetherald published their findings. Their paper “Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity” didn’t sound particularly exciting. But it proved to be a game changer; in fact, in a 2015 survey, the world’s leading climate scientists voted it the most influential climate science paper of all time.


There are numerous reasons for this. For one, the paper marked the first time that the fundamental elements of Earth’s climate had been represented in a computer model. But it also provided the first reliable prediction of what doubling carbon dioxide would do to overall global temperature. As Manabe and Wetherald wrote: “According to our estimate, a doubling of the CO2 content in the atmosphere has the effect of raising the temperature of the atmosphere (whose relative humidity is fixed) by about 2°C.”

Manabe would ultimately go on to win the Nobel Prize in Physics in 2021 for his contribution to the physical modelling of Earth’s climate. In 1969, he and Kirk Bryan – an oceanographer who also worked at the GFDL – produced the first-ever coupled climate model. It combined Manabe and Wetherald’s famous atmosphere model with one Bryan had previously developed of the ocean, and although it was highly simplified and took roughly 1,200 hours to run on a UNIVAC 1108 computer, it successfully produced many realistic features of Earth’s climate.

By 1975, computing technology had advanced enough for Manabe and Bryan to construct a more detailed version of their coupled model prototype. With a horizontal grid spacing of roughly 500 km, it included “realistic rather than idealised topography” and successfully simulated “some of the basic features of the actual climate”. But it also produced many “unrealistic features”, underscoring the need to increase climate model resolution in the future.

This became possible towards the end of the 20th century, as computer processing power increased dramatically, making ever larger calculations feasible. By 1995, for example, the typical horizontal resolution of global climate models was 250km; six years later, it was 180km.

At the same time as they were increasing climate-model resolution, scientists were adding more and more components. By the mid-2000s, some models were integrating not only the atmosphere, land surface, ocean and sea ice, but also aerosols, land ice and the carbon cycle.

But even though climate models have become vastly more complex over the decades, the physics and mathematics at their core have, as the CSIRO’s Simon Marsland says, “essentially remained the same”.

The proof is in the volcano

On 15 June 1991, after months of intensifying seismic activity, Mount Pinatubo – located on Luzon, the Philippines’ largest and most populous island – exploded. As rocks and boiling mudflows surged down the volcano’s flanks, a toxic ash cloud soared roughly 40km into the sky and, over time, completed several laps around the globe.

The eruption – the second largest of the 20th century – killed nearly 900 people and left roughly 10,000 homeless. It also released 17 megatons of sulphur dioxide into the atmosphere, which drew the attention of climate scientists.

It had long been known that volcanic aerosols reflect sunlight back to space and thus temporarily reduce Earth’s temperature, and almost immediately after the eruption climate scientist James Hansen and several of his colleagues at NASA’s GISS started running experiments with their climate model to estimate how much – and for how long – it would cool the planet.

In January 1992, they published their findings. The eruption, they wrote, was “an acid test of climate models”, as the “global shield of stratospheric aerosols” would have a cooling effect strong enough to be readily simulated.

According to their model, “intense aerosol cooling” would begin late in 1991 and maximise late in 1992. By mid 1992, it would be large enough to “even overwhelm global warming associated with an El Niño that appears to be developing”. Critically, their model also showed that by the late 1990s, the cooling effect would wear off and there would be “a return to record warm levels”.

As it turned out, this is exactly what happened. But this is far from the only evidence of the reliability of climate models.

Clockwise from top: Syukuro Manabe with Joseph Smagorinsky, Mount Pinatubo eruption, Syukuro Manabe, IBM 7030 Stretch supercomputer, and James Hansen testifying to the US Senate in 1988 about the greenhouse effect. Credit: Clockwise from top: GFDL. ARLAN NAEG / Getty Images. GFDL. IBM. NASA Image Collection / Alamy Stock Photo.

In 2019, a team led by climate scientist Zeke Hausfather published an extensive performance evaluation of the climate models used between 1970 and 2007 to project future global mean surface temperature changes. After comparing predictions with observational records, they found that “climate models published over the past five decades were generally quite accurate in predicting global warming in the years after publication, particularly when accounting for differences between modelled and actual changes in atmospheric CO2 and other climate drivers”.

The researchers added: “We find no evidence that the climate models evaluated in this paper have systematically overestimated or underestimated warming over their projection period. The projection skill of the 1970s models is particularly impressive given the limited observational evidence of warming at the time.”

In fact, one such model, developed by Manabe and Wetherald, not only produced a robust estimate of global warming, it also accurately forecast how increased carbon dioxide would affect the distribution of temperature in the atmosphere. As they wrote in a 1975 paper describing the model: “It is shown that the CO2 increase raises the temperature of the model troposphere, whereas it lowers that of the model stratosphere.”

Stratospheric cooling is now recognised as the fingerprint of global warming. But as Schmidt explains: “There was no reason to think that was right other than the mathematics suggested it. Remember, this was before anybody had any temperature measurements in the stratosphere, let alone trends.

“But now we’ve had 40 odd years of satellite information from the stratosphere and troposphere and surface, and we can see that what’s happened is more or less exactly what was predicted.”

This is the basis of Schmidt’s confidence in these models – “that from the beginning, they have made predictions that have come true”.


But climate models – even the most advanced ones – do have limitations. Many important small-scale processes are not accurately represented in them, due to ongoing computational constraints which prevent finer resolutions. One such process is the formation of clouds, which play a key role in regulating the planet’s temperature.

Models are also inevitably biased – they might be either slightly too cold or too warm, for example. Over time, these biases, no matter how small, will amplify and can cause a model to drastically move away from its initial conditions – a phenomenon referred to as “drift”.
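A toy calculation shows how quickly even a tiny bias matters: an error far too small to notice in any single hourly step accumulates into a sizeable excursion over a century-long run. The bias used below is invented purely to illustrate the scale of the problem.

```python
# Toy illustration of model drift: a tiny systematic bias applied at every
# hourly time step accumulates over a long simulation. The bias is invented.

HOURS_PER_YEAR = 24 * 365
BIAS_PER_STEP = 1e-6   # degrees of spurious warming added each hourly step (assumed)

for years in (1, 10, 100):
    drift = BIAS_PER_STEP * HOURS_PER_YEAR * years
    print(f"After {years:3d} years: drift of {drift:.3f} degrees")
```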

But even as computers become more powerful, allowing for finer resolution, and scientists reduce the biases and drift in models, there is one limitation that can never be overcome. The Earth remains inherently chaotic, containing an infinite multitude of interacting conditions that range from the microscopic to planetary in size; it’s impossible to build a perfect replica of it.

“We’re never going to have a perfect model; our models are always going to be wrong,” Schmidt says, echoing every one of his peers interviewed for this story. “But they are demonstrably useful. It’s kind of incredible: even though they are imperfect, incomplete and still have bugs in them, they nonetheless make useful, good predictions. And those predictions have been getting better over time.”

And even though the work that’s entailed in both developing and using these models can be, according to Marsland, “very slow”, “very frustrating”, and “prone to many failures”, it has also been unpredictably life-changing for him. “I think the more years you work in this field, the more you get exposed to processes happening in the world that you would have no idea about otherwise. Your eyes are continually being opened and you’re always being reminded that the world is a beautiful place and that we have a responsibility to make sure it remains a beautiful place for the people who come after us.”


