Anthropogenic Climate Change (ACC), man’s influence on the climate, has been with us for millennia. Ever since Homo sapiens sapiens started farming, there has been a consequent effect on the climate. Generally this has been limited to the local area where forests have been cleared, leading to raised summer temperatures and stronger winds. However, since the Industrial Revolution, we have been adding more and more CO2 into the atmosphere, changing the climate of the whole world rather than small areas of it.
Anthropogenic Global Warming (AGW) is what I’d call a subset of ACC and refers solely to the effect increased CO2 has had on the Earth and its atmosphere.
Personally, I think it was in the late ’80s that I became aware of the concerns over the effect that CO2 was having on our climate, but it wasn’t until the beginning of this century that I really began to be concerned about it. This was when I saw maps of Arctic Ice conditions and saw how they’d changed since I’d been producing ice charts of the area in the late ’60s and early ’70s. By about 2005, I’d come to the conclusion that, judging by the accelerating decline in ice extent, the Arctic would be virtually ice-free in summer by 2020. The official line at the time was that it wouldn’t happen until 2100. At the moment (March 2018), it looks as if I may have been too pessimistic but we’ll see what happens.
The added worry I had with this is that when I’d been studying Arctic Ice in the ’60s, I’d read that if the ice were to melt completely during one summer, it was possible that it would not return during the following winter due to fiercer Arctic storms and resultant turbulent mixing of the Ocean destroying the surface fresh-water layer. It shocks me that when I look at the Barents Sea now, in the Arctic mid-winter, it reminds me of some of the light ice conditions I occasionally saw in the mid-summer nearly fifty years ago.
A Brief History:
I’m providing this item because some people seem to think that the science behind AGW theory was invented as a political exercise after global warming became evident. This history shows that attitude to be sheer bunkum.
For more detail than this, see History of Climate-change Science
1824: Joseph Fourier found that the Earth’s atmosphere easily allowed light from the sun to be transmitted to the Earth’s surface but the consequent outgoing infra-red radiation could not get through so easily and so the Earth’s surface became warmer.
1856: Work by Eunice Newton Foote was published explaining how the warming of the atmosphere was enhanced by carbon dioxide and that larger amounts of the gas would lead to a warmer planet.
1864: John Tyndall added methane and water vapour to carbon dioxide as gases that block outgoing infra-red radiation.
1896: Svante Arrhenius published the paper On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground in which he calculated how changes in the CO2 (Carbonic acid) content of the atmosphere affected surface temperatures. In this, he was trying to explain the ice ages rather than warn of global warming. However, he did say that the “temperature of the Arctic regions would rise about 8 degrees or 9 degrees Celsius, if the carbonic acid increased 2.5 to 3 times its present value.” About eighty years later, early computer forecasts raised this figure to more than 10 DegC for a doubling of CO2.
1938: G S Callendar tried to resurrect Arrhenius’s theory but his words mostly fell on stony ground. His assessment of the effect of a doubling of the amount of atmospheric carbon dioxide was that it would lead to a 2 DegC rise in global temperature. This calculation, of course, was made without the benefit of computer models.
1950: CEP Brooks wrote an article in Weather relating to reduction in size of glaciers during the previous 100 years and particularly in the 20th century. In this he argued that change in solar activity could not be blamed for the unusual warming of the Arctic. He went on to explain why the expected weakening of upper winds due to the warming of the Arctic had not occurred and why they had, in fact, increased. He also pointed out that strengthened upper winds lead to large, slow-moving or quasi-stationary upper troughs and ridges. (This is contrary to the latest fad which predicts that a weaker jet-stream will behave in this way.)
1958: The USS Skate surfaced in a polynya in the Arctic Ocean next to a research station on the floating ice that had been set up to investigate the reported reduction in ice extent and volume during the first half of the century. It’s likely that a photograph from this mid-summer surfacing was later used by climate-change deniers as supposed evidence that, when the Skate surfaced at the Pole in March the following year, it was surrounded by open water.
1972: John Sawyer forecast in his article Man-made Carbon Dioxide and the “Greenhouse” Effect that, due to increasing CO2 levels in the atmosphere, the global temperature would rise by 0.6C by the end of the century.
1975: Manabe and Wetherald published The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model in which a 2C rise in temperature for a doubling of CO2 was predicted from a computer climate model, the same figure as Callendar had arrived at before the war. The model also showed a greater than 10C warming for the Arctic and that, above 17-20 km, the atmosphere would cool. Here’s the relevant diagram from the above paper:
1975: The Global Atmospheric Research Project (GARP) published Understanding Climatic Change: A Program for Action which contained assessments of past and possible future changes in climate, drawing on all the knowledge available at that time, including the latest predictions from increasing CO2 and analyses of cyclical changes in climate over many millennia.
I’ll stop with the history at this point as, although climatologists were becoming more convinced that CO2 should be causing the Earth to warm, they hadn’t noticed that it was already happening. What I intend now is to show the predictions made at about this time, based either on the effects expected from increased CO2 or from extensive work on natural cycles in the climate. As for the latter, there’s no point in looking back from where we are now for cycles as it’s too easy to get distracted by the most recent data. What is needed is to compare the forecasts made forty years ago with what actually happened since then.
The following graph uses some data from that report as a basis for comparing forecasts with the actual temperature rise. The two forecasts from CO2 data assume a sensitivity of 2.0C per doubling of CO2, the same as Callendar and Manabe & Wetherald, whilst the one with a sensitivity of 3.0C added an allowance for other factors such as changes in water vapour. The flattening of the CO2 curve that precedes the more marked flattening of the actual temperature curve after 1940 is due to reduced emissions resulting from the Depression.
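As an aside, the shape of those CO2-based forecast curves comes straight from the standard logarithmic relationship between CO2 concentration and warming. Here’s a minimal sketch of the calculation; note that the baseline concentration and the sample CO2 levels are round illustrative figures of mine, not the data behind the graph:

```python
import math

def rise_from_co2(ppm, baseline_ppm=285.0, sensitivity=2.0):
    """Warming (DegC) above the baseline for a CO2 level of `ppm`,
    assuming `sensitivity` DegC of warming per doubling of CO2."""
    return sensitivity * math.log2(ppm / baseline_ppm)

# Illustrative CO2 levels (ppm) -- not the series used for the graph.
for year, ppm in [(1900, 296), (1940, 311), (1980, 339), (2017, 405)]:
    print(year,
          round(rise_from_co2(ppm, sensitivity=2.0), 2),
          round(rise_from_co2(ppm, sensitivity=3.0), 2))
```

With these sample figures, a sensitivity of 2.0 gives roughly 1C of warming at today’s ~405 ppm, and 3.0 about half as much again.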
The analysis of natural cycles in the report shows that a strong fall in global temperature from 1935 to 1990 might have been predicted from them instead of the rise of 0.4C which actually occurred.
The fall in temperature in the 1880s could be associated with the eruption of Krakatoa in 1883 and that during the 1900s may have resulted from the eruption of Santa Maria, Guatemala, in 1902.
The graphs for sunspot activity (based on sunspot numbers) and the Pacific Decadal Oscillation (PDO) both show a fairly steady fall from just before 1985 but none of this is reflected in the temperature curve. Although both showed a pre-war rise vaguely corresponding to the temperature curve at the time, the upward trend in the PDO began a decade after the air temperature started rising and sunspot activity was well above normal for a couple of decades after temperatures had begun falling.
The theory behind AGW predicted:
(1) the rate of warming of the troposphere
(2) the stratosphere would cool and at a faster rate than the troposphere
(3) the Arctic would warm faster than the rest of the planet
‘Natural cycles’ predicted the planet would cool during the second half of the 20th century to early-19th-century values.
Changes in solar activity do not tally with changes in Earth’s temperature.
Changes in the PDO explain some, but not all of the pre-WWII rise in air temperature and the later fall. There is also some tie-up with the post-1970 rise although it lags behind it. However, it has not fared too well since 1990, suggesting there should have been a fall in temperature which failed to materialise.
AGW theory is the only one that predicted all the major changes apart from the sudden dip after 1900 and the quicker than expected rise after about 1910. Here, therefore, I resort to natural causes. On October 24th, 1902, there was a major eruption of the Santa Maria volcano in Guatemala. This eruption emitted a similar amount of tephra to Krakatoa and roughly double that of Pinatubo. This event could explain some, if not all, of the subsequent decline in global temperatures. The “rebound” effect may also explain the rapidity of the rise a decade later.

However, I don’t believe it would have continued until WWII and I wonder whether man-made causes may be responsible for that. The Great Depression of the thirties caused a pause in the rise in CO2 emissions but did it also affect the atmosphere in another way? Could the slow-down in industry have resulted in a cleaner atmosphere and hence an increase in solar radiation reaching the surface of the Earth? This might explain the steep rise in temperature immediately before WWII. I suspect the response of the atmosphere to reduced particulates would have been more immediate than that to the reduced output of CO2, so that the cooling effect did not kick in until WWII. WWII would also have seen an increase in atmospheric pollution, which would have added a cooling effect to the levelling out of the temperature rise caused by the delayed response of the CO2 effect.
Apart from the PDO, which shows some useful predictive qualities, though not as good as AGW, the other ‘natural’ predictors were worse than useless.
If I were asked whether increased CO2 is responsible for 100% of global warming, I’d have to say that it might be higher than that as natural forcing seems to have been operating in the opposite direction.
Here are the latest temperature anomalies expressed as a rolling 12-month mean with an 11-year rolling average superimposed. Why 11-year? Simply to avoid any possible numerical problems resulting from the 11-year solar cycle (not that I’ve noticed any).
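For anyone wanting to reproduce that kind of smoothing, the two curves are just rolling means over 12 and 132 months. A minimal sketch using pandas, with a made-up anomaly series standing in for the real record:

```python
import numpy as np
import pandas as pd

# Synthetic monthly anomalies standing in for the real data: a slow
# warming trend plus random noise.
rng = np.random.default_rng(0)
months = pd.date_range("1880-01-01", periods=1656, freq="MS")  # 138 years
anoms = pd.Series(np.linspace(-0.3, 1.0, len(months))
                  + rng.normal(0.0, 0.1, len(months)), index=months)

rolling_12m = anoms.rolling(12).mean()        # 12-month rolling mean
rolling_11y = anoms.rolling(11 * 12).mean()   # 11-year rolling average
```

The first 11 and 131 values of the respective series are undefined, as each window needs to fill before a mean can be taken.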
Air temperature anomalies are now expressed here as differences from the pre-industrial era. Please note that the value used for that era does not agree with that used by the Met Office and IPCC, being 0.2C lower. This means that the 1.5 and 2.0C limits from the Paris Agreement are equivalent here to 1.7 and 2.2C.
The Met Office uses the period 1850-1900 as the pre-industrial era, but CO2 had already begun to climb well before that time and would account for 0.1 to 0.15C of warming, depending on whether a sensitivity of 2.0 or 3.0 is assumed. Also, the difference between the 1981-2010 normals and their pre-industrial era is about 0.7C, whereas they take the figure to be 0.6C.
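The size of that pre-1850 contribution follows from the same logarithmic rule. Taking illustrative round figures of about 278 ppm for the true pre-industrial CO2 level and about 287 ppm for the 1850-1900 mean (my numbers here are for illustration only, not the values used above):

```python
import math

true_preindustrial = 278.0   # ppm, illustrative round figure
met_office_era = 287.0       # ppm, illustrative 1850-1900 mean

for sensitivity in (2.0, 3.0):
    warming = sensitivity * math.log2(met_office_era / true_preindustrial)
    print(sensitivity, round(warming, 2))  # roughly 0.09 and 0.14 DegC
```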
The normal period used in the GISS data is 1951-80 which is 0.3C warmer than the Met Office pre-industrial era. Adding the lower value of 0.1C to allow for warming since the true pre-industrial era gives a correction of 0.4C.
Also, here’s the latest PDO Index, also using rolling 12-month means. The reason for adding a 44-year sine wave is simply that it looked as if it fitted – roughly – and I liked the idea of it being a multiple of the solar cycle. No science involved in the choice. I’ve no idea why there should be such a cycle and I doubt it has any predictive properties whatsoever.
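For what it’s worth, a fixed-period sine wave like that can be fitted by ordinary least squares on its sine and cosine components. A sketch with a synthetic series standing in for the PDO Index; the amplitude, phase and noise here are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1900, 2018, 1 / 12)  # monthly time axis in years

# Synthetic "PDO": a 44-year sine wave of invented amplitude and phase
# plus random noise.
pdo = 0.8 * np.sin(2 * np.pi * (t - 1910) / 44.0) + rng.normal(0.0, 0.5, t.size)

# Fit a sine of fixed 44-year period by regressing on its sine and
# cosine components plus a constant offset.
period = 44.0
X = np.column_stack([np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, pdo, rcond=None)
amplitude = np.hypot(coef[0], coef[1])  # recovers roughly 0.8
```

Fixing the period in advance, as I did with the 44-year figure, keeps the fit linear; the period itself was eyeballed, not fitted.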