Friday, January 29, 2010
As temperatures increase, however, the rate of evaporation tends to increase (there are limiting factors to this - if the atmosphere already contains lots of H2O, evaporation will be slower). Examining the recent evaporation history of the Canberra region, I found that evaporation rates have risen to around four per cent above the long-term average (we have data from 1967 to 2007 inclusive).
Based on the Australian average, this would mean a seven per cent decline in runoff (assuming constant rainfall, of course). Instead of a ratio of evaporation to rainfall of 65:35, we would have a ratio of 67.5:32.5.
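The ratio arithmetic can be checked in a few lines (a minimal sketch using the figures above; note the post rounds 67.6 down to 67.5, and 7.4 per cent down to seven):

```python
# Long-term split of rainfall between evaporation and runoff (per cent).
evaporation, runoff = 65.0, 35.0

# Evaporation has risen to around four per cent above its long-term average.
new_evaporation = evaporation * 1.04          # ~67.6
new_runoff = 100.0 - new_evaporation          # ~32.4

# With rainfall held constant, the resulting decline in runoff is:
runoff_decline = (runoff - new_runoff) / runoff * 100
print(round(new_evaporation, 1), round(runoff_decline, 1))
```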
I had a look at what this means for the trend in runoff. Over the last 20 years, when changes in evaporation are taken into account, runoff has decreased by around 9 mm per year. The long-term average is 224 mm of runoff per year. At present, we have an average runoff of 166 mm, which is the lowest recorded (note that we only have records for evaporation going back to 1967). This is two standard deviations below the mean.
If the trend continues, effectively we will have no runoff by 2030. None.
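The "no runoff by 2030" claim follows from a straight-line extrapolation of the figures above (a rough sketch, assuming the trend starts from 2010):

```python
current_runoff = 166.0   # mm per year, current average
trend = 9.0              # mm per year of decline over the last 20 years
start_year = 2010

# Years until the straight line hits zero runoff:
zero_year = start_year + current_runoff / trend
print(round(zero_year, 1))   # shortly before 2030
```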
The error margins for this estimate are quite large, as we have only a relatively small amount of information. Further, while evaporation rates rise with temperature, they are also affected by things such as the amount of sunlight that reaches the earth. Global dimming, probably caused by aerosols, reduced evaporation between 1970 and 1990; then, as aerosols started to decline somewhat, evaporation increased.
However, these trends should be closely examined. And there should be more discussion from government about them and what they are doing to protect Canberra from the effects of climate change. (And I should say here that these effects are almost inevitable at this point, as politically it is unlikely that we are going to significantly slow greenhouse gas emissions between now and 2030.)
The above publication, from 2004, discusses options for the future of Canberra's water supply. There are two things that stand out for me: firstly, rainfall in the area that they are counting on for future water supplies has declined since 2003 from a five-year annual average of 830 mm to 630 mm (and 830 mm was already 90 mm below the long-term average ...); and secondly, the worst-case scenario prediction for rainfall decline for Canberra by 2030 is 9 per cent.
The data for the Brindabella region is here:
Now, it is possible that we are only talking about a short-term fluctuation in rainfall over the last little while, and that there will be a rebound in the near future that will take us back up to that 9 per cent decline or better position by 2030. But the evidence is that this will not be the case: it looks as though rainfall patterns in Australia have altered.
It should be noted that the predictions from the CSIRO and the IPCC are based on models that have very poor resolution at local levels. They can predict global climate quite well, but for regional climate - and regional rainfall/precipitation in particular - they are not able to do very well.
In the period 1895-1904 - the Federation drought - rainfall was 11 per cent below the long-term average. Runoff during this period was 31 per cent below the long-term average.
In the period 1937 to 1946, rainfall was 14 per cent below the long-term average. Runoff was 22 per cent below the long-term average.
In the period 1997-2006, rainfall was 13 per cent below the long-term average. Runoff was 39 per cent below the long-term average.
While the periods are not strictly comparable, what is clear is that a one per cent decrease in rainfall does not equate to a one per cent decline in runoff. Indeed, for the current period, each one per cent decrease in average rainfall has equated to a three per cent decline in runoff. This has potentially disastrous implications for areas such as Canberra, where the projections suggest a decrease in rainfall of 50 per cent by 2050.
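The rainfall-to-runoff multiplier for each drought can be read straight off the three periods above (a quick sketch of the arithmetic, nothing more):

```python
# (period, rainfall decline %, runoff decline %) for the three droughts above.
droughts = [
    ("1895-1904", 11, 31),
    ("1937-1946", 14, 22),
    ("1997-2006", 13, 39),
]

for period, rain_decline, runoff_decline in droughts:
    ratio = runoff_decline / rain_decline
    print(f"{period}: each 1% fall in rainfall ~ {ratio:.1f}% fall in runoff")
```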
I am hoping to get more information from ActewAGL regarding historical dam levels. This should enable me to look specifically at Canberra's situation.
Thursday, January 28, 2010
What is important to take away from this is that much of the uncertainty is on the bad side - in other words, it is more likely than not that our current knowledge of the science underestimates the negative effects of climate change.
As an example, the article talks about precipitation. In the 2007 IPCC report, there are some attempts at estimating the effects of climate change on precipitation in Australia. The table is at: http://www.ipcc.ch/publications_and_data/ar4/wg2/en/ch11s11-3.html#11-3-1. What it says is that rainfall in south and west Australia is predicted to decline by between 0 and 15 per cent by 2020.
Unfortunately, as the Nature article suggests for other regions, the rainfall predictions are already proving somewhat optimistic.
First, it should be pointed out that we obviously only have three years of data to examine since the IPCC report came out - 2007, 2008 and 2009 - and the statistical value of three years is very small indeed. However, the results may be preliminary indicators that the IPCC has been optimistic.
In the south-east of Australia, the rainfall for the last three years was 12 per cent lower than the average for all the years preceding 2007. Further, it remains on a trend line indicating that rainfall is declining at the rate of 1.5 per cent of the long-term average per year. By 2020, this predicts that rainfall in south-eastern Australia will be 27 per cent below the long-term average, and 15 per cent lower than it is today. So, even if the IPCC figure is measured as a percentage decline from today, the evidence is that the very top of the range for the south of Australia is likely to be hit or exceeded.
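The 27 per cent figure is just the current shortfall plus the trend carried forward (a sketch, assuming roughly a decade between now and 2020):

```python
below_now = 12.0     # last three years, per cent below the long-term average
trend = 1.5          # per cent of the long-term average lost per year
years_to_2020 = 10   # roughly 2010 to 2020

projected = below_now + trend * years_to_2020
print(projected)   # per cent below the long-term average by 2020
```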
Monday, January 25, 2010
The error on glaciers was the statement in one section of the IPCC report that the Himalayan glaciers would be gone by 2035 at current rate of melting. This is not a claim supported by the evidence. While Himalayan glaciers are melting rapidly, these glaciers are huge. There is no known physical mechanism related to atmospheric temperature that could melt them all by 2035.
This was a bad error, and one that should have been caught earlier. However, it was caught - that is what the scientific process is all about, so the fact that a mistake was made and then corrected is not bad science.
It is bad politics, unfortunately. This is why politicians rarely admit an error even when they have made a blatant one. They understand that the public is not very forgiving of mistakes. And it is even less forgiving of mistakes by scientists. There is obviously a need for better checking of the material that goes into IPCC reports. Hopefully, the next one will not contain any errors approaching this magnitude. But in the meantime, we will have to deal with increasingly strident calls, from those who disbelieve AGW theory, for the IPCC to be disbanded or some such. And the public may well listen now that an error has been admitted to. It makes our efforts more difficult, which is a sad thing. (Admitting the error is not the sad part; its making things more difficult is.)
The other issue is the one to do with claims regarding increases in damage caused by extreme weather events as the world warms. There are accusations that the IPCC used one paper that made a claim that there was evidence that damage had increased over the past 30 years and that it was linked with global warming. However, while the IPCC did use this paper, it also looked at others that did not show an increase. The IPCC was balanced in its call for more examination of this issue. It put the view that there were risks associated with this, but that there was not enough evidence to quantify them.
Friday, January 22, 2010
If I explained to you that there was a 97.5 per cent chance that Canberra would become a desert by 2170, what would your likely response be? I think it would be something like: 'The year 2170 is a long way off. There is not much point taking action at the moment - a lot will change over that time.' And that could be considered a reasonable response.
But what if I told you that because of the distribution of the data, the chance that Canberra will be a desert before 2050 is around 55 per cent? That would make it a bit more urgent. Further, what if I told you that the whole 'desert' thing is simply an arbitrary point of interest and that prior to becoming a desert Canberra will necessarily experience a decline in rainfall? In other words, we will not be going along fine until 2050 and then suddenly become a desert: we will feel the effects of climate change long before then - and in fact we are feeling the effects now. If we were not feeling the effects now - the decline in rainfall marching in lockstep with the rise in temperature - then there would be no data to extrapolate from.
My point is that some ways of talking about data are not effective at convincing people that action needs to be taken, while other ways of talking about the same data are effective. What I would like scientists in general to do is to explain what they mean when they talk about margin of error when discussing things with politicians and the general public. It is difficult: science is not about certainty. But if we want to manage the risks of climate change, then we are going to have to make critical decisions before we reach certainty of outcome. Humans do this all the time. We just need to convince large numbers of them to move in the same direction on this one.
Thursday, January 21, 2010
What the graph is saying is that for every full degree rise in the 10-year average daytime temperature there is a fall in the 10-year average rainfall of 78.8 mm.
At current rates of daytime temperature increase in Canberra - which for the last 18 10-year periods has been .087 degrees per year - this means that in 39 years, by the end of 2048, the 10-year average rainfall in Canberra will be less than 250 mm, which would make Canberra a technical desert.
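The projected rainfall loss is the warming rate times the rainfall-per-degree slope (a rough sketch of the arithmetic; the post does not give the current absolute 10-year rainfall figure, but with it somewhere a bit over 500 mm, a decline of this size lands near the 250 mm desert threshold):

```python
warming_rate = 0.087   # degrees per year, 10-year average daytime temperature
slope = 78.8           # mm of 10-year average rainfall lost per degree
years = 39             # from 2009 to the end of 2048

rain_decline = warming_rate * years * slope
print(round(rain_decline, 1))   # mm of 10-year average rainfall lost by 2048
```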
The period examined is from when records begin, which is 1940, until 2009. The correlation is quite good: an R^2 value of 0.6448.
At present, we are struggling for water, with the current 10-year value more than 100 mm below the average of the whole period. Imagine how bad things will be with rainfall less than half its present value ...
Wednesday, January 20, 2010
What I want to rant about here is my belief that it is in general progressives who have damaged the Democrat cause and the cause of left-wing progress more than the Tea Party or Beck or Hannity or anyone from the right.
The issue is that the left keeps making the perfect the enemy of the good while ignoring political reality, which is that the perfect will never pass a Senate that, while dominated in theory by Democrats, is in reality a conservative body with many members from fairly conservative states.
Progressives love to talk about Obama failing to provide leadership on issues that are key to them, such as health care, climate change and the war in Afghanistan. It seems as if they believe that Obama can somehow force conservatives who feel a bit nervous about their re-election chances to suddenly change their positions and vote for a progressive agenda. It is a fantasy.
But when Obama fails to live up to the fantasy, he gets the blame. And thus the assault on Obama is from the right (which it always was going to be) and from the left.
Further, when Obama does make modest steps towards progressive goals, he is condemned for the deals he has to make in order to make those modest steps - it is almost as if the left think that Obama is betraying them by succeeding. The reason seems to be that what they want is for him to try things that are guaranteed to fail.
While over the long-term the progressive agenda is moving forward, in the short-term it seems as though progressives are determined to sabotage anything that is less than what they hoped for. And in the process they seem to want the Republicans to regain power, and are doing almost everything in their power to make that happen.
But I guess they can feel good about themselves: after all, they didn't sell out their principles. I hope that that keeps them warm at night - or, rather, cool when temperatures soar because they were not prepared to make some concessions.
Tuesday, January 19, 2010
Does this mean that my initial estimate of climate sensitivity of 2 plus or minus 1 degree celsius needs to be increased? Perhaps. It first needs to be recognised that the well-mixed greenhouse gases include more than just CO2. CO2 makes up around 60 per cent of the forcings here. So, if we take our initial number of 2 degrees and multiply it by 2.5 (we need to do this because only 40 per cent of the forcing from greenhouse gases is left uncancelled by the negative forcings), that becomes 5 degrees. If we then take 60 per cent of that, we get a climate sensitivity of 3 degrees.
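The two scaling steps can be laid out explicitly (a minimal sketch of the arithmetic in the paragraph above):

```python
observed = 2.0   # degrees per doubling, from the raw CO2 regression

# Only 40 per cent of the greenhouse gas forcing survives cancellation
# by the negative forcings, so scale up by 1/0.4 = 2.5 ...
ghg_sensitivity = observed * 2.5          # 5 degrees

# ... and CO2 supplies roughly 60 per cent of the well-mixed GHG forcing:
co2_sensitivity = ghg_sensitivity * 0.6   # 3 degrees
print(co2_sensitivity)
```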
That for me is scary. If the observed non-equilibrium climate sensitivity is that high, then the equilibrium sensitivity must be towards the high end of the IPCC range.
However, it must be recognised that there is a significant range in the observed sensitivity here - it ranges from 1.5 to 4.5, like the IPCC figure, and there may be a larger error margin in there simply because of the deviation in the reduction in forcings over time. By this I mean that while the average is 0.6, it ranges from around 0.3 to around 0.9.
Now, I am new at this so there may be some fundamental mistake I am making here in increasing the value. Any help would be appreciated.
Monday, January 18, 2010
In this post, I will continue my examination of climate sensitivity using the forcing estimates published here: http://data.giss.nasa.gov/modelforce/RadF.txt
In the previous post, I published a graph of total forcings versus temperature. These forcings are from multiple sources: carbon dioxide, methane, nitrous oxides, ozone, stratospheric H2O, the sun, land use, snow albedo (reflectivity), stratospheric aerosols, black carbon, reflective aerosols and what are called 'aerosol indirect effects', which are mainly effects to do with how aerosols influence cloud formation.
It should be noted that the first three of those - carbon dioxide, methane and the nitrous oxides - are included in the GISS data as one entity called 'well-mixed greenhouse gases'.
When we examine the relationships between temperature and forcings for individual components, we find that there is only one that tracks closely the rate of temperature increases for total forcings, and that is the well-mixed greenhouse gases component.
If we look at well-mixed greenhouse gases alone, we can see that the slope of this line is 24.137, indicating that for every full point increase in forcings from well-mixed gases the temperature increases by .24137 degrees celsius.
In a previous post, http://evilreductionist.blogspot.com/2010/01/climate-sensitivity.html, I showed a graph of the natural logarithm of atmospheric CO2 concentrations versus yearly temperature. This was in an attempt to work out the sensitivity of the climate to increases in atmospheric CO2. I came up with a figure of 2 degrees celsius per doubling of CO2 concentrations.
The question is: is this a good way of determining climate sensitivity?
The first issue is that there are many things that affect global temperature: the solar cycle, ENSO variations, atmospheric aerosols, ozone, orbital variations, other cycles et cetera. So maybe the variation that we see in temperature over time can be explained by things other than CO2, and thus a linear graph of CO2 v temperature is not a good way of working out the climate sensitivity.
However, in response to this point, one of the benefits of doing this graph over a relatively long period - 130 years - is that many of these variations will have been included. There will have been about a dozen solar cycles over that time. ENSO will have gone from El Nino to La Nina on numerous occasions. Atmospheric aerosols will have risen and fallen in concentration, along with ozone. Cycles shorter than 130 years will have had all their various stages included.
The argument here is that all of these things will have averaged out over the 130 year period and that the only thing not taken into account will have been increases in CO2 concentrations. Thus, the climate sensitivity derived will be a reasonable estimate.
We can test this assumption by examining changes in forcings over this period. Carrick posted a link in the previous thread to data on forcings over this period. It is here: http://data.giss.nasa.gov/modelforce/RadF.txt.
What do I mean by 'forcings'? A forcing is a change in the energy the earth receives from some particular source, measured in watts per square metre. As an example, the forcing from CO2 and other greenhouse gases in 2003 was 2.7487 watts per square metre. Totalled over the entire surface of the earth, this is a fair bit of energy.
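Just how much energy "a fair bit" is can be estimated by multiplying the per-square-metre figure by the earth's surface area (a quick sketch; the spherical-earth surface area is the only assumption here):

```python
import math

forcing = 2.7487                # W/m^2, greenhouse gas forcing in 2003 (GISS)
earth_radius = 6.371e6          # metres, mean radius

# Surface area of a sphere: 4 * pi * r^2 (~5.1e14 m^2 for the earth).
surface_area = 4 * math.pi * earth_radius ** 2

total_watts = forcing * surface_area
print(f"{total_watts:.2e} W")   # on the order of 1.4e15 watts
```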
Using the data, we can create a graph of total forcings versus temperature. This is what I have done above.
The graph has a slope of 21.682. This means that for every full point of increase in forcings, the earth increases in temperature by .21682 degrees celsius.
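The conversion from each graph's slope to degrees is just a division, since GISS publishes temperatures in hundredths of a degree (a minimal sketch using the two slopes quoted in these posts):

```python
# Regression slopes, with temperature in hundredths of a degree celsius.
total_slope = 21.682   # all forcings combined
ghg_slope = 24.137     # well-mixed greenhouse gases alone

per_wm2_total = total_slope / 100   # 0.21682 degrees C per W/m^2
per_wm2_ghg = ghg_slope / 100       # 0.24137 degrees C per W/m^2
print(per_wm2_total, per_wm2_ghg)
```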
I will examine what this means for my estimate of climate sensitivity to changes in CO2 in my next post.
Wednesday, January 13, 2010
In this post, I will provide an estimate of the 95 per cent confidence interval for that climate sensitivity.
The data is autocorrelated. This means that I cannot use the normal method for calculating the standard deviation of a slope, which is given here:
What I need to do is make an estimate of the autoregression coefficient and then use this to substitute a new effective N into the standard deviation equation.
The effective N will equal N*(1-ARC)/(1+ARC), with ARC being the autoregression coefficient.
I examined the autocorrelation of the data and found the ARC to be .882. However, I do not think the data justifies that level of precision - the ARC might even be as high as .95, although that is unlikely.
An ARC of .882 yielded a 95 per cent confidence interval for the observed climate sensitivity over the past 130 years of 2.02 +/- .76 in degrees celsius. If it is as high as .95, then the 95 per cent confidence interval would be 2.02 +/- 1.64 in degrees celsius. This is a big range.
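The effective-N adjustment above can be sketched in a few lines (assuming N = 130 annual observations, the period examined in these posts; note how severely autocorrelation shrinks the sample):

```python
def effective_n(n, arc):
    """Effective sample size for AR(1)-autocorrelated data: N*(1-ARC)/(1+ARC)."""
    return n * (1 - arc) / (1 + arc)

# 130 annual observations, for the two ARC values discussed above:
for arc in (0.882, 0.95):
    print(arc, round(effective_n(130, arc), 1))
```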
I have now re-examined the ARC and determined a 95 per cent confidence interval for it. This interval is from .84 to .99, with a middle value of close to .92. This middle value gives a range for the observed climate sensitivity of two plus or minus one degrees celsius.
Tuesday, January 12, 2010
This is a graph of the natural logarithm of observed atmospheric carbon dioxide concentrations versus observed global annual temperatures since 1880, with the temperatures taken from GISS. Natural logarithm is used because the relationship of CO2 to temperature is not linear but logarithmic.
Using this graph, we can take a shot at working out what the climate sensitivity of the earth is.
First, what is climate sensitivity? Climate sensitivity in this case is basically how sensitive the climate is to changes in atmospheric concentrations of carbon dioxide. The standard way of describing it is how much the temperature will rise for a doubling of CO2.
The climate sensitivity usually suggested is 3 degrees plus or minus 1.5 degrees (centigrade). These numbers are the ones put forward by the IPCC.
The IPCC figure is for climate sensitivity at equilibrium - in other words, they are saying that the climate will have increased in temperature by somewhere between 1.5 and 4.5 degrees per doubling of CO2 once the earth settles into a stable state. This would presumably have to be some time after humans have ceased unsustainably pumping CO2 into the atmosphere.
The climate sensitivity that I will be examining here is the climate sensitivity when the earth is not yet at equilibrium. To do this, we need to look at the slope of the above graph.
The slope is 288. Given that GISS publishes its figures in 100ths of degrees celsius, we need to divide by 100. This gives us a figure of 2.88 degrees celsius.
However, this is an increase of 2.88 degrees per full point of increase in the natural logarithm of CO2. To determine the increase per doubling of CO2, we need to multiply the slope by the natural logarithm of 2, which is approximately .7 (more precisely, 0.693).
The result is 2.01 - we may as well round to 2.
So, the observed non-equilibrium temperature sensitivity from 1880 to 2009 was 2 degrees per doubling.
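The whole slope-to-sensitivity conversion fits in three lines (a sketch of the arithmetic above, using the exact value of ln 2 rather than the .7 approximation):

```python
import math

slope = 288 / 100                  # 2.88 degrees per unit increase in ln(CO2)
sensitivity = slope * math.log(2)  # degrees per doubling of CO2
print(round(sensitivity, 2))       # close to 2
```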
I will post in a little while on whether or not this is a good way of calculating temperature sensitivity.