Tuesday, May 31, 2011

Rainfall and runoff: year to date

The last two months have seen low levels of rainfall in Canberra and thus a decline in runoff. However, we have still had 246 mm of rainfall for the year, about average, and runoff of close to 33,000 megalitres.

The runoff total needs to be explained, however, as it is likely that we have had significantly less than this - perhaps as low as 28,000 megalitres. The reason is that I have factored in releases of 130 megalitres a day from Canberra dams over the last month or so. ACTEWAGL were definitely making releases of that magnitude over some of that period, and there were additional releases from smaller dams over one weekend. However, I am pretty certain that these releases stopped a couple of weeks or so ago. To ensure that I overestimate rather than underestimate runoff, I am working on the assumption that releases ceased as of 31 May.
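
(The arithmetic behind the gap: 130 megalitres a day over the month or so in question comes to roughly 4,000 to 5,000 megalitres, which is the size of the difference between the 33,000 figure and the 28,000 lower bound.)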

It should be pointed out that even this overestimated runoff total is well below Canberra's average runoff for the first five months of the year. But it is better than last year: at the same point, we had had 21,000 megalitres of runoff.

Thursday, May 19, 2011

Arctic ice volume

I have been doing some work on Arctic sea ice volume, trying to determine whether a second order polynomial function has a physical basis. And I have discovered that it does. While others have obviously already worked this out, it is new to me, and thus at least a little bit exciting. :)

To look at this, I sat down and thought about what would happen in an Arctic that was melting, and wrote down a few things.

The first thing that I thought of was that there are two significant parts to the Arctic year - the melt and the freeze. Using the values generated by Frank (http://snipt.org/xwgn), I determined that over the period of the model (and, yes, PIOMAS is a model, *not* data, but that does not matter for the purposes of this exercise) the amount of ice melting each year was increasing and the amount of ice freezing each year was decreasing, each in a linear fashion. It was difficult for me to see how a second order polynomial function could emerge from these linear functions. Silly me, as we will see.

So I set up a model that mirrored these linear changes in melt and freeze, and then looked at the yearly totals at maximum and minimum that resulted. Graphing these totals, I found that the declines in each perfectly followed a second order polynomial function ... What on earth was going on here?
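
For anyone who wants to poke at this themselves, here is a minimal Python sketch of that kind of toy model (the starting volumes and change constants are illustrative numbers, not the PIOMAS values):

    import numpy as np

    years = np.arange(30)
    melt_0 = 17.0      # ice melted in year 0 (thousand km^3) - illustrative number
    freeze_0 = 17.0    # ice frozen in year 0 (thousand km^3) - illustrative number
    M, F = 0.02, 0.02  # change constants: melt grows, freeze shrinks, linearly

    v = 30.0           # illustrative starting maximum volume
    v_max = np.empty(len(years))
    v_min = np.empty(len(years))
    for n in years:
        v_max[n] = v
        v -= melt_0 + M * n      # melt season: a linearly growing melt
        v_min[n] = v
        v += freeze_0 - F * n    # freeze season: a linearly shrinking freeze

    # A second order polynomial fits the minima exactly (residuals at machine precision)
    coeffs = np.polyfit(years, v_min, 2)
    print(coeffs)                                    # leading coefficient is -(M + F)/2
    print(abs(np.polyval(coeffs, years) - v_min).max())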

I tried various values for the change constants in both the melt and the freeze periods, but always ended up with second order polynomial functions. So I decided to investigate this function a little more by differentiating it and seeing if the resultant function related in any way to the change constants.

And, of course, it did. Writing M for the melt change constant and F for the freeze change constant, the differentiated function for the decline in ice volume at the end of the melt season was always, with X in years:

-(M + F)*X + (M + F)/2

The differentiated function for the decline in ice volume at the end of the freeze season was always:

-(M + F)*X + (3*M + F)/2

Why these particular functions? The constant terms in them result from the half-year offset between the two seasons. And M + F is simply the total yearly change: the two change constants added together.

So integrating this returns us to our second order polynomial. And why do we integrate? Because the reduction in ice volume in any year is *added* to the reductions in ice volume of all previous years, and a cumulative sum is the discrete version of an integral.
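
To spell out why summing linear changes gives a quadratic (my notation, with a for the net loss in the first year): if the net yearly loss grows linearly, so that the loss in year k is a + (M + F)*k, then after n years

V(n) = V(0) - Sum[k = 1..n] (a + (M + F)*k)
     = V(0) - a*n - (M + F)*n*(n + 1)/2

which is a second order polynomial in n with leading coefficient -(M + F)/2, and whose derivative is a straight line in n with slope -(M + F), matching the functions above up to the half-year offset constants.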

In other words, we do not start from scratch each year: each year, we are melting from a lower volume of ice and freezing from a lower volume of ice.

Basically, what it means is that if melting and freezing change in a linear fashion then we get a second order polynomial function for the ice volume totals.

And is there a physical basis for such a linear increase and decrease? Of course: the linear increase in energy in the Arctic due to rising CO2, as measured through linear temperature change.

Which points to a dramatic crash in Arctic ice volume, and thus area and extent, over the next few years. Indeed, using PIOMAS, further modelling suggests that zero volume will be reached at the end of the melt period in 2018 at the latest, with it occurring possibly as early as 2013.

My projections are:

Year    Volume (cubic kilometres)
2011     3744
2012     2853
2013     1935
2014      990
2015       18
2016     -981
2017    -2007
2018    -3060

(all values have a two-standard-deviation error range of +/- 2445 cubic kilometres)
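
For anyone wanting to reproduce the idea, a second order polynomial fit extrapolated forward looks like this in Python (the volume numbers below are placeholders, not the actual PIOMAS minima):

    import numpy as np

    # Yearly September minimum volumes in cubic kilometres (placeholder values -
    # substitute the actual PIOMAS series)
    years = np.array([2005, 2006, 2007, 2008, 2009, 2010])
    minima = np.array([9000.0, 8400.0, 6500.0, 7100.0, 6900.0, 4400.0])

    coeffs = np.polyfit(years, minima, 2)       # second order polynomial fit
    for y in range(2011, 2019):
        print(y, round(np.polyval(coeffs, y)))  # extrapolated minimum volume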

Tuesday, May 10, 2011

Aerosol evolution: two scenarios


This is a post inspired by SteveF's work at Lucia's blog here:

http://rankexploits.com/musings/2011/a-simple-analysis-of-equilibrium-climate-sensitivity/#comment-75758

The table above (an Excel screenshot in the original post) uses (I hope) SteveF's method to look at the evolution of aerosol forcings over time. In his simple analysis of equilibrium climate sensitivity, SteveF looked at the situation now and worked out what aerosol forcing would have to be if forcing caused an increase of .4207 degrees per watt per square metre, and if forcing caused an increase of .81 degrees per watt per square metre (and another, higher scenario).

I have extended his analysis to cover the period 1970 to 2010. One of the things that I noted in the comments to that thread was that the aerosol forcings under the higher sensitivity scenario are currently the same as they were after the Mount Pinatubo eruption. This seems unlikely. More reasonable is the lower sensitivity scenario, in which the current aerosol forcing is about half of that after Mount Pinatubo erupted.
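
For clarity, the calculation as I understand it (my paraphrase of the method, with made-up numbers rather than SteveF's actual series): for a given sensitivity, work out the total forcing needed to produce the observed warming, subtract the known forcings, and attribute whatever is left over to aerosols.

    # My paraphrase of the method - the numbers are placeholders, not SteveF's series.
    def implied_aerosol_forcing(delta_t, known_forcing, sensitivity):
        """sensitivity in degrees per watt per square metre (e.g. 0.4207 or 0.81)."""
        total_needed = delta_t / sensitivity  # forcing required to produce delta_t
        return total_needed - known_forcing   # the remainder is assigned to aerosols

    # e.g. 0.6 degrees of warming against 2.6 W/m^2 of WMGHG and other known forcings
    print(implied_aerosol_forcing(0.6, 2.6, 0.4207))  # about -1.2 W/m^2
    print(implied_aerosol_forcing(0.6, 2.6, 0.81))    # about -1.9 W/m^2

Note how the higher sensitivity demands the more strongly negative aerosol forcing, which is exactly the pattern discussed above.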

One interesting fact is that under the higher sensitivity scenario there is quite an upward trend over time in aerosol forcings. This does to some extent seem reasonable, imo, as the increase in CO2 emissions is directly associated with an increase in sulphur emissions. In fact, the correlation between well mixed greenhouse gas (WMGHG) forcings and the implied aerosol forcings is high (r^2 value of 0.81). This makes sense to me.

Still not sure what it all means, but it is interesting to play with. :)

And I have realised that I may have missed one important component: solar forcings. I will check into that.

*Done a little checking. SteveF seems to simply use one value, but that could be because he is only looking at one year - he might change that value for each year.

*Re the correlation: the lowest sensitivity for which the correlation remains statistically significant (ignoring possible autocorrelation, which is relatively small) is 0.55 degrees per watt per square metre.

Tuesday, May 3, 2011

Hansen by logarithm

As I have been unable to find the linear graphs that I thought Hansen was using, I have recreated his numbers using the logarithmic model I described previously. After some fiddling around with the parameters, I have managed to create a reasonable match with observed temperatures and the observed rate of warming over the last 40 years using a climate sensitivity of 3.3 degrees per doubling. I homed in on this number because of a priori knowledge that Hansen's model E matches observations the best when such a sensitivity is used, so this is not an independent test.

I should again point out here that lower sensitivities require a faster response time and higher sensitivities a slower one.

My model predicts a rate of warming of .0187 degrees per year for the next 25 years, which equates to a bit less than half a degree of warming. At that point, we would be committed to a further one degree of warming, most of which would occur this century. If all human greenhouse gas emissions ceased at that point, total warming from preindustrial would be around 2.3 degrees by the time warming ceased.

I will be interested to see how my model compares with reality over the next little while.

Thursday, April 28, 2011

Using Hansen's linear response times

Testing my model using Hansen's linear response times (which leaves me with only one variable to play with, climate sensitivity), I need to use a climate sensitivity somewhere between 5 and 5.5 degrees per doubling to get a match with the observed temperature trend between 1970 and 2010.

This could indicate that my model is wrong in other respects - I will need to read Hansen's paper carefully to check this, as he does mention a long-term sensitivity of around six degrees.

I should also note here that I am aware that my model is attributing all of the temperature increase between 1970 and 2010 to CO2, making the assumption that other forcings cancel out over that period. This is likely true for things like solar forcings, ENSO and so forth. However, aerosols are still an issue.

More on my temperature model

My temperature model - which is really a test of the climate sensitivity, as it is looking backwards over the last 50 years of data - has two basic variables.

The first is the climate sensitivity. I input that against the Mauna Loa CO2 data since 1959, which then generates the set of temperatures that we would expect were the climate response instantaneous.

The second variable is the climate response time. As I stated previously, I have set this up as a logarithmic function that can be 'stretched' or 'squeezed'.

I have chosen to ignore the first 10 years of data and thus the absolute temperature value for the whole time period. The reason for this is that the first CO2 value makes the model jump suddenly above 280 ppm, instead of following the slow rise that occurred in reality. I believe that this must distort the temperature data, although I have not yet investigated in what fashion it does so.

This means that I cannot directly compare measured historical temperatures with the temperatures outputted by my model. I do not think that this is a problem, however, as what I can do is compare trends (which is another way of saying that I am measuring the difference, or anomaly, between the temperature my model shows for 1970 and the temperature my model shows for 2010).

Using GISS data, the trend between 1970 and 2010 is .0163 degrees per year. I can fiddle with the response time parameter to make any climate sensitivity provide a match for this trend. However, the response times required for any particular sensitivity to do so give us an interesting picture of which sensitivities are realistic.
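
For the curious, here is a minimal Python sketch of the model (the response shape min(1, a*log10(1 + tau)) is one way to write the 'stretchable' logarithmic function described above, and the CO2 series is a stand-in for the Mauna Loa data):

    import numpy as np

    def model_temps(sensitivity, a, co2):
        """sensitivity in degrees per doubling; a stretches the response:
        the fraction of the equilibrium response realised tau years after
        a CO2 increment is min(1, a * log10(1 + tau))."""
        eq = sensitivity * np.log2(co2 / 280.0)  # equilibrium warming for each year's CO2
        inc = np.diff(eq, prepend=eq[0])         # new equilibrium warming added each year
        t = np.zeros(len(co2))
        for i, d in enumerate(inc):              # each increment plays out logarithmically
            tau = np.arange(len(co2) - i)
            t[i:] += d * np.minimum(1.0, a * np.log10(1 + tau))
        return t

    co2 = np.linspace(316.0, 390.0, 52)          # stand-in for Mauna Loa, 1959-2010
    t = model_temps(3.0, 0.5, co2)               # a = 0.5 puts 50 per cent in the first decade
    print(np.polyfit(np.arange(41), t[11:], 1)[0])  # 1970-2010 trend; tune a to hit .0163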

Using my model, a sensitivity of two degrees requires 70 per cent of the expected total temperature increase from a given rise in CO2 to occur in the first 10 years. Further, as we move past 30 years, more than 100 per cent must occur. This would seem to rule out two degrees as a viable sensitivity value under this model. (I am not yet claiming that my model is of use).
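
A back-of-the-envelope check of that last point (my numbers: Mauna Loa gives roughly 325 ppm in 1970 and 390 ppm in 2010): at two degrees per doubling, the equilibrium warming over that interval is 2 * log2(390/325) = 0.53 degrees, while the GISS trend implies .0163 * 40 = 0.65 degrees of observed warming, so more than 100 per cent of the equilibrium response would indeed be needed.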

If we examine a sensitivity of six degrees, however, we get a different picture. Early on, it seems okay, with a bit over 30 per cent of the expected temperature rise occurring in the first decade. But to get the next 30 per cent takes a further 170 years. And then the next 15 per cent takes close to a further 700 years ... And that leaves a further 25 per cent of the response still to come. That does not seem plausible either, leaving six degrees as not a viable sensitivity value under this model.

Three degrees sensitivity forces me to use a pretty fast response time to get a match - over 50 per cent in the first decade and 75 per cent after a touch over 30 years.

Four degrees sensitivity requires over 40 per cent in the first decade and around a total of 75 per cent after 85 years.

A sensitivity of 4.5 degrees has just under 40 per cent in the first decade and around 70 per cent after 100 years.

The question then becomes: which is plausible? I would suggest that the last (4.5 degrees) is the most plausible using my model.

However, now the question becomes: is a logarithmic model realistic? Hansen et al use a linear model, with one line for the first decade and another line for the next 90 years, so maybe a logarithmic model is not realistic.

I will test the linear method in my model and report back.

Wednesday, April 27, 2011

Climate sensitivity revisited

I have been working with a simple model for temperature that has the earth responding logarithmically to CO2 forcing (for example, depending on the parameters that I use, it might warm by 40 per cent of the expected total warming in the first 10 years and then by another 30 per cent of the expected total warming in the next 90 years) and then running that model using different climate sensitivities.

Climate sensitivity is commonly defined as the predicted climate response to a forcing; in the case of CO2, it is expressed as X degrees per doubling.

The values for X that I have tried range from one to 10.

The CO2 data I am taking from Mauna Loa.

At the moment I am having some difficulty getting my model to come close to matching observations if I use a low climate sensitivity. I can almost do it if I have a very fast response time. For example, if I choose a climate sensitivity of two degrees per doubling and have the vast majority (80 per cent) of the temperature response occurring within 50 years, with more than 50 per cent of that in the first decade, I can fit the model to the current observed temperature. However, even then there is a problem: the observed rate of warming over the last 50 years is still faster than my model shows.

The better fits are with higher climate sensitivities, but even there things are not perfect. (Note: I would not expect them to be so, as my model is leaving out climate variability, but they are still not good enough for my purposes).

This seems reasonable: based on our observations of temperature and atmospheric CO2 concentrations over the last 130 years and the linear fit between the two, a sensitivity of two degrees would seem to be implied. However, this would also seem to suggest an almost instantaneous response to CO2. If instead some kind of logarithmic fit were used, I wonder what result we would end up with.

I am assuming that there is a major problem with a model such as this. Hansen seems to use a linear model, with different slopes at different periods of time (for example, four per cent of the response per year for the first decade, followed by about .4 per cent of the response per year for the rest of the century). According to him, other models use much longer response times, at least for the second half of the response.
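
That linear version can be sketched like so (the percentages are my reading of the description above, not exact published values):

    def hansen_linear_fraction(tau):
        """Fraction of the equilibrium response realised tau years after a forcing:
        roughly 4 per cent per year for the first decade, then about 0.4 per cent
        per year for the rest of the century."""
        if tau <= 10:
            return 0.04 * tau
        return min(1.0, 0.40 + 0.004 * (tau - 10))

    print(hansen_linear_fraction(10))   # 0.4 after the first decade
    print(hansen_linear_fraction(100))  # 0.76 after a century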

If anyone has any advice on this, that would be appreciated. I can obviously provide the full model (which is not very full or large) to anyone who wishes to see it.