You know those wonderful computer models, the ones that tell us we're doomed and the Earth will turn into Venus in 100 years? They've been around for a while now. Haven't any of you wondered how well they've actually predicted the '90s and the 2000s so far?

Apparently Demetris Koutsoyiannis (from the Department of Water Resources and Environmental Engineering, National Technical University of Athens) has wondered that. So he did some digging and compared the output of several GCMs to observed temperature data.

What did he find? Here's the link

Since I love giving out spoilers, here are the conclusions. *grin*
All examined long records demonstrate large overyear variability (long‐term fluctuations) with no systematic signatures across the different locations/climates.
GCMs generally reproduce the broad climatic behaviours at different geographical locations and the sequence of wet/dry or warm/cold periods on a mean monthly scale.
However, model outputs at annual and climatic (30‐year) scales are irrelevant with reality; also, they do not reproduce the natural overyear fluctuation and, generally, underestimate the variance and the Hurst coefficient of the observed series; none of the models proves to be systematically better than the others.
The huge negative values of coefficients of efficiency at those scales show that model predictions are much poorer than an elementary prediction based on the time average.
This makes future climate projections not credible.
The GCM outputs of AR4, as compared to those of TAR, are a regression in terms of the elements of falsifiability they provide, because most of the AR4 scenarios refer only to the future, whereas TAR scenarios also included historical periods.
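For anyone puzzled by two of the terms in those conclusions: the "coefficient of efficiency" is, as I understand it, the Nash–Sutcliffe measure standard in hydrology (Koutsoyiannis's field), where 1 is a perfect fit, 0 means you did no better than just guessing the long-term average, and negative means you did *worse* than the average. The Hurst coefficient measures long-term persistence in a series (0.5 for uncorrelated noise, higher for the long, slow fluctuations the paper says the models miss). Here's a minimal sketch of both; the function names and block sizes are my own choices, not anything from the paper:

```python
import numpy as np

def nash_sutcliffe(observed, modeled):
    """Coefficient of efficiency: 1 = perfect fit, 0 = no better than
    predicting the observed mean, negative = worse than the mean."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    sse = np.sum((observed - modeled) ** 2)          # model error
    ssm = np.sum((observed - observed.mean()) ** 2)  # "just use the mean" error
    return 1.0 - sse / ssm

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
    """Rough Hurst coefficient via the aggregated-variance method:
    the variance of block means scales like m**(2H - 2), so the slope
    of log(variance) vs log(block size) gives H."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

# A model that anti-correlates with the observations scores far below zero:
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(nash_sutcliffe(obs, [5.0, 4.0, 3.0, 2.0, 1.0]))  # -3.0
```

The point of the example: "huge negative" efficiency isn't just a bad grade, it means the model is adding noise relative to the dumbest possible forecast.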

Here's another quote, found on page 18, which I particularly enjoyed:
Climatic models generally fail to reproduce the long-term changes in temperature and precipitation.

Gee, good thing we're not basing some multi-trillion dollar policy on the output of these things, eh??