Quote:
Are we talking about Bayes's theorem or not? Sorry dude, completely different matters.
No, it's not a different matter. You ignore this when you turn your results around and make statements like:

Quote:
Australia hasn't even warmed up: stats prove it.
This statement suggests that, given your data, the probability that Australia is heating up as fast as the rest of the world is very low. But you never calculated that probability.

Suppose that if temperatures increase at some rate r, the probability (in the following, probability = probability density where appropriate) that you measure some data set D is:

P(D|r)

Your results are about the function

P(D|0)

That is, the probability that you observe data D given that the rate r is zero. Now, no one cares about this function! What we want to know is the probability as a function of the rate r, i.e. what is:

P(r|D)

This is the probability that the rate is r, given that you have observed data set D in your experiment. How do we compute one from the other? In general you can reason as follows:

P(x)*P(y|x)=P(x,y) (1)

Here P(x) is the a priori probability of x, i.e. the probability that x has a given value before you do any measurements, P(y|x) is the probability that you find variable y (say, your observed data set) given x, and P(x,y) is the joint probability that you find both x and y at their respective values. This joint probability is, of course, symmetric in x and y, so you can also write:

P(x,y) = P(y)*P(x|y) (2)

From (1) and (2) you find:

P(x|y) = P(x)*P(y|x)/P(y) (3)
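
Equations (1)-(3) are easy to sanity-check numerically. A minimal sketch, using an invented 2x2 joint table (the numbers are arbitrary, chosen only so they sum to 1):

```python
# Hypothetical check of equations (1)-(3): invent a small joint table
# P(x, y), derive the marginals and conditionals from it, and verify
# that both factorizations and Bayes's formula (3) hold.
joint = {("x0", "y0"): 0.1, ("x0", "y1"): 0.3,
         ("x1", "y0"): 0.2, ("x1", "y1"): 0.4}

def P_x(x):
    # marginal P(x) = sum over y of P(x, y)
    return sum(p for (xi, _), p in joint.items() if xi == x)

def P_y(y):
    # marginal P(y) = sum over x of P(x, y)
    return sum(p for (_, yi), p in joint.items() if yi == y)

for (x, y), p_xy in joint.items():
    P_y_given_x = p_xy / P_x(x)            # P(y|x)
    P_x_given_y = p_xy / P_y(y)            # P(x|y)
    assert abs(P_x(x) * P_y_given_x - p_xy) < 1e-12          # equation (1)
    assert abs(P_y(y) * P_x_given_y - p_xy) < 1e-12          # equation (2)
    # equation (3): P(x|y) = P(x)*P(y|x)/P(y)
    assert abs(P_x_given_y - P_x(x) * P_y_given_x / P_y(y)) < 1e-12
```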

P(y), the a priori probability of y, can be written as:

P(y) = Integral over x of P(x,y) dx
     = Integral over x of P(x)*P(y|x) dx

So, we find:

P(x|y) = P(x)*P(y|x)/[Integral over x of
P(x)*P(y|x) dx]

If we take x to be the rate r and y the data set D:

P(r|D) = P(r)*P(D|r)/[Integral over r of
P(r)*P(D|r) dr]
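
This formula can be evaluated numerically on a grid. A minimal sketch, assuming an invented Gaussian likelihood for P(D|r) and a broad Gaussian prior for P(r); none of these numbers come from real climate data:

```python
import numpy as np

r = np.linspace(-1.0, 2.0, 3001)                 # grid of candidate rates
dr = r[1] - r[0]

prior = np.exp(-0.5 * (r / 0.5) ** 2)            # broad prior P(r) about r = 0 (assumption)
prior /= prior.sum() * dr                        # normalize to a density

# Invented likelihood P(D|r): data favouring a trend of 0.6 with sigma 0.2
likelihood = np.exp(-0.5 * ((r - 0.6) / 0.2) ** 2)

# Bayes: P(r|D) = P(r)*P(D|r) / [integral over r of P(r)*P(D|r) dr]
posterior = prior * likelihood
posterior /= posterior.sum() * dr                # the denominator integral

r_mode = r[np.argmax(posterior)]                 # most probable rate given D
```

The normalization in the last division is exactly the integral in the denominator above, approximated as a Riemann sum on the grid.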

Hypothesis testing like you have done is basically putting high odds on the null hypothesis; in your case this amounts to assuming that P(r) is strongly peaked around r = 0. Then, for P(r|D) to shift away from r = 0, you need data for which P(D|0) is very low compared to P(D|r) for some larger r.
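
A sketch of this effect, with all numbers invented: against a prior sharply peaked at r = 0, noisy data (for which P(D|0) is nearly as large as P(D|r) elsewhere) leaves the posterior pinned at zero, while precise data (for which P(D|0) is tiny) shifts the posterior mode away from zero despite the prior:

```python
import numpy as np

r = np.linspace(-1.0, 2.0, 3001)
dr = r[1] - r[0]

# Null-hypothesis-style prior: sharply peaked around r = 0 (sd 0.05)
prior = np.exp(-0.5 * (r / 0.05) ** 2)
prior /= prior.sum() * dr

def posterior_mode(trend, sigma):
    """Posterior mode for a Gaussian likelihood centred on a measured trend."""
    likelihood = np.exp(-0.5 * ((r - trend) / sigma) ** 2)
    post = prior * likelihood
    post /= post.sum() * dr
    return r[np.argmax(post)]

# Noisy data: P(D|0) is nearly as large as P(D|0.6), so the sharp prior
# wins and the posterior mode stays near zero.
mode_noisy = posterior_mode(0.6, 0.5)

# Precise data: P(D|0) is tiny compared with P(D|0.6), so the posterior
# mode moves away from zero despite the prior.
mode_precise = posterior_mode(0.6, 0.05)
```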

In general we don't know what P(r) is. However, we can reason as follows. If, under the assumption that P(r) is strongly peaked around r = 0, Bayes's formula still implies that P(r|D) is not at all strongly peaked around r = 0, then that is strong evidence that it isn't strongly peaked around zero. If we had made a more reasonable assumption about P(r), i.e. started out with a less sharply peaked distribution about r = 0, then P(r|D) would have shifted even further away from a sharply peaked distribution about r = 0. So we certainly cannot be accused of having "planted" the result we found.

But if the data does not lead to a P(r|D) that extends to significantly larger r when you assume a P(r) sharply peaked about zero, you cannot conclude that you have found proof that P(r|D) is indeed sharply peaked around zero, simply because you started out with that assumption. Here you would just be getting back out what you put in.

Observations of climate change suggest that r is 0.6 °C +/- 0.2 °C per century. If you say that your data for Australia would compel one to believe that this is not the case for Australia, you have to show that if you start out with P(r) peaked around 0.6 with a standard deviation of 0.1, the function P(r|D) becomes peaked around much lower values of r, such that the probability that r is in the range 0.6 +/- 0.2 is pretty low.
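
The mechanics of that test look like this. A sketch with an invented likelihood standing in for the Australian data, so the numbers prove nothing about the actual climate record; the point is only what would have to be computed:

```python
import numpy as np

r = np.linspace(-1.0, 2.0, 3001)
dr = r[1] - r[0]

# Prior peaked at 0.6 degrees C per century with standard deviation 0.1
prior = np.exp(-0.5 * ((r - 0.6) / 0.1) ** 2)
prior /= prior.sum() * dr

# Invented likelihood: data pointing at a much lower trend (0.0, sigma 0.15)
likelihood = np.exp(-0.5 * (r / 0.15) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum() * dr

# Posterior probability that r lies in the range 0.6 +/- 0.2; the claim
# above requires showing that this number comes out low for the real data.
p_in_range = posterior[(r >= 0.4) & (r <= 0.8)].sum() * dr
```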

You have done nothing of the sort. I'm not saying that your data would not yield such a strong result if you analyzed it this way. All I'm saying is that you haven't used your data to prove anything beyond what you already put in from the start. In science, that isn't considered a strong result, especially if there are other results that suggest otherwise and you want to dispute those results.