Now the results are in. Here are Johnson's predictions of the top ten medal winners, compared to the actual total these countries won:
| Country | Johnson's prediction | Actual result |
|---|---|---|
| USA | 103 | 110 |
| Russia | 95 | 72 |
| China | 89 | 100 |
| Germany | 66 | 41 |
| Japan | 37 | 25 |
| Hungary | 31 | 10 |
| Italy | 29 | 28 |
| Great Britain | 28 | 47 |
| France | 27 | 40 |
| Australia | 26 | 46 |
Rating a prediction p as good if 0.75r ≤ p ≤ 1.25r, where r is the actual result, I'd say Johnson made 3 good predictions out of his top 10: China, USA, and Italy. And he made some really bad ones, including Hungary, Great Britain, and Australia. Altogether, Johnson's predictions don't deserve a place on the podium.
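For anyone who wants to reproduce the tally, here is a minimal Python sketch of that rating rule applied to the top-ten table above (the data literals and helper name are mine):

```python
# Check the 0.75r <= p <= 1.25r rating rule against the top-ten figures above.
predicted = {"USA": 103, "Russia": 95, "China": 89, "Germany": 66, "Japan": 37,
             "Hungary": 31, "Italy": 29, "Great Britain": 28, "France": 27, "Australia": 26}
actual = {"USA": 110, "Russia": 72, "China": 100, "Germany": 41, "Japan": 25,
          "Hungary": 10, "Italy": 28, "Great Britain": 47, "France": 40, "Australia": 46}

def is_good(p, r):
    """A prediction p counts as good if it lands within 25% of the actual result r."""
    return 0.75 * r <= p <= 1.25 * r

good = [c for c in predicted if is_good(predicted[c], actual[c])]
print(good)                        # ['USA', 'China', 'Italy']
print(len(good) / len(predicted))  # 0.3
```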
8 comments:
That's certainly one valid way of looking at it.
Alternatively, you could take the entire list of 79 countries predicted and find that there is a 0.93 correlation between the predictions and the actual medal counts. It appears to be a 0.92 correlation for gold medals alone. If you can find anyone who's done better, by any means including crystal ball or Ouija board, I'd love to hear about it. ---Dan Johnson.
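As a hedged illustration of how such a correlation is computed: the 0.93 figure above used all 79 predicted countries, which aren't reproduced in this post, so the top-ten rows stand in here and will give a different number.

```python
# Pearson correlation between predictions and actual medal counts (top-ten rows only).
from scipy.stats import pearsonr

predicted = [103, 95, 89, 66, 37, 31, 29, 28, 27, 26]
actual    = [110, 72, 100, 41, 25, 10, 28, 47, 40, 46]

r, p_value = pearsonr(predicted, actual)
print(round(r, 2))
```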
You're right, but I still like the medals-per-capita or medals-per-GDP view, very nicely illustrated in this chart: http://www.youcalc.com/apps/1219242654520
Thanks for visiting, Prof. Johnson.
How did your predictions compare to those of your competitor, Hawksworth at Price Waterhouse?
http://tinyurl.com/3t4lwl
I get r of 0.92 and 0.90 respectively for linear regressions. However, linear regressions hide a lot of sins. What matters is the order of the predictions, not merely that there is a linear trend; you can get a good linear trend with significant disordering of the list. Most of the linear trend is driven by the US, Russia and China having high scores regardless of their order (if you shuffle the scores so that the US has Russia's score and China has the highest, the correlation is still 0.9).
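A rough sketch of that point, again using only the top-ten rows from the table: scramble the predictions among the three biggest teams and the Pearson r barely moves, because the trend is driven by their sheer size.

```python
# Scramble the top-three predictions and recompute the correlation (top-ten rows only,
# so the exact values differ from the full-list 0.9 quoted above).
from scipy.stats import pearsonr

predicted = [103, 95, 89, 66, 37, 31, 29, 28, 27, 26]
actual    = [110, 72, 100, 41, 25, 10, 28, 47, 40, 46]

scrambled = predicted[:]
# USA takes Russia's score, Russia takes China's, China takes the highest score.
scrambled[0], scrambled[1], scrambled[2] = predicted[1], predicted[2], predicted[0]

print(pearsonr(predicted, actual)[0])   # original correlation
print(pearsonr(scrambled, actual)[0])   # still high despite the disordering
```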
If we look at the 95% confidence intervals around the regression line, the picture looks quite different: a whole swag of points fall well outside the 95% confidence interval (Australia, Great Britain, France and Kenya are notably underpredicted, while Germany, Japan, Hungary, Bulgaria and Sweden are significantly overpredicted). Only 30% of the predictions fall within the 0.75r–1.25r range (the same proportion Jeffrey found for the top 10).
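For readers who want to reproduce that kind of check, here is a sketch using statsmodels (an assumption on my part; the commenter doesn't say what software was used), restricted to the top-ten rows, so the outcome will differ from the full-list analysis.

```python
# Fit actual-vs-predicted and flag points outside the 95% band around the regression.
import numpy as np
import statsmodels.api as sm

predicted = np.array([103, 95, 89, 66, 37, 31, 29, 28, 27, 26], dtype=float)
actual    = np.array([110, 72, 100, 41, 25, 10, 28, 47, 40, 46], dtype=float)

X = sm.add_constant(predicted)                 # regress actual on predicted, with intercept
fit = sm.OLS(actual, X).fit()
bands = fit.get_prediction(X).summary_frame(alpha=0.05)

# mean_ci_* is the confidence band on the fitted line; obs_ci_* is the wider
# prediction interval for individual points.
outside = (actual < bands["mean_ci_lower"]) | (actual > bands["mean_ci_upper"])
print(list(zip(outside.values, actual)))       # True where a country falls outside the band
```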
So while you get a good line, you don't get a good prediction of medal numbers for the majority of participants (predicting that the US, China and Russia will get lots of medals is pretty much certain no matter what).
Dan Johnson said: "If you can find anyone who's done better, by any means including crystal ball or Ouija board, I'd love to hear about it."
Well, in terms of r values, simply assuming that a country's medal tally at the Athens Games will predict the results of the Beijing Games gives you an r value of 0.94 (see the graph here of the Beijing vs. Athens correlation: the black line is the regression, the red line is the regression forced through zero, and the dotted lines are the 95% confidence intervals).
Compare this with the graph for the Beijing Olympics predicted from Dr. Johnson's method.
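As a small illustration of the two fits described above (ordinary regression versus regression forced through zero), here is a numpy sketch; the Athens tallies aren't reproduced in this post, so the prediction-vs-actual pairs from the table serve only as placeholder data.

```python
# Fit an ordinary least-squares line and a line forced through the origin.
import numpy as np

x = np.array([103, 95, 89, 66, 37, 31, 29, 28, 27, 26], dtype=float)   # predictor (placeholder)
y = np.array([110, 72, 100, 41, 25, 10, 28, 47, 40, 46], dtype=float)  # response (placeholder)

slope, intercept = np.polyfit(x, y, 1)                       # ordinary regression (black line)
slope0 = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]    # regression through zero (red line)
print(slope, intercept, slope0)
```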
Of course, as I said, getting a straight line is not the same as predicting something. We have a few ways of defining success in prediction.
For example, if we count countries in the top 10, middle 10 and bottom 10 WITHOUT REGARD TO ACTUAL RANK, then Dr. Johnson's method gets 9 of 10 countries in the top 10, 6 of 10 in the middle 10 and 7 of 10 in the bottom 10. Not too shabby, but this performance is just about equal to assuming that countries will do as well in Beijing as they did in Athens (8, 6 and 7 respectively).
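A sketch of that "right third of the list, without regard to rank" count; the full rankings aren't reproduced in this post, so the function below only shows the mechanics.

```python
# Count how many countries a prediction puts in the correct third of a 30-country list.
def bucket_overlap(predicted_order, actual_order):
    """Both arguments are lists of country names, best first."""
    hits = []
    for start in (0, 10, 20):                          # top, middle, bottom ten
        pred_bucket = set(predicted_order[start:start + 10])
        true_bucket = set(actual_order[start:start + 10])
        hits.append(len(pred_bucket & true_bucket))
    return hits                                        # e.g. [9, 6, 7] for Johnson's method

# usage: bucket_overlap(johnson_ranking, beijing_ranking)
```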
If we look at rank order of medals (which is what most people would be interested in), and count being one out of position as a success (e.g. America, Russia, China is for all practical purposes equivalent to America, China, Russia), then Dr. Johnson's method gets 3 out of 10 right in the top 10, 3 out of 10 in the middle 10 and 1 out of 10 in the bottom 10 (no different from predicting that countries will do as well at Beijing as they did at Athens).
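And a sketch of the "within one rank position counts as a success" criterion, again with the rankings themselves left as inputs.

```python
# Count countries whose predicted rank is within `tolerance` positions of their actual rank.
def rank_hits(predicted_order, actual_order, tolerance=1):
    actual_rank = {c: i for i, c in enumerate(actual_order)}
    return sum(
        1
        for i, c in enumerate(predicted_order)
        if c in actual_rank and abs(i - actual_rank[c]) <= tolerance
    )
```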
Again, if we choose the number of medals within +/- 25% of the actual medal count as the criterion, only 30% of the medal-number predictions fall in this range. The simple "assume the same medal count as in Athens" approach gets 39% of the medal tallies in this region.
So if anything, simply assuming that countries will do as well at Beijing as they did at Athens marginally outperforms Dr. Johnson's method.
I concur that a correlation is of limited value in testing the prediction. Maybe the Bland-Altman method or some sort of residual analysis would be better.
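For the curious, a Bland-Altman view is straightforward to sketch: plot the difference between prediction and result against their mean, with the mean difference and ±1.96 SD limits of agreement (top-ten rows only, purely illustrative).

```python
# Bland-Altman style plot for the top-ten predictions above.
import numpy as np
import matplotlib.pyplot as plt

predicted = np.array([103, 95, 89, 66, 37, 31, 29, 28, 27, 26], dtype=float)
actual    = np.array([110, 72, 100, 41, 25, 10, 28, 47, 40, 46], dtype=float)

mean = (predicted + actual) / 2
diff = predicted - actual
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)            # limits of agreement

plt.scatter(mean, diff)
plt.axhline(bias, color="k")
plt.axhline(bias + loa, color="k", linestyle="--")
plt.axhline(bias - loa, color="k", linestyle="--")
plt.xlabel("Mean of prediction and result")
plt.ylabel("Prediction minus result")
plt.show()
```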
Thanks, Dr Musgrave, for two very useful comments here!
I guess they need to hire someone new.
By the way I made a blog, it's
www.originalboredom.blogspot.com