Friday, February 12, 2010

Andrew Gelman and Climate Change Concern Trolling

Andrew Gelman's blog [ ] has some of the best writing on climate change when Phillip Price submits a post [ see ], and some of the worst when Andrew Gelman himself posts.

Andrew Gelman seems to be sympathetic to conspiratorial thinking about the scientific culture around climate change.  My feeling is that in his own field of statistics, Bayesian techniques have been actively suppressed and misrepresented, so Gelman is open to the idea that investigators who don't find evidence of human-caused global warming could be actively suppressed and misrepresented too.

Which would be fine, if the arguments were not so lame.
(Anonymous concern troll says any paper contradicting Arrhenius' 1896 climate model is likely to be self-suppressed. And if this is not the point of the anonymous question, what is?)
(Before Andrew Gelman steps onto a subway train, he ponders that the civil engineers who designed the train might be laboring under thoughts so stupid that they can only come from some aspect of the civil engineering consensus in a particularly ugly, undigested form. And he is paralyzed by fits of panic. And if this is not the point of Gelman's "Beyond my limited sphere of scientific comprehension, I dunno", what is?)

It could be more convincingly argued that:

(1) human activity is making an ice age less likely

(2) global warming skepticism and the preference for inaction play a semi-rational role in the debate, not on the merits of their arguments, but as a brake against premature solutions:

Al Gore holding up a mercury-containing compact fluorescent bulb in "An Inconvenient Truth",
Ed Begley, Jr. installing semiconductor solar panels to all exposed surfaces of his home,
the US government giving loans to Tesla for electric cars

... all of which are very likely contributing to burning *more* fossil fuels, not less, because the total costs over those products' entire lifetimes (including manufacturing plants that themselves burn fossil fuels, and safe disposal of wastes) are not reflected in the relatively small price charged to consumers.  Better to tax fossil fuels in rich countries, at a gradually increasing rate so as to do minimum harm to productivity, and spend that money on research placed in the public domain so even the poorest countries can benefit.

But instead, we get lame concern trolling about scientific conspiracies.  I get the tiresome feeling that the global warming skeptics need to be saved from themselves, that the people convinced by the scientific consensus must make the skeptics' best arguments for them, because the skeptics cannot help but simply echo the last thing they heard from someone with a direct financial link to oil companies.  Tiresome.

My reply to the anonymous concern troll:

> If the prediction of a climate model is very much outside the consensus predictions, it is not likely to be published.

More arguing that climate science is a nonesuch science.  Taken to the logical extreme, we can argue that Einstein's papers on Special and General Relativity were not likely to be published (and that is why we still use epicycles today).  Taken to the logical extreme, we can posit that Alex Rodriguez is not likely to swing for the fences.  The blockbuster behavior of the players in the 99.999th percentile is poorly predicted by the tentative behavior of the average player.

Second, science publishing is not the only market for climate modeling.  Commodities traders and the reinsurance market hedging risk on multi-year massive construction projects need accurate climate modeling, because on those time scales a long-range weather report would be worthless.  Those players are willing to leave millions on the table just so their hired-gun scientists can parrot safe results that are unlikely to rattle tea cups at the next faculty function?  Unlikely.

I beg your forgiveness for the following snarkiness.  Can your anonymous concern troll name a single branch of science that has remained on a strictly linear trajectory since 1896?  Besides phrenology.

[ Edit 2/15/10 ]

The goalposts have been moved in the comments on Andrew Gelman's post of February 12.  They moved to always subjecting the models to a growing set of data, never casting out past data (good, good).  They moved to meta-analysis of all available models over time (good, good).  They moved to the variance of published models compared to the subjective guessed distributions of individual practicing scientists (fine, fine).  But what happened to the original claim, "If the prediction of a climate model is very much outside the consensus predictions, it is not likely to be published"?

My comment submitted, in reply to Marc Levy:

> They find that if you ask climate experts to characterize their subjective best guess as to the distribution of key climate change parameters, you observe far more variance ... than you observe when you look at the distribution of all the climate model outputs.

This is to be expected, because no one would represent *any* model as perfectly describing reality; if it were perfect, it would no longer be a model.  Only pure mathematics has the benefit of being able to switch the analysis to a proved isomorphism that is easier to compute.  Every model is an adequate simplification over a domain, and it is hoped that its failure modes are understood so the model is not misused.  But the option of "proving" the model a perfect representation of reality is not available.

Useful scientific models typically give sharp results, sharper than field readings, even accounting for input precision, rounding in iteration, and so on.  The models are useful *because* they give sharp results; otherwise you would have the perverse consequence of improving the usefulness of a model by adding slop to it to increase the variance.  A bound on the error is useful to track, but no one would actually mix slop into a model to force the variance wider, even if the model's variance doesn't match field readings.
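A toy sketch of that perversity in Python (all signals, noise levels, and names here are made up for illustration): take a sharp model of a known truth, then add "slop" noise so its spread matches the field readings' spread. The widened model's error only gets worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a true signal, noisy field readings, and a sharp model.
n = 10_000
truth = np.sin(np.linspace(0, 10, n))           # the quantity being modeled
readings = truth + rng.normal(0, 0.5, n)        # field readings: wide measurement noise
sharp_model = truth + rng.normal(0, 0.05, n)    # sharp model: small error, tight variance

# "Slop" version: widen the model's output to match the readings' spread.
slop_model = sharp_model + rng.normal(0, 0.5, n)

def rmse(pred):
    return np.sqrt(np.mean((pred - truth) ** 2))

print(rmse(sharp_model))   # small error
print(rmse(slop_model))    # larger error: the added slop only hurts accuracy
```

The slop model's variance now resembles the readings', but by any accuracy measure it is strictly worse, which is the point: matching the spread of noisy observations is not a virtue in a model.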

Any expert knows very well the possible failure modes and other limitations of a particular model, so their subjective guessed distribution will have greater variance than the model itself because of that knowledge.  The scientist possesses what humans value as knowledge; the documented model cannot (and so scientists cannot be replaced with the models of their creation).  Why else might the variance be greater?  Perhaps the scientist is in possession of what they consider to be a better model, not yet published.  Or perhaps the scientist is simply aware of the possibility of a better model.
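The arithmetic behind this is just the law of total variance: if an expert's subjective distribution mixes over several candidate models (because the expert is unsure which model is right), its variance is the average within-model variance *plus* the variance of the models' central estimates, so it is necessarily wider than any single sharp model. A minimal sketch, with made-up numbers standing in for, say, climate sensitivity estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: three candidate models, each sharp but centered differently.
model_means = np.array([2.0, 2.8, 3.5])   # made-up central estimates
model_sd = 0.2                            # each model alone is tight

# Expert's subjective distribution: mix over the models with equal weight,
# reflecting uncertainty about which model is right.
picks = rng.integers(0, 3, size=100_000)
subjective = rng.normal(model_means[picks], model_sd)

# Law of total variance: Var = E[within-model var] + Var[model means]
print(subjective.std())   # wider than any single model's 0.2
```

So greater variance in the experts' subjective guesses than in the published model outputs is exactly what honest model-choice uncertainty predicts, not evidence of anything amiss.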

The relatively uncontroversial models of satellite orbits are informative.  Their predictions are tighter than observatory readings because the models, of course, consider fewer particles than Mother Nature is able to consider; that is also why they can run faster than reality, predict events in the future, and fit on economically available hardware.  No one would consider their tighter variance surprising, much less a failure of the model.  Only if there were misrepresentation of the best knowledge of the model's error bound, failure modes, or applicable domain would there be a problem, and even then it would not be a failure of the model, but a misapplication by a human agent.

Can I note that the goalposts have been moved?  The original issue was "stabilizing feedback" and the original question contained the assertion "If the prediction of a climate model is very much outside the consensus predictions, it is not likely to be published."  There are other interesting issues to consider, but only after the parties admit that the goalposts have been moved and the focus of the argument shifted.
