Showing posts with label Andrew Gelman.

Wednesday, December 29, 2010

How trusted are the "Big Five" Personality Traits in mainstream psychology?

Image: "Bathroom Personality Assessment - Part 2" by Graela on Flickr (http://www.flickr.com/photos/9441263@N04/3984047524/)
Comments to "Brain Structure and the Big Five"

Sanjay Srivastava | December 29, 2010 10:36 AM
Walter Mischel's famous critique predated the Big Five. His critique was of the concept of a personality trait more broadly. If you ask around at Mischel's home department they'll probably tell you that Mischel won the argument, but that's not the mainstream view among personality psychologists elsewhere. In fact, I don't think there's a single mainstream view on traits or the Big Five, but I'd guess that many personality psychologists would endorse, at a minimum, "useful enough until a better model comes along." Some might go a lot farther. (Some of my own views on the Big Five are in this paper, if you're curious.)
Russell Almond | December 29, 2010 11:15 AM
A quick comment about the Big-5. A couple of years ago, I did some consulting with a psychologist who was developing new personality measures. The standard practice for validating the new measure was to give it as part of a battery to a sample of the target population, along with the Big 5 and other known measures that were similar to the target. I got the impression that it wasn't that the Big 5 were thought of as the answer to everything, but that it was a starting point that most people in the field understood. The burden of proof was to show that your proposed measure was something other than a composite of the 5 factors in the Big 5.
My comment: I am trying to get a fix on how much the Big Five is trusted. As George E. P. Box put it: "All models are wrong, but some are useful."

Monday, August 30, 2010

How can we define "Useful", when it comes to our models of reality?

Image: Wikimedia Commons, "N-Gauge Cassiopeia E26 & EF81 from Kato"
A comment to "Useful models, model checking, and external validation: a mini-discussion" by Andrew Gelman - Statistical Modeling, Causal Inference, and Social Science

Gelman wrote (with Cosma Shalizi) a very fine philosophical justification for real-world, effective Bayesian techniques, which differs greatly from the usual philosophy associated with Bayesians.

Gelman, Shalizi: Philosophy and the practice of Bayesian statistics in the social sciences

My own recurring criticism of Gelman is his use of Bayesian/statistical models to the exclusion of others.  I am more comfortable with the idea of models of different types in competition.
http://manuelmoeg.blogspot.com/search/label/Model%20building
The multiple models you then have will compete across different uses, based on predictive power, accuracy, ability to calculate meaningful error ranges, cost of collecting data, cost of computation, cost of comparison, ability to predict outcomes from interventions, cost of understanding, etc.

quoting Gelman:
"We always talk about a model being "useful" but the concept is hard to quantify."
My comment:
Simply build a model of the costs and gains, and of the methods of comparison between models!  If a model is good enough for your work, then a model must be good enough as a working definition of "useful"!
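To make the suggestion concrete, here is a minimal sketch of what such a "model of model comparison" could look like; the candidate models, scores, and weights are all invented for illustration:

```python
# A toy "model of model usefulness": score competing models of the same phenomenon
# on several criteria, weighted by what the decision at hand actually needs.
# All models, scores, and weights below are invented for illustration only.

candidate_models = {
    "bayesian_hierarchical": {"predictive_power": 0.85, "data_cost": 0.7, "compute_cost": 0.6, "interpretability": 0.5},
    "simple_regression":     {"predictive_power": 0.70, "data_cost": 0.2, "compute_cost": 0.1, "interpretability": 0.9},
    "causal_dag":            {"predictive_power": 0.75, "data_cost": 0.5, "compute_cost": 0.3, "interpretability": 0.8},
}

# Weights encode the costs and gains that matter for *this* use; costs get negative weight.
weights = {"predictive_power": 1.0, "data_cost": -0.4, "compute_cost": -0.2, "interpretability": 0.6}

def usefulness(scores):
    """Working definition of 'useful': a weighted sum of gains minus costs."""
    return sum(weights[k] * v for k, v in scores.items())

for name, scores in sorted(candidate_models.items(), key=lambda kv: -usefulness(kv[1])):
    print(f"{name:25s} usefulness = {usefulness(scores):.2f}")
```

The point is only that "useful" becomes an explicit, criticizable calculation rather than an intuition.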

Sometimes the best answer to "Why" is "Just because".  Sometimes the best mechanism for rating different models is another model.  The Skeptics will always howl, so you simply have to demonstrate that their own behavior is consistent with putting undue confidence in their own model, whether that model is conscious or unconscious.  (And it must simply terminate with a model, because of the limits of the tools available to the human brain.  Only a model, probably over-simplified, can be manipulated with the agility needed to predict, in real time, future outcomes of the universe from actions considered now.)

Just keep asking the Skeptic "Why" with regard to their own personal actions, and when they hit the "Just because" point, they have probably described a model of utility, assumed true without proof, as the answer to "Why" at the previous step.

If the Skeptic refuses ultimate responsibility over their personal actions, and tries to plead pure capriciousness or mystery, then their model is simply statistical, based on stimulus and internal states (like stress) that can be approximately discovered with objective external measures (like galvanic skin response).  Of course, it is easier to plead pure capriciousness or mystery than to demonstrate it - if their behavior is well predicted by a deterministic model suggested by another, the Skeptic is silenced.  Most of the time the reasons for behavior are gross and banal, no matter how elevated the sophistry of the Skeptic.

Thursday, March 25, 2010

Andrew Gelman said something significant, above my head

A new post by Andrew Gelman, with a quite wordy title

The single most useful piece of advice I can give you, along with a theory as to why it isn't better known, all embedded in some comments on a recent article that appeared in the Journal of the American College of Cardiology


I would summarize, but I am embarrassed to say I understand very little of it.  In a comment, I made an attempt:
Hello Prof. Gelman,

Are you saying "model building" will naturally lead to applying fruitful transformations that will lead to statistics that do more than only prove "a formally statistically significant difference for a trivial effect"?

(By "model building" you mean the scientist taking responsibility for an abstraction that goes beyond statistics, i.e. causality and value judgments about what is more than a trivial effect.)

I am having trouble translating your description into something I can understand, so I would appreciate your help if I made a hash of things with my little summary.
I will add edits to this as I learn more.

I feel Pearl's causality graphs (directed acyclic graphs, to be specific) are the appropriate format in which to present any model.  If you want to allow the possibility of "no true zeros", then use multiple models, and "collapse" all the points where you wish to use statistics to show the possibility of "no true zeros", maybe even "collapsing" everything into a single point!  The multiple models you then have will compete across different uses, based on predictive power, accuracy, ability to calculate meaningful error ranges, cost of collecting data, cost of computation, cost of comparison, ability to predict outcomes from interventions, cost of understanding, etc.
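As a toy illustration of the "collapsing" idea (my own sketch, not Pearl's or Gelman's formalism; the DAG and the nodes chosen for collapse are invented):

```python
# Toy illustration of "collapsing" part of a causal DAG into a single node,
# so statistics can take over where we no longer assert causal structure.
# The graph and the nodes chosen for collapse are invented for illustration.

dag = {  # parent -> list of children
    "diet":          ["cholesterol"],
    "exercise":      ["cholesterol", "weight"],
    "cholesterol":   ["heart_disease"],
    "weight":        ["heart_disease"],
    "heart_disease": [],
}

def collapse(graph, nodes_to_merge, merged_name):
    """Merge a set of nodes into one composite node, keeping all outside edges."""
    out = {merged_name: []}
    for parent, children in graph.items():
        src = merged_name if parent in nodes_to_merge else parent
        out.setdefault(src, [])
        for child in children:
            dst = merged_name if child in nodes_to_merge else child
            out.setdefault(dst, [])
            if dst != src and dst not in out[src]:
                out[src].append(dst)
    return out

# Give up on the causal detail among the behavioral variables:
coarse = collapse(dag, {"diet", "exercise", "weight"}, "lifestyle")
print(coarse)
# {'lifestyle': ['cholesterol', 'heart_disease'], 'cholesterol': ['heart_disease'], 'heart_disease': []}
```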

"No true zeros" - see Andrew Gelman's Review Essay "Causality and Statistical Learning" Section Heading: "There are (almost) no true zeroes: difficulties with the research program of learning causal structure" http://www.stat.columbia.edu/~cook/movabletype/archives/2010/03/causality_and_s.html

I also have a hard time understanding all this in isolation from a model of a rational being working under a motivating sense of responsibility to make a decision about an action (or about remaining inactive).  I especially have trouble with statistical analysis divorced from its utility in making a decision.  Comprehension is nice, but comprehension that cannot play a part in any morally motivated decision is valueless.

Friday, March 5, 2010

Statistics versus Causality - A predictable impasse

My undignified reply to Andrew Gelman's take on Causality and Statistical Learning


The causality people and the statistics people are talking past each other, your [Andrew Gelman's] 12 page magnum opus included.

Point 0) Sense of responsibility → decision → commitment to action/inaction → action/inaction ⇒ implies you possess a general description of reality, unless you are limiting yourself to a very narrow sphere of responsibility.

Point 1) Statistics cannot be the basis for a general description of reality because of Simpson's Paradox.  When it arises, the paradox can only be eliminated by an appeal to plausible causality, directly or indirectly.  Also, no statistical test exists, for a static situation, to predict what relationships would prevail if conditions change -- again, only an appeal to causality can do that.  (See Judea Pearl's book Causality, chapter 6)
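A small numerical illustration of the paradox (the counts are textbook-style numbers, chosen for illustration), showing why no within-dataset statistical test can resolve it without a causal story:

```python
# Illustrative counts showing Simpson's paradox: a treatment looks better
# within each subgroup, yet worse in the pooled data.
groups = {
    # group: (treated_successes, treated_total, control_successes, control_total)
    "mild_cases":   (81, 87, 234, 270),
    "severe_cases": (192, 263, 55, 80),
}

pooled = [0, 0, 0, 0]
for name, (ts, tn, cs, cn) in groups.items():
    print(f"{name:12s}  treated {ts/tn:.0%}  vs  control {cs/cn:.0%}")
    pooled = [pooled[0] + ts, pooled[1] + tn, pooled[2] + cs, pooled[3] + cn]

ts, tn, cs, cn = pooled
print(f"{'pooled':12s}  treated {ts/tn:.0%}  vs  control {cs/cn:.0%}")
# Within each group the treatment wins; pooled, it appears to lose.
# Deciding which comparison to trust requires a causal assumption about how
# patients ended up treated or untreated, not another statistical test.
```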

Point 2) Causality cannot be the basis for a general description of reality because reality violates the assumptions of independence needed for effective causal analysis ("no true zeroes", as you put it).  Reality doesn't even adhere to the laws of conditional probability [ http://www.stat.columbia.edu/~cook/movabletype/archives/2009/09/the_laws_of_con.html ], much less the structure of independence needed for causal analysis.

Point 3) There are no other contenders for general descriptions of reality besides statistics or causality.

Conclusion) SOL

So people, under the burden of responsibility, must maintain several models of reality, over smaller and larger domains of applicability: some statistical, some causal, some based on symmetry and curve fitting, some based on the laws of probability, some based on scientific laws, some based on economic laws, some based on rules of thumb, some based on multiple simulation runs, some hybrids.  These models compete against each other, at the cost of maintenance, data collection, computation, and comparison, with the benefit of correct probabilistic predictions of the consequences of action/inaction, or the benefit of demonstrating a range of uncertainty so broad that it swamps any discernment of differences between decisions.

And the sense of responsibility is made of shifting sands, and human values and goals are not static.  So you could pay all the costs for a model, just to dispense with it.

But all this *still* can be done for individuals or small groups.  Once you get past 30 members, what is rewarded are techniques for rubber-stamping decisions already taken by the politically powerful, under the name of "objective analysis", for political cover.

So "small" decisions can be made quite well, with effort.  And "large" decisions are made quite poorly, because evidence of a cold calculated analysis would be blood on the hands of the politically powerful (besides, the ability to perform such analysis is in opposition to dumb loyalty, which is the most prized character trait of the privileged in-group).  But these "large" lousy decisions possess notoriety, and thus human appeal.  So a thousand pages each over describing a thousand theories chase after a relative small number of very poor decision making processes.

The consequences of all this may dim my sparkling optimism, so I must leave that as an exercise for others.




Friday, February 12, 2010

Andrew Gelman and Climate Change Concern Trolling

Andrew Gelman's blog [ http://www.stat.columbia.edu/~cook/movabletype/mlm/ ] has some of the best writing on climate change when Phillip Price submits a post [ see http://manuelmoeg.blogspot.com/2009/12/talking-about-climate-change-publishing.html ] and has some of the worst writing when Andrew Gelman himself posts.

Andrew Gelman seems to be sympathetic to conspiratorial thinking about the scientific culture around climate change.  My feeling is that in his own field of statistics, Bayesian techniques have been actively suppressed and misrepresented, so Gelman is open to the idea that investigators who don't see human global warming could be actively suppressed and misrepresented too.

Which would be fine, if the arguments were not so lame.

http://www.stat.columbia.edu/~cook/movabletype/archives/2010/02/stabilizing_fee.html
(The anonymous concern troll says any paper contradicting Arrhenius' 1896 climate model is likely to be self-suppressed. And if this is not the point of the anonymous question, what is?)
http://www.stat.columbia.edu/~cook/movabletype/archives/2009/12/how_do_i_form_m.html
(Before Andrew Gelman steps onto a subway train, he ponders that the civil engineers who designed the train might be laboring under thoughts so stupid that they can only come from some aspect of the civil engineering consensus in a particularly ugly undigested form. And he is paralyzed by fits of panic. And if this is not the point of Gelman's "Beyond my limited sphere of scientific comprehension, I dunno", what is?)

It could be more convincingly argued that:

(1) human activity is making an ice age less likely

(2) global warming skepticism and the preference for inaction play a semi-rational role in the debate, not on the strength of their arguments, which is poor, but as a brake against premature solutions:

Al Gore holding up a mercury light bulb in "An Inconvenient Truth",
Ed Begley, Jr. installing semiconductor solar panels to all exposed surfaces of his home,
the US government giving loans to Tesla electric cars

... all of which are very likely contributing to burning *more* fossil fuels, not less, because the total costs over those products' entire lifetimes (including manufacturing plants that themselves burn fossil fuels, including safe disposal of wastes) are unrepresented by the relatively small price charged to consumers.  Better to tax fossil fuels in rich countries and spend that money on research placed in the public domain, so even the poorest countries can benefit.  Increase the tax at a gradual rate, so as to do minimal harm to productivity.

But, instead, we get lame concern trolling about scientific conspiracies.  I get the tiresome feeling that the global warming skeptics need to be saved from themselves, that the people convinced by the scientific consensus must make the skeptics' best arguments for them, because the skeptics cannot help but simply echo the last thing they heard from someone with a direct financial link to oil companies.  Tiresome.

My reply to the anonymous concern troll:

> If the prediction of a climate model is very much outside the consensus predictions, it is not likely to be published.

More arguing that climate science is a nonesuch science.  Taken to the logical extreme, we can argue that Einstein's papers on Special and General Relativity were not likely to be published (and that is why we still use epicycles today).  Taken to the logical extreme, we can posit that Alex Rodriguez is not likely to swing for the fences.  The blockbuster behavior of players in the 99.999th percentile is poorly predicted by the tentative behavior of the average player.

Secondly, science publishing is not the only market for climate modeling.  Commodities traders and the reinsurance market for hedging risk on massive multi-year construction projects have a need for accurate climate modeling, because on those time scales a long-range weather report would be worthless.  Those players are willing to leave millions on the table just so their hired-gun scientists can parrot safe results that are unlikely to rattle teacups at the next faculty function?  Unlikely.

I beg your forgiveness for the following snarkiness.  Can your anonymous concern troll name a single branch of science that has remained on a strictly linear trajectory since 1896?  Besides phrenology.

[ Edit 2/15/10 ]

The goalposts have been moved in the comments to Andrew Gelman's post of February 12.  They moved to always having the models subjected to a growing set of data, never casting out past data (good, good).  They moved to meta-analysis of all available models, over time (good, good).  They moved to the variance of published models compared to the subjective guessed distributions of the individual practicing scientists (fine, fine).  But what happened to the original claim, "If the prediction of a climate model is very much outside the consensus predictions, it is not likely to be published"?

My comment submitted:
Marc Levy

> They find that if you ask climate experts to characterize their subjective best guess as to the distribution of key climate change parameters, you observe far more variance ... than you observe when you look at the distribution of all the climate model outputs.


This is to be expected, because no one would represent *any* model as perfectly describing reality - if it were perfect, it would no longer be a model.  Only pure mathematics has the benefit of being able to switch the analysis to a proved isomorphism that is easier to compute.  Every model is an adequate simplification, over a domain, and it is hoped the failure modes are understood so the model is not misused.  But the option of "proving" the model a perfect representation of reality is not available.

Useful scientific models typically give sharp results - sharper results than field readings, even accounting for input precision, rounding in iteration, etc.  The models are useful *because* they give sharp results - or else you would have the perverse consequence of improving the usefulness of a model by adding slop into it to increase the variance.  A bound on the error is useful to track, but no one would actually mix slop into a model to force the variance wider, even if the model's variance doesn't match the field readings.
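A minimal simulation of that perverse consequence, with invented numbers: adding slop makes the model's spread "match" the noisy field readings, while its error against the underlying truth only gets worse.

```python
# Invented simulation: "improving" a model by adding slop widens its spread
# toward the spread of noisy field readings, without improving the model at all.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(0.0, 1.0, 10_000)                # the quantity being modeled
field = truth + rng.normal(0.0, 1.0, 10_000)        # noisy field readings of it
sharp_model = truth + rng.normal(0.0, 0.2, 10_000)  # a sharp model: small error vs truth

for slop_sd in (0.0, 0.5, 1.0):
    sloppy = sharp_model + rng.normal(0.0, slop_sd, 10_000)  # deliberately widen the model
    print(f"slop sd={slop_sd:3.1f}  model spread={np.std(sloppy):.2f}  "
          f"field spread={np.std(field):.2f}  error vs truth={np.std(sloppy - truth):.2f}")
# With enough slop the model's spread matches the spread of the field readings,
# but its error against the underlying truth only grows.
```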

Any expert would know very well all the possible failure modes and other limitations of a particular model, so that their subjective guessed distribution would have greater variance than the considered model, because of that knowledge.  The scientist possesses what humans value as knowledge; the documented model cannot (and so scientists cannot be replaced with the models of their creation).  Why else might the variance be greater?  Perhaps the scientist is in possession of what they consider to be a better model, not yet published.  Or perhaps the scientist is simply aware of the possibility of a better model.

The relatively uncontroversial models of satellite orbits are informative.  They are tighter because they, of course, consider fewer particles than Mother Nature is able to consider.  That is why they can consider events in the future: they run faster than reality, and they run on economically available hardware.  No one would consider their variance being tighter than the variance of observatory readings surprising, much less a failure of the model.  Only if there were misrepresentation of the best knowledge of the model's error bound, or failure modes, or applicable domain, would there be a problem, and then it would not be a failure of the model, it would be a misapplication by a human agent.

Can I note that the goalposts have been moved?  The original issue was "stabilizing feedback" and the original question contained the assertion "If the prediction of a climate model is very much outside the consensus predictions, it is not likely to be published."  There are other interesting issues to consider, but only after the parties admit that the goalposts have been moved and the focus of the argument shifted.

Monday, December 28, 2009

Andrew Gelman on over-use of Economics Utility Model to explain all of psychological behavior


I was thinking about this recently. Many times, we can model people's behavior as a bootstrap process: people use a personal, informal, emotional process to decide whether or not to engage in rational (or semi-rational) utility analysis.

[personal/informal/emotional process] ⇒ {{{decision point}}} ⇒ [begin rational utility analysis]

If they "drop out" at the decision point, nothing worth calling a rational utility analysis even gets started.

Many people are so overwhelmed by grappling with the critical issues of life that they distract themselves into a silly stupor that makes rational utility analysis impossible.
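A minimal sketch of this two-stage picture, with invented probabilities: if most people drop out at the decision point, the downstream quality of the utility analysis barely moves the average outcome.

```python
# Invented two-stage model: an informal, emotional gate decides whether a
# rational utility analysis even starts; only then does analysis quality matter.
import random
random.seed(1)

def simulate(p_engage, p_good_if_analyzed, p_good_if_not, n=100_000):
    good = 0
    for _ in range(n):
        if random.random() < p_engage:          # emotional gate: start the analysis?
            good += random.random() < p_good_if_analyzed
        else:                                    # dropped out: no analysis happens
            good += random.random() < p_good_if_not
    return good / n

# Even a large improvement in analysis quality moves the average outcome very
# little when only 10% of people ever get past the emotional gate.
print(simulate(p_engage=0.1, p_good_if_analyzed=0.9, p_good_if_not=0.4))
print(simulate(p_engage=0.1, p_good_if_analyzed=0.6, p_good_if_not=0.4))
```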


Andrew Gelman: Taxation curves and poverty traps - Statistical Modeling, Causal Inference, and Social Science: "
I think the concept of utility is extremely useful, and I've used it in my own applied work (see my papers on the utility of voting and on radon mitigation or the chapter on decision analysis in BDA). Utility is a model, and it's great.
My problem is when people think that the utility model can/should explain everything.
For example, as I've discussed on the blog, I don't think the utility model is particularly useful for explaining uncertainty aversion, seeing as the essence of the 'uncertainty aversion' phenomenon is that preferences can depend on how they are framed and how they are set up in terms of probabilities--two things that violate the classical von Neumann axioms in which preferences should only depend on the ultimate outcomes and their total probabilities, not on where these probabilities come from.
I think it's just sad that utility functions have become a default way of explaining all sorts of psychological processes that don't fit the model so well (requiring the sort of epicyclic adjustments that can make the model more trouble than it's worth). I can respect the general endeavor to take a model and push it as far as you can--to see what tweaks can be done to make it work further than it was originally intended--but, at some point, I think it makes sense to recognize the practical limitations of any mathematical model.
So, yes, I don't think utilities (or, for that matter, preferences) 'exist' in some Platonic sense. But I still think utility theory is great. I think the normal distribution is great, too, even though it can be misused in all sorts of ways!
"


In a follow-up comment by Gelman:
Nathan (and Dan): I think prospect theory is great. I just don't like trying to explain uncertainty aversion using a nonlinear utility function of money (which, as I and others have shown repeatedly, makes no sense at all when you try to look at it quantitatively), and I really really don't like having to explain this to people over and over again, people whose technical ability is such that they could've realized in the first place the impossibility of explaining uncertainty-aversion-at-any-scale using a curving utility function. And I also don't like the term "risk aversion" casually used in a way that blurs three different phenomena: aversion to risk, aversion to loss, and aversion to uncertainty.
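To illustrate the quantitative point Gelman alludes to, here is a minimal sketch of my own (not Gelman's calculation; the wealth level and gamble sizes are invented): if a concave utility-of-money function is calibrated to explain aversion to a small 50/50 gamble, the same curvature implies rejecting enormously favorable large gambles.

```python
# A minimal sketch of why a curving utility-of-money function cannot explain
# aversion to small gambles without implying absurd behavior at larger scales.
# Wealth level and gamble sizes are illustrative assumptions.
from scipy.optimize import brentq

W = 300_000.0  # assumed current wealth

def crra(w, rho):
    """CRRA utility, normalized so that u(W) = 0 (affine transforms don't matter)."""
    a = 1.0 - rho
    return ((w / W) ** a - 1.0) / a

def small_gamble_value(rho):
    """Expected utility of a 50/50 gamble (lose $100 / gain $110), relative to staying put."""
    return 0.5 * crra(W - 100, rho) + 0.5 * crra(W + 110, rho)

# Find the risk-aversion coefficient that makes the agent indifferent to the small gamble.
rho_star = brentq(small_gamble_value, 1.0001, 5000.0)
print(f"implied relative risk aversion: {rho_star:.0f}")  # on the order of hundreds

# With that same rho, a 50/50 gamble of lose $1,000 / gain $1,000,000 is *rejected*:
eu_big = 0.5 * crra(W - 1_000, rho_star) + 0.5 * crra(W + 1_000_000, rho_star)
print(f"EU of big gamble relative to status quo: {eu_big:.5f}")  # negative -> rejected
```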

Monday, December 14, 2009

How do I form my attitudes about scientific questions? - Statistical Modeling, Causal Inference, and Social Science


Andrew Gelman at Statistical Modeling, Causal Inference, and Social Science.

How do I form my attitudes about scientific questions? - Statistical Modeling, Causal Inference, and Social Science: "
It's not that the scientific consensus is stupid, it's that some statements are so stupid that they only come because the speaker has processed some aspect of the consensus in a particularly ugly undigested form.
...
...To me, it's another case where the existence of the consensus has switched off people's brains.
"

This point is valid, and well put, but if you read the whole post, I think Andrew Gelman is being far too pessimistic.

> "What do I recommend you all do? On subjects where Phil and I are the experts, I suggest you listen to what we have to say. Beyond that, I dunno."

This is very pessimistic and skeptical of considered consensus, and it is contradicted by Andrew Gelman's daily life. Before I step onto a subway train, I don't form opinions about the quality of the considered consensus of civil engineers, and Mr. Gelman does not either.

Commenter "jonathan" makes the point:

> I think you've raised two separate issues. One is the process by which consensus builds, entrenches, shifts, etc. The other is how rational people make rational decisions about information.
> It's interesting to me how in a few notable areas the two are lumped together: the idea that biologists are maintaining some (evil) consensus in favor of evolution and that climate scientists, etc. are doing the same with regard to climate change.
> ...


If you step back and compare "Skepticism of Human Activity Causing Global-Warming/Climate-Change" to established cases of motivated obscurantism - denial of evolution and natural selection, of tobacco carcinogenicity, of the Jewish Holocaust of WWII, of the efficacy of the polio vaccine - and perhaps less established cases of motivated obscurantism, like the claim that controlled demolition took down the Twin Towers and HIV/AIDS denialism, you see familiar patterns, similar techniques, and motivations both sinister and innocent-by-way-of-ignorance/gullibility. It will seem like bad form to the self-described "Skeptics", but they could bring doubters into their fold by work - the work of authoritatively publishing their opposing immutable thesis, and welcoming it being subjected to the highest standard of scrutiny. And what are we to make of the "Skeptics" doing everything _except_ that work?

The considered consensus of the scientific experts, here, is slowly growing and publishing an authoritative immutable thesis of its own - far too slowly and too messily and with too much initial unwarranted speculation for an impatient world - but at least they are building something up for a possible future champion to knock down. And if it resists being knocked down, we have a consensus where it would be "perverse to withhold provisional assent", to use Gould's phrase.


As for motivation within this possible case of motivated obscurantism, how can I discount the astroturf and sympathetic goodwill David Koch has purchased and does purchase?

If you draw the boundary of consideration small enough, "I dunno" seems like honest skepticism of considered consensus. But what is the compelling reason to draw the boundary of consideration so small as to ignore the case for motivated obscurantism?

Thursday, November 5, 2009

Slipperiness of the term "risk aversion" - thoughts of Andrew Gelman


[ For the purposes of this post, I am defining "risk" as a "non-zero probability of an undesirable outcome". I believe this definition is consistent with everything in this post, including Andrew Gelman's original comment. ] Interesting post by Andrew Gelman: Slipperiness of the term "risk aversion" - Statistical Modeling, Causal Inference, and Social Science: "

But I'm bothered by the term 'risk aversion.' Why exactly is it appropriate to refer to strict rules on drug approvals as 'risk averse'? In a general English-language use of the words, I understand it, but it gets slippery when you try to express it more formally.

I understand what Alex is saying--people are afraid of the risk of an adverse drug reaction, with this fear being 'risk averse' rather than simple rational prudence if the cost of the risk aversion outweighs, in expectation, the risk being avoided. (After all, we don't call it 'risk averse' to avoid going down Niagara Falls in a barrel. The idea of 'aversion' is that one is evaluating a tradeoff using a rule that is more stringent than the calculation of expected values.)

Still, it's tricky to refer to this as 'risk aversion' in a general sense. In the drug-approval context, there are two risks--the risks from an adverse drug reaction, and, on the other side, the risk of something bad happening that could've been prevented by taking the drug. It's risk vs. risk. What if someone said we should approve just about every drug, so as to avoid the risk of some otherwise-preventable condition? That would be risk-averse in another way, right?

This stance might seem fanciful, but I actually think it's pretty common, if you shift the context just slightly. Having done some (academic) work on pest control, I've learned that the most effective method of reducing home roach infestation is to clean the place, put poison in the cracks in the walls, and seal the cracks. 'Bombing' the apartment doesn't really do the trick. It kills some roaches but then the others come back. And this is beyond whatever poisoning you might get from the pesticide that's sprayed all over.

Nonetheless, people just love, love that bombing. Every month in our building they put up a list asking who wants their apartment bombed, and lots of people sign up. (And, beyond these individual choices, there's an institutional choice to bomb people's apartments for free. Nobody's offering to clean and seal our apartments for free.) Every month they do it, so I'm pretty sure the roaches are coming back.


To get back to the main point of discussion, this behavior can be viewed as risk-seeking or risk-averse. Risk-seeking because people are taking on a risk of being exposed to poison and basically getting nothing out of it. Or, risk-averse because people are willing to do something pretty extreme to avoid the risk of roach exposure. In general, the 'take a pill for it' or 'bomb it' attitude can be seen as risk-averse. Or not, depending on how you look at it.

"
My Comment:
Yes, I think I understand what you are saying. Converting your wealth to *any* basket of goods has risk. Turn all your wealth into gold, fearing inflation, and you are badly situated for a Mad Max Carmageddon rapid societal collapse (if you try to trade gold for firearms, you will simply have the firearms pointed at you). Turn *all* your wealth into gasoline and Chevys and firearms, and you are badly situated for any other possible world. Any action carries risk, any bout of inaction carries risk. So I understand your point to be: instead of having the cultural norms pick which risks count and which risks don't count, and describing some actions as "risk averting", rigor demands specifying the risks for all actions and also for inaction, and specifying how you rank or discount risks relative to each other. I meant to open up my copy of Sam Savage's _Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty_ and see what he wrote about "risk aversion". The whole book is commendable - readable and rigorous.
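A minimal sketch of what "specifying the risks for all actions and also for inaction" could look like; the scenarios, probabilities, and losses are invented:

```python
# Invented decision table: every action, including inaction, carries risk under
# some scenario. "Risk aversion" only becomes well defined once the scenarios,
# their probabilities, and the rule for weighing losses are written down explicitly.
scenarios = {"inflation": 0.5, "stable": 0.45, "societal_collapse": 0.05}

losses = {  # loss of each course of action under each scenario (arbitrary units)
    "hold_cash":   {"inflation": 30, "stable": 0,  "societal_collapse": 80},
    "all_in_gold": {"inflation": 0,  "stable": 10, "societal_collapse": 90},
    "diversified": {"inflation": 10, "stable": 5,  "societal_collapse": 60},
    "do_nothing":  {"inflation": 30, "stable": 0,  "societal_collapse": 80},
}

for action, table in losses.items():
    expected = sum(scenarios[s] * loss for s, loss in table.items())
    worst = max(table.values())
    print(f"{action:12s} expected loss={expected:5.1f}   worst case={worst}")
# Which action counts as "risk averse" depends entirely on whether you rank by
# expected loss, by worst case, or by some other explicit rule.
```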

Thursday, October 22, 2009

Battling Models of Reality

Great point by Andrew Gelman, on having your models of reality travel in groups of two or three. Winston Churchill on statistical modeling:


Winston Churchill said that sometimes the truth is so precious, it must be attended by a bodyguard of lies. Similarly, for a model to be believed, it must, except in the simplest of cases, be accompanied by similar models that either give similar results or, if they differ, do so in a way that can be understood.

In statistics, we call these extra models 'scaffolding,' and an important area of research (I think) is incorporating scaffolding and other tools for confidence-building into statistical practice. So far we've made progress in developing general methods for building confidence in iterative simulations, debugging Bayesian software, and checking model fit. My idea for formalizing scaffolding is to think of different models, or different versions of a model, as living in a graph, and to consider operations that move along the edges of this graph of models, both as a way to improve fitting efficiency and as a way to better understand models by making informative comparisons. The graph of models connects to some fundamental ideas in statistical computation, including parallel tempering and particle filtering.

P.S. I want to distinguish scaffolding from model selection or model averaging. Model selection and averaging address the problem of uncertainty in model choice. The point of scaffolding is that we would want to compare our results to simpler models, even if we know that our chosen model is correct. Models of even moderate complexity can be extremely difficult to understand on their own.


A minor criticism: a model cannot be "correct", or else it isn't a model anymore.
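A minimal sketch of one way to read the "graph of models" idea (my own reading, not Gelman's formalization; the models and fit numbers are invented):

```python
# Invented "graph of models": nodes are model variants, edges are single
# modeling moves (add a predictor, add partial pooling, ...). Walking the
# edges and comparing neighbors is the scaffolding that builds confidence
# in the final, more complex model.
model_graph = {
    "intercept_only":      ["plus_age"],
    "plus_age":            ["plus_age_and_region", "plus_age_pooled"],
    "plus_age_and_region": ["plus_age_pooled"],
    "plus_age_pooled":     [],
}

fit_score = {  # invented fit summaries, e.g. out-of-sample error; lower is better
    "intercept_only": 1.00,
    "plus_age": 0.71,
    "plus_age_and_region": 0.65,
    "plus_age_pooled": 0.63,
}

def compare_neighbors(graph, scores):
    """Report how the fit changes along each edge of the graph of models."""
    for simpler, neighbors in graph.items():
        for richer in neighbors:
            delta = scores[simpler] - scores[richer]
            print(f"{simpler:22s} -> {richer:22s}  improvement = {delta:+.2f}")

compare_neighbors(model_graph, fit_score)
```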