Monday, February 27, 2012

Nice sarcastic invite for blog comments

Two young men being sarcastic (Image via Wikipedia)
While reading Reddit Politics, I spotted this comment pointing out the solicitation for comments on the site "The Big Picture":

http://www.reddit.com/r/politics/comments/ftpis/president_of_fox_news_will_be_indictedmaybe_even/c1ijv3p

The Big Picture - http://www.ritholtz.com/blog/

"Please use the comments to demonstrate your own ignorance, unfamiliarity with empirical data, ability to repeat discredited memes, and lack of respect for scientific knowledge. Also, be sure to create straw men and argue against things I have neither said nor even implied. Any irrelevancies you can mention will also be appreciated. Lastly, kindly forgo all civility in your discourse . . . you are, after all, anonymous."


Friday, March 4, 2011

The importance of stupidity in scientific research

Dare to Be Stupid (Image via Wikipedia)
From a comment:
Here's Feynman on the 'terrible uncomfortable feeling called confusion'. And here's a great little paper on 'the importance of stupidity in scientific research' - "actively seek out new opportunities to feel stupid."

The importance of stupidity in scientific research

Excerpt:
I recently saw an old friend for the first time in many years. We had been Ph.D. students at the same time, both studying science, although in different areas. She later dropped out of graduate school, went to Harvard Law School and is now a senior lawyer for a major environmental organization. At some point, the conversation turned to why she had left graduate school. To my utter astonishment, she said it was because it made her feel stupid. After a couple of years of feeling stupid every day, she was ready to do something else.

I had thought of her as one of the brightest people I knew and her subsequent career supports that view. What she said bothered me. I kept thinking about it; sometime the next day, it hit me. Science makes me feel stupid too. It's just that I've gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn't know what to do without that feeling. I even think it's supposed to be this way. Let me explain.

For almost all of us, one of the reasons that we liked science in high school and college is that we were good at it. That can't be the only reason – fascination with understanding the physical world and an emotional need to discover new things has to enter into it too. But high-school and college science means taking courses, and doing well in courses means getting the right answers on tests. If you know those answers, you do well and get to feel smart.

A Ph.D., in which you have to do a research project, is a whole different thing. For me, it was a daunting task. How could I possibly frame the questions that would lead to significant discoveries; design and interpret an experiment so that the conclusions were absolutely convincing; foresee difficulties and see ways around them, or, failing that, solve them when they occurred? My Ph.D. project was somewhat interdisciplinary and, for a while, whenever I ran into a problem, I pestered the faculty in my department who were experts in the various disciplines that I needed. I remember the day when Henry Taube (who won the Nobel Prize two years later) told me he didn't know how to solve the problem I was having in his area. I was a third-year graduate student and I figured that Taube knew about 1000 times more than I did (conservative estimate). If he didn't have the answer, nobody did.

That's when it hit me: nobody did. That's why it was a research problem. And being my research problem, it was up to me to solve. Once I faced that fact, I solved the problem in a couple of days. (It wasn't really very hard; I just had to try a few things.) The crucial lesson was that the scope of things I didn't know wasn't merely vast; it was, for all practical purposes, infinite. That realization, instead of being discouraging, was liberating. If our ignorance is infinite, the only possible course of action is to muddle through as best we can.

Monday, January 3, 2011

Links on Rational Discussion

Heavy Burden (Image via Wikipedia)
Currently, I see rationality as being about holding yourself and your allies to a very high standard. Worrying about the standard of rationality of enemies and opponents should be a very small part of it. There is a finite amount of energy, and that energy is best spent keeping the self from deluding the self with comfortable ideas.
It is interesting that nobody wants to be seen as irrational, yet very few happily assume the burden of holding their own thoughts to a very high standard of rationality.
John Wilkins - Evolving Thoughts - A Code of Conduct for Effective Rational Discussion
  1. The Fallibility Principle
  2. The Truth-Seeking Principle
  3. The Clarity Principle
  4. The Burden of Proof Principle
  5. The Principle of Charity
  6. The Relevance Principle
  7. The Acceptability Principle
  8. The Sufficiency Principle
  9. The Rebuttal Principle
  10. The Resolution Principle
  11. The Suspension of Judgement Principle
  12. The Reconsideration Principle
  13. Fleck’s Addendum
Some of my own posts on Rational Discussion and Rationality:

Wednesday, December 29, 2010

How trusted are the "Big Five" Personality Traits in mainstream psychology?

flickr | Graela "Bathroom Personality Assessment - Part 2"
http://www.flickr.com/photos/9441263@N04/3984047524/
Comments to "Brain Structure and the Big Five"

Sanjay Srivastava | December 29, 2010 10:36 AM
Walter Mischel's famous critique predated the Big Five. His critique was of the concept of a personality trait more broadly. If you ask around at Mischel's home department they'll probably tell you that Mischel won the argument, but that's not the mainstream view among personality psychologists elsewhere. In fact, I don't think there's a single mainstream view on traits or the Big Five, but I'd guess that many personality psychologists would endorse, at a minimum, "useful enough until a better model comes along." Some might go a lot farther. (Some of my own views on the Big Five are in this paper, if you're curious.)
Russell Almond | December 29, 2010 11:15 AM
A quick comment about the Big-5. A couple of years ago, I did some consulting with a psychologist who was developing new personality measures. The standard practice for validating the new measure was to give it as part of a battery to a sample of the target population, along with the Big 5 and other known measures that were similar to the target. I got the impression that it wasn't that the Big 5 were thought of as the answer to everything, but that they were a starting point that most people in the field understood. The burden of proof was to show that your proposed measure was something other than a composite of the 5 factors in the Big 5.
My comments: I am trying to get a fix on how much trust to place in the Big Five. As George E. P. Box put it: "All models are wrong, but some are useful."
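As a rough illustration of the validation step Almond describes, here is a minimal sketch with simulated data (every name and number below is invented, purely for illustration): regress the proposed measure on the five factors and see how much is left over once the Big Five composite is accounted for.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

big5 = rng.standard_normal((n, 5))            # simulated O, C, E, A, N scores
unique_signal = rng.standard_normal(n)        # whatever the new measure adds, if anything

# A hypothetical "new" measure: mostly a composite of the Big Five plus some unique signal
new_measure = big5 @ np.array([0.4, 0.1, 0.5, 0.0, -0.3]) + 0.5 * unique_signal

fit = LinearRegression().fit(big5, new_measure)
r2 = fit.score(big5, new_measure)
print(f"variance explained by the Big Five composite: R^2 = {r2:.2f}")
print(f"variance potentially attributable to a new construct: {1 - r2:.2f}")

If the leftover share turns out to be essentially noise, the burden of proof Almond mentions has not been met.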

Monday, September 27, 2010

Fetishizing p-Values - The Cult of Statistical Significance

en.wikipedia.org William Sealy Gosset
Fetishizing p-Values; Tom Leinster - The n-Category Cafe
Recovering the insight of "Student" Gosset from the over-simplification of Ronald A. Fisher
Leinster: Now there’s a whole book making the same point: The Cult of Statistical Significance, by two economists, Stephen T. Ziliak and Deirdre N. McCloskey. You can see their argument in this 15-page paper with the same title. Just because they’re economists doesn’t mean their prose is sober: according to one subheading, ‘Precision is Nice but Oomph is the Bomb’.
Leinster: it is true that the p-value does not measure the magnitude of the effect (but then, anyone who has taken at least one course in statistics should know that)
I think Jost, Ziliak and McCloskey would completely agree that anyone who has taken at least one course in statistics should know that. They’re pointing out, open-mouthed, that this incredibly basic mistake is being made on a massive scale, including by many people who should know much, much better. Bane used the term ‘collective self-deception’; one might go further and say ‘mass delusion’. It’s a situation where a fundamental mistake has become so ingrained in how science is done that it’s hard to get your paper accepted if you don’t perpetuate that mistake.
That last statement is probably putting it too strongly, but as I understand it, the point they’re making is along those lines.
From the 15-page paper "The Cult of Statistical Significance":
In 1937 Gosset, the inventor and original calculator of "Student's" t-table, told Egon Pearson, then editor of Biometrika, that a significant finding is by itself "nearly valueless":
...obviously the important thing in such is to have a low real error, not to have a "significant" result at a particular station. The latter seems to me to be nearly valueless in itself. . . . Experiments at a single station [that is, tests of statistical significance on a single set of data] are almost valueless. . . . What you really want is a low real error. You want to be able to say not only "We have significant evidence that if farmers in general do this they will make money by it", but also "we have found it so in nineteen cases out of twenty and we are finding out why it doesn't work in the twentieth.” To do that you have to be as sure as possible which is the 20th—your real error must be small...
Gosset to E. S. Pearson 1937, in Pearson 1939, p. 244.
Gosset, we have noted, is unknown to most users of statistics, including economists. Yet he was proposing and using in his own work at Guinness a characteristically economic way of looking at the acquisition of knowledge and the meaning of “error.” The inventor of small sample econometrics focused on the opportunity cost of each observation; he tried to minimize random and non-random errors, real errors.
Edit 11/12/10
A very nice write-up here, along the same lines: Significance Tests in Climate Science -- Maarten H. P. Ambaum -- http://www.met.reading.ac.uk/~sws97mha/Publications/jclim_ambaum_rev2.pdf
Consider a scientist who is interested in measuring some effect and who does an experiment in the lab. Now consider the following thought process that the scientist goes through:
  1. My measurement stands out from the noise.
  2. So my measurement is not likely to be caused by noise.
  3. It is therefore unlikely that what I am seeing is noise.
  4. The measurement is therefore positive evidence that there is really something happening.
  5. This provides evidence for my theory.
This apparently innocuous train of thought contains a serious logical fallacy, and it appears at a spot where not many people notice it.
To the surprise of most, the logical fallacy occurs between step 2 and step 3. Step 2 says that there is a low probability of finding our specific measurement if our system would just produce noise. Step 3 says that there is a low probability that the system just produces noise. These sound the same but they are entirely different.
This can be compactly described using Bayesian statistics...
This comes from a summary of the paper: How significance tests are misused in climate science -- Guest post by Dr Maarten H. P. Ambaum from the Department of Meteorology, University of Reading, U.K. -- http://www.skepticalscience.com/news.php?n=456#
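To make the step 2 / step 3 distinction concrete, here is a minimal Bayesian sketch in Python. All of the probabilities are invented for illustration; the only point is that P(data | noise) and P(noise | data) can be very different numbers.

p_data_given_noise  = 0.04   # step 2: the measurement is unlikely under pure noise (assumed)
p_data_given_effect = 0.50   # chance a real effect produces such a measurement (assumed)
p_effect_prior      = 0.05   # prior chance that a real effect is present at all (assumed)

p_noise_prior = 1 - p_effect_prior
p_data = p_data_given_effect * p_effect_prior + p_data_given_noise * p_noise_prior

# Bayes' theorem: posterior probability that we are still just looking at noise
p_noise_given_data = p_data_given_noise * p_noise_prior / p_data

print("P(data | noise) =", round(p_data_given_noise, 2))   # 0.04 -- small (step 2)
print("P(noise | data) =", round(p_noise_given_data, 2))   # ~0.60 -- not small (step 3)

With these made-up numbers, a measurement that would occur only 4% of the time under pure noise still has roughly a 60% chance of being noise, because real effects were assumed to be rare in the first place.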
Edit 11/21/10

Significance Tests, frequentist vs. Bayesian

When we perform a test of statistical significance, what we would really like to ask is "what is the probability that the alternative hypothesis is true?". A frequentist analysis fundamentally cannot give a direct answer to that question, as frequentists cannot meaningfully talk of the probability of a hypothesis being true – a hypothesis is not a random variable, it is either true or false and has no "long run frequency". Instead, the frequentist gives a rather indirect answer to the question by telling you the likelihood of the observations assuming the null hypothesis is true and leaving it up to you to decide what to conclude from that. A Bayesian, on the other hand, can answer the question directly, as the Bayesian definition of probability is not based on long run frequencies but on the state of knowledge of the truth of a proposition. The problem with frequentist statistical tests is that there is a tendency to interpret the result as if it were the result of a Bayesian test, which is natural as that is the form of answer we generally want, but still wrong.

The frequentist approach avoids the "subjectivity" of the Bayesian approach (although the extent of that "subjectivity" is debatable), but this is only achieved at the expense of not answering the question we would most like to ask. It could be argued that the frequentist approach merely shifts the subjectivity from the analysis to the interpretation (what should we conclude based on our p-value). Which form of analysis you should use depends on whether you find the "subjectivity" of the Bayesian approach or the "indirectness" of the frequentist approach most abhorrent! ;o)

At the end of the day, as long as the interpretation is consistent with the formulation, there is no problem and both forms of analysis are useful.
This was my favorite comment; the whole sub-thread underneath it is interesting. The original Open Mind | tamino.wordpress.com article adds good qualifications to Dr Maarten H. P. Ambaum's Skeptical Science post.
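A small worked contrast along the lines of that comment (the data are invented: say 60 heads in 100 tosses of a possibly biased coin). The frequentist number is the probability of data at least this extreme assuming the null; the Bayesian number, under an assumed flat prior, is the probability of the hypothesis given the data. They answer different questions even when they happen to point the same way.

from scipy.stats import binom, beta

heads, n = 60, 100                      # invented data: 60 heads out of 100 tosses

# Frequentist: P(seeing >= 60 heads | the coin is fair) -- a statement about the data
p_value = binom.sf(heads - 1, n, 0.5)

# Bayesian: with a flat Beta(1, 1) prior on the bias, the posterior is Beta(1+60, 1+40);
# P(bias > 0.5 | data) is a statement about the hypothesis itself
posterior_prob_biased = beta.sf(0.5, 1 + heads, 1 + n - heads)

print(f"p-value (one-sided)                = {p_value:.3f}")
print(f"P(coin biased toward heads | data) = {posterior_prob_biased:.3f}")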


Tuesday, September 21, 2010

Valuing stewardship of the environment for future generations, or not

commons.wikimedia.org Sheep_eating_grass_edit02.jpg
Attempted to post comment to Offsetting Behaviour - Eric Crampton : Yer either fer us or agin us
[ I am not speaking to the game-theoretic analysis of New Zealand leaving Kyoto -- Bjorn's swaying this way or that notwithstanding, there is no rational reason for NZ to stay inside Kyoto, unless it was seen as the price for signaling environmental concern. ]
The same climate scientists that Lomborg disparaged for presenting evidence of high climate sensitivity to carbon emissions are now the same climate scientists he will trust to run geo-engineering. This is the most embarrassing contradiction in Lomborg's evolving stance.
The Copenhagen Consensus cost-benefit analysis put carbon taxes at the bottom by valuing stewardship of the environment for future generations at zero. In the same way, pre-school for my toddler would be at the bottom of a cost-benefit analysis of all uses of my money if I valued his future earnings and quality of life at zero.
If you are standing on the train tracks with a freight train coming in five minutes, you have the choice to leap off the tracks. A "compromise" position of shifting over a few inches will have no effect, no matter how much you value "moderation and reasonableness". If you limit your analysis to only the next four minutes and fifty-nine seconds, the energy expended in the leap is a waste.
I wish we had the choice to live in a "warmer average" world -- it would be nice. If you put two bullets in a six-chambered gun to play Russian roulette, on "average" you are still alive, though with a headache. But the "average" is an abstraction, and in reality you have to deal with the consequences of the spun barrel. The risk is not a warmer world -- the risk is an over-energetic world that no longer has the climate stability that allowed civilization, large-scale agriculture, and inexpensive & quick transportation to be developed and maintained.
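A toy simulation of the roulette analogy (numbers purely illustrative): the "average" player walks away about two times in three, but no individual player ever experiences the average; each spin of the barrel is either survivable or not.

import random

random.seed(1)
CHAMBERS, BULLETS = 6, 2
trials = 100_000

# Each trial: spin the barrel once; a chamber index below BULLETS means the bad outcome.
bad_outcomes = sum(random.randrange(CHAMBERS) < BULLETS for _ in range(trials))

print("fraction of spins with the bad outcome:", round(bad_outcomes / trials, 3))  # ~0.333
print("'average' survival rate:", round(1 - bad_outcomes / trials, 3))             # ~0.667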
It is fine to consider all possible humanitarian uses of scarce capital. The weight given to stewardship of the environment for future generations should not be infinite, lest you indulge in pointless profligacy towards but a single goal. But that does not imply that stewardship of the environment for future generations should be weighted at zero.
[ This implies value placed on trying to give future generations a "western/first world" standard of living much like what we currently enjoy. If we are satisfied with a few hundred thousand people on each continent living under conditions like those of indigenous peoples, living along the new raised coastlines and grasslands freed from permafrost, with climate instability but the net warmth & wetness still giving the ability to feed on the meat of small grazing animals, the costs we would bear would be slight. ]
Edit 9/21/10: Reply via Google Buzz from Eric Crampton:
If investing in tech reduces more warming per dollar spent than do other things, what's the problem with redirecting spending towards tech?
Copenhagen valued future generations the same way that cost-benefit analysis typically values future generations: by applying a standard discount rate. That doesn't say that future people don't count; rather, it says that future people might prefer being given cash.
My reply:
"""That doesn't say that future people don't count; rather, it says that future people might prefer being given cash"""
If I am the victim of blunt trauma, I may not value a cash dispersal later over a medical intervention now. There is a rational case to be made that the two are hardly substitutes in some circumstances.
I agree that I should have been more careful and said "valuing stewardship of the environment for future generations, *particularly* in reducing the risk of the very worst outcomes". I will be more careful in future.
"""If investing in tech reduces more warming per dollar spent than do other things, what's the problem with redirecting spending towards tech?"""
No argument here. But the lack of breakthrough tech *now* implies non-zero carbon taxes *now* (and there is a moral argument for quite substantial taxes now). I am certain it will take a few decades of people seeing global military preparation for the worst possible outcomes of climate disruption before it is plain that environmental stewardship may be worth 5 or more points of global economic activity. It is not surprising that substantial carbon taxes have near zero political traction in the two largest economies, now.
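Returning to Crampton's point about the standard discount rate, here is a minimal sketch of how it drives Copenhagen-style rankings. The damage figure and the rates are invented; the only point is that exponential discounting makes harms a century away nearly vanish in present-value terms, which is exactly where the disagreement over "valuing future generations" lives.

def present_value(future_value, rate, years):
    """Standard exponential discounting: PV = FV / (1 + r)^years."""
    return future_value / (1 + rate) ** years

damage = 1_000_000_000_000   # assume $1 trillion of climate damage landing 100 years out
for rate in (0.01, 0.03, 0.05):
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.0%}: present value of that damage ~= ${pv / 1e9:,.0f} billion")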

Monday, September 13, 2010

Selling Fantasies: Breakthrough Institute

Breakthrough Institute works like the Copy Protection technology wizards selling their tech to record companies. It cannot work, because the pirates will always find a way around copy protection - you are merely punishing your customers and training them to be pirates when they try to use your product in convenient ways. The Copy Protection technology wizards are not selling a working solution - because a solution is impossible - they are selling a pleasant fantasy to the record companies in the few years their business model has left.
People do not confine themselves to buying working products. Sometimes they will purchase fantasies. Look at the exercise gizmos that people buy from TV.
The Breakthrough Institute doesn't have to provide solutions that work - it will provide fantasies that it can sell. So let's try to figure out who their customers are.
If you are in the top 0.5% of incomes, you are intelligent and you may be slightly distressed that your great grandchildren will be born into a boiling world (when you can be bothered to consider the issue). You have the ability to direct funding, and in these few years before the climate disruption really hits human agriculture and infrastructure, you are in the market for fantasies, sold to you by the semi-knowledgeable folk (who are probably sincere, because their confidence in their tech solutions surpasses their scientific capabilities). That is what people like the Breakthrough Institute are selling. For example, Warren Buffet doesn't consider himself a bad person, and he cares for his grandchildren. But he has also made a huge bet on coal transport infrastructure. He would love to support the Breakthrough Institute by some means, to reconcile his position on the responsibility of environmental stewardship for future generations.
"The Breakthrough Institute, a project of Rockefeller Philanthropy Advisors, Inc." lets you know about the customers they are after. How did good ol' John D. make is money?
Let's predict their structure. They will rarely speak in absolute moral terms - they will never flatly state that it is craven to leave future generations a boiling world just because a handful of generations could not bear to lower their standard of living. The absolute moral issues will always be left unspoken. Those who talk about the moral issues will be marginalized as "un-serious" or "alarmist".
They will strive to distance themselves from the worst of the denialists. Pielke Jr and Fuller practically fell over their own feet trying to run away from Virginia State Attorney General Cuccinelli. But they will take "warmist" commentators who have a record of limiting themselves to the published science, like Romm, and equate them with denialists who spout off bat-shit nonsense - even though the implication of equivalence is ridiculous. But you will know them by their actions, because they will spend most of their energy arguing against those with the clearest grasp of the facts, the moral issues, and the political challenges.
It is the foolish "moderate" position of shifting your stance a few inches when you are standing on the tracks, freight train coming. The half measure doesn't leave you just half-dead.
All you can do is make the case to ethical decision makers that they are being sold a bill of goods, by comparing the statements and techniques and rhetorical stances of the Breakthrough Institute to bunglers who stood in the way of decisions of moral courage, and to the weavers of the Emperor's New Clothes. These are the "moderate" apologists for moral failures - like those who stood in the way of eradicating slavery, or were the audience for the Letter from a Birmingham Jail, or were willing to negotiate with Hitler, or were willing to overlook Stalin's crimes. In all these cases, you could find "moderates" who participated in moral failures and argued for positions with shabby facts and shabby rhetorical devices.
Edit: 09/14/10
Moe, Rockefeller is the BTI fiscal sponsor; the main funder throughout has been the Nathan Cummings Foundation. By itself the fiscal sponsorship doesn't mean much, although it may well in this instance.

Monday, August 30, 2010

How can we define "Useful", when it comes to our models of reality?

Wikimedia Commons: "N-Gauge Cassiopeia E26 & EF81 from Kato"
A comment to "Useful models, model checking, and external validation: a mini-discussion" by Andrew Gelman - Statistical Modeling, Causal Inference, and Social Science

Gelman wrote (with Cosma Shalizi) a very fine philosophical justification for real-world, effective Bayesian techniques, which differs greatly from the usual philosophy associated with Bayesians.

Gelman, Shalizi: Philosophy and the practice of Bayesian statistics in the social sciences

My own recurring criticism of Gelman is his use of Bayesian/statistical models to the exclusion of other kinds.  I am more comfortable with the idea of models of different types in competition.
http://manuelmoeg.blogspot.com/search/label/Model%20building
The multiple models you then have will now compete in different uses - based on predictive power, accuracy, ability to calculate meaningful error ranges, cost of collecting data, cost of computation, cost of comparison, ability to predict outcomes from interventions, cost of understanding, etc.

quoting Gelman:
"We always talk about a model being "useful" but the concept is hard to quantify."
My comment:
Simply build a model of costs and gains and methods of comparison between models!  If a model is good enough for your work, a model must be good enough as a working definition of "useful"!
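A toy sketch of what "a model of costs and gains and methods of comparison between models" could look like (every model name, score, and weight below is invented, purely for illustration): score each competing model on predictive error and on its various costs, then let an explicit weighting, itself just another model, decide which is most "useful" for the job at hand.

# Toy "model of model usefulness": score competing models on several criteria.
models = {
    "simple linear fit":  {"pred_error": 0.30, "compute_cost": 0.05, "data_cost": 0.10},
    "hierarchical Bayes": {"pred_error": 0.18, "compute_cost": 0.60, "data_cost": 0.10},
    "big ML ensemble":    {"pred_error": 0.15, "compute_cost": 0.90, "data_cost": 0.70},
}

# The weights encode what "useful" means for this particular decision problem.
weights = {"pred_error": 2.0, "compute_cost": 0.5, "data_cost": 1.0}

def weighted_cost(scores):
    """Lower weighted total cost = more useful, under these assumed weights."""
    return sum(weights[k] * v for k, v in scores.items())

for name, scores in sorted(models.items(), key=lambda kv: weighted_cost(kv[1])):
    print(f"{name:20s} weighted cost = {weighted_cost(scores):.2f}")

Change the weights (say, make computation nearly free) and the ranking can flip, which is the point: "useful" is always relative to an explicit or implicit model of costs and gains.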

Sometimes the best answer to "Why" is "Just because".  Sometimes the best mechanism for rating different models is another model.  The Skeptics will always howl, so you simply have to demonstrate that their own behavior is consistent with putting undue confidence in their own model, whether a conscious model or unconscious.  (And, it must simply terminate with a model, because of the limits of the tools available to the human brain.  Only a model, probably over-simplified, can be manipulated with the agility needed to predict future outcomes of the universe from actions considered now, in real-time.)

Just keep asking the Skeptic "Why" with regards to their own personal actions, and when they hit the "Just because" point, they probably have described a model of utility, assumed true without proof, as an answer to "Why" in the previous step.

If the Skeptic refuses ultimate responsibility over their personal actions, and tries to plead pure capriciousness or mystery, then their model is simply statistical, based on stimulus and internal states (like stress) that can be approximately discovered with objective external measures (like galvanic skin response).  Of course, it is easier to plead pure capriciousness or mystery than demonstrate it - if their behavior is well predicted by a deterministic model suggested by another, the Skeptic is shut up.  Most times the reason for behaviors is gross and banal, no matter how elevated the sophistry of the Skeptic.

Friday, July 30, 2010

Thoughts on Roger Pielke Jr. | Stand-Up Economist - Yoram Bauman

wikipedia.org - Childe's_Tomb

Thoughts on Roger Pielke Jr. | Stand-Up Economist - Yoram Bauman
As an economist, I found Roger’s lack of discussion of climate impacts to be extremely disturbing. If—totally hypothetically—the science said that hitting 450ppm would cause the planet to explode, I’m pretty sure Roger’s talk would have looked different. (At least I hope so!) The economic point here is that cost-benefit analysis has two halves—costs and benefits—and you can’t do it by just talking about one of the two halves. Why Roger failed to talk about both halves has me totally perplexed and leaves me questioning how much he actually knows about economics. (For the record, he’s not an economist, so I think this is a legitimate question, not an insulting one. He’s a political scientist, but his talk was not about the intersection of science and politics; his talk was fundamentally about economics.)
RPJr's rhetorical trick -- "there is a lot of misunderstanding and misrepresentation displayed in this post... Fortunately, my new book covers all of these points so that there should be no ambiguity in my views." -- is annoying.
As if a book were a tomb for the ideas of a public intellectual, making him incapable of plainly stating his views in public forums.

Y. Bauman refuses to play ball:
"Okay, here are some questions: (1) What did you say about the tenets of climate science? (Then I’ll try to get a video of your talk and see if I owe you an apology.) (2) How would you quickly characterize the main points of your talk? (3) Since you note above that you “did not discuss costs or benefits”, I’m curious about why. Do you not think cost-benefit analysis is important? (4) How (if at all) would your talk have been different if the scientific consensus was that 450ppm would destroy the planet?"
Is RPJr so craven as to simply "hit-and-run" from this forum, now that the questions are specific?  Stay tuned!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Related posts:
[Edit 7/31/10]

To my surprise, RPJr replied; his answers:
1. I used a “bathtub” model to describe the challenge of stabilization and I argued that everyone in the debate on all sides agree that CO2 has impacts. Where there are debates is when those impacts become dangerous (the height of the bathtub, e.g., 450 ppm) and the consequences of spilling over. Such debates are of course legitimate.
2. Three points: A. Targets and timetables for reducing emissions now being discussed or even enacted in law (e.g., in the UK) are not credible (I think I proved this), B. Stabilizing concentrations requires advances in technology deployment and innovation rather than GDP contraction (shown a bit, but largely asserted), C. Accelerating decarbonization requires much greater public investments in technology (asserted not proven).
3. I’ve written a lot of CBA, and teach it as well. This talk was not about CBA, but policy evaluation. I am happy to discuss the topic.
4. I have no idea.
flickr.com/photos/psd/1806225034
"Moral Compass" by "psd"
My comments to (1): "Such debates are of course legitimate."  RPJr has a problem with the debate coming to a provisional conclusion on the side of the science and the moral question of future generations being left a livable world - a provisional conclusion where we begin work on drastically reducing carbon emissions and mitigating previous carbon emissions, where GDP takes a major haircut if need be.  RPJr's fretting and fussing is consistent with the moral question of future generations being left a livable world always taking a backseat to today's GDP/standard of living - but he doesn't have the guts to admit that, or he realizes that if his cravenness were that obvious, he would give up any chance of political effect.
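For readers who have not seen the "bathtub" picture from RPJr's point (1), here is a minimal sketch with invented numbers: the concentration keeps rising as long as emissions (the tap) exceed natural uptake (the drain), so stabilization requires cutting the inflow down to the outflow, not merely slowing its growth.

def bathtub(emissions_ppm_per_year, uptake_ppm_per_year=2.0, start_ppm=390.0, years=50):
    """Crude single-box 'bathtub': concentration changes by inflow minus outflow each year."""
    c = start_ppm
    for _ in range(years):
        c += emissions_ppm_per_year - uptake_ppm_per_year
    return c

print("emissions 4 ppm/yr:", bathtub(4.0), "ppm after 50 years (tub still filling)")
print("emissions 2 ppm/yr:", bathtub(2.0), "ppm after 50 years (stabilized)")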

My comments to (2): Actually, this is the first sensible thing RPJr has ever said, to my knowledge.  It is very true: we have exactly zero experience with asking citizens to voluntarily cut their standard of living for the moral outcome of  future generations being left a livable world.  "Warmists", it can be argued, don't have the guts to admit this, or they realize that if they are so truthful they give up any chance of political effect.

My comments to (3): Why talk to economists about policy evaluation without reference to cause and effect?  That was Y. Bauman's original puzzlement.

My comments to (4): Pathetic.  Again, RPJr cannot deal with science predicting catastrophe (catastrophe, because it is hard to imagine the moral monsters who could cheerfully leave future generations an unlivable world), because his goal is that the moral question of future generations being left a livable world always take a backseat to today's GDP/standard of living.  That can be used to perfectly predict his reaction to anything.