Thursday, May 13, 2021

worked out system of decision about intervention/action

\\cserve3\e$\MMG_Docs\worked_out_system_of_decision_about_intervention00.txt

http://www.stat.columbia.edu/~cook/movabletype/archives/2011/05/improvement_of.html

Typing this out for my own benefit.  Probably to nobody else's.

You need to build 4 models.  [1] A model of the car manufacturers and the car market.  [2] A model of your negotiation for purchasing a car, the resources you can bring to bear on the transaction, and the economic constraints you are under for money, time, etc.  [3] A model of the hierarchy of, and compromises between, your values and goals.  And [4] a model of your rational decision-making processes.

Let's call these models [1] "Car-Make", [2] "Car-Buy", [3] "My-Values", [4] "My-Rational".

Per Pearl, since the models "Car-Buy", "My-Values", and "My-Rational" need to allow intervention experiments, they have to be expressible as directed acyclic graphs (DAGs).  So "Car-Buy", "My-Values", and "My-Rational" may be sub-optimal - they are the best candidates out of all models that can be expressed as a DAG.

Literature exists to help with the construction of "Car-Make" and "Car-Buy".  "Car-Make" will model different ways goals can be achieved: you can make a car safer with more weight, or you can make a car safer with superior engineering and judicious use of materials, etc.  "Car-Buy" could be informed by a few issues of Consumer Reports, etc.

"My-Values" and "My-Rational" will be constructed with a combination of introspection, objective evidence, tests, and quizzing others.  If "My-Values" and "My-Rational" soothe your ego, they will probably perform badly for the task.  You want to take every opportunity to have a model be truthful above being a mere advertisement of you being a nice and super nifty guy.

It would seem that "My-Rational" might involve an infinite regress - you have a model, a model of the process by which you judge models, a model of the process by which you judge models of models, etc.  But this flatters the human mind.  Per Herbert Simon, Gerd Gigerenzer, and Peter M. Todd (bounded rationality, ecological rationality), your really existing system of "My-Values" and "My-Rational" is bristling with irresistible fast and frugal heuristics.  You can discover these from the poor decisions you make time and time again - fast and frugal heuristics work very well in their preferred ecological setting, but are susceptible to failure modes in other settings because they are not ideally rational.  (The sure sign of an irresistible fast and frugal heuristic: poor decisions made time and time again that are not followed by a relentless implementation of disciplines and restraints to prevent those poor decisions from being made in the future.)

A fast and frugal heuristic expressed as a preference for blondes is not revealed by a string of successful relationships with blondes, because this might be due to good qualities innate in all blondes.  A fast and frugal heuristic expressed as a preference for blondes would be revealed by a string of difficult relationships with blondes, and forever indulging a habit of buying drinks for blondes at bars.
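The "discover them from repeated failure" framing has a concrete algorithmic face: Gigerenzer and Todd's recognition heuristic.  Below is a toy sketch - the recognized set and the city pairs are invented for illustration, not real data:

```python
# Sketch of a "fast and frugal" recognition heuristic (Gigerenzer & Todd):
# infer which of two cities is larger by whether you recognize its name.
# The "recognized" set is a made-up stand-in for one person's memory.

recognized = {"Berlin", "Munich", "Hamburg"}

def recognition_pick(city_a, city_b):
    """Pick the recognized city as 'larger'; guess if both/neither recognized."""
    a_known, b_known = city_a in recognized, city_b in recognized
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return city_a  # no discriminating cue: fall back to an arbitrary guess

# In the heuristic's preferred ecology, recognition tracks size:
print(recognition_pick("Berlin", "Wuppertal"))   # Berlin (correct: larger)
# Outside it, the same cue misfires against an obscure megacity:
print(recognition_pick("Munich", "Chongqing"))   # Munich (wrong: Chongqing is far larger)
```

In the heuristic's preferred ecology, fame tracks city size, so the cue is cheap and accurate; move it to a setting where a famous mid-size city faces an obscure megacity and the same cue misfires - exactly the "failure modes in other settings" above.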

There is a tendency to be self-serving in introspection, so objective evidence and third-party observations and judgements are crucial.

Now you can simulate the outcome of a particular choice of automobile, by combining all 4 models into 1 large model and using intervention experiments on the model, suggested by the particular choice of automobile.
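A minimal sketch of what one such intervention experiment could look like in code.  Every variable, mechanism, and utility below is invented purely for illustration - toy stand-ins for fragments of "Car-Buy" and "My-Values":

```python
import random

# Toy structural causal model (DAG): budget -> car_choice -> outcome,
# with a values parameter also feeding the outcome. All names and
# equations are hypothetical, just to show a Pearl-style intervention.

def simulate(do_car_choice=None, n=10_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        budget = rng.uniform(10_000, 40_000)       # "Car-Buy" constraint
        safety_weight = rng.uniform(0.0, 1.0)      # "My-Values" parameter
        # Observational mechanism: pricier car if the budget allows it.
        car_choice = "heavy_safe" if budget > 25_000 else "light_cheap"
        if do_car_choice is not None:
            # Intervention: sever the budget -> car_choice edge.
            car_choice = do_car_choice
        safety = 0.9 if car_choice == "heavy_safe" else 0.6
        cost_pain = (30_000 if car_choice == "heavy_safe" else 15_000) / budget
        total += safety_weight * safety - (1 - safety_weight) * cost_pain
    return total / n

baseline = simulate()
forced = simulate(do_car_choice="heavy_safe")
print(baseline, forced)
```

The point of the do-style intervention is that forcing car_choice severs its incoming edge from budget, so the simulated outcome answers "what happens if I choose this car?" rather than "what happens to people whose budget leads them to this car?"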

Not fully succumbing to the infinite regress, but incorporating a model of how you judge models, could be helpful (call it "Judge-Models").  That way, if multiple suitable models can be imagined and the top candidate does not immediately reveal itself, you have an analytic recourse to determine best fitness.  As above, your really existing "Judge-Models" is also bristling with irresistible fast and frugal heuristics, which you can discover... etc...

It would be silly to say a satisfactory decision cannot be made without this level of rigor.  But the rigorous fully generalized system can suggest adequate "quick and dirty" substitutes, surely.

I am a faithful reader of Gelman's blog, but I am constantly irritated by his willingness to fashion models of everything _except_ "My-Values" "My-Rational" "Judge-Models", which is the same as crossing the moat and killing the dragon and entering the castle, but refusing to climb up the stairs to the princess in the tower - just sitting there on the first step.

Without discussion of "My-Values" "My-Rational" "Judge-Models", you have done so much preparation for a decision about an intervention (calling inaction its own kind of intervention)... but then dropped the bride at the threshold.

Supplying "My-Values" "My-Rational" "Judge-Models" violates the stereotypical separation of work/concern/responsibility between the academic and the decision maker and the action taker, so the reluctance to discuss them is completely understandable, and my irritation is unreasonable, I know.

Monday, December 14, 2020

If You Want to Stop Procrastinating, Give Yourself a Break

Ten Psychology Studies from 2010 Worth Knowing About


Neuronarrative - David DiSalvo



Most of us inveterate procrastinators are also world-class self-punishers. You miss a deadline because you put something off for too long, and your mind instantly turns into the Grand Inquisitor, complete with a studded whip to flog you into self-induced terror. But a study of the past year tells us that we've got this all wrong. If you want to get yourself out of the procrastination trap, stop beating yourself up and try a little self-forgiveness instead. Researchers followed first-year college students through their first and second midterm exams with an eye toward tracking the effects of procrastination and self-forgiveness. They found that students who procrastinated before the first midterm were significantly less likely to do so before their second midterm if they gave themselves a break.


This runs counter to the conventional assumption that letting ourselves off easy will foster more procrastination, but the result actually makes a lot of sense for a very practical reason: self-forgiveness allows you to get past your mistake and concentrate energy on correcting your behavior. When you punish yourself, you're also draining energy, sapping focus and taking on too much mental baggage. Not to mention, you also make trying to do whatever you failed at the first time a horrible experience because of its association with self-punishment. Instead, acknowledge your procrastination and its ill effects, forgive yourself for screwing up, and get on with the tasks at hand.

Monday, December 7, 2020

Decision Science News - communicating risks

Some ideas on communicating risks to the general public


Decision Science News - communicating risks - 12/03/10


Representations to use less often


Representations to use more often


speak of natural frequencies, and the models they are based on
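The natural-frequencies recommendation (associated with Gigerenzer's risk-communication work) can be made concrete with a toy screening example - all of the rates below are hypothetical:

```python
# Natural-frequency framing of a screening result (numbers hypothetical,
# in the spirit of Gigerenzer-style risk communication).
population = 1000
prevalence = 0.01        # 1% have the condition
sensitivity = 0.90       # 90% of the sick test positive
false_positive = 0.09    # 9% of the healthy test positive

sick = population * prevalence                 # ~10 people
true_pos = sick * sensitivity                  # ~9 people
healthy = population - sick                    # ~990 people
false_pos = healthy * false_positive           # ~89 people

ppv = true_pos / (true_pos + false_pos)
print(f"Of {true_pos + false_pos:.0f} positives, only {true_pos:.0f} are sick "
      f"(chance a positive is real: {ppv:.0%})")
```

Stated as "9 out of roughly 98 positives actually have the condition", the result is hard to misread; stated as conditional probabilities, even clinicians routinely confuse sensitivity with the chance that a positive result is real.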

Thursday, November 5, 2020

cute ass comment in style of denialists

http://www.easterbrook.ca/steve/?p=1954&cpage=1#comment-4175

How dare you point to these extreme weather events as proof of global warming. How unscientific of you! Whoever taught you statistics should have their 2.49 children shot, on average.
If we consider all possible worlds, real and fictional, we should not be surprised by extremes. I notice you didn’t mention all the fictional Earths where the planet froze solid. How convenient of you!
I am obliged to end this note castigating you for writing about dead trees when you should have been playing up all the brave scientists working on future speculative Breakthroughs, that allow us to continue burning oil at growing rates, which is the only way we can sustain our current Utopia of Limitless Wealth. It is a strange kind of Utopia – the kind that can stay hidden from 1.7 billion people who live in absolute poverty (our Utopia is such a playful rascal) – but it is a Utopia nonetheless.

Wednesday, October 28, 2020

A Mashey Gem

http://initforthegold.blogspot.com/2010/10/mashey-gem.html

http://www.realclimate.org/index.php/archives/2009/02/on-replication/langswitch_lang/in/

People are making an error common to those comparing science to commercial software engineering.

Research: *insight* is the primary product.
Commercial software development: the *software* is the product.

Of course, sometimes a piece of research software becomes so useful that it gets turned into a commercial product, and then the rules change.

===
It is fairly likely that any “advanced version control system” people use has an early ancestor or at least inspiration in PWB/UNIX Source Code Control System (1974-), which was developed by Marc Rochkind (next office) and Alan Glasser (my office-mate) with a lot of kibitzing from me and a few others.

Likewise, much of modern software engineering’s practice of using high-level scripting languages for software process automation has a 1975 root in PWB/UNIX.

It was worth a lot of money at Bell Labs to pay good computer scientists to build tools like this, because we had to:

- build mission-critical systems
- support multiple versions in the field at multiple sites
- regenerate specific configurations, sometimes with site-specific patches
- run huge sets of automated tests, often with elaborate test harnesses, database loads, etc.

This is more akin to doing missile-control or avionics software, although those are somewhat worse, given that “system crash” means “crash”. However, having the US telephone system “down”, in whole or in part, was not viewed with favor either.

We (in our case, a tools department of about 30 people within a software organization of about 1000) were supporting software product engineers, not researchers. The resulting *software* was the product, and errors could of course damage databases in ways that weren’t immediately obvious, but could cause $Ms worth of direct costs.

It is easier these days, because many useful tools are widely available, whereas we had to invent many of them as we went along.

By the late 1970s, most Bell Labs software product developers used such tools.

But, Bell Labs researchers? Certainly not the physicists/chemists, etc., and usually not computing research (home of Ritchie & Thompson). That’s because people knew the difference between R & D and had decent perspective on where money should be spent and where not.

The original UNIX research guys did a terrific job making their code available [but "use at your own risk"], but they’d never add the overhead of running a large software engineering development shop. If they got a bunch of extra budget, they would *not* have spent it on people to do a lot of configuration management, they would have hired a few more PhDs to do research, and they’d have been right.

The original UNIX guys had their own priorities, and would respond far less politely than Gavin does to outsiders crashing in telling them how to do things, and their track record was good enough to let them do that, just as GISS’s is. They did listen to moderate numbers of people who convinced them that we understood what they were doing, and could actually contribute to progress.

Had some Executive Director in another division proposed to them that he send a horde of new hires over to check through every line of code in UNIX and ask them questions … that ED would have faced some hard questions from the BTL President shortly thereafter for having lost his mind.

As I’ve said before, if people want GISS to do more, help get them more budget … but I suspect they’d make the same decisions our researchers did, and spend the money the same way, and they’d likely be right. Having rummaged a bit on GISS’s website, and looked at some code, I’d say they do pretty well for an R group.

Finally, for all of those who think random “auditing” is doing useful science, one really, really should read Chris Mooney’s “The Republican War on Science”, especially Chapter 8 ‘Wine, Jazz, and “Data Quality”‘, i.e., Jim Tozzi, the Data Quality Act, and “paralysis-by-analysis.”

When you don’t like what science says, this shows how you can slow scientists down by demanding utter perfection. Likewise, you *could* insist there never be another release of UNIX, Linux, MacOS, or Windows until *every* bug is fixed, and the code thoroughly reviewed by hordes of people with one programming course.

Note the distinction between normal scientific processes (with builtin skepticism), and the deliberate efforts to waste scientists’ time as much as possible if one fears the likely results. Cigarette companies were early leaders at this, but others learned to do it as well.

Monday, October 26, 2020

Breakthrough Narrative and Time Machines, and Carbon Eating Trees

http://thingsbreak.wordpress.com/2010/10/26/stop-the-presses-climate-journos-think-the-emissions-reduction-issue-looks-an-awful-lot-like-a-narrative-problem-no-word-yet-on-just-how-nail-shaped-people-wielding-hammers-see-it/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+ThingsBreak+(The+Way+Things+Break)

Wow. Not to put too fine a point on it: """
Did I mention that this New Narrative meme is being pushed by the same people who are arguing against any sort of meaningful emissions pricing? They wouldn’t have a vested interest in framing emissions legislation as dead, would they?
...
It’s nice that you have a meme. It’s nice that some journalists bit. When you feel like getting around to actually hooking some grassroots support, give us a reason to support your Narrative besides an appeal to novelty.
"""

This Breakthrough meme is exactly like a call for more time-machine funding. The only reason they are pushing for stratospheric saline geysers and carbon-eating mega-trees is that they would seem more plausible to the uninformed than a time-machine. Their plausibility is the foremost attraction, their actual viability is a far distant concern -- and that is morally noxious.

Wednesday, October 7, 2020

Education: "Waiting for Superman", Union Busting, and Obscuring the role of Parental Responsibility

http://www.democracynow.org/2010/10/1/waiting_for_superman_critics_say_much

I agree that the teachers' union gets hit with cheap shots in the school debate. I agree that the teachers' union is full of professionals dedicated to providing excellent education in the "really existing" world, and that the construction of alternative scenarios (maybe fanciful) for education that blink the teachers' union out of existence has less to do with delivering quality education in the real world and more to do with crude union busting.

But it is silly to pretend that there is no conflict whatsoever between the education needs of the students and the political convenience of the teachers' union. Those are two distinct entities, and they have different political needs. For example: the mechanism that prevents capricious termination of union employees is in conflict with the discretion a school principal would wish to assert to fire an underperforming teacher and replace them with a potentially better-suited one. For example: the mechanism that prevents the threat of pay reduction being used to punish a union employee is in conflict with the discretion a school principal would want in order to better compensate superior teachers from a constrained compensation budget.

The key point as I see it: the *existence* of teaching as a professional discipline is obscuring the primary role of *the responsibility of parents* for America's education of children. The parents are all too happy to shed responsibility, and push the responsibility to teaching professionals. If parents appropriately shouldered the responsibility for the quality of education of children, the primary role of teaching professionals would diminish, plainly.

Taking responsibility does not necessarily mean home schooling. It DOES mean attending PTA meetings and supervising children's homework every school night and being aware of trends of grades and children's enthusiasm or frustration in coursework -- parents taking every and all opportunity to take a full role in their children's education, paid for in *time*. If that time is then not available for television and amusement or if that time is then not available for working to support a certain level of consumerism, so be it.

In America, teaching professionals take the primary role, for praise and for blame, fair and unfair, because of the sloughing off of responsibility by America's parents.

Monday, October 5, 2020

race on the brain

http://www.stat.columbia.edu/~cook/movabletype/archives/2010/10/racism.html

I would much rather deal with racists than people who have "race on the brain".
I am not interested in searching the world for people free of racism, because it is hard to imagine people who *really* don't allow race to inform *any* judgement whatsoever.  I have met some children and adults who I would guess come very close, but so very very few to make the effort not worthwhile.
So I would much rather deal with racists, because, honestly, I must judge myself a racist.
I *do* have a problem with people who have "race on the brain" -- when the topic of race comes up they are reduced to blithering idiocy and vile reactionary tribalism.  White males haven't cornered the market on this particular form of idiocy -- it is embarrassing when Latino candidates win office on nothing more than their publicized ethnicity (such as the insubstantial Los Angeles mayor Antonio Villaraigosa, aka Tony Villar).
Political correctness makes physical and verbal violence against traditionally disadvantaged groups less likely, and that is good, but it cannot do much to lessen "race on the brain" on both sides of the racial divide.  Only the self-imposed discipline of critical thinking can do that, and people get too much pleasure from their vile reactionary tribalism to constrain their own thought.

Wednesday, April 15, 2020

good summary gelman philosophy of Bayesian statistics

Bayesian statistical pragmatism

Thank you for writing this.  This is the clearest and quite comprehensive (even though short and to the point) support for your philosophical views, and I am inclined to agree on all counts.

I read Gelman and Shalizi 2010 and enjoyed it a lot (what my novice brain could understand).  But the summary above hits and handles all the difficulties, and is easy to read.

I would recommend people read Gelman and Shalizi 2010, "Philosophy and the practice of Bayesian statistics" [ http://www.stat.columbia.edu/~gelman/research/unpublished/philosophy.pdf ], for Section 4, "Model checking", on Mayo's "severe" testing of models - the only thing the summary above lacks, as far as I can see.
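For what that model checking amounts to in practice, here is a minimal posterior predictive check - with the posterior collapsed to a point estimate and an invented toy data set, so this is only the skeleton of the idea:

```python
import random

# Minimal posterior-predictive-check sketch: simulate replicated data
# under the fitted model and compare a test statistic to the observed one.
# The model and data are toy examples, and the "posterior" is collapsed
# to a point estimate for simplicity.
random.seed(1)

observed = [2.1, 1.9, 2.3, 8.5, 2.0, 2.2]   # note one suspicious outlier
post_mean = sum(observed) / len(observed)

def t_stat(xs):
    return max(xs)                            # test statistic: the maximum

# Draw replicates from a Normal(post_mean, 1) model and see how often
# the replicated maximum reaches the observed maximum.
n_rep = 2000
exceed = sum(
    t_stat([random.gauss(post_mean, 1) for _ in observed]) >= t_stat(observed)
    for _ in range(n_rep)
)
p_value = exceed / n_rep
print(p_value)  # near 0: the model cannot reproduce the outlier
```

A tiny posterior predictive p-value is the "severe test" failing: the fitted model, simulated forward, does not generate data that look like what was observed, so the model gets revised rather than merely updated.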

Tuesday, April 14, 2020

size of genome

http://sandwalk.blogspot.com/2011/03/how-big-is-human-genome.html?showComment=1300992550197#c1663047036846499582

manuel "moe" g said...

[Part 1 of 2]

Forgive my ignorance, but I am trying to make sense of different descriptions of the human genome, and different descriptions of the information needed to fully specify a large mammal, like a man.

You talk about 3.5 Gb for the genome, and, giving Ray Kurzweil the benefit of the doubt, 50 million bytes after loss-less compression.
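A toy illustration of why any lossless-compression figure like that 50 million bytes depends entirely on redundancy in the sequence - the sequences below are synthetic stand-ins, not real genomic data:

```python
import random
import zlib

# Compare lossless compression of a redundancy-free vs. a maximally
# repetitive sequence of the same length (synthetic, not a real genome).
random.seed(0)
n = 100_000
random_dna = "".join(random.choice("ACGT") for _ in range(n)).encode()
repetitive_dna = b"ACGT" * (n // 4)

print(len(zlib.compress(random_dna, 9)))      # stays near the 2-bits-per-base floor
print(len(zlib.compress(repetitive_dna, 9)))  # collapses to almost nothing
```

Same length in, wildly different sizes out: the compressed size measures redundancy, not biological meaning, which is why a compressed-genome figure by itself says little about how much specification the genome actually carries.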

If someone made extravagant claims about a computer program that runs on some unknown hardware and unknown OS, I would be unamused if they handed me a thumb-drive containing the compressed binary executable, and nothing more. This single file would demonstrate nothing.

I would demand the original source code, the specification for the code (including the business decisions the code is meant to automate, at the very least), some documentation demonstrating that I can move back and forth between points in the specification and the source code lines encoding that part of the specification, and the code for the automated tests (so an automated test can demonstrate what changes to the code will still keep it within specification, at the very least).

And maybe the same for some of the libraries and hardware - maybe needing the full specification of the libraries, OS, and hardware if they are all very novel, quite unlike any I have worked with before.

So there would be a dramatic explosion of information needed, moving from the binary executable to a bare minimum specification of a computer program as defined above.

manuel "moe" g said...

[Part 2 of 2]

In the debate between PZ and Kurzweil, PZ makes this point:

http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php

"""

Let me give you a few specific examples of just how wrong Kurzweil's calculations are. Here are a few proteins that I plucked at random from the NIH database; all play a role in the human brain.

First up is RHEB (Ras Homolog Enriched in Brain). It's a small protein, only 184 amino acids, which Kurzweil pretends can be reduced to about 12 bytes of code in his simulation. Here's the short description.

MTOR (FRAP1; 601231) integrates protein translation with cellular nutrient status and growth signals through its participation in 2 biochemically and functionally distinct protein complexes, MTORC1 and MTORC2. MTORC1 is sensitive to rapamycin and signals downstream to activate protein translation, whereas MTORC2 is resistant to rapamycin and signals upstream to activate AKT (see 164730). The GTPase RHEB is a proximal activator of MTORC1 and translation initiation. It has the opposite effect on MTORC2, producing inhibition of the upstream AKT pathway (Mavrakis et al., 2008).

Got that? You can't understand RHEB until you understand how it interacts with three other proteins, and how it fits into a complex regulatory pathway.

"""

I am inclined to grant PZ the point, and say his understanding of the immensity of the task outstrips Kurzweil's understanding.

Would the explosion of information needed to move from the complete genome to the complete specification of a large mammal be on the same order as the explosion of information needed to move from the binary executable to a bare minimum specification of a computer program as defined above? Did I capture the gist of it, or am I hopelessly mistaken?