Tuesday, February 23rd 2010, 7:45 AM EST
In poker a four-flusher cheats by claiming to have a flush, five cards all of the same suit, when what he really has is four cards of the same suit and one bad card. Sometimes the card is known to be bad, and sometimes the four-flusher just gets excited, failing to check his hand closely. If another player notices the bad card, the four-flusher will say that an honest mistake was made, and -- who knows? -- maybe that is exactly what happened. What non-scientists often do not realize is that the way we support non-profit research turns many scientists into scientific four-flushers because, like rich poker players who must remain friends, they have little incentive to look for the hidden bad cards.
Teams of professional scientists, no matter what their field of research, always know that next year’s paychecks depend on making the case for more funding. I have worked in groups of this sort for thirty years and know how financial pressure warps the values of those working in an institutionalized “Big Science” environment.
If a scientist or engineer in a Big Science project is worried about the soundness of the research and alerts a Big Science manager about possible problems, the scientist or engineer will usually be ignored. After all, checking something nobody knows for sure is wrong can only cause trouble in the short term, and what manager likes that? In my first Big Science job, the supervisor told us that our research should be “success oriented”. Success-oriented research -- it sounds good; who can be against it? But in practice it means that research should aim at creating a funding story that is likely to bring in more money. Four-flushers flourish in this sort of environment because nobody wants to find hidden cards -- they might be bad ones. Big Science managers who don’t worry much about hidden cards are more likely to impress their colleagues because it’s easier to give a sincere presentation when you think everything’s OK. Society can live with this sort of scientific four-flushing as long as an actual product has to get built. Then, if the project leaders are basically correct about all the hidden cards being unimportant, and the product works, the project is a success.
Often, however, at least one of the hidden cards turns out to be surprisingly important. When the Hubble telescope was first launched, it was found to have the telescopic equivalent of astigmatism. This was a truly embarrassing sort of kindergarten mistake for telescope designers to make. Not to worry, though; once the funding authorities have bought in, the project has to be pushed through to something resembling success. In the case of the Hubble, there was extra equipment designed to fix the problem, and a special space shuttle mission to install it. Even after a Big Mistake like this, Big Science still wins because you need to consult, and thus pay, the same experts -- the same people who participated in the original mistake -- to correct the errors. The funding stream for design work was preserved past the point when the most hardened cynic might have supposed it would stop -- namely, after the telescope had been launched and put into operation! It would be ridiculous, of course, to claim that the mistake was made on purpose to obtain more funding; rather, this is the sort of problem that occurs when Big Science CEOs have learned not to sweat the details. This attitude arises from the cynical observation that mistakes made the first time through an important project tend to be rewarded with extra funding so that the work can be done correctly the second time around. (Trying for a third time by making still more mistakes tends to get the project cancelled.) The Hubble story, in fact, turned out rather well; so far no other hidden cards have turned up to cause trouble. Not so in the Challenger disaster, however -- remember how physics Nobelist Richard Feynman demonstrated at a televised hearing of the presidential commission that the rubber O-ring seals in the solid rocket booster joints turned stiff at cold temperatures, allowing the boosters to leak burning gas. For years shuttle managers ignored the O-ring card because a fix would significantly increase costs.
The card stayed safely hidden until one low-temperature launch day in Florida when, on worldwide TV, this hidden card turned out to matter a lot. Seventeen years later another hidden card, the one about how foam insulation breaking off the external tank during launch tends to damage the orbiter’s thermal protection, led to the Columbia space shuttle disintegrating during re-entry.
Today’s AGW debate is just another adventure in Big Science, an adventure that could go on indefinitely because there is no obvious endpoint, no obvious product or technology that must work as advertised. The Climategate emails and data fudging do not shock professional scientists because similar things happen all the time when the funding story is threatened; really, the best way to avoid them is to return to the not-so-distant past (say, the first half of the twentieth century) and stop paying teams of scientists and engineers lots of money to do non-profit research. If you must, give them a one-time prize for an important discovery (like the Nobel), but they and their organizations should not expect follow-up money as a matter of course.
If this sort of return to the past sounds impractical, we might as well recognize that professionals working for Big Science are like lawyers -- they get paid to be advocates for research programs. Then we could elaborate on a recent Chamber of Commerce suggestion, setting up a “science court” with “science judges” to examine research projects that attract controversy. Like legal judges, science judges must be paid the same steady salary no matter what their conclusions may be. Like legal judges, science judges should recuse themselves from ruling on research they have been connected to in the past, and, just as legal judges cannot act as ordinary lawyers, science judges should not participate in significant research. Science judges would not be deciding what is scientifically true or false -- only the disinterested judgment of the future can do that. What science judges can do, however, is produce majority and minority reports (like the Supreme Court) so that policy can be made based on something other than a public-relations free-for-all. They could also, if the issue is truly important, authorize new research teams -- pro and con -- with the pro team trying to verify the point under debate and the con team trying to refute it. Charter members of Big Science should welcome this solution -- think of all the extra funding that would be made available! Non-scientists should also welcome it -- paying for multiple “do-overs” is usually cheaper than acting on a false premise (for example, accepting AGW as true and then deciding to redesign the world’s economy). What everyone can hope for, with both teams eager to examine and challenge each other’s work, is to force all those hidden cards out into the open so that they can be seen for what they really are.