What’s Truth? Scientific Method Under the Microscope

“Do you believe in God?”

“Do you believe aspirin helps your headaches go away?”

“Why do you believe what you believe?”

In a recent New Yorker article, Jonah Lehrer shakes up this last question and stirs it well with an engaging discussion of some intriguing and troubling limitations of the scientific method. I knew Lehrer had struck a chord when, within just a few hours, an ethicist, a geneticist, and a philosopher each collared me and told me to read his article.

Lehrer’s startling conclusion? “When the experiments are done, we still have to choose what to believe.”

Whatever your answers to the questions above, they most likely involve experience—yours or someone else’s. You somehow experienced God or hope to, or read about or know someone who did. You experienced aspirin helping a headache, or your mom told you it would, or physicians told her.

But for all intents and purposes, the similarities end there, right? Because scientific experiences, what we call experiments, are replicable and controllable, and God-experiences and the like aren’t.

Nothing against God-experiences, and maybe this even speaks to their power, but the point is: while I might have some kind of God-experience, unlike testing aspirin, I can’t easily carry out my God-experience simultaneously on a statistically meaningful number of people, write up the experience, repeat it, compare an equal number of otherwise similar people who don’t believe in God and see if they have the same experience, and then have someone in Australia or China repeat my experiment and see if they get the same results. This method is what science is supposed to be about; this is scientific proof; this is what scientists believe in.

The Decline Effect: Results Are Not Replicable Over Time

But, says Lehrer, it’s not that simple. He describes what he calls ‘the decline effect’: many experimental results that are strikingly positive and statistically significant turn out not to be replicable. Over time, the striking results fade or simply cannot be found again in repeated experiments. And here we’re talking about everything from large clinical trials of drugs for schizophrenia to the preference, across a range of species, of females for physically symmetric males. In initial experiments and trials (more than one, in different locations and at different times), the drug works and the females prefer symmetry; but in later repetitions of the same experiments, the effects go away. How could this be?

Way back in 1620, in his monumental Novum Organum, Francis Bacon, Mr. Scientific Method himself, listed what he called “idols,” fixed ideas that scientists must do their best to avoid. All of Bacon’s idols, as Lehrer suggests, probably play some role in the decline effect. Bacon wrote that scientists, like everyone else, search out, create, and prefer patterns; look for evidence to support their own pre-existing ideas; are biased by their culture, traditions, and previous beliefs; can be fooled by their senses; favor the parts over the whole or vice versa; and can be fooled by language and words.

Lehrer brings a magnifying glass to Bacon’s idols. Conflicts of interest have long been known to be a serious source of bias, especially in clinical trials, which are run and supported by the very companies that fund the development of the drugs being tested and stand to profit from them.

Other strong belief systems and approaches within science may also help explain the decline effect. There are statistical effects: anomalous results happen by chance, and only after an experiment has been run many, many times do the anomalies average out. Even what counts as statistically significant is itself a scientific norm.
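To make the statistical point concrete, here is a minimal simulation sketch of my own (the numbers are hypothetical, not from Lehrer’s article): when many small studies chase a modest real effect, the ones that happen to cross the significance threshold will, on average, overstate it, so faithful replications look like a decline.

```python
import numpy as np

# Hypothetical simulation: many small studies of the same modest true effect.
# Studies that happen to clear the p < 0.05 bar overestimate the effect,
# so later replications of those "positive" studies look like a decline.
rng = np.random.default_rng(0)
true_effect = 0.2        # real effect, in standard-deviation units
n_per_group = 30         # small sample per group
n_studies = 5000

observed, significant = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_group +
                 control.var(ddof=1) / n_per_group)
    observed.append(diff)
    significant.append(abs(diff / se) > 1.96)   # crude two-sided z-test

observed, significant = np.array(observed), np.array(significant)
print(f"true effect:                       {true_effect:.2f}")
print(f"mean effect across all studies:    {observed.mean():.2f}")
print(f"mean effect in 'significant' ones: {observed[significant].mean():.2f}")
```

Run it and the ‘significant’ studies report an effect two to three times larger than the truth; nothing declined except the luck.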

And who can publish what is a big part of the story. Lehrer points out that 97% of published studies show a positive effect. First, very few journals are interested in negative results. Second, journals are often not interested in data that support ideas radically different from the accepted norms, especially if those ideas come from less well-known scientists. In fact, most studies, even published ones, are never repeated by others. Bottom line: a scientific paradigm becomes a paradigm, at least in part, because scientists look for evidence supporting it as the accepted ‘truth’, and if they do find something that goes against it and want to publish that, they might not be able to. So the paradigm sticks.
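A rough back-of-the-envelope sketch (again hypothetical, and mine rather than Lehrer’s) shows how a positives-only literature can mislead even when there is nothing to find: with the conventional 5% false-positive threshold, an effect that doesn’t exist at all will still come out ‘significant’ in roughly one study in twenty, and if only those studies get published, the journals look unanimous right up until the replications fail.

```python
import numpy as np

# Hypothetical sketch: an effect that is not real, tested over and over.
# Journals that publish only "significant" results still fill up with
# positive findings -- roughly 5% of all the studies actually run.
rng = np.random.default_rng(1)
n_studies, n_per_group = 10_000, 30

def came_out_significant(true_effect=0.0):
    a = rng.normal(true_effect, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
    return abs((a.mean() - b.mean()) / se) > 1.96

published = sum(came_out_significant() for _ in range(n_studies))
print(f"studies run:                  {n_studies}")
print(f"'positive' studies published: {published}  (about 5%, by chance alone)")
# A reader of the journals sees only the successes; replications of those
# published studies will themselves succeed only about 5% of the time.
```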

Extra-Sensory Proof?

A front-page story in the New York Times last week brings many of these issues to a head in soon-to-be-published experiments on extra-sensory perception. Yes, the decline effect also shows up in work on the paranormal. In the current case, a (and this is important) prominent psychologist presents data suggesting that people can predict, statistically better than chance, which randomly chosen erotic picture will appear next on a computer screen. The social psychology community is in an uproar, because ESP, they say, is a phenomenon with no known possible scientific mechanism. The journal, a (again, important) prominent one in psychology, counters that all the peer reviewers of the research said the work satisfies its publishing criteria. Many others are trying to repeat the results; no one has yet.

Understandably, the decline effect makes scientists very nervous, not just because some build their reputations on results that later decline, but also because of actual and perceived anti-science bias in some quarters.

However, the mistake would be to hide out, bury our heads in the sand, and pretend that the scientific method has no limitations, or to insist that “those people don’t understand science anyway.” Instead, we should come out and say, “We plan on exploring and understanding these limitations better.”

The scientific method is powerful, but not foolproof. There’s definitely room for progress.