Sunday, November 27, 2011

When The Scientific Method Is Important

[Added later: There's an article on LessWrong about Value of Information.]

Just yesterday I visited frequent reader and old friend Dan at his office. I was about to go out running, and I mentioned to him that I was doing a little experiment. I had a vial of chia seeds which supposedly help the Tarahumara (the famous long-distance running people of Chihuahua) when they start getting tired. Dan pointed out that if I really wanted to know for reals if it was working, I would have to do a control. He helpfully suggested chocolate sprinkles. (By the way, if you're a skeptic-minded runner, an excellent blog is Science of Running.)

Magic beans! Actually just chia seeds. Yes, like that would grow chia pet hair.


I'm not planning to do a control, but not because I don't believe in the scientific method. It's because increased certainty on this question just isn't worth the effort to me. It's all about allocation of time and attention: we encounter far more truth claims in a day than we have time to evaluate, so we have to cut the flood down to something manageable by limiting ourselves to claims that seem relevant, plausible, and that come from someone not obviously motivated to promote self-interested falsehoods - all filtered through our prior beliefs. We therefore eliminate the vast majority without considering them at all. Of those that remain, there are still only so many hours in the day, so we have to decide how much certainty it's worth buying with our evaluation.

Having a control is one of the best ways to dramatically increase certainty, but it's not absolute - good Bayesians will tell you that you don't flip from knowing nothing to knowing everything based on whether you have a control, or based on any single piece of new data - and certainty is expensive. Replication is another way, but with a dramatic enough result, your certainty justifiably skyrockets before you replicate: V.S. Ramachandran has said that if someone shows you a talking dog, you'd be kind of a moron to refuse to believe it at all until you see a second one. So with claims that aren't especially believable, and/or are only worth following up if they give dramatic results, we do quick-and-dirty studies. The pharmaceutical industry does this all the time and calls them "proof-of-concept" studies, to see whether there's anything worth spending more money on. The same goes for medicine in general: we talk about various levels of evidence depending on what's behind a claim, and sometimes it's just expert opinion with no studies to back it up (Level C).
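To make the "not all-or-nothing" point concrete, here's a minimal sketch in Python of how a Bayesian weighs evidence of different strengths. The likelihood ratios are invented numbers for illustration, not real data about chia seeds or dogs; the point is only that the posterior moves continuously with the strength of the evidence, rather than jumping to certainty the moment a control shows up.

```python
# Minimal sketch: Bayesian updating with evidence of varying strength.
# All likelihood ratios below are illustrative, made-up numbers.

def update(prior, likelihood_ratio):
    """Return the posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | claim true) / P(evidence | claim false)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.10  # initial credence that chia seeds help at all

# An uncontrolled "I felt better on my run" observation: weak evidence.
print(update(prior, 2))      # ~0.18

# A controlled comparison against chocolate sprinkles: stronger evidence.
print(update(prior, 20))     # ~0.69

# A talking dog: evidence so dramatic that one observation moves you a lot.
print(update(0.001, 10000))  # ~0.91
```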

In this case, my prior beliefs about the probable outcomes of the "experiment" lead me to expect that either there's nothing to this, or, if there is, the effect (assuming it's as dramatic as people claim it is for the Tarahumara) should be fairly clear compared against my prior experience. Granted, my prior experience is a poor control, if it can even be called one, but it's still better than nothing, and a little better than nothing is all the investment I'm willing to make here. Of course, if someone did a controlled study that conflicted with my own "results", I would gladly defer to theirs.* (Note that the "white-collar athletics" world is ripe for skeptics. It's one of Steve Novella's favorite targets.)

So why am I spending extra time and attention on this point? It's good to strive to improve our rationality, but to be both an effective rationalist and an effective rhetorician, it's worth keeping in mind that there are constraints on these operations that prevent us from granting equal effort to all propositions:

- We care whether beliefs are accurate because they help us predict experience and affect our behavior.
- There are various ways to increase certainty in the accuracy of beliefs. Controls are one of the best.
- However, certainty costs time and attention, which are the constraints on the evaluation process.
- Therefore, for questions where our prior beliefs make us doubt there's much accuracy to be gained, or where there's no big "pay-off" to begin with, we make executive decisions and either throw them out before we think too hard about them, or do a "quick-and-dirty" evaluation like this one.

An economist might say that the value of a marginal unit of certainty from testing a belief is some function of the belief's initial plausibility and the expected utility benefit of acting on the updated belief. Thinking this way can help not only to narrow down what's worth worrying about on your own part, but also to decide what kinds of claims are worth making to others, based on their prior beliefs and the effect you'd like to have. As with evaluating beliefs, in rhetoric you have to determine goals and prioritize.
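Here's a rough sketch of that value-of-information idea in Python, with invented numbers standing in for the chia-seed case and for a pharma proof-of-concept study. It assumes a crudely simplified "perfect test" model, so treat it as an illustration of the trade-off, not a real decision procedure: a test is worth running only if the expected improvement in your decision exceeds what the test costs in time and attention.

```python
# Rough sketch of expected value of information (all numbers invented).
# A test is worth running only if the expected improvement in the
# decision exceeds the cost of the test itself.

def value_of_information(prior, benefit_if_true, cost_of_acting, test_cost):
    """Crude expected value of a (hypothetically perfect) test of a claim.

    Without the test you act on your prior; with the test you pay to act
    only in the worlds where the claim turns out to be true.
    """
    # Best you can do without testing: act or don't, whichever pays more.
    ev_act = prior * benefit_if_true - cost_of_acting
    ev_no_test = max(ev_act, 0.0)

    # With a perfect test, you act (and pay the acting cost) only when true.
    ev_with_test = prior * (benefit_if_true - cost_of_acting) - test_cost

    return ev_with_test - ev_no_test

# Chia seeds: low prior, modest benefit, and the "test" (a real controlled
# trial) costs more attention than the question is worth.
print(value_of_information(prior=0.1, benefit_if_true=5,
                           cost_of_acting=1, test_cost=3))    # negative: skip the control

# A drug candidate: similar prior, but the stakes justify a real study.
print(value_of_information(prior=0.1, benefit_if_true=1000,
                           cost_of_acting=50, test_cost=30))  # positive: run the trial
```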


*So you want results? At this writing, a quick search for "chia seed running" turns up a single paper on PubMed, which finds no statistically significant effect on long-duration running performance relative to Gatorade. Having consumed chia several times on runs, I've experienced nothing that would lead me to a different conclusion.

4 comments:

dbonfitto said...

I bet chocolate sprinkles taste better, too!

Figure out gradually more horrible things to eat and see if you can convince runners to eat them.

Hypotheis: runners can be convinced to eat anything if it's promised that they'll become faster.

Michael Caton said...

I've heard rumors to the effect that some performance drinks marketed to marathon+ distance runners are intentionally made to taste worse than they otherwise would, to make them seem better for you.

dbonfitto said...

s

That's a missing letter from my previous post.

I just want to see people running around eating things that amuse me.
e.g. I hear you can knock a few minutes off your marathon time if you suck the juice out of a hot dog while you run. Don't eat the hot dog, just run around with it sticking out of your mouth. Also, it improves your endurance if you also slap yourself in the face while running.

Michael Caton said...

People do all kinds of stuff, and this is the kind of silliness that finally turned Michael Shermer into a skeptic. He had his depiphany during a training ride in CO.