LaCour and Green

The recent developments concerning the LaCour and Green paper have been fascinating. I’m still a bit unclear on how Green didn’t catch the problems with the data. Regardless, I think it’s commendable that he is urging retraction of the paper rather than lashing out at critics, as sometimes happens when the integrity or validity of a piece of work is questioned. Others, take note - this is how to respond when you realize there’s a problem in your work.

Also, if you haven’t already looked at it, the report issued by Broockman, Kalla, & Aronow is a pretty nice piece of work: careful, thorough, and clear, with all analyses conscientiously detailed. Kudos to these guys as well.

I think there are a couple of things worth thinking about here. First, the more remarkable (i.e., unexpected) a finding, the more critical we should be of it. Even Don Green himself, in a NYTimes article on the original work, was quoted as saying:

“I truly did not expect to see this,” Dr. Green said. “I thought attitudes on issues like this were fundamentally stable over time…”

Second, it’s worth considering the value of the work by Broockman, Kalla, & Aronow. If we buy the idea that science proceeds in a Popperian fashion, then they’ve clearly done a nice piece of science by falsifying a set of ideas previously assumed to be true (for a few months, anyway). Indeed, one might even say that what they’ve done is an important piece of science. Why so important? Well, they’ve falsified an idea that had clearly spread far and wide relatively quickly and that has strong implications for public policy. Unfortunately, there’s no way they’re going to get credit commensurate with what I think they deserve. At least, not formally. That seems like a big problem, doesn’t it?

There’s one other thing I want to point out here. This whole episode makes me wonder how many times something similar has happened but hasn’t been detected. I don’t just mean making up data; I mean any kind of research practice that leads to publishing results that either aren’t true or are vastly overstated. When this happens, there will very often be some enterprising scientist who wants to extend or build on the work. If he or she tries to but is unable, what then? In this case, the data were conveniently posted online, making it easy to find the source of the discrepancy. On the other hand, if the data are not posted online, or if the experimental setup is not totally reproducible, or any number of other things, said enterprising scientist will not be able to point at the spot in the original research where the questionable work occurred. Instead, he or she will spend a long time on a project that was dead on arrival, and when it doesn’t replicate, may question his or her own worth as a scientist.
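To make the point about posted data concrete, here’s a minimal sketch of the kind of check an outside researcher could run once a dataset is public. The file names and column names are hypothetical, and this is only a rough approximation of the sorts of diagnostics described in the Broockman, Kalla, & Aronow report, not their actual analysis:

```python
# Hypothetical sketch: basic forensic checks that posted data enables.
# File and column names below are made up for illustration.
import pandas as pd
from scipy.stats import ks_2samp

posted = pd.read_csv("posted_study_data.csv")    # the posted replication data
benchmark = pd.read_csv("public_benchmark.csv")  # a known public survey

# Red flag 1: implausibly high test-retest reliability. Real attitude
# measures drift between survey waves, so a correlation near 1.0
# across months deserves scrutiny.
r = posted["therm_wave1"].corr(posted["therm_wave2"])
print(f"wave 1 vs. wave 2 correlation: {r:.3f}")

# Red flag 2: an outcome distribution that a two-sample
# Kolmogorov-Smirnov test cannot distinguish from an existing public
# dataset, which is consistent with the data having been copied.
stat, p = ks_2samp(posted["therm_wave1"], benchmark["thermometer"])
print(f"KS statistic: {stat:.3f} (p = {p:.3f})")
```

None of this is sophisticated, which is exactly the point: with the data posted online, checks like these take an afternoon; without it, they’re impossible.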

Communicating false or overstated research is bad for all kinds of reasons, but I think this is one that’s often overlooked - the cost in time and in morale. How many of us have spent ages toiling away on some project that was never going to work? How many researchers have lost their passion because they thought they were incapable, when in fact they were perfectly capable but had either been fooled by some sham work or were trying to labor honestly in a system where it was expected that everyone would be at least a little dishonest?

Written on May 29, 2015