On Dec. 18, 1912, amateur geologist Charles Dawson presented to the Geological Society of London a partial skull. It was purported to be a human ancestor 500,000 to 1 million years old, an age scientists now assign to Homo erectus. Dawson said he had found the fossils in a gravel pit near Piltdown Common, south of London.
Dawson had no scientific credentials, but his friend Arthur Smith Woodward did. Woodward was the keeper of the geological department at the British Museum. He had been at the dig and had seen the jawbone “fly out” of the ground under the blow of Dawson’s pick.
There was a problem with the jawbone. It was from an orangutan only a few hundred years old. It was fitted with two fossilized chimpanzee teeth, filed down to make them look more like human teeth. The cranium fragments were human, from the Middle Ages. All had been treated with an iron solution and acid to make them look older.
Scientists didn’t have many fossil skulls in 1912, and none of the ones they had looked like a human cranium with an ape jaw.
Several scientists, including one from the Smithsonian Institution, argued that the jaw and cranium did not match. It took 40 years for them to be proved right, and even longer for Dawson to be confirmed as the con man responsible.
Science is human. It is subject to error and, what’s more, malice. Unlike some other purported paths to truth, science has a way of detecting errors, but not an automatic way. Someone has to do it.
A century on, Piltdown man seems quaint, but fraud is not.
In 1998, British researcher Andrew Wakefield published a fraudulent paper arguing that the vaccine for measles, mumps and rubella was a cause of autism.
In 2010, 30 papers by Japanese virologist Naoki Mori had to be withdrawn from scientific journals because he had improperly manipulated data.
The Mori case caught the attention of Ferric Fang, professor of laboratory medicine and microbiology at the University of Washington School of Medicine. Fang is editor of Infection and Immunity, one of the journals Mori stung. Earlier this year, the Proceedings of the National Academy of Sciences published a study by Fang, Arturo Casadevall and R. Grant Steen on retractions of papers in the life sciences.
Only about one in 10,000 papers is retracted. That’s comforting. Less comforting is that the number of retractions is increasing, and that in three-quarters of the cases, the reason is misconduct.
In other words, says Fang, scientists are “making stuff up.”
Why? To advance themselves. That was Charles Dawson’s motive, a century ago. He wanted to be admitted to Britain’s prestigious Royal Society. (He wasn’t.) Today the motive is to get research grants.
“You have to get people excited about what you’re doing, so you can get the money to keep doing it,” says Fang.
Many academic researchers have jobs tied to research grants. “Everybody is struggling to get grants, and right now it’s tighter than it’s ever been,” Fang says.
The culture of research grants creates an incentive for safe but sure results: science by nibbles. Scientists who are insecure in their positions are less likely to take risks in their work.
And some will cheat. It may be only one scientific paper in 10,000, but that is the reported rate. The real rate, Fang believes, is higher than that.
“For an editor,” he says, “this keeps you up at night.”