Shouldn’t we have space colonies and a universal cure for cancer by now?
Instead there are signs that the pace of technological progress is slowing – even as researchers publish papers at an ever-increasing rate. With slowing progress in computing power, medicine and agriculture, my Bloomberg Opinion colleague Noah Smith warns that the stakes could not be higher.
Surely some of the fault lies with technology itself.
Our connected world has allowed researchers to become so tightly networked that they’re falling into the trap of groupthink.(1) That might explain why some researchers seeking cures for Alzheimer’s disease, for example, have conceded that they’ve been throwing years of work and billions of dollars toward a single theory that has failed to lead to any treatment – while ignoring promising alternatives.
Sociologist James Evans of the University of Chicago has concluded that what’s being lost, at least in biomedical research, is scientific independence. Being able to work independently of other labs allows researchers to come up with fresher insights.
In a new study, Evans and colleagues found that weak studies are more likely to come from labs that share lots of researchers and methods with others, and strong studies come from labs that do things their own way.
Weak studies are not just those that come to the wrong conclusions but those whose conclusions are fragile: if a competitor tries to replicate them, the result will be different unless conditions and methods are exactly the same. The conclusions of such studies are unlikely to represent broad biological facts, and probably won’t be of much use in medicine.
To sort the weak from the strong, Evans and colleagues exploited a special case: thousands of studies on the interactions between drugs and genes can now be re-tested quickly by machine, using what’s called a high-throughput assay to rerun a whole slew of previous experiments. Evans was thus able to evaluate more than 3,000 published claims against this mechanical backup, which can not only replay the exact experiments but also test the robustness of the claims by varying the parameters a bit.
There was a strong correlation between centralised, networked groups and weak studies. The most networked groups were more likely to replicate themselves and each other, but less likely to reach conclusions that checked out with the mechanical system.
Groupthink is well known in politics and media. Where once competing reporters would look into the same events independently and not know the others’ results until the next day’s papers, now there’s an unconscious temptation among journalists to believe the interpretation of the most prominent news outlets, or whoever posts online first.
Scientists are subject to the same human foibles, but groupthink shouldn’t be conflated with scientific consensus, which is often based on ideas that are backed up by multiple lines of inquiry. That would include things like the structure of DNA, Einstein’s theory of relativity, and the basic physics behind the greenhouse effect. Those are widely accepted now, in part because they were supported by independent, even isolated researchers.
What’s rewarded these days is the absolute opposite of those historic discoveries. While science works best when researchers prove one idea multiple ways, funding agencies and journal editors today reward those with only a single line of evidence to support multiple claims. They want bigger claims and are content with lesser evidence.
The technology that’s allowed so much connection has of course also been positive, enabling people to collaborate and learn more efficiently. Researchers can sometimes even counteract extraneous noise by harnessing a wisdom-of-the-crowd phenomenon, where many individuals converge on a right answer. But like many technological changes, it’s come with unintended consequences. The fact that US researchers are producing 1,000 papers a day shows there’s a lot of energy out there to be used more productively – if funding encouraged bold exploration. – Bloomberg
(1) The idea was documented by Francis Galton at the dawn of the 20th century, when he asked 800 people to guess the weight of a rather heavy ox at a fair. While the guesses were all over the place, the average was within 1% of the true weight, 1,198 pounds. The phenomenon breaks down, however, if individual guessers confer too much.
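The Galton effect, and why conferring breaks it, can be sketched with a toy simulation. This is an illustration only: the noise levels and the anchoring bias below are assumptions for demonstration, not figures from Galton’s data. Independent guessers with large but unbiased errors average out to near the truth; guessers who all anchor on one early, wrong guess share a bias that no amount of averaging removes.

```python
import random
import statistics

random.seed(42)
TRUE_WEIGHT = 1198  # pounds, the figure cited in the footnote

# Independent crowd: each guess is noisy (std. dev. 150 lb assumed) but unbiased.
independent = [random.gauss(TRUE_WEIGHT, 150) for _ in range(800)]
indep_error_pct = abs(statistics.mean(independent) - TRUE_WEIGHT) / TRUE_WEIGHT * 100

# Conferring crowd: everyone anchors on one loud early guess that is
# 100 lb too high (a hypothetical shared bias), with smaller private noise.
ANCHOR_BIAS = 100
herd = [TRUE_WEIGHT + ANCHOR_BIAS + random.gauss(0, 50) for _ in range(800)]
herd_error_pct = abs(statistics.mean(herd) - TRUE_WEIGHT) / TRUE_WEIGHT * 100

print(f"independent crowd error: {indep_error_pct:.2f}%")
print(f"conferring crowd error:  {herd_error_pct:.2f}%")
```

With 800 independent guesses, the standard error of the mean is roughly 150/√800 ≈ 5 pounds, so the crowd estimate lands within a fraction of a percent of the truth; the conferring crowd stays off by roughly the full shared bias, around 8%.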
(Faye Flam is a Bloomberg Opinion columnist. She has written for the Economist, the New York Times, the Washington Post, Psychology Today, Science and other publications. She has a degree in geophysics from the California Institute of Technology.)