Post by Ruthanna Gordon
Online, interactive media are an interesting way to learn about science. But they can also be a useful, if uncomfortable, way to do science.
When most people think about science, they think about lab work: setting up equipment, running experiments, and gathering data. But science is also what happens afterward. Once data is collected and written up, the research usually undergoes peer review. This is not a complicated process. Journal editors send an article out to other researchers (usually three) and ask them to comment on its merits. Reviewers can ask for further explanation, or for additional studies to rule out competing explanations for the findings. Sometimes they decide that the study wasn’t done well enough to share with the wider public at all; more often they demand changes that make for stronger, if somewhat delayed, published work.
Many researchers have questioned the peer review process. Reviewers may be biased, positively or negatively. They may miss problems because they are caught up in the excitement of an interesting finding, because they are distracted by their own studies, or because they are fitting the review into 37 free minutes at 2 AM. As Winston Churchill said of democratic government, it’s the worst possible system, except for all the others we’ve tried. But the collaborative hothouse of the internet opens up new possibilities.
These possibilities were highlighted late last year, when NASA-funded scientist Felisa Wolfe-Simon announced the apparent discovery of arsenic-based life in a California lake. This work had undergone peer review and been published in one of the world’s most prestigious journals, then brought to public attention amid intense hype. Wolfe-Simon and her colleagues were somewhat startled to find their work subjected to an informal, and often snide, supplement to the original review—but with dozens of well-informed reviewers rather than a handful.
There is an excellent overview here, but in brief: several scientists criticized Wolfe-Simon’s methods and measurements, questioning her conclusions. She and NASA responded by suggesting that the peer review process was not only important, but the only legitimate venue for scientific critique. They dismissed the input from blogs and Twitter—on the basis that it occurred on blogs and Twitter. And they used the journal’s prestige as a defense against the very real questions of fact raised by their critics.
There’s always some tension between science and scientists. Science works by constantly seeking evidence against claims, accepting only those that are supported by the observed state of the world. Technically speaking, every experiment should be a wholehearted attempt to prove that one is wrong—because one can only be sure of being right when that disproof fails. Scientists, however, depend on rightness for their livelihood and reputation. If you successfully disprove all your hypotheses for several years, universities and grant-givers consider you a failure. Furthermore, scientists are human, and we like to be right.
The peer review process is intended as a counter to these human tendencies. It is not the only one possible: any expansion of informed debate and criticism is good for science. That the criticism is informed—that it comes from people who understand and are involved with the field in question—is important. That it comes from a traditional venue is not. Online forums provide a rich environment for discussion, facilitating a more collaborative and extended critique than was previously possible.
Some scientists take deliberate advantage of this Web 2.0 style of review. A few journals publish any paper that appears to have valid methods, with review as an ongoing and public process. Other sites are devoted to “post-publication peer review.” Although these methods have their weak points, they have the potential to fill some of the gaps in the more traditional system. And as these innovations become more familiar, one hopes that more researchers will welcome them—and that their research will become stronger as a result.