Flawed research not retracted fast enough to prevent spread of misinformation, study finds
Could Twitter discourse function as a “red flag” system for problematic research?
A new analysis by Northwestern University and University of Michigan researchers suggests that retracting academic papers does not dampen the reach of problematic research as intended. Instead, papers that are later retracted are often widely circulated online, by both news outlets and social media users, and the cycle of attention they receive typically dies away before the retraction even happens.
The finding has concerning implications for the spread of misinformation and public trust in science. However, retracted papers included in the analysis were often the subject of more critical discourse on Twitter before their retraction, suggesting that while Twitter should not be an official judge of science, it’s possible that in some communities, it could provide early signals of dubious research.
When a paper is retracted, the goal is to officially discredit it and acknowledge the research as flawed, thereby maintaining the overall integrity of the research enterprise. However, many people who hear about the initial finding may never learn of the retraction.
“Social media and even top news outlets — the most prestigious venues that cover science — are more prone to talk about papers that end up being retracted,” said Ágnes Horvát, an assistant professor of communication and computer science at Northwestern who was an author on the paper, published June 14 in the Proceedings of the National Academy of Sciences.
“Retracted papers can have long-lasting, harmful impacts: For example, many people continue to claim a link between autism and vaccines based on retracted work, which has led to an increase in vaccine hesitancy. For this reason, we need to have crucial conversations about retractions and study how retracted papers are discussed in digital media,” added co-author Daniel Romero, an associate professor of information, computer science, and complex systems at the University of Michigan.
Retractions do not shrink papers’ online footprints
To conduct the study, the researchers used the Retraction Watch and Altmetric databases to compare the online footprints of 2,830 retracted papers with those of 13,599 unretracted control papers matched on publication venue, date, number of authors, and author citation counts. Each paper was tracked for at least six months post-publication and, for retracted papers, at least six months post-retraction.
They found that papers that were later retracted tended to have significantly higher numbers of initial mentions on forums like major social media platforms, online news sites, blogs and knowledge repositories like Wikipedia than papers that were never retracted. Their cumulative mentions remained higher across the board as months passed and attention levels for both categories of papers died down to background levels.
“Novel results are more likely to be published in the peer-reviewed literature, and papers that are later retracted end up getting extra attention partly because their results tend to be ‘flashy,’” said Hao Peng, a doctoral student at the University of Michigan School of Information and co-author of the paper.
When retractions occurred, they did drive a small additional bump in attention related to the retraction, but it was much smaller than the amount of attention that the papers had previously received, suggesting that many people who were aware of the initial findings never heard about the retraction. Indeed, Peng said that retracted papers often continue to be cited by other scientists, even after their retraction.
“One of the main takeaways is that retractions come too late,” Romero added. “They remain important, but they’re not serving the purpose of reducing the amount of attention that we pay to these problematic papers because, by the time they come, the public is no longer paying much attention to the original paper.”
Sparking attention on Twitter doesn’t equate to approval
Not all of the initial attention that subsequently retracted papers received was positive. Through a careful labeling analysis of the content of tweets about both subsequently retracted papers and unretracted ones, the researchers found that discourse about retracted papers tended to be more critical overall on Twitter, suggesting that Twitter might provide a valuable signal — a kind of “wisdom of the crowd” — that potentially identifies problematic research.
People with a variety of backgrounds and levels of expertise — including scientists, journalists and lay people — all discuss scientific research on Twitter, where users in general tend to voice their opinions, rather than just stating facts.
Though not all discourse on Twitter is nuanced, people who tweet about specific research papers often have some familiarity with research and engage with new papers in a relatively thoughtful way, by commenting on specific elements of the papers.
That might be a good thing. Peng, Romero and Horvát worked with a team to carefully label thousands of tweets about research — both manually and with the help of algorithms — categorizing them as either critical (containing questioning words, skepticism, disapproval, etc.) or uncritical (sharing findings, remarking in a positive way, etc.).
The average fraction of critical tweets was more than twice as high for papers that were later retracted as for unretracted papers, suggesting that people consistently recognized that something was wrong with the way those studies were conducted.
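As a toy sketch only — not the study's actual labeling pipeline, which combined manual annotation with trained algorithms — the comparison of critical-tweet fractions between two groups of papers could look like this, using a hypothetical keyword heuristic and made-up example tweets:

```python
# Toy illustration (assumed heuristic, not the authors' method):
# label a tweet "critical" if it contains a skeptical cue word, then
# compare the fraction of critical tweets across two groups of papers.

CRITICAL_CUES = {"flawed", "doubt", "questionable", "wrong", "retract", "skeptical"}

def is_critical(tweet: str) -> bool:
    """Toy heuristic: a tweet is 'critical' if any skeptical cue word appears."""
    return bool(set(tweet.lower().split()) & CRITICAL_CUES)

def critical_fraction(tweets: list[str]) -> float:
    """Fraction of tweets labeled critical (0.0 for an empty list)."""
    if not tweets:
        return 0.0
    return sum(is_critical(t) for t in tweets) / len(tweets)

# Hypothetical example data
retracted_tweets = ["these results look flawed", "i doubt this replicates", "interesting study"]
control_tweets = ["great new paper", "sharing this study", "cool findings", "nice work"]

print(critical_fraction(retracted_tweets))  # 2 of 3 toy tweets contain a cue word
print(critical_fraction(control_tweets))    # none do
```

A real pipeline would replace the keyword set with human labels or a trained classifier, but the group-level comparison — a higher average critical fraction for later-retracted papers — is the quantity the study reports.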
“This is not to suggest that we should investigate everything that’s flagged on Twitter as potentially a bad paper,” Horvát said. “But it does suggest that there’s some intelligence, and some interesting conversation going on there that we might want to look at more closely.”
Retractions should remain rare, and when they happen, it should be the result of a careful investigation and consensus that something problematic occurred.
“Social media was not designed to be the primary forum for productive conversations about the quality of scientific papers. While we observe that social media can provide a useful signal, we need to continue relying on specialized institutions to officially decide on and manage retractions,” Horvát added.
However, the general finding suggests that people who consume science on social media do not do so passively and may play a valuable role in maintaining the integrity of that science, a role that could be explored further.
“Overall, this analysis suggests that Twitter readily hosts critical discussion of problematic papers well before they get retracted. These discussions credit voices that are actively helping to improve science-related discussions in digital media,” the authors wrote.