AI for Science Study: Good for the Goose, But What About the Gander?
The use of AI tools by scientists is associated with a greater volume of published work, more citations, and faster promotions for individual scientists, according to a new study published this week. But AI adoption is also associated with less engagement among scientists and a narrower range of scientific topics, the study found.
In the study, published in the journal Nature under the title “Artificial intelligence tools expand scientists’ impact but contract science’s focus,” researchers sought to determine how the use of AI tools shapes the careers of individual scientists, as well as how it affects science as a whole.
The researchers, who hail from Tsinghua University in Beijing and the University of Chicago, used a BERT-based language model to detect “AI-augmented research” across more than 41 million published papers in the OpenAlex repository, spanning the natural sciences, including biology, medicine, chemistry, physics, materials science, and geology. Of that total, about 311,000 papers (0.75%) showed traces of AI augmentation, they wrote.
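The study's actual detector is a fine-tuned BERT classifier, which is not reproduced here. Purely as a toy illustration of the underlying idea (flagging papers whose text mentions AI methods), the sketch below uses simple keyword matching; the term list and example abstracts are illustrative assumptions, not the authors' criteria or data:

```python
# Illustrative list of AI-method terms. The real study used a
# BERT-based classifier over paper text, not keyword matching.
AI_TERMS = [
    "neural network", "deep learning", "machine learning",
    "support vector machine", "random forest", "large language model",
]

def flags_ai_augmentation(abstract: str) -> bool:
    """Return True if the abstract mentions any AI-method term.

    A deliberately simplified stand-in for the study's BERT-based
    detector of "AI-augmented research."
    """
    text = abstract.lower()
    return any(term in text for term in AI_TERMS)

# Hypothetical abstracts, standing in for OpenAlex records.
papers = [
    "We apply a deep learning model to protein folding.",
    "Field observations of basalt weathering in Iceland.",
]
hits = [p for p in papers if flags_ai_augmentation(p)]
```

At the scale of the study, this kind of labeling was applied to tens of millions of records, which is why the authors report both an absolute count (about 311,000 flagged papers) and a rate (0.75%).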
The researchers found “an accelerated adoption of AI tools among scientists and consistent professional advantages associated with AI usage, but a collective narrowing of scientific focus,” researchers Qianyue Hao, Fengli Xu, Yong Li, and James Evans wrote in the Nature article, which can be viewed here (a pre-print copy is also available on arXiv).
Source: Nature article “Artificial intelligence tools expand scientists’ impact but contract science’s focus”
“Scientists who engage in AI-augmented research publish 3.02 times more papers, receive 4.84 times more citations and become research project leaders 1.37 years earlier than those who do not,” they wrote. “By contrast, AI adoption shrinks the collective volume of scientific topics studied by 4.63% and decreases scientists’ engagement with one another by 22%.”
“By consequence,” they continued, “adoption of AI in science presents what seems to be a paradox: an expansion of individual scientists’ impact but a contraction in collective science’s reach, as AI-augmented work moves collectively towards areas richest in data. AI tools appear to automate established fields rather than explore new ones, highlighting a tension between personal advancement and collective scientific progress.”
The researchers’ analysis spanned 46 years, reaching all the way back to 1980. They found that the rate at which scientists use AI tools and techniques has increased dramatically in recent years. In the early days, the AI that scientists used consisted of classical machine learning and data analysis techniques, such as linear regression, principal component analysis, and support vector machines, all of which were well understood at the time. More advanced types of AI, including convolutional neural networks and generative models such as large language models and generative adversarial networks, had either yet to be invented or had not gained widespread use. (Former Meta AI chief Yann LeCun is credited with making breakthroughs in CNNs back in 1989, but the technique would not gain widespread use for decades.)
James Evans, Max Palevsky Professor of Sociology, Computational & Data Science @ UChicago and Santa Fe Institute
Despite the improved technology and increased use, the researchers concluded that the patterns they identified (i.e., a narrowing of group focus and a boost in the visibility of individual scientists) show up just as consistently in the earlier part of their research timespan as they do in the 4.7 million papers published since the GenAI era began several years ago.
“…[A]ll results in the generative AI era are consistent with those from prior decades of AI development,” they wrote.
The researchers suggested that AI-driven science would lead to “more…focus on existing popular research problems rather than explor[ing] new ones.” They also found that having more focus on a limited breadth of scientific problems does not help researchers find solutions to those problems any quicker. “Maintaining a broader diversity of research topics does not hinder a sub-field’s performance,” they wrote. “Rather, it likely offers more opportunities for advancing the scientific frontier and provides researchers with a wider array of investigation choices, ultimately benefiting the state of collective knowledge.”
The researchers also found evidence of the Matthew Effect in the AI-assisted papers; the effect describes how an initial advantage tends to compound over time, while disadvantages likewise accumulate. “In AI research, a small number of superstar papers dominate the field, with approximately 20% of top papers receiving 80% of citations and 50% receiving 95%,” they wrote.
Researcher James Evans said in an X post that AI use has shifted to fields and problems where data is abundant. “AI gravitates toward well-lit problems and away from foundational and emergent questions where data is necessarily sparse,” he wrote. “The result is collective hill-climbing: everyone scaling the same popular peaks rather than searching for higher mountains.”
While the paradox appears to be real, it’s not inevitable. Evans noted that AI models can be flipped around to identify what is surprising and point researchers in new directions. “But without deliberate intervention, local incentives have and will likely continue to push scientists to optimize and compress what’s already known rather than discover what isn’t,” he wrote.
This article first appeared on HPCwire.