Researchers use AI in secret: a danger to science?

Artificial intelligence (AI) is now making its way into laboratories and scientific publications, raising crucial questions about research integrity. A recent study reveals that more than 13% of biomedical articles bear the traces of ChatGPT and other tools.


In short

  • An analysis of 15 million biomedical articles reveals that 13.5% of publications from 2024 show signs of AI use.
  • Researchers identified 454 “suspect” words frequently favored by AI tools, such as “delve”, “showcasing” and “underscore”.
  • Current detection tools remain unreliable, sometimes mistaking historical texts for AI-generated content.
  • Experts are divided: some see a danger, others a democratization of research.

AI leaves its fingerprints on science

Researchers from Northwestern University, in collaboration with the Hertie Institute for AI applied to health, have analyzed more than 15 million scientific abstracts published on PubMed. Their observation is unequivocal: in 2024, generative AI, in particular ChatGPT, deeply marked the language of biomedical research.

To demonstrate this, the team compared the frequency of certain keywords in 2024 with that of the years 2021 and 2022. The difference is obvious: previously uncommon terms such as “delves”, “underscores” or “showcasing” have seen an explosion in use, to the point of becoming stylistic markers typical of AI-generated text.
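The idea behind this comparison can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not the authors' actual pipeline: the corpora are toy data and the growth threshold is invented. It computes each word's relative frequency in a baseline corpus and a recent corpus, then flags words whose usage has surged.

```python
from collections import Counter

def word_frequencies(abstracts):
    """Relative frequency of each word across a corpus of abstracts."""
    counts = Counter()
    total = 0
    for text in abstracts:
        words = text.lower().split()
        counts.update(words)
        total += len(words)
    return {w: c / total for w, c in counts.items()}

def excess_words(baseline, recent, ratio=2.0):
    """Flag words whose relative frequency grew by at least `ratio`
    (a made-up threshold; words absent from the baseline are skipped)."""
    base = word_frequencies(baseline)
    new = word_frequencies(recent)
    flagged = {}
    for w, f in new.items():
        f0 = base.get(w, 0.0)
        if f0 > 0 and f / f0 >= ratio:
            flagged[w] = f / f0
    return flagged

# Toy corpora, invented for illustration
corpus_2021 = [
    "this underscores a known effect in protein folding",
    "we report binding results for the protein",
]
corpus_2024 = [
    "this underscores the role of folding and underscores binding strength",
    "the result underscores a trend and underscores it",
]
print(excess_words(corpus_2021, corpus_2024))
```

A real analysis would of course tokenize more carefully and control for corpus size and topic drift, but the principle is the same: a word like “underscores” stands out because its frequency jumps far beyond its historical baseline.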

This “word hunt” nonetheless reveals a more nuanced reality. Stuart Geiger, a professor at the University of California, San Diego, tempers the alarm:

Language evolves over time. The word “delve” is now part of everyday vocabulary, partly thanks to ChatGPT.

Linguistic evolution thus poses a major dilemma. How can we distinguish fraudulent use of AI from simple cultural influence? Even more worrying: will researchers change their natural writing style for fear of being wrongly accused?


Between democratization and ethical drift

Kathleen Perley, a professor at Rice University, takes a more nuanced position on the use of AI in scientific research.

According to her, these tools can play a decisive role in democratizing access to academic research, especially for non-native English speakers or researchers with learning disabilities.

In an academic environment dominated by English and formal requirements, AI can offer a real springboard to brilliant researchers otherwise marginalized by the language barrier.

This approach raises a fundamental question: should we really penalize researchers who use tools to overcome structural obstacles? Could AI not, on the contrary, surface quality work that has so far remained invisible because of editorial rather than conceptual limitations?

Drifts, biases and false positives: science faces the limits of AI

But enthusiasm runs up against very real drifts. The Grok chatbot, developed by Elon Musk's company xAI, is a chilling illustration.

Since its latest update, the tool has produced a series of antisemitic messages published on X (formerly Twitter), going so far as to justify hateful remarks and praise Hitler. Such incidents are a reminder that even the most advanced models can convey dangerous biases if they are not properly supervised.

At the same time, AI detection tools are struggling to prove reliable. ZeroGPT, for example, estimated that the United States Declaration of Independence was 97% AI-generated, while GPTZero put the figure at only 10%. This inconsistency reveals the immaturity of detection technologies and the risk of unfounded accusations.

Beyond technical tools, the emergence of AI in scientific research questions the very essence of intellectual work. Rigor, originality and integrity are the pillars of scientific production. Can we preserve these values when the boundary between assistance and substitution becomes blurred?

More than ever, academic institutions must define clear guidelines. It is not a question of slowing innovation, but of drawing a line between ethical use and intellectual fraud. The future of research rests on our collective capacity to integrate artificial intelligence without losing the soul of science.

