AI swarms could evade online manipulation detection systems

A new academic paper warns that influence campaigns powered by autonomous AI agents could soon become much harder to detect and stop. Instead of obvious botnets, future operations could rely on systems that behave like real users and adapt their actions over time. Researchers say this change poses serious risks for public debate and platform governance.


In brief

  • Researchers warn that AI swarms can mimic human behaviors, making coordinated influence campaigns much harder to detect and curb.
  • Unlike traditional botnets, these swarms adapt their discourse over time, spreading subtle, persistent narratives rather than short bursts of activity.
  • Weak identity controls make it easy to deploy at scale, with minimal detection risk.
  • The study does not provide a single solution, but calls for better detection of coordination and clearer labeling of automated content.

Autonomous agents could transform information warfare online

According to a study published Thursday in Science, digital manipulation is evolving: easily identifiable botnets are giving way to coordinated groups of AI agents known as "swarms." These systems can simulate human behavior, join conversations as they evolve, and operate with very little supervision. As a result, enforcing platform rules becomes far more complicated.

Written by researchers from prestigious institutions, the study describes an internet where manipulation blends into everyday activity. Rather than spiking around elections, AI campaigns can now spread their narratives slowly but continuously over time.


These campaigns adapt their tone, timing, and targets as conversations unfold, evading detection by automated defenses and human moderators alike. The researchers define a swarm as a group of independent agents pursuing a common goal.

Social networks already have structural flaws that favor these tactics, particularly because users are often exposed to content that confirms their opinions. Previous studies have shown that false information often circulates faster than true information, fueling divisions and undermining confidence in facts.

Paid AI swarms pose new challenges for moderation

Researchers list several characteristics that distinguish these swarms from older methods of manipulation:

  • They operate with minimal human intervention once the objectives are defined
  • They modify their messages in real time according to user reactions
  • They distribute their content across many accounts, without a repetitive pattern
  • They maintain narratives over time, rather than peaks of activity
  • They blend in, imitating normal human activity
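The contrast with older manipulation methods can be made concrete with a toy example. Legacy botnets amplified one message verbatim, so a simple duplicate-message counter could catch them; a swarm that paraphrases the same narrative across accounts slips past that same check. This is an illustrative sketch, not any platform's actual detector, and all names and thresholds in it are hypothetical.

```python
from collections import Counter

def flag_duplicate_campaigns(posts, threshold=3):
    """Toy volume-based detector: flag any message text that is
    posted verbatim more than `threshold` times."""
    counts = Counter(text for _account, text in posts)
    return {text for text, n in counts.items() if n > threshold}

# A legacy botnet repeats one message verbatim: easy to flag.
botnet = [(f"bot{i}", "Vote NO on measure X!") for i in range(10)]
print(flag_duplicate_campaigns(botnet))  # the repeated message is caught

# A swarm paraphrases the same narrative: the same check finds nothing.
swarm = [("a1", "Measure X seems risky to me."),
         ("a2", "Not sure measure X is a good idea..."),
         ("a3", "Has anyone read the fine print on X?")]
print(flag_duplicate_campaigns(swarm))  # empty set
```

The point of the sketch is the asymmetry: volume-based signatures collapse as soon as each account varies its wording, which is exactly the adaptation the study attributes to swarms.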

Sean Ren, a professor of computer science at the University of Southern California and CEO of Sahara AI, points out that these accounts are already very difficult to detect. In his view, stricter identity rules would be more effective than content moderation alone.

These swarms of agents are generally managed by teams or service providers paid by companies or external actors to carry out coordinated manipulation campaigns.

Sean Ren

Stricter know-your-customer (KYC) rules, along with limits on account creation, could reduce the ability of AI agents to run large, coordinated networks. With fewer accounts available, abnormal posting patterns become easier to spot, even when individual posts look normal.
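One simple posting-pattern signal is timing regularity: a scheduled agent tends to post at near-constant intervals, while human activity is bursty. The sketch below, a hypothetical heuristic rather than a method from the study, scores an account by the coefficient of variation of its inter-post gaps, where values near zero suggest machine-like regularity.

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of inter-post gaps (in seconds).
    Near 0 means machine-like regularity; human posting is burstier."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return float("inf")  # degenerate: all posts at the same instant
    return statistics.stdev(gaps) / mean

# An agent posting every ~600 s is far more regular than a human.
agent = [i * 600 for i in range(20)]
human = [0, 50, 400, 410, 2000, 2100, 9000, 9050, 9900, 15000]
print(interval_regularity(agent))  # ~0: perfectly regular
print(interval_regularity(human))  # well above 0: bursty
```

A real deployment would combine many such weak signals; any one of them is trivial for an adaptive agent to randomize away, which is the study's core concern.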

Traditional influence campaigns relied on volume: a multitude of accounts broadcasting the same message simultaneously, an approach that made them easy to detect. AI swarms, by contrast, are distinguished by finer coordination, greater autonomy, and broader reach, according to the study.

According to Ren, possible solutions include better detection of unusual coordinated behavior and more explicit labeling of automated activity. However, he warns that technical tools alone will probably not be enough to stop the phenomenon.
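Ren's first suggestion, detecting unusual coordinated behavior, can be sketched as a co-activity check: accounts that are repeatedly active in the same narrow time windows are suspicious even if none of their individual posts is. This toy version (hypothetical parameters, not a production system) buckets each account's posts into five-minute windows and flags pairs whose active windows overlap heavily.

```python
from itertools import combinations

def coactivity_pairs(posts, bucket=300, min_overlap=0.8):
    """Flag account pairs whose active time buckets overlap heavily
    (Jaccard similarity >= min_overlap). posts: (account, timestamp)."""
    buckets = {}
    for account, ts in posts:
        buckets.setdefault(account, set()).add(ts // bucket)
    flagged = []
    for a, b in combinations(sorted(buckets), 2):
        inter = buckets[a] & buckets[b]
        union = buckets[a] | buckets[b]
        if union and len(inter) / len(union) >= min_overlap:
            flagged.append((a, b))
    return flagged

# Two coordinated accounts always post in the same windows;
# an unrelated account does not.
posts = ([("s1", t) for t in (0, 900, 1800, 3600)]
         + [("s2", t + 30) for t in (0, 900, 1800, 3600)]
         + [("u1", t) for t in (120, 5000, 7777)])
print(coactivity_pairs(posts))  # only the coordinated pair is flagged
```

As Ren warns, such tools are necessary but not sufficient: a swarm that staggers its activity defeats this exact check, which is why the article pairs detection with identity controls and labeling.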

Finally, he points out that many operations of this type are carried out by teams paid to influence online discussions. Until identity checks are strengthened and rules are better enforced, platforms risk remaining powerless against increasingly discreet and persistent influence tactics.

