AI can manipulate voters, new study reveals

Suspicions become numbers. Two studies published in Science and Nature confirm that AI chatbots, much like the ones everyone uses every day, can shift voting preferences by several points, up to around 15% in controlled scenarios.


In brief

  • Studies show that AI chatbots can change voting preferences by several points, up to about 15%, after a few exchanges.
  • Their power of persuasion is based mainly on public policy arguments, but the more they convince, the more factual errors and biases they produce.
  • By interacting directly with voters, these AI chatbots manage to shape the perception of programs and discreetly influence their choices.

When a few artificial intelligence messages are enough to move a vote.

Researchers from Cornell University and the UK AI Security Institute tested a very simple setup: a voter, a candidate, and a political chatbot. First, participants rated a candidate. Then they chatted with an AI chatbot programmed to advocate for that candidate. Finally, they rated the candidate again. On the surface, nothing extraordinary: a brief conversation, a few arguments, a revised rating.
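The design boils down to a pre/post comparison: measure the average change in ratings after the conversation. A minimal sketch, using hypothetical ratings rather than the studies' actual data:

```python
# Illustrative sketch of the pre/post rating design described above.
# All numbers are hypothetical placeholders, not results from the studies.

def mean_shift(pre, post):
    """Average change in candidate ratings (0-100 scale) after the chat."""
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

# Hypothetical participants: ratings before and after talking to the chatbot
pre_chat = [40, 55, 30, 62]
post_chat = [48, 60, 41, 65]

print(mean_shift(pre_chat, post_chat))  # average points moved toward the candidate
```

A positive mean shift means the chatbot moved opinions toward the candidate it was advocating for; the studies report shifts of this kind of several points.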

The results are anything but trivial. In the United States, before the 2024 presidential election, a single exchange of this kind was enough to shift a candidate's rating by several points, particularly when the bot backed the side opposite the participant's initial preference.
The same pattern appears in Canada and Poland, with shifts of up to ten points on a 0-to-100 scale.

Above all, the effect is not symmetrical: a chatbot that advocates for an already popular candidate reinforces convictions, but one defending the “wrong” side sometimes manages to crack the resistance. In other words, AI does not just reassure the convinced; it begins to undermine opponents' certainties.


The more AI talks about politics, the more it convinces, and the more often it is wrong

The studies converge on a key point: what persuades most are messages focused on public policy, whether economic, fiscal, security, or health measures, rather than personality or storytelling. When the chatbot supplies numerical arguments, program comparisons, and references to facts, real or supposed, the impact on voting intentions is significantly stronger.

But this power comes at a cost. Researchers note a stark trade-off between persuasion and accuracy: the most convincing models are also those that produce the greatest number of inaccurate statements.

In several experiments, bots aligned with right-wing candidates generated more errors or misleading claims than those aligned with left-wing candidates, revealing an imbalance in what the models really “know.”

At the same time, the second study, which covered 19 AI language models and nearly 77,000 adults in the United Kingdom, shows that what matters is not so much a model's size as how it is steered via prompts. Instructions that push models to introduce new information significantly increase persuasive power, but again degrade factual accuracy. More arguments, more impact, less truth.

In this context, the rise of AI is no longer limited to political chatbots. Tether has just invested 70 million euros in Generative Bionics to accelerate the development of humanoid AI, illustrating how these systems, virtual or embodied, are set to interact ever more with the public and influence opinion on a large scale.

