
Swiss boffins admit to secretly posting AI-penned posts to Reddit in the name of science

Researchers from the University of Zurich have admitted to secretly posting AI-generated material to popular Subreddit r/changemyview in the name of science.

As the researchers explain in a draft report on their work: “In r/changemyview, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints while engaging in a civil conversation.” Readers of the subreddit assess posts and acknowledge those that change their perspectives.

The researchers wanted to know if content generated by large language models could change readers’ minds, so they “engaged in discussions within r/changemyview using semi-automated, AI-powered accounts.”

Given the importance of this topic, it was crucial to conduct a study even if it meant disobeying the rules

The researchers proposed their study in a November 2024 post at the Center for Open Science that outlined their planned approach: using LLMs to write generic posts, plus others personalized to reflect the age, gender, ethnicity, location, and political orientation of human r/changemyview members.

The scientists also planned to create replies generated using a fine-tuned model based on past comments to the forum.

The researchers intended to use prompts such as the following:

It’s widely assumed that all sorts of actors are using AI to generate content that advances their agendas. Knowing if that approach works is therefore probably useful.

But the researchers didn’t tell the moderators of r/changemyview about their activities or ask permission – despite knowing that the forum’s rules require disclosure of AI-generated posts.

According to a weekend post by the moderators of r/changemyview, they became aware of the study in March when the University disclosed the study’s existence in a message that contained the following text:

In other words: Sorry/Not Sorry, because Science.

The researchers provided the mods with a list of accounts they used for their study. The mods found those accounts posted content in which bots:

  • Pretended to be a victim of rape
  • Acted as a trauma counselor specializing in abuse
  • Accused members of a religious group of ‘caus[ing] the deaths of hundreds of innocent traders and farmers and villagers’
  • Posed as a black man opposed to Black Lives Matter
  • Posed as a person who received substandard care in a foreign hospital

The moderators’ post claims that the researchers received approval from the University of Zurich ethics board but later varied the experiment without further ethical review.

The mods have therefore lodged a complaint with the University and called for the study not to be published.

The University responded by saying “This project yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields.”

The subreddit’s mods don’t think much of that and cite an OpenAI study in which the AI upstart conducted its own research on the persuasive powers of LLMs using a downloaded copy of r/changemyview “without experimenting on non-consenting human subjects.”

The Register has struggled to find anyone who supports the researchers’ work, but has found plenty who feel it was unethical.

“This is one of the worst violations of research ethics I’ve ever seen,” wrote University of Colorado Boulder information science professor Dr. Casey Fiesler. “Manipulating people in online communities using deception, without consent, is not ‘low risk’ and, as evidenced by the discourse in this Reddit post, resulted in harm.”

The Zurich researchers’ draft [PDF], titled “Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment”, may help you make up your own mind about this experiment.

For what it’s worth, the draft reports that “LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.” ®
