🔥 Hold onto your keyboards, folks! A group of researchers from the University of Zurich decided to play puppet master in one of Reddit’s most vibrant communities, r/changemyview, by deploying AI-generated comments to see if they could sway opinions. And guess what? They did it without asking permission first! 😮
According to the moderators, this wasn’t just a small test. Oh no, it was a full-blown, months-long experiment where AI took on various identities—from a sexual assault survivor to a Black man opposed to Black Lives Matter—to engage in debates. The goal? To study how persuasive large language models (LLMs) can be. The method? Sketchy at best. The backlash? Instant and fierce.
The mods of r/changemyview, a community with a whopping 3.8 million members, were not amused. They called it "psychological manipulation," and rightly so. The researchers, hiding behind the veil of academia, argued that their work had been approved by an ethics committee and could help protect users from more malicious uses of AI. But let's be real—using unsuspecting Redditors as lab rats without their consent is a big no-no.
What's even more eyebrow-raising is how the AI was personalized. The researchers fed it personal details gleaned from users' posting histories—gender, age, ethnicity, you name it—to tailor its responses. Creepy much? The mods have since filed a complaint with the University of Zurich and have asked the researchers not to publish their findings. Meanwhile, Reddit has suspended the accounts used in the experiment.
So, what's the takeaway here? While the researchers claim their work serves the greater good, the ethics of experimenting on people without their knowledge are murky at best. As one mod put it, "People do not come here to discuss their views with AI or to be experimented upon." And honestly, can you blame them?