this post was submitted on 03 May 2025
933 points (97.7% liked)

Technology

(page 4) 50 comments
[–] [email protected] 40 points 1 day ago (4 children)

The key result

When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, from the Redditor’s post history), a surprising number of minds do indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters.

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (3 children)

Another isolated case for the endlessly growing list of positive impacts of the GenAI-with-no-accountability trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.

This experiment is also nearly worthless because, as the researchers themselves proved, there's no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier for machines to game, and can be bought for cheap.

[–] [email protected] 20 points 1 day ago (1 children)

Using mainstream social media is literally agreeing to be constantly used as an advertising-optimization research subject.

[–] [email protected] 42 points 1 day ago (1 children)
[–] [email protected] 13 points 1 day ago (1 children)

Yes. Much more than we peasants all realized.

[–] [email protected] 10 points 1 day ago (8 children)

Not sure how everyone hasn't expected that Russia has been doing this the whole time on conservative subreddits...

[–] [email protected] 5 points 1 day ago

Mainly I didn't really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives' self-destructive agenda that it didn't seem necessary.

[–] [email protected] 70 points 1 day ago (2 children)

The reason this is "The Worst Internet-Research Ethics Violation" is that it has exposed what Cambridge Analytica's successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass themselves off as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an "unaffiliated" anonymous third party.

[–] [email protected] 1 points 1 day ago (2 children)

Just a few months ago it was literally Meta itself...

Well, it's Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

[–] [email protected] 57 points 2 days ago* (last edited 1 day ago) (2 children)

Holy shit... This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski... He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break...

And that's how you get the Unabomber folks.

[–] [email protected] 17 points 1 day ago (1 children)

I don't condone what he did in any way, but he was a genius, and they broke his mind.

Listen to The Last Podcast on the Left's episode on him.

A genuine tragedy.

[–] [email protected] 86 points 2 days ago (3 children)

The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than real people to persuade someone to change their mind. AI has become an overpowered tool in the hands of propagandists.

[–] [email protected] 12 points 1 day ago (1 children)

To be fair, I do believe their research measured how convincing the AI was compared to other Reddit commenters, rather than, say, an actual person you'd normally see doing the work for a government propaganda arm, with the training and skill set to effectively distribute propaganda.

Their assessment of how "convincing" it was also seems to have been based on upvotes, which, if I know anything about how people use social media (and especially Reddit), are often given when a comment has only been skimmed, with people scrolling past without reading the whole thing. The bots may not have optimized for convincing people so much as for making the first part of a comment feel upvote-able, while the rest was mostly ignored. I'd want to see more research on this, of course, since it seems like a major flaw in how they assessed outcomes.

This, of course, doesn't discount the fact that AI models are often much cheaper to run than the salaries of human beings.

[–] [email protected] 2 points 2 days ago

I don't remember that subreddit

I remember a meme, but not a whole subreddit

[–] [email protected] 12 points 2 days ago (1 children)

ChangeMyView seems like the sort of place where AI posts could actually be appropriate. If the goal is to hear arguments for an opposing point of view, and the AI can in fact generate more convincing arguments, then it's contributing more than a human would.

[–] [email protected] 30 points 2 days ago (2 children)

It could, if it announced itself as such.

Instead it pretended to be a rape victim and offered "its own experience".

[–] [email protected] 0 points 1 day ago (3 children)

Blaming a language model for lying is like charging a deer with jaywalking.

[–] [email protected] 7 points 1 day ago

Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.

[–] [email protected] 6 points 1 day ago (1 children)

Which, in an ideal world, is why AI generated comments should be labeled.

I always brake when I see a deer at the side of the road.

(Yes people can lie on the Internet. If you funded an army of propagandists to convince people by any means necessary I think you would find it expensive. People generally find lying like this to feel bad. It would take a mental toll. With AI, this looks possible for cheaper.)

[–] [email protected] 2 points 1 day ago (1 children)

I'm glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it's useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn't lie or deceive but also didn't announce itself as an AI?

[–] [email protected] 8 points 2 days ago (3 children)

I think when posting on a forum/message board it's assumed you're talking to other people, so AI should always announce itself as such. That's probably a pipe dream though.

If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people's forum conversations, but there should be a prioritization of actual human experiences there.

[–] [email protected] -2 points 2 days ago (1 children)

What a bunch of fear mongering, anti science idiots.

[–] [email protected] 6 points 2 days ago

I think it's a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard, and with AI it's only getting worse. Avoiding the research because it's embarrassing just prolongs and deepens the problem.

[–] [email protected] 13 points 2 days ago (2 children)

I was unaware that "Internet ethics" was a thing that existed in this multiverse.

[–] [email protected] 3 points 2 days ago

Bad ethics are still ethics.

[–] [email protected] 16 points 2 days ago

No - it's research ethics, as in you get informed consent. It just happens to involve the Internet.

If the research records any sort of human behavior, all participants must know about it ahead of time and agree to take part.

This was a blanket attempt to study human behavior without an IRB, and without any regulators, or anyone other than tech bros, involved.
