this post was submitted on 12 Jun 2024
237 points (99.2% liked)

Maven, a new social network backed by OpenAI's Sam Altman, found itself at the center of a controversy today when it imported a huge number of posts and profiles from the Fediverse and then ran AI analysis to alter the content.

[–] [email protected] 145 points 5 months ago (32 children)

The wildest part is that he's surprised that Mastodon peeps would react negatively to their posts being scraped without consent or even notification and fed into an AI model. Like, are you for real dude? Have you spent more than 4 seconds on Mastodon and noticed their (our?) general attitude towards AI? Come the hell on...

[–] [email protected] 10 points 5 months ago (6 children)

It sounds like they weren't "being fed into an AI model" in the sense of being used as training material; they were just being evaluated by an AI model. However...

Have you spent more than 4 seconds on Mastodon and noticed their (our?) general attitude towards AI?

Yeah, the general attitude of wild witch-hunts and instant zero-to-11 rage at the slightest mention of it. Doesn't matter what you're actually doing with AI; the moment the mob thinks it scents blood, the avalanche is rolling.

It sounds like Maven wants to play nice, but if the "general attitude" means that playing nice is impossible, why should they even bother to try?

[–] [email protected] 2 points 5 months ago (3 children)

Yeah, the general attitude of wild witch-hunts and instant zero-to-11 rage at the slightest mention of it. Doesn’t matter what you’re actually doing with AI; the moment the mob thinks it scents blood, the avalanche is rolling.

This wasn't always the case. A lot of NLP research in the 2010s used scraped social media posts, and people never had a problem with that (or at least the outrage wasn't visible back then). The problem now is that our content is being used to create an AI product with zero consent from the end user.

Source: My research colleagues used to work in NLP

[–] [email protected] 1 points 5 months ago

Consent isn't legally required if it's fair use. Whether it's fair use remains to be ruled on by the courts.
