this post was submitted on 02 Mar 2025
165 points (87.0% liked)

Technology

(page 2) 30 comments
[–] Ledericas@lemm.ee 11 points 3 weeks ago (1 children)

Isn't it already happening on Reddit? I mean, the massive numbers of accounts that were banned in the last few months were all AI.


I disagree, but I think he's close. The future of moderation should be customizable by users, but it needs to be based on human moderation. Let them pick their own moderators, fine-tune that moderation to their liking, and give them an option to review moderation decisions and make adjustments.

[–] Zak@lemmy.world 4 points 3 weeks ago

It already does, though not in the individualized manner he's describing.

I don't think that's entirely a bad thing. Its current form, where priority one is keeping advertisers happy, is a bad thing, but I'm going to guess everyone reading this has a machine learning algorithm of some sort keeping most of the spam out of their email.

BlueSky's labelers are a step toward the individualized approach. I like them; one of the first things I did there is filter out what one labeler flags as AI-generated images.
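The labeler approach described above can be sketched as simple per-user filtering: a user subscribes to labelers they trust and picks which labels to hide. This is a purely illustrative sketch with invented data shapes, not the actual AT Protocol labeler API:

```python
# Hypothetical sketch of labeler-style filtering. Each post carries
# (labeler, label) pairs; a user only honors labels from labelers
# they subscribe to, and only hides labels they opted to hide.

def visible_posts(posts, subscribed_labelers, hidden_labels):
    """Return posts with no hidden label from a subscribed labeler."""
    out = []
    for post in posts:
        flagged = any(
            label in hidden_labels
            for labeler, label in post["labels"]
            if labeler in subscribed_labelers
        )
        if not flagged:
            out.append(post)
    return out


posts = [
    {"id": 1, "labels": [("ai-imagery-labeler", "ai-generated")]},
    {"id": 2, "labels": []},
    {"id": 3, "labels": [("some-other-labeler", "ai-generated")]},
]
# Post 3 survives because its label comes from a labeler
# this user never subscribed to.
visible = visible_posts(posts, {"ai-imagery-labeler"}, {"ai-generated"})
```

The key design point is that the filtering decision is made client-side per user, so two users subscribed to different labelers see different feeds from the same underlying data.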

[–] DarkFuture@lemmy.world 19 points 3 weeks ago (1 children)

Lol. I left Reddit because of automated moderation.

[–] FartsWithAnAccent@fedia.io 4 points 3 weeks ago
[–] FrostyCaveman@lemm.ee 11 points 3 weeks ago

And you’d be in charge of the AI, right Alexis? What a cunt.

[–] CaptainBasculin@lemmy.ml 11 points 3 weeks ago (1 children)

In my opinion, AI should cover only the worst content: the kind that harms people just by looking at it. Anything up for debate is a big no; however, there's plenty of content where even seeing it can be disturbing to anyone.

[–] lka1988@lemmy.dbzer0.com 3 points 3 weeks ago (2 children)

Yeah, but who decides what content is disturbing? I mean there is CSAM, but the fact that it even exists shows that not everyone is disturbed by it.

[–] anus@lemmy.world 4 points 3 weeks ago (1 children)

This is a fucking wild take

[–] lka1988@lemmy.dbzer0.com 2 points 3 weeks ago* (last edited 3 weeks ago)

I mean I'm not defending CSAM, just to be clear. I just disagree with any usage of AI that could turn somebody's life upside down based on a false positive. Plus you also get idiots who report things they just don't like.

[–] Zexks@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

You'll never be able to get a definition that covers your question. The world isn't black and white; it's gray, and because of that a line has to be drawn, and yes, it will always be considered arbitrary by some. But a line must be drawn nonetheless.

[–] tal@lemmy.today 0 points 3 weeks ago* (last edited 3 weeks ago)

I think that he's probably correct that this is, in significant part, going to be the future.

I don't think that human moderation is going to entirely vanish, but it's not cheap to pay a ton of humans to do what it would take. A lot of moderation is, well...fairly mechanical. Like, it's probably possible to detect, with reasonable accuracy, that you've got a flamewar on your hands, stuff like that. You'd want to do as much as you can in software.

Human moderators sleep, leave the keyboard, do things like that. Software doesn't.

Also, if you have cheap-enough text classification, you can do it on a per-user basis, so that instead of a global view of the world, different people see different content being filtered and recommended, which I think is what he's proposing:

Ohanian said at the conference that he thinks social media will "eventually get to a place where we get to choose our own algorithm."

Most social media relies on at least some level of recommendations.

This isn't even new for him. The original vision for Reddit, as I recall, was that the voting was going to be used to build a per-user profile to feed a recommendations engine. That never really happened. Instead, one wound up with subreddits (so self-selecting communities are part of it) and a global voting on stuff within that.

I mean, text classifiers aimed at filtering out spam have been around forever for email. It's not even terribly new technology. Some subreddits on Reddit had bots run by moderators that did do some level of automated moderation.
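The cheap per-user text classification described above can be sketched with a from-scratch naive Bayes filter, the same basic technique behind classic email spam filters. This is purely illustrative: the class name and labels are invented, and no platform is claimed to work this way.

```python
import math
from collections import Counter


class PerUserFilter:
    """Tiny per-user naive Bayes text filter (illustrative sketch).

    If classification is this cheap, each user can train their own
    instance from their own votes, so different users see different
    content hidden rather than one global moderation view.
    """

    def __init__(self):
        self.word_counts = {"keep": Counter(), "hide": Counter()}
        self.label_totals = Counter()

    def train(self, text, label):
        # label is "keep" or "hide", e.g. derived from the user's votes
        self.label_totals[label] += 1
        self.word_counts[label].update(text.lower().split())

    def _log_score(self, words, label):
        if self.label_totals[label] == 0:
            return float("-inf")  # no training data for this label yet
        vocab = set(self.word_counts["keep"]) | set(self.word_counts["hide"])
        total = sum(self.word_counts[label].values())
        # class prior
        logp = math.log(self.label_totals[label] / sum(self.label_totals.values()))
        # word likelihoods with add-one (Laplace) smoothing
        for w in words:
            logp += math.log(
                (self.word_counts[label][w] + 1) / (total + len(vocab) + 1)
            )
        return logp

    def classify(self, text):
        words = text.lower().split()
        return max(("keep", "hide"), key=lambda lab: self._log_score(words, lab))
```

A few votes are enough to start steering the filter, which is why this kind of per-user approach is computationally plausible even at social-media scale; a production system would of course need far more robust features than whitespace-split words.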

[–] billwashere@lemmy.world 16 points 3 weeks ago (1 children)

Why would anybody even slightly technical ever say this? Has he ever used what passes for AI? I mean, it's a useful tool with some giant caveats, as long as someone is fact-checking it and holding its hand. I use it daily for certain things. But it gets stuff wrong all the time. And not just a little wrong. I mean bat-shit-crazy wrong.

Any company that is trying to use this technology to replace actually intelligent people is going to have a really bad time eventually.

[–] alcoholic_chipmunk@lemmy.world 6 points 3 weeks ago (1 children)

"Hey as a social media platform one of your biggest expenses is moderation. Us guys at Business Insider want to give you an opportunity to tell your investors how you plan on lowering that cost." -Business Insider

"Oh great thanks. Well AI would make the labor cost basically 0 and it's super trendy ATM so that." -Reddit cofounder

Let's be real here: the goal was never good results; it was to get the cost down so low that you no longer care. Probably eliminates some liability too, since it's a machine.

[–] eran_morad@lemmy.world 99 points 3 weeks ago (2 children)

Cool. I think he should piss on the 3rd rail.

[–] db2@lemmy.world 25 points 3 weeks ago

This pukebag is just as bad as Steve. Fuck both of them.

[–] SoupBrick@pawb.social 7 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

I think I am for this use of AI. Specifically for image moderation, not really community moderation. Yes, it would be subject to whatever bias they want, but they already moderate with a bias.

If they could create this technology, situations like the linked article could be avoided: https://www.cnn.com/2024/12/22/business/facebook-content-moderators-kenya-ptsd-intl/index.html

Edit: To be clear, not to replace existing reddit mods, but to be a supplemental tool.

[–] mp3@lemmy.ca 3 points 3 weeks ago* (last edited 3 weeks ago)

Hotdog / Not Hotdog

But yeah, having a semantic image filter could be a good first line, of course with human oversight.

[–] FaceDeer@fedia.io 0 points 3 weeks ago

And frankly, seeing the mod abuse that goes on in many communities, having AI moderators helping with text moderation would be nice too. At least they'd be more consistent.

[–] regrub@lemmy.world 47 points 3 weeks ago (2 children)

Only if the company using the AI is held accountable for what it does/doesn't moderate

[–] Alexstarfire@lemmy.world 22 points 3 weeks ago (2 children)

Accountability, what is that?

[–] jubilationtcornpone@sh.itjust.works 12 points 3 weeks ago (1 children)

Something for poor people to worry about.

[–] sunzu2@thebrainbin.org 0 points 3 weeks ago

That's what criminal law does to us shitty organic entities, while legal entities enjoy unhindered personhood.

[–] Xanza@lemm.ee 73 points 3 weeks ago (2 children)

Why don't we get AI to moderate Alexis? He stopped being relevant 10 years ago.

[–] masterofn001@lemmy.ca 23 points 3 weeks ago* (last edited 3 weeks ago)

No.

It is simple enough as is to confuse AI, or to make it forget or work around its directives. Not least of the concerns would be malicious actors such as Musk censoring our thoughts.

AI is not something humanity should, in any way, be subjugated by or subordinate to.

Ever.
