Isn't it already happening on Reddit? I mean, the massive number of accounts that were banned in the last few months were all AI.
I disagree, but I think he's close. The future of moderation should be customizable by users, but it needs to be based on human moderation. Let them pick their own moderators, fine-tune that moderation to their liking, and give them an option to review moderation decisions and make adjustments.
It already does, though not in the individualized manner he's describing.
I don't think that's entirely a bad thing. Its current form, where priority one is keeping advertisers happy, is a bad thing, but I'm going to guess everyone reading this has a machine learning algorithm of some sort keeping most of the spam out of their email.
BlueSky's labelers are a step toward the individualized approach. I like them; one of the first things I did there is filter out what one labeler flags as AI-generated images.
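The labeler idea above amounts to a per-user filter applied over labels that third parties attach to posts. A minimal sketch of that mechanism, with illustrative names and data (not Bluesky's actual API):

```python
# Hypothetical sketch of per-user label filtering, loosely modeled on
# the labeler concept: labelers attach labels to posts, and each user
# independently chooses which labels to hide. All names are made up.

def filter_feed(posts, labels, hidden_labels):
    """Return posts whose labels don't intersect the user's hidden set.

    posts: list of post ids, in feed order
    labels: dict mapping post id -> set of labels from subscribed labelers
    hidden_labels: the labels this particular user chose to filter out
    """
    return [p for p in posts if not (labels.get(p, set()) & hidden_labels)]

posts = ["p1", "p2", "p3"]
labels = {"p1": {"ai-generated"}, "p2": set(), "p3": {"spoiler"}}

# Two users, two different moderation outcomes over the same shared feed:
print(filter_feed(posts, labels, {"ai-generated"}))  # ['p2', 'p3']
print(filter_feed(posts, labels, set()))             # ['p1', 'p2', 'p3']
```

The point is that the labeling work is shared, while the decision about what to hide stays with each user.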
No thanks.
And you’d be in charge of the AI, right Alexis? What a cunt.
In my opinion, AI should only cover the worst content: material that harms people just by looking at it. Anything open to debate is a big no; however, there is plenty of content where even viewing it can be disturbing to anyone who sees it.
Yeah, but who decides what content is disturbing? I mean there is CSAM, but the fact that it even exists shows that not everyone is disturbed by it.
This is a fucking wild take
I mean I'm not defending CSAM, just to be clear. I just disagree with any usage of AI that could turn somebody's life upside down based on a false positive. Plus you also get idiots who report things they just don't like.
You’ll never be able to get a definition that covers your question. The world isn’t black and white; it’s gray. Because of that, a line has to be drawn, and yes, it will always be considered arbitrary by some. But a line must be drawn nonetheless.
I think that he's probably correct that this is, in significant part, going to be the future.
I don't think that human moderation is going to entirely vanish, but it's not cheap to pay a ton of humans to do what it would take. A lot of moderation is, well...fairly mechanical. Like, it's probably possible to detect, with reasonable accuracy, that you've got a flamewar on your hands, stuff like that. You'd want to do as much as you can in software.
Human moderators sleep, leave the keyboard, do things like that. Software doesn't.
Also, if you have cheap-enough text classification, you can do it on a per-user basis, so that instead of a global view of the world, different people see different content being filtered and recommended, which I think is what he's proposing:
Ohanian said at the conference that he thinks social media will "eventually get to a place where we get to choose our own algorithm."
Most social media relies on at least some level of recommendations.
This isn't even new for him. The original vision for Reddit, as I recall, was that voting would be used to build a per-user profile to feed a recommendations engine. That never really happened. Instead, one wound up with subreddits (so self-selecting communities are part of it) and global voting on content within them.
I mean, text classifiers aimed at filtering out spam have been around forever for email. It's not even terribly new technology. Some subreddits on Reddit had bots run by moderators that did do some level of automated moderation.
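The spam-filter comparison holds up: the classic technique behind email spam filtering is a Naive Bayes text classifier, and the same machinery could score posts or comments for a moderation queue. A toy version from scratch, with made-up training data for illustration (real filters train on millions of messages):

```python
# Toy Naive Bayes text classifier, the decades-old technique behind
# email spam filters. Training examples and labels here are invented.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns word counts and label counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log posterior, using add-one smoothing."""
    best, best_score = None, -math.inf
    for label in counts:
        vocab = len(counts[label])          # distinct words seen for this label
        n = sum(counts[label].values())     # total words seen for this label
        score = math.log(totals[label] / sum(totals.values()))  # prior
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + vocab + 1))
        if score > best_score:
            best, best_score = label, score
    return best

examples = [
    ("free pills buy now", "spam"),
    ("win money fast free", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(examples)
print(classify("free money now", counts, totals))         # spam
print(classify("team meeting tomorrow", counts, totals))  # ham
```

Swap "spam"/"ham" for "remove"/"keep" and you have the skeleton of the automoderator bots the comment mentions; the hard part was never the classifier, it's the training data and the appeals process.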
Why would anybody even slightly technical ever say this? Has he ever used what passes for AI? I mean, it's a useful tool with some giant caveats, as long as someone is fact-checking it and holding its hand. I use it daily for certain things. But it gets stuff wrong all the time. And not just a little wrong. I mean like bat-shit crazy wrong.
Any company that is trying to use this technology to replace actually intelligent people is going to have a really bad time eventually.
"Hey as a social media platform one of your biggest expenses is moderation. Us guys at Business Insider want to give you an opportunity to tell your investors how you plan on lowering that cost." -Business Insider
"Oh great thanks. Well AI would make the labor cost basically 0 and it's super trendy ATM so that." -Reddit cofounder
Let's be real here: the goal was never good results; it was to get the cost down so low that you no longer care. It probably eliminates some liability too, since it's a machine.
Cool. I think he should piss on the 3rd rail.
🔥
This pukebag is just as bad as Steve. Fuck both of them.
I think I am for this use of AI. Specifically for image moderation, not really community moderation. Yes, it would be subject to whatever bias they want, but they already moderate with a bias.
If they could create this technology, situations like the linked article could be avoided: https://www.cnn.com/2024/12/22/business/facebook-content-moderators-kenya-ptsd-intl/index.html
Edit: To be clear, not to replace existing reddit mods, but to be a supplemental tool.
Hotdog / Not Hotdog
But yeah, having a semantic image filter could be a good first line, of course with human oversight.
And frankly, seeing the mod abuse that goes on in many communities, having AI moderators helping with text moderation would be nice too. At least they'd be more consistent.
Only if the company using the AI is held accountable for what it does/doesn't moderate
Accountability, what is that?
Something for poor people to worry about.
😢
That's what criminal law does to shitty organic entities, while legal entities enjoy unhindered personhood.
Why don't we get AI to moderate Alexis? He stopped being relevant 10 years ago.
No.
It is simple enough as is to confuse AI, or to make it forget or work around its directives. Not least among the concerns would be malicious actors such as Musk censoring our thoughts.
AI is not something humanity should, in any way, be subjugated by or subordinate to.
Ever.