This post was submitted on 22 Apr 2025
249 points (95.3% liked)

[–] [email protected] 32 points 1 day ago (3 children)

This is an angle I've never considered before, with regard to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it's a supergodbrain, we start putting way too much faith in its flawed output, and it's our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.

The call is coming from inside the ~~house~~ mechanical Turk!

[–] [email protected] 2 points 1 day ago

That's the intended effect. People with real power think this way: "where it does work, it'll work and not bother us with too much initiative and change, and where it doesn't work, we know exactly what to do, so everything is covered." Checks and balances and feedback loops and overrides and fallbacks be damned.

Humans are apes. When an ape gets to rule an empire, it remains an ape, and power kills its ability to judge.

[–] [email protected] -1 points 1 day ago

I mean, it's like none of you people, when criticizing AI, ever consider how often humans are wrong.

How often have you looked for information from humans and been fed falsehoods as though they were true? It happens so much that we've just gotten used to filtering out the vast majority of human responses, because most of them are incorrect or unrelated to the subject.

[–] [email protected] 8 points 1 day ago

They call it hallucinations like it's a cute brain fart, and "agentic" means they're using the output of one model as the input to another, which has access to things and can make decisions and actually fuck things up. It's a complete fucking shit show. But humans are expensive, so replacing them makes line go up.
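
For anyone who hasn't seen the pattern being described, here's a minimal, purely illustrative sketch of that "output of one becomes the input of another" loop. Everything in it (`call_llm`, the tool names, the JSON action format) is a made-up stand-in, not any vendor's actual API; the point is just that the model's text gets parsed and executed with real side effects, so a hallucinated tool name or argument flows straight into the part that can actually break things.

```python
# Illustrative sketch of an "agentic" loop: model output is parsed as an
# action, the action is executed against real tools, and the result is fed
# back in as the next input. All names here are hypothetical stand-ins.

import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a JSON "action" as text.
    # A hallucinated tool name or argument here would be executed as-is below.
    return json.dumps({"tool": "delete_file", "args": {"path": "/tmp/report.txt"}})

# Hypothetical tools the agent has access to ("things it can fuck up").
TOOLS = {
    "delete_file": lambda args: f"deleted {args['path']}",
    "send_email": lambda args: f"emailed {args['to']}",
}

def run_agent(task: str, max_steps: int = 3) -> None:
    context = task
    for _ in range(max_steps):
        raw = call_llm(context)           # one model's output...
        action = json.loads(raw)          # ...trusted and parsed as an action
        tool = TOOLS.get(action["tool"])
        if tool is None:
            # Hallucinated tool name: just gets appended and retried.
            context += f"\nUnknown tool: {action['tool']}"
            continue
        result = tool(action["args"])     # real side effect happens here
        print(f"step result: {result}")
        context += f"\nResult: {result}"  # fed back as the next input

if __name__ == "__main__":
    run_agent("Tidy up the report files.")
```

Real agent frameworks wrap this loop in retries and guardrails, but the shape is the same: text in, action out, result fed back in, with nothing in the middle that actually knows whether any of it is true.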