EDIT: didn't read that this was a non-Big-Yud post, so sorry, but... you can't really talk about AI safety in the general sense without him. (He dominated the field for 20 years, long before his dumb rationalist cult. Every single alignment guy is either using his assumptions or deliberately departing from them.)
I'm going to talk about the hard AI problem of rogue AI, not the obviously more pressing one of AI being used for bad things, because that can't be stopped anyway.
So, Big Yud is mad and constantly full of nuclear Bay Area brainworm takes, but some of his arguments about why alignment is hard make sense, at least enough to engage with them and try to solve them. Bombing data centres and the like is dumb, though, and won't solve the issue, since most code is already so inefficient.
AI is not going to go FOOM in the way he suggested, basically because LLMs exist, which his arguments assumed were impossible. He never considered that we would have time to practise alignment on extremely limited AI at near-human levels, and that kicks the core of his doom argument away.
I think UlyssesT and I once agreed that he is the dumbest smart guy alive and will probably become a leftist after trying and failing at every tech-nerd principle. If someone could keep him the hell away from the Bay Area, he might normalise and stop writing weird shit.
For what it's worth, he also thinks Transformers have no chance at AGI, at least on their own, and his thoughts on the dumb approaches used by AI companies are sound. He is also arguing in good faith and has probably thought about this more than anyone else. (Nobody in the Yud cult except a few weirdos ever took Roko's Basilisk seriously. Yud deleted the post mostly because he saw that people would get weird about it, a rare example of him being exactly correct.)
As for everyone else... Altman doesn't care and is using it for market capture. Anthropic probably does care, but is more or less forced down the same path of ineffective safety that captures the market.
Ilya definitely cares, but for all the wrong reasons: he's massively pro-Israel, so he'll never achieve a consistently aligned AI anyway, and I wouldn't want him working on this stuff.
But ultimately, hard alignment is a secondary concern next to dumb corporate HR managers trying to AI the compliance team.
I forgive you. The traditions of all Yud generations weigh like rational nightmares on the alignments of the sneerers.