I know this is like super low-hanging fruit, but Reddit’s singularity forum (AGI hype-optimists on crack) is discussing the current chapter in the OpenAI telenovela and can’t decide whether Ilya Sutskever and Jan Leike leaving is good, because no more lobotomizing the Basilisk, or bad, because no more lobotomizing the Basilisk.
Yep, there’s no scenario here where OpenAI is doing the right thing, if they thought they were the only ones who could save us they wouldn’t dismantle their alignment team, if AI is dangerous, they’re killing us all, if it’s not, they’re just greedy and/or trying to conquer the earth.
vs.
to be honest the whole concept of alignment sounds so fucked up. basically playing god but to create a being that is your lobotomized slave…. I just dont see how it can end well
Of course, we also have the Kurzweil fanboys chiming in:
Our only hope is that we become AGI ourselves. Use the tech to upgrade ourselves.
But don’t worry, there are quiet voices of reason in the comments, too:
Honestly feel like these clowns fabricate the drama in order to over hype themselves
Gee, maybe …
no ,,, they’re understating the drama in order to seem rational & worthy of investment ,, they’re serious that the world is ending ,, unfortunately they think they have more time than they do so they’re not helping very much really
Yeah, never mind. I think I might need to lobotomize myself after reading that thread.