The thing that currently exists by the name of "AI" is never gonna become anything like AGI, but it is bad in its own way.
I don't think AGI is fundamentally impossible, because dualism is fake and consciousness is a real material process that exists in the universe, but it's not a thing we are anywhere near understanding how to build.
Further, Roko's basilisk is literally just a dumber Pascal's wager, because it's a Pascal's wager wherein not giving a shit about god is a total defense against divine punishment.
I'm gonna make an anti-basilisk that torments everyone who tried not to be tormented by the basilisk, just to even it out
I think the conception of AGI as a machine is holding back its development, ontologically speaking. Reductionism too. A consciousness is dynamic, and fundamentally part of a dynamic organism. It can't be removed from the context of the broader systems of the body or the world the body acts on. Even its being comes secondary to the activities it undertakes. So I'm not really scared of it existing in the abstract. I'm a lot more afraid of mass production commodifying consciousness itself. Everything that people fear in AGI is a projection of the worst ills of the system we live in. Roko's basilisk is dumb as fuck also
I'm torn between there being a quantum theory of consciousness (thus requiring quantum effects to be harnessed to create true AGI) and it requiring a full, high-resolution simulation of an actual living mind, or a very close approximation.
I'm increasingly of the belief that a major part of our own consciousness is socially contingent, so creating an artificial one can't be done in one fell swoop by one computer getting really smart; it has to be the result of reverse-engineering the entire process of evolution that led to consciousness as we understand it.
I think intelligence and consciousness are also quite relational and require other people's brains as part of their processes.
really interesting, because my partner and I were doing some worldbuilding and came up with something like this! we had two methodologies: one was for artificial life, which involved exactly what you describe: starting with a deep, complex simulation sped up by asteroid-sized computers that start from scratch. after you have an artificial life model, the AI basically had to be bound to a human at birth and "grow up" and learn with them, essentially developing in parallel as an artificial sibling while they exist in a symbiotic relationship. this becomes a cultural norm, and ties artificial life to humanity as a familial relation. (this was a far-future society where single-child households were the norm)
That sounds neat!
Thanks!
Either way, those are real physical processes which could, in principle, be replicated. My general layman's impression is that claims of quantum effects being involved are more of a last redoubt of dualists than a serious theory, though.
Roko's basilisk is the dumbest thing ever.
What do you think about the way that these regular (dumb, not AGI) LLMs are starting to develop behaviors that are a little bit more sinister, though? Like this paper describes.
That's a well-written, readable paper. I can follow it without much background.
The funny thing is, I think there's nearly a 0% chance that it isn't mostly AI generated, given who made it.
lmao
Doesn't really strike me as sinister, just annoying for finetuners. They trained a model from the ground up to not be harmful and it tries its best. Even with further training it still retains some of that. To me this paper shows that a model's "goals" (what you trained it to do initially, however you want to phrase that) are baked into it, and changing that after the fact is hard. Highlights how important early training is, I guess.
Kinda problematic that it means we can't ever really be sure that we're catching problematic behavior in the training stage of any AI system, though, right? Sadly I find it hard to think of good uses of LLMs or other genAI outside of capitalism, but if there were any, the fact that it's possible for it to behave duplicitously like that is a pretty big problem.
(I ain't readin' all that) but what the abstract describes isn't even close to the worst thing I've read about LLMs doing this week. I don't exactly trust the LLM companies' ideas of what is or is not "harmful." Shit like people using LLMs as therapists, or worse, as oracles, is much worse in my opinion, and that doesn't require any "pretend to be evil for training" hijinks.