this post was submitted on 25 May 2025
29 points (100.0% liked)

askchapo

23013 readers
80 users here now

Ask Hexbear is the place to ask and answer ~~thought-provoking~~ questions.

Rules:

  1. Posts must ask a question.

  2. If the question asked is serious, answer seriously.

  3. Questions where you want to learn more about socialism are allowed, but questions in bad faith are not.

  4. Try [email protected] if you're having questions about regarding moderation, site policy, the site itself, development, volunteering or the mod team.

founded 4 years ago
MODERATORS
 

Prompted by the recent troll post, I've been thinking about AI. Obviously we have our criticisms of both the AI hype manchildren and the AI doom manchildren (see title of the post. This is a Rationalist free post. Looking for it? Leave)

But looking at the AI doom guys with an open mind, sometimes it appear that they make a halfway decent argument that's backed up by real results. This YouTube channel has been talking about the alignment problem for a while, and I think he probably is a bit of a Goodhart's Law merchant (as in, by making a career out of measuring the dangers of AI, his alarmism is structural) so he should be taken with a grain of salt, it does feel pretty concerning that LLMs show inner misalignment and are masking their intentions (to anthropomorphize) under training vs deployment.

Now, I mainly think that these people are just extrapolating out all the problems with dumb LLMs and saying "yeah but if they were AGI it would become a real problem" and while that might be true if taking the premise at face value, the idea that AGI will ever happen is itself pretty questionable. The channel I linked has a video arguing that AGI safety is not a Pascal's mugging, but I'm not convinced.

Thoughts? Does the commercialization of dumb AI make it a threat on a similar scale to hypothetical AGI? Is this all just a huge waste of time to think about?

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 11 points 1 week ago (2 children)

I'm torn between either there being a quantum theory of consciousness (thus requiring quantum effects to be utilized to create true AGI) or it requiring actual full, high resolution simulation of an actual living mind or a very close approximation.

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (2 children)

I'm increasingly of the belief that a major part of our own consciousness is socially contingent, so creating an artificial one can't be done in one fell swoop by one computer getting really smart, it has to be the result of reverse-engineering the entire process of evolution that lead to consciousness as we understand it.

[–] [email protected] 1 points 1 week ago

I think intelligence and consciousness is also quite relational and requires other people brains as part of its processes.

[–] [email protected] 6 points 1 week ago* (last edited 1 week ago) (1 children)

really interesting because my partner and I were doing some worldbuilding and came up with something like this! we had two methodologies: one was for artificial life which involved exactly what you describe; starting with a deep, complex simulation sped up by asteroid-sized computers that start from scratch. after you have an artificial life model the AI basically had to be bound to a human at birth and "grow up" and learn with them, essentially developing in parallel as an artificial sibling while they exist in a symbiotic relationship. this becomes a cultural norm, and ties artificial life to humanity as a familial relation. (this was a far future society where single child households were the norm)

[–] [email protected] 5 points 1 week ago (1 children)
[–] [email protected] 1 points 6 days ago
[–] [email protected] 10 points 1 week ago

Either way, those are real physical processes which could, in principle, be replicated. My general layman's impression is that claims of quantum being involved are more of a last redoubt of dualists than a serious theory though.