this post was submitted on 22 Apr 2025
221 points (95.1% liked)

[–] [email protected] -2 points 1 day ago* (last edited 1 day ago) (2 children)

Are we really going to play devil's advocate for the idea that avoiding society and asking a language model for life advice is okay?

[–] [email protected] 7 points 1 day ago (1 children)

It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or the other. Are you sure that you yourself are not just fancy auto-complete? We're dealing with shit like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus than we were before.

And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, and would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even "artificial" to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.

[–] [email protected] -2 points 1 day ago (1 children)

The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete, then we'd know we had it 😉

[–] [email protected] 6 points 1 day ago (1 children)

What do you mean? I don't follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?

[–] [email protected] 0 points 1 day ago (1 children)

It's an answer to whether one can be sure they're not just a fancy autocomplete.

More directly: we can't be sure we're not some autocomplete program in a fancy computer, but since we're having an experience, we'd be conscious programs.

[–] [email protected] 7 points 1 day ago* (last edited 1 day ago)

When I say "how can you be sure you're not fancy auto-complete", I'm not talking about being an LLM or even the simulation hypothesis. I'm saying that the way LLMs' neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know that the weights in your own nervous system aren't causing any given stimulus to always produce a specific response along the most weighted pathways? That's how auto-complete works. It's just predicting the most statistically probable response based on the input after it's been filtered through the neural network. In our case it's sensory data instead of a text prompt, but the mechanics remain the same.
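For what it's worth, "predicting the most statistically probable response" can be sketched in a few lines. This is a toy illustration only — the vocabulary and weights below are made up, and a real LLM learns billions of parameters rather than a hand-written table — but the mechanic is the same: input goes in, weights turn it into a probability distribution, and the heaviest pathway wins.

```python
import math

# Made-up "learned" weights: for a given input, how strongly
# each possible continuation is activated. (Illustrative only.)
weights = {
    "the cat": {"sat": 2.0, "ran": 1.2, "slept": 0.8},
}

def softmax(logits):
    # Turn raw weights into a probability distribution.
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def predict(prompt):
    # Greedy decoding: the most weighted pathway always wins,
    # so the same stimulus always produces the same response.
    probs = softmax(weights[prompt])
    return max(probs, key=probs.get)

print(predict("the cat"))  # "sat", the highest-weighted continuation
```

Swap the text prompt for sensory input and the hand-written table for a trained network, and that's the "weighted pathways" picture being described.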

And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness". There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically what I'm saying is that this is a new field and it's still the wild west. Most of these LLMs are still black boxes whose inner workings we're only barely starting to understand, just as we're only beginning to understand our own neurology and consciousness.

[–] [email protected] 12 points 1 day ago (1 children)

No, but thinking about whether it's conscious is an independent thing.

[–] [email protected] -5 points 1 day ago (2 children)

There's no reason to think it's conscious. That's just advertising for [product]. My Product Is Conscious.

[–] [email protected] 4 points 20 hours ago

Dunning-Kruger effect on full display here, everyone.

Take your pictures.