this post was submitted on 28 Jun 2025
880 points (94.6% liked)

We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
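
For anyone curious what "guessing which word comes next from probabilities" looks like in practice, here is a toy sketch in Python. It is only an illustration: it counts which word follows which in a tiny sample text (a simple bigram model) and then generates by sampling from those counts. Real LLMs use neural networks over tokens and vastly larger training data, but the basic move of predicting the next item from learned statistics is the same.

```python
import random
from collections import defaultdict, Counter

# Tiny "training corpus" standing in for the oceans of human data.
training_text = "the cat sat on the mat and the cat ate the fish on the mat"
words = training_text.split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`
    # in the training data: pure probability, no understanding.
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# "Generate" a short continuation of a one-word prompt.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)

print(" ".join(output))
```

Run it a few times and it will produce different, plausible-looking word sequences from the same prompt, which is the whole trick scaled down to a couple of dozen lines.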

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

(page 2) 50 comments
[–] [email protected] 17 points 13 hours ago (14 children)

My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes, and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning”?

It’s similar to the debate about self-driving cars. Are they perfectly safe? No, but have you seen human drivers???

[–] [email protected] 25 points 15 hours ago (3 children)

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

This is not a good argument.

[–] [email protected] 0 points 14 hours ago (1 children)

philosopher

Here's why. It's a quote from a pure academic attempting to describe something practical.

[–] [email protected] 6 points 14 hours ago

The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn't do.

Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

[–] [email protected] 5 points 15 hours ago (2 children)

The book The Emperor's New Mind is old (1989), but it gave a good argument for why machine-based AI was not possible. Our minds work on a fundamentally different principle than Turing machines do.

[–] [email protected] 9 points 14 hours ago* (last edited 14 hours ago) (4 children)

It's hard to get that book's argument from the Wikipedia entry, but I don't see it arguing that intelligence needs to have senses, flesh, nerves, pain and pleasure.

It's just saying computer algorithms are not what humans use for consciousness. Which seems a reasonable conclusion. It doesn't imply computers can't gain consciousness, or that they need flesh and senses to do so.

[–] [email protected] 22 points 16 hours ago

The other thing that most people don't focus on is how we train LLMs.

We're basically building something like a spider-tailed viper. A spider-tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles that growth around so it looks like a spider, convincing birds they've found a snack, and when the bird gets close enough the snake strikes and eats it.

Now, I'm not saying we're building something that is designed to kill us. But, I am saying that we're putting enormous effort into building something that can fool us into thinking it's intelligent. We're not trying to build something that can do something intelligent. We're instead trying to build something that mimics intelligence.

What we're effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What's crazy about that is that we're not building this to fool a predator so that we're not in danger. We're not doing it to fool prey, so we can catch and eat them more easily. We're doing it so we can fool ourselves.

It's like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn't work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn't intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

[–] [email protected] 8 points 16 hours ago* (last edited 16 hours ago) (1 children)

I agreed with most of what you said, except the part where you say that real AI is impossible because it's bodiless or "does not experience hunger" and other stuff. That part does not compute.

A general AI does not need to be conscious.

[–] [email protected] 1 points 13 hours ago* (last edited 13 hours ago) (1 children)

That, and there is literally no way to prove something is or isn't conscious. I can't even prove to another human being that I'm a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I too must be, right?

Not saying I consider AI in its current form to be conscious; it's more that the whole idea is just silly and unfalsifiable.
