this post was submitted on 28 Jun 2025
961 points (94.7% liked)


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is literally just guessing which token (a word or fragment of a word) comes next in the sequence, based on the statistics of the data it’s been trained on.
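To make "guessing which token comes next" concrete, here is a toy sketch of the sampling loop in Python. The bigram table and its probabilities are invented purely for illustration; a real LLM learns billions of weights over a huge token vocabulary rather than using a lookup table:

```python
import random

# Toy stand-in for a language model: a table of invented bigram
# probabilities. A real model learns these from training data.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 1.0},
    ("cat", "ran"): {"away": 1.0},
    ("cat", "meowed"): {"loudly": 1.0},
    ("sat", "on"): {"the": 1.0},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def next_token(context):
    """Sample the next token from the (toy) learned distribution."""
    dist = toy_model.get(context)
    if dist is None:
        return None  # the toy table has no continuation for this context
    candidates, weights = zip(*dist.items())
    return random.choices(candidates, weights=weights)[0]

tokens = ["the", "cat"]
while (tok := next_token(tuple(tokens[-2:]))) is not None:
    tokens.append(tok)
print(" ".join(tokens))  # e.g. "the cat sat on the mat"
```

The point of the illustration: nothing in the loop knows what a cat is; it only knows which continuations were frequent in the data.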

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the question of how physical processes in the body give rise to subjective experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with representations of bodily signals (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

50 comments
[–] [email protected] 21 points 1 week ago

The idea that RAG "extends their memory" is also complete bullshit. We literally just finally built a working search engine, but instead of using a nice interface for it we only let chatbots use it.
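For anyone unfamiliar with the acronym: RAG (retrieval-augmented generation) really is just a search step bolted onto the prompt. A rough sketch of the pattern, where `search_index` and `llm` are hypothetical stand-ins for a vector store and a model client, not any particular library's API:

```python
def retrieval_augmented_answer(question, search_index, llm):
    """RAG in miniature: run a search, then paste the results into the
    prompt. `search_index` and `llm` are hypothetical placeholders."""
    hits = search_index.search(question, top_k=3)    # the search-engine step
    context = "\n\n".join(doc.text for doc in hits)  # concatenate the hits
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.complete(prompt)  # the chatbot paraphrases the search results
```

Nothing here extends the model's memory; the model just reads whatever the search step happened to return.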

[–] [email protected] 16 points 1 week ago* (last edited 1 week ago) (6 children)

I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

Edit: I use it to give me ideas that I then test out solo.

[–] [email protected] 30 points 1 week ago (2 children)

This is very interesting... because the general observation is that AI is convincing to non-experts in whatever field it's talking about. So in your specific case, you are actually saying that you aren't an expert on yourself, and therefore the AI's assessment is convincing to you. Not trying to upset you; it's genuinely fascinating to see that theory hold here as well.

[–] [email protected] 4 points 1 week ago (1 children)

Are we twins? I've been doing the exact same thing for around a year now, and I've also found it pretty helpful.

[–] [email protected] 14 points 1 week ago (1 children)

Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal, trying to get the best rate, and it spat out a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and now it says I'm paid in full for the six-month period. It's been days now with no follow-up... I'm pretty sure AI snuck that one through for me.
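Most likely that NaN was a garden-variety web bug rather than the model doing anyone a favour: in floating-point arithmetic, NaN compares unequal to everything, including itself, so a naive range check waves it through. A minimal sketch of how that can happen, with hypothetical validation logic rather than the insurer's actual code:

```python
import math

premium = float("nan")  # e.g. the result of a failed calculation upstream

# A naive range check passes, because every comparison with NaN is False.
if premium < 0 or premium > 100_000:
    raise ValueError("implausible premium")

print(f"Renewal amount: ${premium}")  # -> Renewal amount: $nan

# Catching it requires an explicit test:
assert math.isnan(premium)
```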

[–] [email protected] 15 points 1 week ago (4 children)

Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

[–] [email protected] 23 points 1 week ago (2 children)

This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

[–] [email protected] 19 points 1 week ago (1 children)

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

[–] [email protected] 20 points 1 week ago* (last edited 1 week ago) (12 children)

"…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
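For anyone who wants to check which character their client actually posted, the two are easy to tell apart programmatically:

```python
ellipsis = "…"   # U+2026 HORIZONTAL ELLIPSIS: one character
dots = "..."     # three full stops: three characters

print(len(ellipsis), hex(ord(ellipsis)))       # 1 0x2026
print(len(dots), [hex(ord(c)) for c in dots])  # 3 ['0x2e', '0x2e', '0x2e']
```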

[–] [email protected] 4 points 1 week ago

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

Not on my phone it didn't. It looks as you intended it.

[–] [email protected] 0 points 1 week ago (2 children)

I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts: https://ai-2027.com/

[–] [email protected] 4 points 1 week ago (1 children)

Yeah, they probably wouldn't think like humans or animals, but in some sense could be considered "conscious" (which isn't well-defined anyway). You could speculate that genAI could hide messages in its output; those would make their way onto the Internet, and then a new version of the model would be trained on them.

This argument seems weak to me:

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

You can emulate sensory inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition, though temporary and limited by context as it's currently done.
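To make the "emulated hormone system" idea concrete, here is a toy sketch (entirely invented, not a claim about how any lab actually does this): a scalar internal state that spikes on threatening input, decays over time, and biases the agent's next action.

```python
from dataclasses import dataclass

@dataclass
class ToyAgent:
    """Invented illustration of an 'emulated hormone': a scalar state
    that rises under threat, decays toward baseline, and biases behaviour."""
    stress: float = 0.0

    def perceive(self, threat_level: float) -> None:
        # Decay toward baseline, then spike on threatening input.
        self.stress = 0.8 * self.stress + threat_level

    def act(self) -> str:
        # Internal state modulates the policy, loosely like cortisol might.
        return "flee" if self.stress > 1.0 else "explore"

agent = ToyAgent()
for threat in [0.0, 0.2, 1.5, 0.0, 0.0]:
    agent.perceive(threat)
    print(f"stress={agent.stress:.2f} -> {agent.act()}")
```

Whether stacking such loops ever amounts to actually feeling anything is exactly the hard problem the article points at.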

I'm not in the camp where I think it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or hundreds of years. I'm not convinced we are near the point where AI can significantly speed up AI research the way that link suggests; if it could, that would likely result in a "singularity-like" scenario.

I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

[–] [email protected] 3 points 1 week ago (1 children)

Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

You don’t think that’s already happening, considering the ties between Sam Altman and Peter Thiel?

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Ask AI:

Did you mean: irresponsible

AI Overview: The term "unresponsible" is not a standard English word. The correct word to use when describing someone who does not take responsibility is irresponsible.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (3 children)

In that case let's stop calling it AI, because it isn't, and use it's correct abbreviation: LLM.

[–] [email protected] 5 points 1 week ago (2 children)
[–] [email protected] 0 points 1 week ago (25 children)

My autocorrect doesn't care.

[–] [email protected] 6 points 1 week ago (4 children)
[–] [email protected] 0 points 1 week ago (5 children)

Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

I wonder how different it'll be in 500 years.

[–] [email protected] 4 points 1 week ago (4 children)

It's called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can't write for beans.

[–] [email protected] 3 points 1 week ago (4 children)

Would you rather use the same contraction for both? Because "its" for "it is" is an even worse break from proper grammar IMO.

[–] [email protected] 5 points 1 week ago

It’s “its”, not “it’s”, unless you mean “it is”, in which case it is “it’s”.
