this post was submitted on 17 Mar 2025
58 points (96.8% liked)

top 12 comments
[–] [email protected] 3 points 3 days ago

The distinction between hallucination and reality doesn't really exist for humans either. To discover a divergence between sensor data and perception, a human needs to somehow find out the truth another way. The only way to find anything out is via these sensors (using your ears to listen to someone telling you that your eyes are deceiving you etc.).

[–] [email protected] 4 points 3 days ago (3 children)

I couldn't read this article. It is badly in need of a spelling and grammar check.

[–] [email protected] 0 points 2 days ago

English is not the author's native language, so maybe cut him some slack?

[–] [email protected] 0 points 2 days ago

Didn't know spelling and grammar checks had needs

[–] [email protected] 3 points 3 days ago

Well, at least you know it's not AI-generated.

[–] [email protected] 24 points 3 days ago (2 children)

That's something people really have to get into their heads: an "answer" from an LLM is just a series of high-probability tokens. It's only us humans who read reason and value into it. From the system's standpoint it's just numbers without any meaning whatsoever. And no amount of massaging will change that. LLMs are about as "intelligent" as a fancy database query.
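
To make that concrete, here's a toy sketch of what "a series of high-probability tokens" means. The vocabulary and the scores are made up for illustration; a real model works over tens of thousands of tokens, but the principle is the same:

```python
import math
import random

# Hypothetical scores ("logits") a model might assign to candidate
# next tokens after the prompt "The capital of France is".
logits = {"Paris": 9.1, "Lyon": 4.2, "a": 2.7, "Berlin": 2.1}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(probs)       # {'Paris': ~0.99, 'Lyon': ~0.007, ...}
print(next_token)  # almost always 'Paris', but nothing guarantees it
```

The system never checks whether "Paris" is true; it just emits a token that scores high. That's the whole trick.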

[–] [email protected] 0 points 1 day ago

> And no amount of massaging will change that.

I disagree!

[–] [email protected] 9 points 3 days ago (1 children)

I use it for basic Python questions, but it gets even basic stuff wrong. The reframing can sometimes help me see new options when I get in a rut, but I'm not putting that code into production.
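
The kind of "basic stuff" I mean, as a hypothetical example (not quoting any actual model output): code that looks fine at a glance but trips over a classic Python pitfall, like a mutable default argument:

```python
# Buggy version: the default list is created once, at function
# definition time, so every call without `items` shares the same list.
def append_item(item, items=[]):
    items.append(item)
    return items

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  <- state leaked between calls

# The usual fix: use None as a sentinel and build a fresh list per call.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

It runs, it often even passes a quick test, and it's still wrong.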

[–] [email protected] 7 points 3 days ago (1 children)

I find myself asking an AI things and getting an answer that makes me go “what the actual fuck, why would you do this when you SHOULD do it this other way”

Which is the best way it’s helped me.

Making me realize I know what I’m doing already.

[–] [email protected] 3 points 3 days ago

Yup, sounds like my average experience

[–] [email protected] 19 points 3 days ago (1 children)

I think not many people are aware of that. No matter how well you build systems on this type of AI, they still don't actually know anything. Now, maybe they're useful, maybe not, but the awareness that everything is actually just made up by statistics is lacking from people's minds.

[–] [email protected] 11 points 3 days ago

This is something I've been saying for a while now, because it really needs to be understood.

LLMs do not "sometimes hallucinate." Everything they produce is a hallucination. They are machines for creating hallucinations. The goal is that the hallucination will - through some careful application of statistics - align with reality.

But there's literally no feasible way that anyone has yet found to guarantee that.

LLMs were designed to effectively impersonate human interaction. They're actually pretty good at that. They fake intelligence so well that it becomes really easy to convince people that they are in fact intelligent. As a model for passing the Turing test they're brilliant, but what they've taught us is that the Turing test is a terrible model for gauging the advancement of machine intelligence. Turns out, effectively reproducing the results a stupid human can achieve isn't all that useful for the most part.