this post was submitted on 12 Jun 2024
393 points (95.4% liked)

[–] [email protected] 5 points 5 months ago

That's like saying you can't be 100% sure you never have fake news at the top of search query results. It's just a fact.

[–] [email protected] 26 points 5 months ago

Of course they can't. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data the LLM ingests is weird, and that's what produces hallucinations.
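As a toy illustration of that garbage-in, garbage-out point, here's a minimal sketch (a made-up bigram model, nothing like how a real LLM is actually built) showing that a model trained on text containing weird claims will happily reproduce or remix them:

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns word-to-word transition
# counts from whatever text it is fed, with no notion of truth.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "  # a weird claim in the training data
    "the sun is made of plasma ."
).split()

transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, max_words=6):
    out = [start]
    for _ in range(max_words):
        nxt = random.choice(transitions.get(out[-1], ["."]))
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

random.seed(1)
# May happily emit "the moon is made of cheese ." or blend the
# sentences into something no source ever said.
print(generate("the"))
```

A real LLM is vastly more sophisticated, but the core issue is the same: it models the statistics of its training text, not the truth of it.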

[–] [email protected] 10 points 5 months ago (1 children)

I don't know why they're trying to shove AI down our throats. They need to take their time and let it evolve.

[–] [email protected] 6 points 5 months ago* (last edited 5 months ago)

Because it's all corporations, and a huge part of the corporate capitalist system is infinite growth. They want returns, BIG ones. When? Right the fuck now. How do you get that? Well, AI could turn the world upside down like the dot-com boom did, so they dump tons of money into AI. So... is the AI done? Oh no no no, we're at machine learning; actual AI is pretty far down the road. What, we're firing the AI department heads and shipping this machine learning software as 100% all-the-way-done AI?

It's the same reason Section 8 housing and low-cost housing don't work under corporate capitalism. It's profitable to take government money, and it's profitable to run low-rent apartments. That's not the problem; the problem is THEY NEED THE GROWTH NOW NOW NOW!!!! If you own a condo building with high-wage renters and add another $100 to the rent every year, you get more profit faster. No one wants to invest in a 10% return over 5 years if they can invest in 12% over 4 years. So no one ever invests in low-rent or Section 8 housing.

[–] [email protected] 20 points 5 months ago

Seeing these systems just making shit up when they're not sure on the answer is probably the closest they'll ever come to human behaviour.

We've invented the virtual politician.

[–] [email protected] 7 points 5 months ago

Tim Cook... go take your meds and watch The Price Is Right.

[–] [email protected] 6 points 5 months ago

Being 100 percent sure would itself be a hallucination. He probably meant to say that he's less than 80 percent sure.

[–] [email protected] 41 points 5 months ago (2 children)

Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and only had your brain fed the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t because you’d lack any epistemic tools for confirming your knowledge.

That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.

[–] [email protected] 9 points 5 months ago* (last edited 5 months ago)

First of all I agree with your point that it is all hallucination.

However, I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine things like who is trustworthy. In reality we are brains in vats; we just have a fairly common interface that makes consensus reality possible.

The thing that really stops LLMs from being able to make judgements about what is true and what is not is that they cannot make any judgements whatsoever. Judging what is true is a deeply contextual and meaning-rich question. LLMs cannot understand context.

I think the moment an AI can understand context is the moment it begins to gain true sentience, because a capacity for understanding context is definitionally unbounded. Context means searching beyond the current information for further information. I think this context barrier is fundamental, and we won't get truth-judging machines until we get actually-thinking machines.

[–] [email protected] 10 points 5 months ago

Yeah, I try to make this point as often as I can. The notion that AI only hallucinates when it gives wrong answers really misleads people about how these programs actually work. It couches the problem in terms of human failings rather than getting at the underlying flaw in the whole concept.

LLMs are a really interesting area of research, but they never should have made it out of the lab. The fact that they did is purely because all science operates in the service of profit now. Imagine if OpenAI were able to rely on government funding instead of having to find a product to sell.

[–] [email protected] 41 points 5 months ago (1 children)

I'm 100% sure he can't. Or at least, not from LLMs specifically. I'm not an expert so feel free to ignore my opinion but from what I've read, "hallucinations" are a feature of the way LLMs work.
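For what it's worth, here's a minimal sketch of one reason that reading points to (the vocabulary and logit values below are made up for illustration): generation samples a token from a probability distribution, and a token comes out whether or not the model has any confident basis for it.

```python
import math
import random

# A softmax turns raw scores (logits) into a probability distribution.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "Rome", "Atlantis"]
# Hypothetical logits for a question the model has no real answer to:
# nearly flat, i.e. the model is maximally unsure.
logits = [0.10, 0.00, 0.05, 0.02]
probs = softmax(logits)

random.seed(0)
# A token is drawn regardless of how uncertain the distribution is,
# so a confident-sounding answer is emitted either way.
answer = random.choices(vocab, weights=probs)[0]
print(answer)
```

There's no separate "I don't know" pathway in this loop; declining to answer is itself just another string the model may or may not happen to produce.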

[–] [email protected] 9 points 5 months ago

One can have an expert system assisted by ML for classification. But that's not an LLM.
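To sketch what that might look like (the function and rule names here are hypothetical, purely for illustration): the ML model only suggests a classification, and hand-written rules make the final call, so every action is auditable in a way LLM output isn't.

```python
def ml_classifier(ticket: str) -> str:
    # Stand-in for a trained classifier; a real system would call an ML model here.
    return "billing" if "invoice" in ticket.lower() else "technical"

# Hand-authored expert-system rules: the fixed, inspectable part.
RULES = {
    "billing": "Route to finance team and attach account history.",
    "technical": "Route to tier-1 support and request error logs.",
}

def expert_system(ticket: str) -> str:
    label = ml_classifier(ticket)  # ML assists with classification only
    return RULES[label]            # a fixed, auditable rule decides the action

print(expert_system("My invoice amount is wrong"))
```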
