Technology
This is the official technology community of Lemmy.ml, for all news related to the creation and use of technology and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: No low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
There are guardrails in place to avoid providing illegal and hateful information to the end user, and especially to avoid situations like that (well, not all companies have them, but you can expect Google to).
I wonder: 1. How did the LLM hallucinate so badly, given the previous context, that it generated that answer out of the blue? 2. Why did the guardrails fail to block such an obviously undesired output?
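On point 2, output-side guardrails are often just a second screening pass over the generated text, roughly like the sketch below. Everything here is hypothetical: `generate_reply()` stands in for whatever model call a provider actually makes, and the patterns are made-up stand-ins, not any vendor's real filter.

```python
import re

# Hypothetical, heavily simplified output-side guardrail.
# Real systems typically use a trained safety classifier, not regexes,
# but the shape is similar: generate first, then screen the output.
BLOCKED_PATTERNS = [
    re.compile(r"harm yourself", re.IGNORECASE),
    re.compile(r"you are worthless", re.IGNORECASE),
]

def generate_reply(prompt: str) -> str:
    """Placeholder for the actual LLM call."""
    return "...model output..."

def guarded_reply(prompt: str) -> str:
    """Run the model, then refuse if the output matches a known-bad pattern."""
    reply = generate_reply(prompt)
    if any(p.search(reply) for p in BLOCKED_PATTERNS):
        return "Sorry, I can't help with that."
    return reply

print(guarded_reply("some prompt"))
```

Anything the screening step doesn't recognise sails straight through, which is one way an "obvious" bad output can still reach the user.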
They would need general AI to police the LLM AI. Otherwise LLMs will keep serving up crap because their input data set is full of crap.
As someone that works in AI, most of what Lemmy writes about LLMs is hilariously wrong. This, however, is very right, and what amazes me is that every big tech company has made this realisation, yet doesn't give a fuck. Pre-LLMs, we knew that manual patching and intervention wasn't a scalable solution, and we knew that LLMs were prone to hallucinations, but ChatGPT showed companies that people often don't care if the answer is wrong. Fuck it, let's just patch this shit as we go...
But when this shit happens, oh boy, do I feel for the poor engineers and scientists on-call that need to fix this shit regularly...
It's not just that the input data is crap. Mostly, the issue is that an LLM is a glorified autocomplete. The core of the technology is producing grammatically correct sentences. It has no concept of facts or logic. Any impression that it does is just an illusion born of the word probabilities baked in.
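To make the "glorified autocomplete" point concrete, here's a toy sketch of the generation loop. The next-word table is made up for illustration; a real model learns billions of weights over subword tokens instead, but the loop is the same idea and nothing in it checks whether the sentence is true.

```python
import random

# Toy "language model": for each context word, a table of next-word probabilities.
NEXT_WORD = {
    "the":    {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":    {"sat": 0.6, "is": 0.4},
    "dog":    {"barked": 0.7, "is": 0.3},
    "moon":   {"is": 1.0},
    "is":     {"made": 0.5, "green": 0.5},
    "made":   {"of": 1.0},
    "of":     {"cheese": 0.6, "rock": 0.4},
}

def generate(start: str, length: int = 6) -> str:
    """Sample the next word from a probability table, append it, repeat."""
    words = [start]
    for _ in range(length):
        dist = NEXT_WORD.get(words[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # can happily emit "the moon is made of cheese"
```

The output is fluent and confident either way; factual correctness never enters the loop.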
LLMs are a remarkable example of brute-forcing a solution to a problem, but it's this same brute force that makes me doubt it'll ever reach the next level.
And name it "Deckard" for maximum concentrated cringe