Yes they do.
Oh, I think you misunderstand what hallucinations mean in this context.
AIs (LLMs) are trained on a very, very large dataset of text. That's what LLM stands for: Large Language Model.
Despite how large that training data is, you can ask it about things outside the training set, and it will answer just as confidently as it does about things inside its dataset. Ask it to summarize a book that doesn't exist, for example, and it will happily make one up.
Since those answers didn't come from anywhere in the training data, they're considered hallucinations.