this post was submitted on 01 May 2025
you are viewing a single comment's thread
It's much better than them pretending they simply know this information. We should encourage people to be open with their sources and not get mad at them when they say something like that. Otherwise they will just ask ChatGPT and not admit it.
They could also just not post, because if I wanted a wrong answer from a robot, I'd have just asked the robot myself.
What bothers me is when people post long outputs from LLMs and expect me to actually read it. Seems rude to me.
LLMs are wrong around half the time, so there is some value in asking them, depending on the question.
That's worse odds than a Magic 8 Ball, which at least sometimes admits it doesn't know.
The Magic 8 Ball has that failure rate with yes/no questions. LLMs can achieve this rate on open ended questions, which is much more impressive IMO.
Source?
What do you mean?
I am being a weenie. :P