this post was submitted on 06 Aug 2024
I'm asking for a source specifically on how commanding an LLM to not hallucinate makes it provide better output.
That's not how citations work. You are making the extraordinary claim that LLMs somehow respond better to "do not hallucinate". I simply don't believe you, and there is no evidence that you're correct, aside from your suggestion that maybe the entirety of reddit had "do not hallucinate" prepended when OpenAI scraped it.
Yeah, that's about what I expected. If you re-read my comments, you might notice that I never claimed that "commanding an LLM to not hallucinate makes it provide better output" — but I don't think you're here to have any kind of honest exchange on the topic.
I'll just leave you with one thought: you're making a very specific claim ("doing XYZ can't have a positive effect!"), and I'm simply offering a simple and obvious counter-example. You should either provide a source for your claim or explain why my counter-example is invalid. But again, that would require you to have any interest in actual discussion.
I didn't make an extraordinary claim; you did. You're claiming that the influence of "do not hallucinate" somehow fundamentally differs from the influence of any other phrase (extraordinary). I'm claiming that no, the influence is the same as any other prompt text (ordinary).