this post was submitted on 01 May 2025
139 points (100.0% liked)

chat


Chat is a text only community for casual conversation, please keep shitposting to the absolute minimum. This is intended to be a separate space from c/chapotraphouse or the daily megathread. Chat does this by being a long-form community where topics will remain from day to day unlike the megathread, and it is distinct from c/chapotraphouse in that we ask you to engage in this community in a genuine way. Please keep shitposting, bits, and irony to a minimum.

As with all communities posts need to abide by the code of conduct, additionally moderators will remove any posts or comments deemed to be inappropriate.

Thank you and happy chatting!

founded 3 years ago

Bitch if I wanted the robot, I’d ask it myself (well, I’d ask the Chinese one)! I’m asking you!

[–] [email protected] 5 points 3 days ago* (last edited 3 days ago) (1 children)

If you tell people that ChatGPT doesn't know anything, they'll just conclude you're obviously wrong the first time it gives them an apparently correct answer. You should tell people the truth: the harm in ChatGPT is that it is generally subtly wrong in some way, and often entirely wrong, but it always looks plausibly right.

[–] [email protected] 4 points 3 days ago (1 children)

Yeah, that's definitely one of the worst aspects of AI: how confidently incorrect it can be. I had this issue using DeepSeek and had to turn on the mode where you can see what it's thinking, and often it will say something like:

"I can't analyze this properly, let's assume this..." Then it confidently spits out an answer based on that assumption. At this point I feel like AI is good for 100-level CS students that don't want to do their homework, and that's about it.

[–] [email protected] 4 points 3 days ago

Same, I just tried DeepSeek-R1 on a question I invented as an AI benchmark. (No AI has been able to answer this simple question even remotely correctly, though obviously I won't reveal the question here.) Anyway, R1 was constantly making wrong assumptions, but also constantly second-guessing itself.

I actually do think the "reasoning" approach has potential, though. If LLMs can only come up with right answers half the time, then "reasoning" allows multiple attempts at a right answer. Still, so far the results are unimpressive.
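The "multiple attempts" intuition can be made concrete with a back-of-envelope calculation. This is just a sketch under an unrealistic assumption (that each attempt succeeds independently with the same probability, which LLM retries don't actually do, since the model tends to repeat its own mistakes):

```python
# Chance that at least one of k independent attempts is right,
# given a per-attempt success rate p. This is the complement of
# all k attempts failing: 1 - (1 - p)^k.
def at_least_one_right(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

print(at_least_one_right(0.5, 1))  # 0.5
print(at_least_one_right(0.5, 3))  # 0.875
```

So if the independence assumption held, even a coin-flip model would look much better after a few retries; the catch is that errors across retries are correlated, and you still need some way to recognize which attempt was the right one.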