this post was submitted on 01 May 2025
139 points (100.0% liked)

chat


Chat is a text-only community for casual conversation; please keep shitposting to the absolute minimum. This is intended to be a separate space from c/chapotraphouse and the daily megathread. Chat does this by being a long-form community where topics remain from day to day, unlike the megathread, and it is distinct from c/chapotraphouse in that we ask you to engage with this community in a genuine way. Please keep shitposting, bits, and irony to a minimum.

As with all communities, posts need to abide by the code of conduct; additionally, moderators will remove any posts or comments deemed to be inappropriate.

Thank you and happy chatting!

founded 3 years ago

Bitch if I wanted the robot, I’d ask it myself (well, I’d ask the Chinese one)! I’m asking you!

[–] [email protected] 11 points 3 days ago

I just say "I dunno lol" like a normal idiot i-love-not-thinking

[–] [email protected] 14 points 3 days ago (2 children)
[–] [email protected] 7 points 3 days ago

Okay but is it accurate tho

[–] [email protected] 10 points 3 days ago

smug-explain to be clear: that is cum, the verb, not the noun

[–] [email protected] 11 points 4 days ago (1 children)

Hexbear is a leftist online community that originated from users of the banned subreddit r/ChapoTrapHouse. It operates on a modified version of the Lemmy platform, focusing on socialist discussions and content.

[–] [email protected] 14 points 3 days ago

getting head from c-3po I call that golden dome

[–] [email protected] 10 points 4 days ago

According to ChatGPT, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.

[–] [email protected] 3 points 4 days ago* (last edited 3 days ago) (2 children)

It's much better than them pretending they simply know this information. We should encourage people to be open with their sources and not get mad at them when they say something like that. Otherwise they will just ask ChatGPT and not admit it.

[–] [email protected] 6 points 3 days ago (1 children)

They could also just not post, because if I wanted a wrong answer from a robot, I'd have just asked the robot myself.

[–] [email protected] 1 points 3 days ago* (last edited 3 days ago) (1 children)

What bothers me is when people post long outputs from LLMs and expect me to actually read it. Seems rude to me.

LLMs are wrong around half the time. So there is some value in asking it, depending on the question.

[–] [email protected] 1 points 2 days ago (1 children)

That's worse odds than a Magic 8 Ball, which at least sometimes admits it doesn't know

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago)

The Magic 8 Ball has that failure rate with yes/no questions. LLMs can achieve this rate on open ended questions, which is much more impressive IMO.

[–] [email protected] 4 points 3 days ago (1 children)
[–] [email protected] 2 points 3 days ago (1 children)
[–] [email protected] 3 points 3 days ago

I am being a weenie. :P

[–] [email protected] 25 points 4 days ago

I fucking cringe so hard when people are like "i asked chat gpt about what's holding me back in life and it came back with great answers"

Bitch it's fucking astrology. I can come up with shit that vaguely makes sense for most people and you'll think that it's "so true" and "it knows you better than you know yourself"

[–] [email protected] 22 points 4 days ago

This should be an instant ban on any part of the internet. I've seen it on Lemmy before.

[–] [email protected] 29 points 4 days ago (3 children)

My friend pulled out her phone to ask chatGPT how to play a board game last night, and despite all of us yelling at her that chatGPT doesn't know anything, she persisted. Then the dumbass LLM made up some rules because it doesn't know anything.

[–] [email protected] 5 points 3 days ago* (last edited 3 days ago) (1 children)

If you tell people that ChatGPT doesn't know anything, they will only think you're obviously wrong when it gives them apparently correct answers. You should tell people the truth -- the harm in ChatGPT is that it is generally subtly wrong in some way, and often entirely wrong, but it always looks plausibly right.

[–] [email protected] 4 points 3 days ago (1 children)

Yeah, one of the worst aspects of AI is definitely how confidently incorrect it can be. I had this issue using DeepSeek and had to turn on the mode where you can see what it's thinking, and often it will say something like:

"I can't analyze this properly, let's assume this..." then confidently spit out an answer based on that assumption. At this point I feel like AI is good for 100-level CS students that don't want to do their homework, and that's about it.

[–] [email protected] 4 points 3 days ago

Same, I just tried deepseek-R1 on a question I invented as an AI benchmark. (No AI has been able to remotely correctly answer this simple question, though I won't reveal what the question is here obviously.) Anyway, R1 was constantly making wrong assumptions, but also constantly second-guessing itself.

I actually do think the "reasoning" approach has potential though. If LLMs can only come up with right answers half the time, then "reasoning" allows multiple attempts at a right answer. Still, results are unimpressive.

[–] [email protected] 12 points 4 days ago

one of my friends did the same thing and it provided incorrect information about the game's rules lmao

[–] [email protected] 16 points 4 days ago* (last edited 4 days ago) (2 children)

> My friend pulled out her phone to ask chatGPT how to play a board game last night, and despite all of us yelling at her that chatGPT doesn't know anything, she persisted. Then the dumbass LLM made up some rules because it doesn't know anything.

Do you think they took home the lesson that LLMs don't possess knowledge or reasoning?

[–] [email protected] 1 points 3 days ago (1 children)

Why would she take away that lesson? It produced a list of rules for the game that looks approximately right.

[–] [email protected] 3 points 3 days ago (1 children)

Presumably her friends corrected her and showed her why the "generated" rules were incorrect... at least that's what I would expect of my friends

[–] [email protected] 4 points 3 days ago

I hope so. You have to be patient in circumstances like that.

[–] [email protected] 4 points 3 days ago (1 children)

i fucking hope, but she didn't really pay much attention to us lol

[–] [email protected] 4 points 3 days ago

> she didn't really pay much attention to us

Why do people do things like this? What is the point of playing a game with your friends if you won't listen to or pay attention to them?

[–] [email protected] 31 points 4 days ago (2 children)

Heard two kids arguing in the gym the other day: "Even ChatGPT says abortion is murder!"

[–] [email protected] 4 points 3 days ago

This should be a tagline

[–] [email protected] 16 points 4 days ago (1 children)

just today I had a kid tell me they were writing minecraft in HTML using chatgpt. Sillier and far less disturbing than your example, really just normal kid stuff, but a little disappointing

[–] [email protected] 3 points 3 days ago

And whenever they say "i wrote minecraft" they mean a bunch of voxel hills. Where's the villagers, the pirate treasure, the enchanting, the caves?

[–] [email protected] 40 points 4 days ago (1 children)

At my job, people were having a problem and someone asked me, an expert in the field for over 20 years, and I was like, "I don't know, that's a pretty edge case, let me look into it." Five minutes later they were like, "Well, Copilot says this [obviously wrong answer]."

DIPSHIT DO YOU THINK THE WORST AI BOT KNOWS MORE THAN ACTUAL TRAINED EXPERTS?

[–] [email protected] 2 points 3 days ago

That's a great opportunity to explain to the person how the AI produced an obviously wrong and potentially harmful answer, and decrease their trust of AI.

[–] [email protected] 54 points 4 days ago* (last edited 4 days ago) (1 children)

I can't remember who it was, but some PhD in anthropology (I think) was tweeting about his research and some fucking idiot was arguing with him because ChatGPT said something different. Fucking bong-cloud epistemology.

[–] [email protected] 35 points 4 days ago* (last edited 4 days ago)

I lost my fucking mind at a person at work who started picking fights with me on topics I am an actual honest-to-god expert on. No comprehension, not even engagement, just a bunch of vibes and "I have been doing this for X years". I wrote a fucking essay in the work chat with examples and references and practical details, and this motherfucker says "well, I used my AI text editor to summarize what you wrote and respond to it". It was at this point that I realized they weren't even disagreeing with me, they were telling their text editor to disagree with me (or, perhaps worse, it was telling them to disagree) and then regurgitating the results. AI freaks are literally inhuman.
