[–] [email protected] 4 points 1 week ago* (last edited 1 week ago)

Coding isn't special, you're right, but it is a thinking task, and LLMs (including reasoning models) don't know how to think. LLMs seem knowledgeable because they memorized a lot of the data and patterns in their training data, but they never learned to think from it. That's why LLMs can't replace humans.

That certainly doesn't mean software can't be smarter than humans. It will be, it's just a matter of time, but to get there we'll likely need AGI first.

To see for yourself that LLMs can't think, try playing ASCII tic-tac-toe (XXO) against any of those models. They're completely lost, even though they "saw" the entire Wikipedia article on tic-tac-toe during training: that it's a solved game, the different strategies, how to always force at least a draw. Still they can't do it. They lose most games against my four-year-old niece, and she doesn't even play well, let alone perfectly.
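
For scale: a perfect player fits in about 30 lines. Here's a minimal negamax sketch in Python (the function names and board encoding are mine, just for illustration, not anything the models use) that never loses:

```python
# Minimal sketch: tic-tac-toe solved by plain negamax with memoization.
# A perfect player built this way can never lose, only win or draw.

from functools import lru_cache

# All eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Return (score, best_move) for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        # If the game is already decided, the previous mover won.
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # full board, draw
    opponent = "O" if player == "X" else "X"
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)  # score from opponent's view
        if -score > best[0]:
            best = (-score, m)
    return best

# Perfect play from the empty board always evaluates to a draw:
print(minimax(" " * 9, "X")[0])  # -> 0
```

Running it confirms what the Wikipedia article says: with perfect play the empty board evaluates to 0, a guaranteed draw. That's exactly the bar these models keep failing to clear.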

I wouldn't trust anything that's claimed to handle thinking tasks, but can't even beat my niece at XXO, to write firmware for cars or airplanes.

LLMs are great when used like search engines or as interactive versions of Wikipedia/Stack Overflow. But they certainly can't think. For now, at least; real thinking models will likely need a different architecture than LLMs have.