this post was submitted on 10 Jun 2025
2 points (100.0% liked)

OC below by @[email protected]

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can't think - only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries - and that even very intelligent people can fall prey to them.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models actually help produce working software faster. Given the multi-billion-dollar investments, and that there has been more than enough time to run controlled experiments, this should raise loud alarm bells.

top 19 comments
[–] [email protected] 0 points 2 weeks ago

If you have to use AI - maybe your work insists on it - always demand it cite its sources, hope they are relevant, and go read those instead.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

What's the difference between copying a function from Stack Overflow and copying a function from an LLM that has copied it from SO?

LLMs are sort of a search engine with advanced language-substitution features, nothing more, nothing less.

But people just love their drama, and others feed on prophecies of doom.

As for the lack of "scientific proof of faster software using LLMs"... what a statement! Give me the scientific proof that using Neovim is faster, or that using an LSP is faster, or that anything a developer uses while building software is "scientifically faster".

[–] [email protected] 0 points 1 week ago (1 children)

Because it's not a plain copy but an interpretation of SO.

With an LLM you just have one more layer between you and the information, one that can distort that information.

[–] [email protected] 0 points 1 week ago (1 children)

And?

The issue is that you should not blindly trust code. Being originally written by a human being is not, by any means, a quality certification.

[–] [email protected] 0 points 1 week ago (1 children)

You asked what's the difference and I just told you.

Are you stupid or something?

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago)

Blocked and reported.

You should not insult people.

[–] [email protected] 0 points 2 weeks ago

I fear this is a problem that may never be solved. I mean that people of any intelligence fall for the mind's biases.

There's just too little to be gained feelings-wise. Yeah, you make better decisions, but you're also sacrificing "going with the flow", acting like our nature wants us to act. Going against your own nature is hard and sometimes painful.

Making wrong decisions is objectively worse, leading to worse outcomes, but if it doesn't feel worse (because you're not attributing the effects of the wrong decisions to the right cause, i.e. acting irrationally), then why would a person bother? If you follow the mind's bias towards attributing your problems to anything but irrationality, it's basically a self-fulfilling prophecy.

Great article.

[–] [email protected] 0 points 2 weeks ago (1 children)

LLMs can’t think - only generate statistically plausible patterns

Ah still rolling out the old "stochastic parrot" nonsense I see.

Anyway on to the actual article... I was hoping it wouldn't make these basic mistakes:

[Typescript] looks more like an “enterprise” programming language for large institutions, but we honestly don’t have any evidence that it’s genuinely more suitable for those circumstances than the regular JavaScript.

Yes we do. Frankly, if you've used it, it's so obviously better than regular JavaScript that you probably don't need more evidence (it's like looking for "evidence" that film stars are more attractive than average people). But anyway, we do have great papers like this one.

Anyway that's slightly beside the point. I think the article is right that smart people are not invulnerable to manipulation or falling for "obviously" stupid ideas. I know plenty of very smart religious people for example.

However I think using this to dismiss LLMs is dumb, in the same way that his dismissal of Typescript is. LLMs aren't homeopathy or religion.

I have used LLMs to get some work done and... guess what, it did the work! Do I trust it to do everything? Obviously not. But sometimes I don't need perfect code. For example recently I asked it to create an example SystemVerilog file for me utilising as many syntax features as possible (testing an auto-formatter). It did a pretty good job. Saved some time. What psychological hazard have I fallen for exactly?

Overall, B-. Interesting ideas but flawed logic.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago)

LLMs can’t think - only generate statistically plausible patterns

Ah still rolling out the old “stochastic parrot” nonsense I see.

Ah still rolling out the old "computers think" pseudo-science.

I have used LLMs to get some work done and… guess what, it did the work!

Ah yes the old pointless vague anecdote.

What psychological hazard have I fallen for exactly?

Promoting pseudo-science.

Overall D. Neither interesting nor new nor useful.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Proceeds to write a belief as a statement in the following paragraph.

If you think LLMs doesnt think (I won't argue that they arent extremely dumb), please define what is thinking, before continuing, and if your definition of thinking doesn't apply to humans, we won't be able to agree.

[–] [email protected] 0 points 2 weeks ago

I don't think the current common implementations of AI systems are "thinking", and I'll base my argument on Oxford's definitions of words. Thinking is defined as "the process of using one's mind to consider or reason about something". I'll ignore the word "mind" and focus on the word "reason". I don't think what AIs are doing counts as reasoning as defined by Oxford. Let's go to that definition: "the power of the mind to think, understand, and form judgments by a process of logic". I take issue with the assertion that they form judgments. For completeness, though I don't think its definition is particularly relevant here, a judgment is: "the ability to make considered decisions or come to sensible conclusions".

I think when you ask an LLM how many 'r's there are in "strawberry", and questions along this line, you can see it can't form judgments. These basic but obscure questions are where you see that the ability to form judgments isn't there. I would also add that if you "form judgments" you probably don't need to be reminded you formed a judgment immediately after forming one. If I ask an LLM a question and it provides an answer, I can convince it that it was wrong whether I'm making junk up or not. I can tell it it made a mistake and it will blindly change its answer whether it made a mistake or not. That also doesn't feel like it's able to reason or make judgments.
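
(For reference, the check itself is trivial - "strawberry" contains three 'r's, and a throwaway Python one-liner, shown here purely as an illustration of the kind of judgment being asked for, gets it right:)

    >>> "strawberry".count("r")
    3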

This is where all the hype falls flat for me. Sometimes it looks like a concrete wall, but occasionally that concrete wall turns out to be made of wet paper. You can see how impressive the tool is and how paper-thin it is at the same time. It's cool, it's useful, it's fake, and that's ok. Just be aware of what the tool is.

[–] [email protected] 0 points 2 weeks ago (1 children)

The burden of proof is on those who say that LLMs do think.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I asked for your definition; I cannot prove something if we do not agree on a definition first.
You also misread what I said: I did not say AIs were thinking.
The burden of proof is on the one who makes an affirmation.
I'm not the one making an affirmation to which even field experts don't know the answer.
But depending on your definition of thinking, some of it can be answered.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I don't think y'all are disagreeing but maybe this sentence is somewhat confusing:

If you think LLMs doesnt think (I won’t argue that they arent extremely dumb), please define what is thinking,

Maybe the "doesnt" shouldn't be there.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

No, it is there because that's what they claim.
Nobody yet knows how it works; we don't know how LLMs process information.
Anyone who claims it really thinks, or that it isn't thinking, is stating a belief; this is not something the current ML field knows.

[–] [email protected] 0 points 2 weeks ago (1 children)

Well, the neural network is given a prefix (a series of tokens) and a candidate token, and it spits out how likely it is that the token follows the prefix. Text is generated by calculating this probability for all known tokens, then picking one at random, weighted by the calculated probabilities.
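
In rough Python, that generation loop looks something like the sketch below (a minimal illustration only; the "model" scoring function and "vocab" list are hypothetical stand-ins, not any real library's API):

    import random

    def generate(model, prompt_tokens, vocab, max_new_tokens=50):
        # model(prefix, token) -> probability that token follows prefix (hypothetical)
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            # score every known token against the current prefix
            probs = [model(tokens, candidate) for candidate in vocab]
            # pick one token at random, weighted by those probabilities
            tokens.append(random.choices(vocab, weights=probs, k=1)[0])
        return tokens

Real implementations compute all of these probabilities in a single forward pass and add sampling tweaks like temperature and top-k, but the weighted random pick is the core of it.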

[–] [email protected] 0 points 2 weeks ago (1 children)

And the brain is made out of neurons that send electric signals between them and operate muscles.
That doesn't explain how the brain thinks.

[–] [email protected] 0 points 2 weeks ago

It allows us to conclude that an LLM doesn't "think" about what it is saying. Based on the mechanics, the LLM doesn't even know it's a participant in the conversation.