this post was submitted on 26 Feb 2025
676 points (98.0% liked)

Programmer Humor

20800 readers
1742 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.


founded 2 years ago
[–] [email protected] 2 points 21 hours ago

It’s WYSIWYG all over again…

[–] [email protected] 16 points 1 day ago

I personally find Copilot is very good at rigging up test scripts based on usings and a comment or two. Babysit it closely and tune the first few tests, and then it can bang out a full unit test suite for your class, which lets me focus on creative work rather than toil.

It can come up with some total shit in the actual meat and potatoes of the code, but for boilerplate stuff like tests it seems pretty spot on.
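To make "boilerplate tests" concrete, here's a rough sketch of the pattern: a stand-in `Stack` class (hypothetical, not from the thread) plus the kind of repetitive pytest-style tests an assistant can pattern-match once you've hand-tuned the first one or two.

```python
# Hypothetical class under test -- stand-in for "your class" in the comment.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        return self._items[-1] if self._items else None


# The repetitive part: once the first test is hand-written, the rest
# follow the same shape and are easy for an assistant to fill in.
def test_push_then_pop_returns_last_item():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2

def test_pop_on_empty_raises_index_error():
    try:
        Stack().pop()
        assert False, "expected IndexError"
    except IndexError:
        pass

def test_peek_on_empty_returns_none():
    assert Stack().peek() is None
```

Each test is one setup, one action, one assertion, which is exactly the kind of low-creativity toil the comment is talking about.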

[–] [email protected] 21 points 1 day ago (2 children)

Off-topic, but when I was a kid I was obsessed with the complex subway rail system in NYC; I kept trying to draw and map it out.

[–] [email protected] 5 points 16 hours ago

OpenTTD is a good game.

[–] [email protected] 13 points 1 day ago (1 children)

When did you get diagnosed?

[–] [email protected] 2 points 16 hours ago* (last edited 16 hours ago)

He's got that ol' New York City Metropolitan Area Transit Authority Blues again, momma!

[–] [email protected] 16 points 1 day ago

The key is identifying how to use these tools and when.

Local models like Qwen are a good example of how these can be used, privately, to automate a bunch of repetitive non-deterministic tasks. However, they can spit out some crap when used mindlessly.

They are great for sketching out software ideas though, i.e. try 20 prompts for 4 versions, get some ideas, and then move over to implementation.

[–] [email protected] 22 points 1 day ago

God, seriously. Recently I was iterating with Copilot for like 15 minutes before I realized that its complicated code changes could be reduced to an if statement.

[–] [email protected] 4 points 1 day ago (1 children)

I don't understand how build times magically decrease with AI. Or did they mean built?

[–] [email protected] 9 points 1 day ago

They mean time to write the code, not compile time. Let's be honest, the AI will write it in Python or JavaScript anyway.

[–] [email protected] 40 points 1 day ago (5 children)

Not to be that guy, but the image with all the train tracks might just be doing its job perfectly.

[–] [email protected] 22 points 1 day ago (1 children)

Engineers love moving parts, known for their reliability and vigor

[–] [email protected] 6 points 1 day ago

Vigor killed me

[–] [email protected] 8 points 1 day ago

Might is the important word here

[–] [email protected] 6 points 1 day ago (1 children)

It gives you the picture on the right when you ask for a single straight track in the prompt. Now you have to spend 10 hours debugging code and fixing hallucinations of functions that don't exist in libraries it doesn't even need to import.

[–] [email protected] 1 points 1 day ago (2 children)

Not a developer. I just wonder how AI hallucinations come about. Is it the 'need' to complete the task requested at the cost of being wrong?

[–] [email protected] 1 points 15 hours ago

No, it's just that it doesn't know if it's right or wrong.

How "AI" learns is it goes through a text - say, a blog post - and turns it all into numbers. E.g. the word "blog" is 5383825526283. The word "post" is 5611004646463. Over a huge amount of text, a pattern emerges: the second number almost always follows the first number. Basically statistics. And it does that for all the words and word combinations it finds - immense amounts of text are needed to find all those patterns. (Fun fact: that's why companies like OpenAI, which makes ChatGPT, need hundreds of millions of dollars to "train the model" - they need enough computing power, storage, and memory to read the whole damn internet.)


So now how do the LLMs "understand"? They don't; it's just a bunch of numbers and statistics about which word (turned into that number, or "token" to be more precise) follows which other word.


So now. Why do they hallucinate?

How they handle your question, how they work, is they turn all the words in your prompt into numbers again, and then go find, in their huge databases, which words are likely to follow your words.

They add in a tiny bit of randomness - sometimes replacing the "closest" match with a synonym or a less likely match - so the output seems more real.

They add "weights" so the model would rather pick one phrase over another, or give some topics very, very small likelihoods - think pornography or something. "Tweaking the model."

But there's no knowledge as such, mostly it is statistics and dice rolling.

So the hallucination is not "wrong" to the model; it's just statistically likely that those words would follow yours.
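The statistics idea can be sketched in a few lines of Python. This is a toy word-level version (real models use subword tokens and neural networks, not a lookup table), but it shows how "post almost always follows blog" emerges from counting.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the whole damn internet"
corpus = "the blog post was good . the blog post was long . the blog author was tired ."

# Count which word follows which (bigram statistics)
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# After "blog", "post" is the most frequent continuation
print(follows["blog"].most_common(1))  # [('post', 2)]
```

Scale those counts up by a few trillion words and you get the patterns the comment describes - still no meaning, just frequencies.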

Did that help?

[–] [email protected] 2 points 1 day ago

Full disclosure - my background is in operations (think IT) not AI research. So some of this might be wrong.

What's marketed as AI is something called a large language model. This distinction is important because AI implies intelligence, whereas an LLM is something else. At a high level, LLMs use something called "tokens" to break natural language apart into elements that a machine can understand, and then recombine those tokens to "create" something new. When an LLM is creating output it does not know what it is saying - it knows what token statistically comes after the token(s) it has generated already.

So to answer your question: an AI can hallucinate because it does not know the answer - it's using advanced math to know that the period goes at the end of the sentence, and not in the middle.
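A rough sketch of that "what token statistically comes next" step, with made-up probabilities (real models score tens of thousands of tokens with a neural network, not a hand-written dict):

```python
import random

# Made-up next-token probabilities after a prompt like
# "The period goes at the ..." (illustrative, not from any real model)
next_token_probs = {"end": 0.85, "start": 0.05, "middle": 0.05, "banana": 0.05}

def pick_next_token(probs, temperature=1.0):
    """Weighted random choice over candidate tokens. Higher temperature
    flattens the distribution, making unlikely tokens more probable;
    near zero it collapses to the single most likely token."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Usually "end", but nothing stops it from occasionally emitting a low-
# probability token -- statistically plausible, never fact-checked.
print(pick_next_token(next_token_probs))
```

The point of the sketch: there is no "knows the answer" step anywhere, only a weighted dice roll, which is why confident nonsense is possible.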

[–] [email protected] 4 points 1 day ago (1 children)

While being more complex and costly to maintain

[–] [email protected] 6 points 1 day ago (1 children)

Depends on the use case. It's most likely at a train yard or train station.

[–] [email protected] 4 points 1 day ago

The image implies that the track on the left meets the use case criteria

[–] [email protected] 27 points 1 day ago (1 children)

The one on the right prints “hello world” to the terminal

[–] [email protected] 4 points 1 day ago

And takes 5 seconds to do it

[–] [email protected] 10 points 1 day ago

When it comes to AI, I picture planes taking off from those railroads instead. It tends to hallucinate API calls that don't exist. If you don't go check the docs yourself, you will have a hard time debugging what went wrong.
