It’s WYSIWYG all over again…
I personally find Copilot is very good at rigging up test scripts based on usings and a comment or two. Babysit it closely, tune the first few tests, and then it can bang out a full unit test suite for your class, which lets me focus on creative work rather than toil.
It can come up with some total shit in the actual meat and potatoes of the code, but boilerplate stuff like tests it seems pretty spot on.
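For the curious, here's the kind of seed file I mean, sketched in Python's unittest (the `slugify` function and its tests are made-up examples, not anything Copilot-specific): hand-tune the first test, leave a comment, and let the tool fill in the rest.

```python
# Minimal sketch of a "seed" test file (all names here are hypothetical).
# The idea: imports + one hand-tuned test + a comment, then let the
# assistant pattern-match the remaining cases in the same style.
import unittest

def slugify(title: str) -> str:
    # Toy function under test, just for illustration.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    # TODO: cases for leading/trailing whitespace, empty string,
    # repeated spaces -- the assistant can bang these out from the
    # pattern above.

if __name__ == "__main__":
    unittest.main()
```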
Off-topic, but when I was a kid I was obsessed with the complex subway rail system in NYC; I kept trying to draw and map it out.
When did you get diagnosed?
He's got that ol' New York City Metropolitan Area Transit Authority Blues again, momma!
The key is identifying how to use these tools and when.
Local models like Qwen are a good example of how these can be used, privately, to automate a bunch of repetitive non-deterministic tasks. However, they can spit out some crap when used mindlessly.
They are great for sketching out software ideas though, e.g. try 20 prompts for 4 versions, get some ideas, and then move over to implementation.
God, seriously. Recently I was iterating with Copilot for like 15 minutes before I realized that its complicated code changes could be reduced to an if statement.
AI can't imagine an image of a full glass of wine because there are barely any images of that in any dataset out there. AI can't think, it just massages its dataset into something vaguely plausible.
I don't understand how build times magically decrease with AI. Or did they mean built?
They mean time to write the code, not compile time. Let's be honest, the AI will write it in Python or JavaScript anyway.
Not to be that guy, but the image with all the train tracks might just be doing its job perfectly.
Engineers love moving parts, known for their reliability and vigor
Vigor killed me
Might is the important word here.
It gives you the right picture when you asked for a single straight track in the prompt. Now you have to spend 10 hours debugging code and fixing hallucinations of functions that don't exist in libraries it doesn't even need to import.
Not a developer. I just wonder how AI hallucinations come about. Is it the 'need' to complete the task requested at the cost of being wrong?
No, it's just that it doesn't know if it's right or wrong.
How "AI" learns is they go through a text - say blog post - and turn it all into numbers. E.g. word "blog" is 5383825526283. Word "post" is 5611004646463. Over huge amount of texts, a pattern is emerging that the second number is almost always following the first number. Basically statistics. And it does that for all the words and word combinations it found - immense amount of text are needed to find all those patterns. (Fun fact: That's why companies like e.g. OpenAI, which makes ChatGPT need hundreds of millions of dollars to "train the model" - they need enough computer power, storage, memory to read the whole damn internet.)
So now, how do the LLMs "understand"? They don't; it's just a bunch of numbers and statistics about which word (turned into that number, or "token" to be more precise) follows which other word.
So now, why do they hallucinate?
When they get your question, what they do is turn all the words in your prompt into numbers again, and then go find, in their huge databases, which words are likely to follow your words.
They add in a tiny bit of randomness: they sometimes replace a "closer" match with a synonym or a less likely match, so they even seem real.
They add "weights" so that they would rather pick one phrase over another, or e.g. give some topics very very small likelihoods - think pornography or something. "Tweaking the model".
But there's no knowledge as such, mostly it is statistics and dice rolling.
So the hallucination is not "wrong", it's just statistically likely that those words would follow based on your words.
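If it helps, here's the whole idea in miniature as a Python toy - just the "statistics and dice rolling" part, at a comically small scale (the sample text is made up for illustration, and a real LLM is vastly more sophisticated than this):

```python
# Toy "next word" model: count which word follows which in some text,
# then roll weighted dice to pick continuations.
# A caricature of the statistics, nothing like a real LLM's scale.
import random
from collections import defaultdict

text = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
words = text.split()

# "Training": count word -> next-word frequencies.
follows = defaultdict(lambda: defaultdict(int))
for a, b in zip(words, words[1:] + words[:1]):  # wrap so every word has a follower
    follows[a][b] += 1

def next_word(word: str) -> str:
    # Weighted dice roll over everything ever seen after `word`.
    candidates = follows[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# "Generation": start from a prompt word and keep rolling.
out = ["the"]
for _ in range(8):
    out.append(next_word(out[-1]))
print(" ".join(out))  # plausible-looking, but no understanding anywhere
```

Nothing in there checks whether the output is true; it only checks what tends to follow what. Hallucination is that same mechanism landing on something false.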
Did that help?
Full disclosure - my background is in operations (think IT), not AI research. So some of this might be wrong.
What's marketed as AI is something called a large language model. This distinction is important because AI implies intelligence, whereas an LLM is something else. At a high level, LLMs use something called "tokens" to break natural language apart into elements that a machine can work with, and then recombine those tokens to "create" something new. When an LLM is creating output it does not know what it is saying - it only knows what token statistically comes after the token(s) it has generated already.
So to answer your question: an AI can hallucinate because it does not know the answer - it's using advanced math to know that the period goes at the end of the sentence, and not in the middle.
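If you want to see the token step for real, OpenAI's tiktoken library exposes it (assuming you have it installed; `cl100k_base` is one of their published encodings):

```python
# Sketch: turning text into the integer token IDs an LLM actually consumes.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

ids = enc.encode("blog post")
print(ids)              # a short list of integers (one per word piece)
print(enc.decode(ids))  # round-trips back to "blog post"
```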
While being more complex and costly to maintain
Depends on the use case. It's most likely at a train yard or train station.
The image implies that the track on the left meets the use case criteria
The one on the right prints “hello world” to the terminal
And takes 5 seconds to do it