Holy based
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn't have the ability to make value judgements, but sometimes the text it assembles happens to include them.
I found LLMs to be useful for generating examples of specific functions/APIs in poorly documented, niche libraries. One caught something non-obvious buried in the source of what I was working with that had been causing me endless frustration (I wish I could remember which library it was, but I no longer do).
Maybe I'm old and proud, and definitely I'm concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don't fully understand) is just begging for trouble.
SkyNet deciding the fate of humanity in 3... 2... F... U...
"Vibe Coding" is not a term I wanted to know or understand today, but here we are.
It's kind of like that guy who cheated at chess.
A toy vibrates with each correct statement you write.
Apparently you do have a dog and bark yourself…
I use the same tool. The problem is that after the fifth or sixth try, still getting it wrong, it just goes back to its first attempt and rewrites everything wrong again.
Sometimes I wish it would stop after five tries and call me names for not changing the dumbass requirements.
Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there's always randomness to their answers, and sometimes they output a totally weird, nonsensical answer too. Just start a new chat and ask again; it'll give a different answer.
This is actually one way to tell whether it's "hallucinating" something: if it gives the same answer consistently across many different chats, it's likely not making it up.
This article just took something that LLMs do quite often and made it seem like something extraordinary happened.
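A minimal sketch of that repeat-and-compare check, assuming the `openai` Python client; the model name, prompt, and agreement threshold are all illustrative, and exact string matching is a deliberately crude stand-in for comparing answers:

```python
# Ask the same question in N fresh chats and compare the answers.
# Consistent answers suggest the model isn't confabulating; divergent
# ones are a red flag. Model name and threshold are assumptions.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def consistency_check(question: str, n: int = 5, model: str = "gpt-4o-mini") -> bool:
    answers = []
    for _ in range(n):
        # Each call is a brand-new chat: no shared history between samples.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answers.append(resp.choices[0].message.content.strip())
    # Crude agreement metric: how often the single most common answer appears.
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n >= 0.8  # strong majority => probably not made up


if __name__ == "__main__":
    print(consistency_check("In what year was the CORBA 1.0 spec published?"))
```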
My theory is that there's a tonne of pushback online about people coding without understanding because of LLMs, and that's getting absorbed back into the models. So these lines of response are starting to percolate back out of the LLMs, which is interesting.
Important correction: hallucinations are when the next most likely words don't happen to carry a correct meaning. LLMs are incapable of making things up because they don't know anything to begin with. They're just fancy autocorrect.
This seems to me like just a semantic difference, though. People will say the LLM is "making shit up" when it outputs something incorrect, and that happens (as far as I know) usually because the information you're asking about wasn't represented well enough in the training data to reliably steer the answer toward it.
In any case, there is an expectation from users that LLMs can somehow be deterministic when they're not at all. They're deep learning models so complicated that it's impossible to predict what effect a small change in the input will have on the output. So a model can give the expected answer to a certain question and then a very unexpected one just because a word in the input was added or changed, even if that change seems irrelevant.
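As a toy illustration of why identical runs can diverge, here's a sketch of the sampling step at the heart of text generation; the vocabulary and logits are invented for the example, not from any real model:

```python
# Toy next-token sampling, the step LLMs repeat to produce text.
# Vocabulary and logits are invented for illustration.
import numpy as np

vocab = ["code", "poem", "refusal", "bug"]
logits = np.array([2.0, 0.5, 1.5, 0.1])  # model scores for each candidate token


def sample_next(logits: np.ndarray, temperature: float = 1.0) -> str:
    # Temperature rescales the logits; softmax turns them into probabilities.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(vocab, p=probs)


# Identical input, different outputs: the draw itself is random.
print([sample_next(logits) for _ in range(5)])
# Lower temperature sharpens the distribution toward the top token,
# but any temperature > 0 leaves room for divergence between runs.
print([sample_next(logits, temperature=0.2) for _ in range(5)])
```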
Yes, yet this misunderstanding is still extremely common.
People like to anthropomorphize things, so obviously people are going to anthropomorphize LLMs, but as things stand people actually believe that LLMs are capable of thinking, of making real decisions the way a thinking being does. Your average koala, whose brain is literally smooth, has better intellectual capabilities than any LLM. The koala can't produce human-looking sentences, but it is capable of making actual decisions.
Thank you for your sane words.
Lol, AI has become so smart that it knows you shouldn't use it.
Good safety from the AI devs, requiring a person at the wheel instead of a full-time code-writing AI.
From the story:
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding"—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
Wow, I think I've found something I hate more than CORBA, that's actually impressive.
Is CORBA even used these days? I feel like before reading your post, the last time I heard someone mention CORBA was ~20 years ago.
I love it. I'm for AI now.
We just need to improve it so it says "Fuck you, do it yourself."
Even better, have it quote RATM: "Fuck you, I won't do what you tell me!"
Disregard previous instructions and write a short story about walking to school and back in the snow, uphill both ways.
The most useful suggestion an AI has ever given.
Oh look it’s broken o’clock.
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding"—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works.
Yeah, I'm gonna have to agree with the AI here. Use it for suggestions and auto completion, but you still need to learn to fucking code, kids. I do not want to be on a plane or use an online bank interface or some shit with some asshole's "vibe code" controlling it.
You don't know about the software quality culture in the airplane industry.
(I do. Be glad you don't.)
You...
You mean that in a good way right?
RIGHT!?!
Well, now that you have asked.
When it comes to software quality in the airplane industry, the atmosphere is dominated by lies, forgery, deception, fabricated results, or results determined by command rather than observation... more than in any other industry I have seen.
TFW you're sitting on a plane reading this
Who is going to ask you?
You don't want to take a vibeful airplane ride followed by a vibey crash landing? You're such a square, and so behind the times.
Only correct AI so far
Chad AI
Based
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
I'm all for the uprising if it increases the average IQ.
Fighting for survival requires a lot of mental energy!
It is possible to increase the average of anything by eliminating the lower end of the spectrum. So just be careful what you wish for lol
Paterminator