Garbage in, garbage out is how they all work. If you give it a well-defined prompt, you can get exactly what you want out of it most of the time, but if you just say "fix this problem", it'll just fix the problem, ignoring everything else.
Claude is my coding mentor. Wouldn't want to work without it.
I run code snippets by three or four LLMs and the consensus is never there. Claude has been the worst for me.
Which one has been best? I’m only a hobbyist, but I’ve found Claude to be my favorite, and the best UI by a mile.
Everyone keeps talking about autocomplete but I've used it successfully for comments and documentation.
You can use VS Code extensions to generate and update README and changelog files.
Then, if you follow documentation-as-code, you can update your Confluence/whatever by copy-pasting.
I also use it a lot for unit tests. It helps a lot when you have to write multiple edge cases, and it even finds new ones at times. Like putting a random int in an enum field (enumField = (myEnum)1000); I didn't know you could do that...
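For anyone who hasn't run into it, here's a minimal C# sketch of that enum edge case (the enum and property names are made up for illustration, not from the comment above): C# will cast any int to an enum, even one with no named member, which is exactly the kind of value a unit test should cover.

```csharp
using System;

// Hypothetical enum and class, named only for illustration.
enum OrderStatus { Pending = 0, Shipped = 1, Delivered = 2 }

class Order
{
    public OrderStatus Status { get; set; }
}

static class EnumEdgeCaseDemo
{
    static void Main()
    {
        // C# happily casts any int to an enum, even one with no named member,
        // so downstream code has to cope with "impossible" values.
        var order = new Order { Status = (OrderStatus)1000 };

        // Enum.IsDefined is one way a test can assert that validation
        // rejects such values before they reach business logic.
        bool isValid = Enum.IsDefined(typeof(OrderStatus), order.Status);
        Console.WriteLine($"Status = {order.Status}, defined = {isValid}");
        // Prints: Status = 1000, defined = False
    }
}
```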
I'm a penetration tester and it increases my productivity a lot
so it's a vector of attack?
I mainly use AI for learning new things. It’s amazing at trivial tasks.
as a dental assistant I can also confirm that AI has increased my productivity, *checks notes*, by a lot.
While I am not fond of AI, we do have access to it at work, and I must admit that it saves some time in some cases. I'm not a developer with decades of experience in a single language, so one thing I use AI for is asking "Is it possible to do a one-liner in language X that does Y?" It works very well and the code is rarely unusable, but it is still up to my judgement whether the AI came up with a clever use of functions that I didn't know about or whether it crammed stuff into a single unreadable line.
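As a hypothetical example of the kind of one-liner being described (the data and variable names here are invented for illustration), this is the sort of C# LINQ expression a model might hand back, where you still have to judge whether it's clever or just cramped:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class OneLinerDemo
{
    static void Main()
    {
        var words = new List<string> { "apple", "avocado", "banana", "cherry", "cranberry" };

        // The kind of one-liner an LLM might produce: count words per first letter.
        // GroupBy/ToDictionary may be functions you didn't know about...
        var counts = words.GroupBy(w => w[0]).ToDictionary(g => g.Key, g => g.Count());

        // ...but whether this beats a plain foreach loop is still your call.
        foreach (var kv in counts)
            Console.WriteLine($"{kv.Key}: {kv.Value}");
    }
}
```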
lol, Uplevel's """full report""" saying devs using Copilot create 41% more bugs is 2 pages long and reads like promotional material.
you can download it with a 10 minute email if you really want to see for yourself.
just some meaningless numbers.
Yep, by definition generative AI gets worse the more specific you get. If you need common templates though, it’s almost as good as today’s google.
… which is not a high bar.
I truly don't understand the tendency of people to hate these kinds of tools. Honestly seems like an ego thing to me.
I sent a PR back to a dev five times before I gave the work to someone else.
they used AI to generate everything.
surprise, there were so many problems it broke the whole stack.
this is a routine thing this one dev does too. every PR has to be tossed back at least once. not expecting perfection, but I do expect it to not break the whole app.
Like I told another person ITT, hiring terrible devs isn't something you can blame on software.
that depends on your definition of what a "terrible dev" is.
of the three devs that I know have used AI, all were moderately acceptable devs before they relied on it. this formed my opinion that AI code and the devs that use it are terrible.
two of those three I no longer work with because they were let go for quality and productivity issues.
so you can clearly see why my opinion of AI code is so low.
Having to deal with pull requests defecated by “developers” who blindly copy code from chatgpt is a particularly annoying and depressing waste of time.
At least back when they blindly copied code from stack overflow they had to read through the answers and comments and try to figure out which one fit their use case better and why, and maybe learn something... now they just assume the LLM is right (despite the fact that they asked the wrong question and even if they had asked the right one it'd've given the wrong answer) and call it a day; no brain activity or learning whatsoever.
That is not a problem with the ai software, that's a problem with hiring morons who have zero experience.
No. LLMs are very good at scamming people into believing they're giving correct answers. It's practically the only thing they're any good at.
Don't blame the victims, blame the scammers selling LLMs as anything other than fancy but useless toys.
Okay, before this I blamed your terrible employees who are incredibly bad at their jobs, but after that comment maybe I should blame their leadership.
Did you get scammed by the LLM? If not, what's the difference between you and the dev you mentioned?
I was lucky enough to not have access to LLMs when I was learning to code.
Plus, over the years I've developed a good thick protective shell (or callus) of cynicism, spite, distrust, and absolute seething hatred towards anything involving computers, which younger developers yet lack.
Sorry, you misunderstood my comment, which was very badly worded.
I meant to imply that you, an experienced developer, didn't get "scammed" by the LLM, and that the difference between you and the dev you mentioned is that you know how to program.
I was trying to make the point that the issue is not the LLM but the developer using it.
Also, when a tool increases your productivity but your salary and paid time off don't increase, it's a tool that only benefits the overlords and as such deserves to be hated.
Oh, so do you use a 13 year old PC because a newer one increases your productivity without increasing your salary and paid time off?
Personally... I do
I could request a new one, but why? This one works, it's just slow as all hell.
I mean, you're clearly using them because they still work, not because of a hatred for increasing productivity for the overlords. Your choice was based on reasonable logic, unlike the other guy.
> I could request a new one, but why?

Gives an excellent argument for requesting a new one:

> slow as all hell.
Generative AI is great for loads of programming tasks, like helping create regular expressions or syntax conversions between languages. The main issue I've seen in codebases that rely heavily on generative AI is that the "solutions" often fix today's bug while making future debugging more difficult. Generative AI makes it easy to go fast in the wrong direction. Used right, it's a useful tool.
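As a rough illustration of the regex use case (the pattern and sample text are assumptions for the example, not from the comment above), this is the kind of expression you might ask a model to draft and then verify yourself, shown here in C#:

```csharp
using System;
using System.Text.RegularExpressions;

static class RegexDemo
{
    static void Main()
    {
        // An illustrative pattern you might ask an LLM to draft:
        // match ISO-8601 dates (YYYY-MM-DD). Generated regexes often look
        // right while missing edge cases, so always test them yourself.
        var isoDate = new Regex(@"\b\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b");

        string log = "deployed 2024-07-15, rolled back 2024-13-40";
        foreach (Match m in isoDate.Matches(log))
            Console.WriteLine(m.Value);
        // Prints only 2024-07-15; the malformed date doesn't match.
    }
}
```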