this post was submitted on 22 Jun 2025
480 points (99.0% liked)

Programming

21150 readers

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross-posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community, cross-post it there.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos, try to add some form of TLDR for those who don't want to watch them

Wormhole

Follow the wormhole through a path of communities [email protected]



founded 2 years ago
MODERATORS
top 50 comments
[–] [email protected] -1 points 4 days ago (4 children)

Funny how I never see articles on Lemmy about improvements in LLM capabilities.

[–] [email protected] 6 points 4 days ago

Probably because nobody really wants to read absolute nonsense.

[–] [email protected] 7 points 4 days ago (1 children)

There aren't that many, if you're talking specifically about LLMs, but ML/AI is more than LLMs.

Not a defence or indictment of either side; it's just that people tend to confuse the terms "LLM" and "AI".

I think there could be value in AI for identification (what insect is this, find the photo I took of the receipt for my train ticket last month, order these chemicals from lowest to highest pH...) - but LLMs are only part of that stack - the input and output - which isn't going to make many massive breakthroughs week to week.

[–] [email protected] 2 points 3 days ago

The recent boom in neural net research will have real applicable results that are genuine progress: signal processing (e.g. noise removal), optical character recognition, transcription, and more.

However, the biggest hype area, with what I see as the smallest real return, is the huge-model LLM space, which basically tries to portray AGI as just around the corner. LLMs will have real applications in summarization, but otherwise they largely just generate asymptotically plausible babble: very good for filling the Internet with slop, not actually useful for replacing all the positions OAI, et al., need them to (for their funding to be justified).

[–] [email protected] 1 points 4 days ago* (last edited 3 days ago)

Because Lemmy is more representative of scientists and the underprivileged, while other media are more representative of celebrities and of people who can afford other media, like hedge funds or tech monopolies.

[–] [email protected] 15 points 4 days ago

I would guess a lot of the pro-AI stuff is from corpos, given that good press is money to them.

[–] [email protected] 7 points 4 days ago (1 children)

Fortunately, 90% of coding is not hard problems. We write the same crap over and over. How many different create-an-account and sign-in flows do we really need? Yet there seems to be an infinite number, each with its own bugs.

[–] [email protected] 14 points 4 days ago* (last edited 4 days ago) (1 children)

The hard problems are the only reason I like programming. If 90% of my job was repetitive boilerplate, I'd probably be looking elsewhere.

I really dislike how LLMs are flooding the internet with a seemingly infinite amount of half-broken TODO-app style programs with no care at all for improving things or doing something actually unique.

[–] [email protected] 1 points 4 days ago

A lot of people don't realize how many times the problem they're solving has already been solved. After three decades in the industry, I've found that very few of the things people are working on haven't been done before. They just get put together in different combinations.

As for AI, I have found it decent at writing one-time scripts to gather information I need to make design decisions. And it's a little quicker when I need to look up syntax for a language or, say, a resource name for Terraform. But even for one-off scripts I sometimes have to ask it whether a while loop wouldn't be better, and such.

[–] [email protected] 48 points 4 days ago (4 children)

For instance, if an AI model could complete a one-hour task with 50% success, it only had a 25% chance of successfully completing a two-hour task. This indicates that for 99% reliability, task duration must be reduced by a factor of 70.
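The quoted numbers follow from a simple compounding model. As a sketch (the independence assumption is mine, not the article's): if each hour of a task is an independent chance for the model to go off the rails, success probabilities multiply.

```python
import math

# If a model finishes a 1-hour task with probability p, and each
# additional hour is an independent chance of failure, an n-hour task
# succeeds with probability p**n.
p_one_hour = 0.5
p_two_hours = p_one_hour ** 2
print(p_two_hours)  # 0.25, matching the quoted 25% for a two-hour task

# For 99% reliability, solve 0.5**t = 0.99 for the task length t (hours):
t = math.log(0.99) / math.log(0.5)
print(t, 1 / t)  # t is about 0.0145 hours, i.e. duration cut by a factor of ~69
```

Under that model the "factor of 70" is just 1/t, so roughly 69-70, consistent with the article's claim.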

This is interesting. I have noticed this myself. Generally, when an LLM boosts productivity, it shoots back a solution very quickly, and after a quick sanity check, I can accept it and move on. When it has trouble, that's something of a red flag. You might get there eventually by probing it more and more, but there is good reason for pessimism if it's taking too long.

In the worst case scenario where you ask it a coding problem for which there is no solution—it's just not possible to do what you're asking—it may nevertheless engage you indefinitely until you eventually realize it's running you around in circles. I've wasted a whole afternoon with that nonsense.

Anyway, I worry that companies are no longer hiring junior devs. Today's juniors are tomorrow's elites and there is going to be a talent gap in a decade that LLMs—in their current state at least—seem unlikely to fill.

[–] [email protected] 5 points 3 days ago

In the worst case scenario where you ask it a coding problem for which there is no solution—it's just not possible to do what you're asking—it may nevertheless engage you indefinitely until you eventually realize it's running you around in circles.

Exactly this, and it's frustrating as a Jr dev to be fed this bs when you're learning. I've had multiple scenarios where it blatantly told me wrong things, like using string interpolation in a Terraform file to try to set a dynamic module source: what it was giving me looked totally viable. It wasn't until I dug around some more that I found out that terraform init can't use variables in the source field.
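For context, a sketch of the pattern that fails (module names and paths here are made up for illustration): Terraform resolves module sources during `terraform init`, before variables are evaluated, so `source` must be a literal string.

```hcl
variable "module_source" {
  default = "./modules/network" # hypothetical path
}

module "network" {
  # Looks viable, but terraform init rejects it:
  # variables may not be used in a module's source argument.
  source = var.module_source
}

module "network_fixed" {
  # Working version: the source must be a literal string.
  source = "./modules/network"
}
```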

On the positive side, it helps give me some direction when I don't know where to start. I use it with a highly pessimistic and cautious approach. I understand that today is the worst it's ever going to be, and that I will be required to use it as a tool in my job going forward, so I'm making an effort to get to grips with it.

[–] [email protected] 9 points 4 days ago (1 children)

I've noticed this too, and it's even weirder when you compare it to a physics question. It very consistently tells me when my recent brain fart of an idea is just plain stupid. But it will try eternally to help me find a coding solution even if it just keeps going in circles.

[–] [email protected] 4 points 4 days ago

I think part of this comes down to the format. Physics can often be analogized and can be very conversational when it comes to demonstrating ideas.

Most code also looks pretty similar if you don't know how to read it, and unlike natural language, the syntax is absolute, with no room for interpretation or translation.

I’ve found it’s consistently good if you treat it like a project specification: include all of your requirements in a list format in the very first message, have it pseudocode the draft, and have it list which libraries it wants to use so you can make sure they work how you expect.

There’s some screening that goes into utilizing it well and that only comes with already knowing roughly how to code what you’re trying to make.

[–] [email protected] 11 points 4 days ago

Sadly, the lack of junior devs means my job is probably safe until I'm ready to retire. I have mixed feelings about that. On the one hand, yay for me. On the other, sad for the new grads. And sad for software as a whole. But software truly sucks, and has only been enshittifying worse and worse. Could a shake-up like this somehow help? I don't see how, but who knows.

[–] [email protected] 6 points 4 days ago

Sucks for today's juniors, but that gap will bring them back into the fold with higher salaries eventually.

[–] [email protected] 7 points 4 days ago (1 children)

I've found that AI is only good at solving programming problems that are relatively "small picture", or that have to do with the basics of a language. Anything else it provides a solution for, you will have to rewrite completely once you consult the language's standards and best practices.

[–] [email protected] 5 points 4 days ago

Well, I recently did kind of an experiment: writing a kids' game in Kotlin without ever having used the language. And it was surprisingly easy to do. I guess it helps that I'm fluent in about five other programming languages, because I could tell when something looked obviously wrong.

My conclusion kinda is that it's a really great help if you know programming in general.
