this post was submitted on 11 Jul 2025
368 points (100.0% liked)
TechTakes
Entry-level devs ain't replacing anyone. One senior dev is going to be doing the work of a whole team.
For now.
But when a mid-tier or entry-level dev can do 60% of what a senior can do, it’ll be a great way to cut costs.
I don’t think we’re there now. It’s just that that’s the ultimate goal: employ fewer people, and pay the remaining people you do employ less.
This simply isn't how software development skill levels work. You can't hand a tool to a new dev and have them do the things experienced devs can do and new devs can't. Maybe you get faster low-tier output, though low-tier output demands more review work from experienced devs, so its utility is questionable. I'm sorry, but you clearly don't understand the topic you're making these bold claims about.
I think more low-tier output would be a disaster.
Even pre-AI I had to deal with a project where they shoved testing and compliance onto juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity, because they had a junior "fixing" Coverity-flagged "issues". I spent at least two days debugging a memory-corruption crash caused by one such "fix", and then I had to spend who knows how long reviewing every other one.
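For a flavour of how that kind of mechanical "fix" can itself introduce memory corruption, here's a minimal hypothetical sketch (the code and the warning scenario are invented for illustration, not taken from that project):

```c
/* Hypothetical: a static analyzer warns that buf may be left
 * unterminated by strncpy() when src is 32 chars or longer. */
#include <string.h>

void set_name(const char *src) {
    char buf[32];
    strncpy(buf, src, sizeof(buf));
    /* The mechanical "fix": terminate the buffer. The warning goes
     * away, but the index is off by one -- buf[32] is one byte past
     * the end of the array, so every call now corrupts the stack.
     * The correct fix is buf[sizeof(buf) - 1] = '\0'. */
    buf[sizeof(buf)] = '\0';
}
```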
And don't get me started on the tests. 200+ of them, and not one caught several regressions in the handling of parameters that are shown early in the frigging how-to. Not some obscure corner case, but the stuff you immediately run into if you just follow the documentation.
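As a hypothetical sketch of that pattern (parse_opts and its "-v" flag are invented here for illustration): a test can exercise the documented code path yet assert nothing the documentation actually promises.

```c
#include <assert.h>
#include <string.h>

struct opts { int verbose; };

/* Toy parser with a regression: the documented short form "-v"
 * is silently ignored. */
static void parse_opts(const char *arg, struct opts *o) {
    if (strcmp(arg, "--verbose") == 0)
        o->verbose = 1;
}

/* The useless kind of test: it runs the parser but only checks
 * something vacuous, so the broken "-v" handling still passes. */
static void test_parse_runs(void) {
    struct opts o = {0};
    parse_opts("-v", &o);
    assert(1);
}

/* A test pinned to the documented behaviour: this one fails and
 * actually catches the regression. */
static void test_verbose_short_flag(void) {
    struct opts o = {0};
    parse_opts("-v", &o);
    assert(o.verbose == 1);
}

int main(void) {
    test_parse_runs();
    test_verbose_short_flag();
    return 0;
}
```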
With AI, all those numbers would be much larger: more commits "fixing Coverity issues" (and, worse yet, fixing "issues" that the LLM sees in the code), more so-called "tests" that don't actually flag any real regressions, etc.