this post was submitted on 21 May 2025

TechTakes

1871 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
top 35 comments
[–] [email protected] 0 points 13 hours ago

Microsoft claims that AI can replace human programmers. Why doesn't Microsoft just do so and let an AI "fix" the problems reported by AIs?

Not sure why they're even involving human employees in this problem. /s

[–] [email protected] 0 points 1 day ago

An image of a Github-themed restaurant that serves poop burgers.

[–] [email protected] 0 points 1 day ago

Came here to post this, funnily enough.

We're poisoning people's air for this.

[–] [email protected] 0 points 1 day ago

just look at it. it is not enough that AI is boiling the planet, but with every iteration of copilot, all those automatic checks are rerun! on the first mentioned PR, the checks had been running for 20 minutes as I read it, and there's like a dozen of them!

other projects have to pay for processing time on github actions!!

this is insanity

[–] [email protected] 0 points 1 day ago

No real understanding of what it's doing, it's just guessing.

Are they talking about the LLMs or the people who think just chatting with the LLM will fix it? :)

[–] [email protected] 0 points 1 day ago

Someone should write a script that estimates how much time has been spent re-fondling LLMPRs on Github.
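Half-joking, but the data is actually there: the GitHub REST API's `/repos/{owner}/{repo}/pulls` endpoint returns `created_at` and `merged_at` timestamps for each PR. A rough sketch of the tallying logic (the `bot_suffix` filter is an assumption; Copilot-authored PRs may appear under a different account name):

```python
from datetime import datetime, timezone

def hours_open(pr):
    """Hours between a PR's creation and its merge/close (or now, if still open).

    Expects the dict shape returned by GitHub's /pulls endpoint.
    """
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    opened = datetime.strptime(pr["created_at"], fmt).replace(tzinfo=timezone.utc)
    closed_str = pr.get("merged_at") or pr.get("closed_at")
    closed = (datetime.strptime(closed_str, fmt).replace(tzinfo=timezone.utc)
              if closed_str else datetime.now(timezone.utc))
    return (closed - opened).total_seconds() / 3600

def total_review_hours(prs, bot_suffix="[bot]"):
    """Sum hours-open across PRs authored by accounts matching bot_suffix."""
    return sum(hours_open(pr) for pr in prs
               if pr["user"]["login"].endswith(bot_suffix))
```

You'd still have to fetch the PR list yourself (and paginate), but the depressing arithmetic is only a couple of functions.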

[–] [email protected] 0 points 1 day ago (1 children)

you all joke, but my mind is so expanded by stimulants that I, and only I, can see how this dogshit code will one day purchase all the car manufacturers and build murderbots

[–] [email protected] 0 points 1 day ago (1 children)

Look, I'm def on team Murderbot, but when ~~we~~ the AIs start building them I really hope Martha Wells gets some kickbacks at least.

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago)

I love how Wells has given us both a great series of stories AND a jokey Terminator analog to defuse the mAnLy trope of building and/or fighting terminators.

[–] [email protected] 0 points 1 day ago (5 children)

Is there a reason why that AI "evolution" thing doesn't work for code? In theory, shouldn't it be at least decent?

[–] [email protected] 0 points 12 hours ago* (last edited 12 hours ago)

If you're referring to genetic algorithms, those work by giving the computer some kind of target to gun for that's easy to measure, and then letting the computer go loose randomly changing bits of the original object. I guess in your mind, that'd mean randomly evolving the codebase and then submitting the best version.

There are a lot of problems with the idea of a genetically evolved codebase that I'm sure you can figure out on your own, but I'll give you one for free: "better code" is a very hard thing for a computer to measure.
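To make the mechanics concrete, here's a toy genetic algorithm in the classic style: the target is a fixed string, so fitness is trivially measurable (the exact thing real code doesn't give you). Everything here (target, charset, rates) is made up for illustration:

```python
import random

TARGET = "print('hello')"  # toy stand-in for "the code we want"
CHARSET = "abcdefghijklmnopqrstuvwxyz()' "

def fitness(candidate):
    # Easy to measure: how many characters match the target.
    # This is exactly what "better code" does NOT have.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly flip each character with some small probability.
    return "".join(random.choice(CHARSET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=300):
    population = ["".join(random.choice(CHARSET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        parents = population[: pop_size // 5]
        # Elitism: carry the best candidate forward unmutated,
        # so fitness never regresses between generations.
        population = [population[0]] + [mutate(random.choice(parents))
                                        for _ in range(pop_size - 1)]
    return max(population, key=fitness)
```

This converges because "distance to the target string" is a smooth, cheap scoring function. Swap in "is this a good codebase" and the whole scheme falls apart at the `fitness` line.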

[–] [email protected] 0 points 23 hours ago* (last edited 23 hours ago) (1 children)

For LLMs specifically? Code is not text, aside from the most clinical, dictionary definition of "text".

But even then, it also fails at writing coherent short- or long-form prose, so even if code were "just text" it'd fail equally badly.

[–] [email protected] 0 points 18 hours ago (1 children)

That's too bad. I was hoping to one day revive dead/abandoned mods using some form of AI. I was thinking of training it on how the old mods functioned in the last working version; maybe it could find what needs to be changed.

[–] [email protected] 0 points 18 hours ago (1 children)
[–] [email protected] 0 points 17 hours ago (2 children)

I think that what we've got here is a genuine victim of the hype.

[–] [email protected] 0 points 13 hours ago (3 children)

Ah, I didn't realize it was that big of an impossibility to get AI to update old Minecraft mods no one else is interested in. No way I'll ever be able to learn to code without going back to high school.

[–] [email protected] 0 points 12 hours ago

You can definitely learn how to code, I believe in you.

[–] [email protected] 0 points 13 hours ago (1 children)

It's alright! There's a multibillion dollar advertising operation working to convince us that generative AI can do these sorts of things. Plus, it's always tough to go through someone else's work, much less mods from a decade ago that were written by ambitious amateurs. I couldn't read my own code after a couple of months if I wasn't such an absurd over-commenter.

If you want a chill intro to the real situation, I highly recommend this episode of On the Media that had Ed Zitron on. You could knock it out over a commute or two no problem: https://www.wnycstudios.org/podcasts/otm/articles/brooke-talks-ai-with-ed-zitron

[–] [email protected] 0 points 12 hours ago (1 children)

This is all bad news. Guess I'll never see an emulator written based on how the original game plays, but that would only matter if it were any more DMCA-proof than humans doing it, and I know even less about that.

[–] [email protected] 0 points 12 hours ago

your posts keep just slinging words together and it’s just fucking weird

[–] [email protected] 0 points 13 hours ago (1 children)

once again: the fuck is this post

[–] [email protected] 0 points 12 hours ago (1 children)
[–] [email protected] 0 points 12 hours ago (1 children)

negative reactions? to your own shitty posts?

well fuck damn, I wonder what’s confusing

[–] [email protected] 0 points 12 hours ago (2 children)

Ok it just seemed like you were confused but if you don't like what I posted that makes sense

[–] [email protected] 0 points 12 hours ago

yeah, I think that's enough

[–] [email protected] 0 points 12 hours ago

yes, you’ve gotta be right, that must be exactly what’s happening. absolutely no other possibilities.

[–] [email protected] 0 points 13 hours ago

you could be right

[–] [email protected] 0 points 1 day ago (1 children)

To elaborate on the other answers about AlphaEvolve: the LLM is only one component of it, serving as the generator of random mutations in the evolutionary process. The LLM promoters like to emphasize the involvement of LLMs, but separated from the evolutionary algorithm guiding the process through repeated generations, an LLM is about as likely to write good code as a dose of radiation is to spontaneously mutate you into being able to breathe underwater.

And the evolutionary aspect requires a lot of compute. They don't specify in their whitepaper how big the population is or how many generations they run, but it might be hundreds or thousands of attempted solutions repeated for dozens or hundreds of generations. That means running the LLM for thousands or tens of thousands of attempted solutions, and testing each candidate's code against the evaluation function every time, to generate one piece of optimized code. This isn't an approach that is remotely affordable or even feasible for software development, even if you reworked your entire development process into something like test-driven development on steroids in order to write enough tests to use in the evaluation function (and you would probably get stuck on this step, because it outright isn't possible for most practical real-world software).

AlphaEvolve's successes are all very specific, very well-defined and constrained problems: finding particular algorithms, as opposed to general software development.
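The shape of that loop, with the LLM as the mutation operator, can be sketched in a few lines. To be clear, this is a caricature, not Google's actual system: the `llm_mutate` stub stands in for a real model call, and the toy objective (tune one constant toward 7.0) stands in for their evaluation functions:

```python
import random

def llm_mutate(program):
    """Stand-in for the LLM call: AlphaEvolve prompts a model with the
    current program and asks for a modified version. Here a blind
    perturbation of a constant plays that role."""
    return {"threshold": program["threshold"] + random.uniform(-1, 1)}

def evaluate(program):
    """The evaluation function: must automatically score every candidate.
    Toy objective: closeness of the constant to an optimum of 7.0."""
    return -abs(program["threshold"] - 7.0)

def evolve(pop_size=20, generations=50):
    population = [{"threshold": random.uniform(0, 10)} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        survivors = population[: pop_size // 4]
        # Every new candidate below costs one "LLM call" PLUS one full
        # run of the evaluation function.
        population = survivors + [llm_mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=evaluate)
```

Even this toy burns `pop_size * generations` (here 1,000) LLM calls and evaluations to tune one number, which is the compute-cost point above in miniature.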

[–] [email protected] 0 points 12 hours ago

Imagine if it was trying to build a feature for a client who needs to look at a demo every time.

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago)

zbyte64 gave a great answer. I visualize it like this:

Writing software that does a thing correctly within well defined time and space constraints is nothing like climbing a smooth gradient to a cozy global maximum.

On a good day, it's like hopping on a pogo stick around a spiky, discontinuous, weirdly-connected n-dimensional manifold filled with landmines (for large values of n).

The landmines don't just explode. Sometimes they have unpredictable comedic effects, such as ruining your weekend two months from now.

Evolution is simply the wrong tool for the job.

[–] [email protected] 0 points 1 day ago (1 children)

Talking about Alpha Evolve https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/ ?

First, Microsoft isn't using this yet, but even if they were, it doesn't work in this context. What Google did was write a fitness function to tune the generative process. Why not use some rubric that scores the code as our fitness function? Because the function needs to be continuous for this to work well, with no sudden cliffs. They also didn't address how this would work in a multi-objective space; the technique doesn't let the LLM make reasonable trade-offs between, say, complexity and speed.

[–] [email protected] 0 points 1 day ago

I forgot about AlphaEvolve; with all the flashy titles about it, I figured it wasn't a big deal. I was talking more about the low-level stuff, like "AI learns to play Mario/walk", but I imagine it follows the same logic the other comment describes.

[–] [email protected] 0 points 1 day ago (1 children)

Ah yes the typical workflow for LLM generated changes:

  1. LLM produces nonsense at the behest of employee A.
  2. Employee B leaves a bunch of edits and suggestions to hammer it into something that almost kind of makes sense in a soul-sucking error prone process that takes twice as long as just writing the dang code.
  3. Code submitted!
  4. Employee A gets promoted.
[–] [email protected] 0 points 1 day ago (1 children)

I just looked at the first PR out of curiosity, and wow...

this isn't integrated with tests

That's the part that surprised me the most. It failed the existing automation. Even after being prompted to fix the failing tests, it proudly added a commit "fixing" them (they still didn't pass). Then the dev had to step in and explain why the test was failing and how to fix the code to make it pass, something Copilot should really be able to check itself. With this much handholding, all of this could have been done much faster and more cleanly without any AI involvement at all.

[–] [email protected] 0 points 1 day ago

The point is to get open source maintainers to further train their model, since they've already scraped all our code. I wonder if this will become a larger trend among corporate-owned open source projects.