this post was submitted on 13 May 2025

TechTakes

[email protected] 0 points 1 week ago

How? It's just like googling stuff, but less annoying.

[email protected] 0 points 1 week ago

Before it started returning AI slop, Google used to return helpful results that answered questions without needing correction. So maybe that's true now, but only because the search results are the same AI slop as the AI itself.

For example, results on Stack Overflow generally include some discussion of why a solution addressed the issue, which provides extra context for why you might use it, or do something else instead. AI slop just returns an answer that may or may not be correct, presented as a solution without any context.

[email protected] 0 points 1 week ago

Stack Overflow tended to turn up highly specialised examples that wouldn't suit your application. It's easier to just ask an AI to write a simple loop for you whenever you forget a bit of syntax.

[email protected] 0 points 1 week ago

You've inadvertently pointed out the exact problem: LLM approaches can (unreliably) handle boilerplate and basic tasks but fail at anything more advanced, and by handling the basic stuff they give people false confidence, which leads to them submitting slop (that gets rejected) to open source projects. As the linked pivot-to-ai post explains, LLMs aren't even at the level of occasionally making decent open source contributions.
