this post was submitted on 17 Dec 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] [email protected] 0 points 6 days ago (2 children)

"despite the many people who have shown time and time and time again that it definitely does not do fine detail well and will often present shit that just 10000% was not in the source material, I still believe that it is right all the time and gives me perfectly clean code. it is them, not I, that are the rubes"

[–] [email protected] 0 points 6 days ago (1 children)

The problem with stuff like this is not knowing when you don't know. People who hadn't read the books SSC Scott was reviewing didn't know he had missed the points (or hadn't read the book at all) till people pointed it out in the comments. But the reviews stay up.

Anyway, this stuff always feels like a huge motte-and-bailey, where we go from 'it has some uses' to 'it has some uses if you are a domain expert who checks the output diligently' and back to 'some general use'.

[–] [email protected] 0 points 2 days ago (1 children)

A lot of the "I'm a senior engineer and it's useful" people seem to just assume that they're so fucking good that they'll obviously know when the machine lies to them, so it's fine. Which is, one, hubris, and two, why the fuck are you even using it then, if you already have to be omniscient to verify the output??

[–] [email protected] 0 points 1 day ago

"If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.

[–] [email protected] 0 points 6 days ago

Haha, I'm totally with you. I just personally know people who love it because they never learned how to use a search engine. And these generalist generative AIs are basically trained on the gobbled-up internet, while also generating so many dangerous mistakes; I've read enough horror stories.

I'm in science and I'm not interested in ChatGPT; I wouldn't trust it with a pancake recipe. Even if it were useful to me, I wouldn't trust the vendor lock-in or the enshittification that's gonna come after I get dependent on a tool in the cloud.

A local LLM on cheap or widely available hardware with reproducible input / output? Then I'm interested.