this post was submitted on 14 Aug 2024
0 points

TechTakes

1398 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Rather excellent post about LLMs being wrongness machines, lawyerbrain, and this fucking guy

Alt text: A stupid-looking picture of Sammy A, stylized like the poster for the movie "Her". The title is "hurr", with the byline "a Sam Altman duh story".

With the author's co-sign, I'm also including my two cents expounding on the cheque-checker ML. Read the article first!

addendum

The most consequential failure mode — that both the text (...) and the numeric (...) converge on the same value that happens to be wrong (...) — is vanishingly unlikely. Even if that does happen, it's still not the end of the world.
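To put rough numbers on "vanishingly unlikely" (these are made-up rates, purely for illustration, not from the article): if each of the two independent readers misreads, say, one cheque in a thousand, and a misread lands on one of roughly a hundred plausible wrong values, the both-wrong-and-agreeing case works out to:

```python
# Back-of-envelope sketch with assumed numbers, just to illustrate why
# two independent readers converging on the *same* wrong value is so rare.
p_misread = 1e-3        # assumed per-reader misread rate: 1 in 1,000
n_wrong_values = 100    # assumed pool of plausible wrong readings

# Both readers must misread independently AND land on the same value.
p_both_same_wrong = p_misread * p_misread * (1 / n_wrong_values)

print(p_both_same_wrong)  # on the order of 1e-8: one in a hundred million
```

Tweak the assumptions however you like; the product of two small independent probabilities stays tiny.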

I think it's extremely important that this is a kind of error even a human operator could conceivably make. It's not some unexplainable machine error; most likely the scribbles on that one cheque were just exceedingly illegible. We're not introducing a completely new, dangerous failure mode.
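The safety property here — two independent reads gating the transaction, with disagreement falling back to a human — can be sketched like this (function and field names are mine, not from any real banking pipeline):

```python
def reconcile(courtesy_cents, legal_cents):
    """Cross-check the two independently recognized cheque fields.

    courtesy_cents: amount read from the numeric box, in cents,
                    or None if the recognizer wasn't confident.
    legal_cents:    amount read from the written-out words, in cents,
                    or None likewise.

    Auto-accept only when both reads exist and agree; anything else
    goes to a human operator, just like an illegible cheque would.
    """
    if courtesy_cents is not None and courtesy_cents == legal_cents:
        return ("accept", courtesy_cents)
    return ("review", None)

print(reconcile(12000, 12000))  # ('accept', 12000)
print(reconcile(12000, 12500))  # ('review', None) -- readers disagree
print(reconcile(None, 12000))   # ('review', None) -- low-confidence read
```

The point is that the ML only ever narrows the pile a human must look at; it never invents an answer on its own.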

Compare that to, for example, using an LLM in lieu of a person in customer service. The failure mode there is that the system can manufacture things whole cloth and tell you to do something stupid and/or dangerous, like telling you to put glue on pizza. No human operator would ever do that, and even if one did, that would be a straight-up prosecutable crime with a clear person responsible. In the previous analogy, it would be a human operator knowingly entering fraudulent information from a cheque. But then again, there would be a human signature on the transaction and a person responsible.

So not only is a gigantic LLM matrix a terrible heuristic for most tasks — e.g. "how do I solve my customer's problem" — it introduces failure modes that are outlandish, essentially impossible for a human (or a specialised ML system), and that leave no chain of responsibility. It's a real stinky ball of bull.

top 1 comments
[email protected] 0 points 3 months ago

OT: Is there a standardised way of including alt text for pictures on Lemmy?