this post was submitted on 02 Apr 2024
There seems to be a misunderstanding of how LLMs and statistical modelling work. Neither can fully solve the accuracy problem, because they operate on a probability distribution and only find correlations in their input data. LLMs generate that probability distribution internally, without supervision (a "black box"). They're only as "smart" as the human-generated input data, and will always produce false positives and false negatives. This is unavoidable. There simply is no critical thought or intelligence whatsoever, only mimicry.
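To make the "probability distribution" point concrete, here is a minimal sketch of how an LLM picks its next token: raw scores (logits) are turned into probabilities via softmax, and the output is *sampled*, not decided. The vocabulary and logit values below are made up purely for illustration.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution summing to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits, for illustration only.
vocab = ["cat", "dog", "car"]
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)

# The model never "decides": it samples from the distribution, so the
# most likely token comes up often but is never guaranteed.
choice = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), choice)
```

Because the final step is a weighted random draw over correlations learned from human text, an occasional wrong-but-plausible token (a false positive) is built into the mechanism itself.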
I'm not saying LLMs won't shake up employment, find their niche, and make many jobs redundant, or that critical advances in general AI won't occur, just that LLMs simply can't replace human decision making or control, and letting them do so is a disaster waiting to happen. The best they can do is speed up certain tasks, but a human will always be needed to determine whether the results make real-world sense.
Feels like a bit of a loop back there. "It can only ever be as smart as human output, so we'll always need humans." To do what? Make the same mistakes? Maybe LLMs in their current form won't be the drop-in replacement, but they're a critical milestone and a sign of what's around the corner. So these concerns are still relevant.
Should have finished reading the comment:
You're right, but not in the way you think.
It's only a matter of time before these companies start trying to simulate human brains. We need state recognition of legal personhood for digital humans /before/ corporations start torturing them for profit.
This is why I invoked Moore's law earlier. People have already estimated how many petaflops or exaflops we'd need to simulate a brain's worth of neurons and a complete connectome. We don't currently have enough computing power, but if the exponential growth continues, we will get there.
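The Moore's-law argument above is easy to sanity-check with back-of-envelope arithmetic. Both numbers below are assumptions, not facts from the comment: published brain-simulation estimates span many orders of magnitude depending on fidelity, and the doubling period of compute is itself debated. The sketch just shows how the "we will get there" timeline follows from whichever figures you plug in.

```python
import math

# Both figures are illustrative assumptions, not established values.
current_flops = 1e18   # assumed: roughly an exascale machine
target_flops = 1e22    # assumed: one mid-range brain-simulation estimate
doubling_years = 2.0   # classic Moore's-law doubling period

# Number of doublings needed, then the years they would take.
doublings = math.log2(target_flops / current_flops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

With these particular numbers the gap closes in a few decades; pick a higher-fidelity target (say 1e25 flops) and the same formula pushes the date out by another 20 years, which is why the estimates in this debate vary so widely.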