Unless you just died or are about to, you can't really confidently make that statement.
There's no technical reason to think we won't in the next ~20-50 years. We may not, and there may turn out to be a technical reason why we can't. But the big technical hurdles used to be the sheer amount of compute needed and the fact that computers couldn't handle fuzzy pattern matching. Modern AI has effectively solved the pattern-matching problem, and current large models like ChatGPT model more "neurons" than are in the human brain, let alone the power that will be available to them in 30 years.
I don't think that's true. Parameter counts are more akin to neural connections, and the human brain has something like 100 trillion connections.
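Back-of-the-envelope (layer sizes below are made up, just to make the arithmetic concrete): in a dense layer there's one weight per input-output connection, so parameter counts track connections, not neurons.

```python
# Rough sketch: parameters count connections, not neurons.
# Layer sizes are hypothetical, chosen only to make the arithmetic concrete.

def dense_layer_params(n_in: int, n_out: int) -> int:
    # One weight per input-output connection, plus one bias per output neuron.
    return n_in * n_out + n_out

# A toy stack of three 10,000-neuron layers:
layers = [10_000, 10_000, 10_000]
total = sum(dense_layer_params(a, b) for a, b in zip(layers, layers[1:]))
print(f"{sum(layers):,} neurons, {total:,} parameters")
# -> 30,000 neurons, 200,020,000 parameters
```

Thirty thousand "neurons" already gives you two hundred million parameters, which is why a trillion-parameter model is still nowhere near a hundred-trillion-connection brain.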
Was it? I thought it was always that we haven't quite figured out what thinking really is
I mean, no, not really. We know what thinking is. It's neurons firing in your brain in varying patterns.
What we don't know is the exact wiring of those neurons in our brain. So that's the current challenge.
But previously, we couldn't even effectively simulate neurons firing in a brain. AI algorithms are called that because they can effectively simulate the way that neurons fire (just using silicon), and that makes them really good at all the fuzzy pattern-matching problems that computers used to be really bad at.
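For the record, the "simulated neuron" here is nothing exotic. Loosely, it's just a weighted sum pushed through a nonlinearity; a minimal sketch, with made-up inputs and weights:

```python
import math

# A crude artificial "neuron": weighted sum of inputs, squashed into a
# 0-1 "firing rate" by a sigmoid. Inputs and weights are arbitrary examples.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))  # ~0.33
```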
So now the challenge is figuring out the wiring of our brains, and/or figuring out a way of creating intelligence that doesn't use the wiring of our brains. Both are entirely possible now that we can experiment with, build, and combine simulated neurons at roughly the same scale as the human brain.
Aren't you just saying the same thing? We know it has something to do with the neurons but haven't figured out exactly how
The distinction is that it's not 'something to do with neurons', it's 'neurons firing and signalling each other'.
Like, we know the exact mechanism by which thinking happens, we just don't know the precise wiring pattern necessary to recreate the way that we think in particular.
And previously, we couldn't effectively simulate that mechanism with computer chips, now we can.
Other than that nobody has any idea how to go about it? The things called "AI" today are not precursors to AGI. The search for strong AI is still nowhere close to any breakthroughs.
Assuming that the path to AGI involves something akin to all the intelligence we see in nature (i.e. brains and neurons), then modern AI algorithms' ability to simulate neurons using silicon and math is inarguably and objectively a precursor.
Machine learning, rebranded "AI" with the LLM boom, does not simulate intelligence. It integrates feedback loops, which is kind of like learning, and it uses a network of nodes which kind of look like neurons if you squint from a distance. These networks have been around for many decades (I've built a bunch myself in college), and at their core they're just parameterized nonlinear functions with a lot of knobs to tune. Current technology allows very large networks and networks of networks, but it's still not in any way similar to brains.
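To make "just a parameterized function" concrete, here's a two-layer toy network written as a plain nested function; all shapes and values are invented purely for illustration:

```python
import numpy as np

# A tiny two-layer network as a plain nested function: matrices of
# parameters and an elementwise nonlinearity, nothing more.
# Sizes and random values are arbitrary, for illustration only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def network(x):
    hidden = np.tanh(W1 @ x + b1)  # layer 1: affine map, then squash
    return W2 @ hidden + b2        # layer 2: another affine map

print(network(np.array([1.0, -0.5, 2.0])))
```

Everything a "deep learning" framework does is stacking and tuning functions like this, just at enormous scale.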
There is separate research into simulating neurons and brains, but that is separate from machine learning.
Also, we don't actually understand how our brains work at the level where we could copy them. We understand some things and have educated guesses about others, but overall it's still pretty much a mystery.
There's no technical reason to think we will in the next ~20-50 years, either.
There are plenty of economic reasons to think we will, as long as it's technically possible.
there's plenty of reason to believe that, whether we have it or not, some billionaire asshole is going to force you to believe in and respect his corporate AI as if it's sentient (while simultaneously treating it like slave labor)