this post was submitted on 16 Nov 2024

Technology


A tech news sub for communists

[–] [email protected] 1 points 5 hours ago

“Intelligence” is not the same as consciousness. We don’t know what consciousness is and therefore cannot create it in something else. We can’t even reliably recognise it in anything else, we only know other humans have consciousness cause we ourselves have it.

It's true that intelligence and consciousness aren't the same thing. However, I disagree that we can't create consciousness in something else without understanding it. Ultimately, consciousness arises from patterns expressed in the firings of neurons within the brain. It's a byproduct of the physical events occurring within our neural architecture. Therefore, if we create a neural network that mimics our brain and exhibits the same types of patterns, then it stands to reason that it would also exhibit consciousness.

I think there are several paths available here. One is to simulate the brain in a virtual environment, which would be an extension of the work being done by the OpenWorm project. You build a sufficiently detailed physical simulation, which is ultimately a question of having enough computing power.
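To give a sense of what this kind of simulation involves at the smallest scale, here's a minimal sketch of a network of leaky integrate-and-fire neurons stepped forward in time. This is a toy illustration only, not OpenWorm's actual model; all parameters and weights are made up for the example:

```python
import numpy as np

# Toy sketch of a physical brain simulation: a small network of
# leaky integrate-and-fire neurons stepped forward in discrete time.
# All parameters are illustrative, not tuned to any real organism.

rng = np.random.default_rng(0)
n = 50                           # number of neurons
dt = 1.0                         # timestep (ms)
tau = 20.0                       # membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0     # spike threshold and reset potential
w = rng.normal(0, 0.2, (n, n))   # random synaptic weight matrix

v = np.zeros(n)                  # membrane potentials
spike_count = 0
for step in range(1000):
    i_ext = rng.uniform(0.5, 1.5, n)     # noisy external input current
    spikes = v >= v_thresh               # which neurons fire this step
    spike_count += int(spikes.sum())
    v[spikes] = v_reset                  # reset neurons that fired
    syn = w @ spikes.astype(float)       # synaptic input from spikes
    v += dt / tau * (-v + i_ext + syn)   # leaky integration step

print(f"{spike_count} spikes across {n} neurons in 1000 steps")
```

Scaling this from 50 point neurons to a biophysically detailed whole brain is exactly why the simulation path is gated on raw computing power.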

Another approach is to try to understand the algorithms within the brain: to learn how these patterns form and how the brain is structured, and then to implement those algorithms. This is the approach Jeff Hawkins has been pursuing, and he wrote a good book on the subject. I'm personally a fan of this approach because it posits a theory of how and why different brain regions work, then compares the functioning of the artificial implementation with its biological analogue. If both exhibit similar behaviors, then we can say they implement the same algorithm.
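The flavor of this approach can be sketched with a deliberately simple example: an online sequence memory that learns transitions and predicts the next input. This is far cruder than Hawkins' actual models, which centre on prediction in cortical circuits, but it shows the "implement the algorithm, then compare behavior" idea:

```python
from collections import defaultdict

# Toy sketch of the algorithmic approach: an online sequence memory
# that learns symbol transitions and predicts the next input.
# Loosely inspired by (and far simpler than) predictive models of cortex.

class SequenceMemory:
    def __init__(self):
        # counts[prev][nxt] = how often nxt followed prev
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, prev, nxt):
        self.counts[prev][nxt] += 1      # strengthen the seen transition

    def predict(self, prev):
        nxts = self.counts[prev]
        return max(nxts, key=nxts.get) if nxts else None

mem = SequenceMemory()
pattern = "ABCD" * 25                    # a repeating sequence to learn
correct = 0
for a, b in zip(pattern, pattern[1:]):
    if mem.predict(a) == b:              # predict before learning
        correct += 1
    mem.observe(a, b)

print(f"correct predictions: {correct} / {len(pattern) - 1}")
```

After one pass through the repeating pattern, the memory predicts every subsequent transition correctly. The point of the approach is the comparison step: if the artificial system and the biological one make the same predictions and the same kinds of errors, that's evidence they implement the same algorithm.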

“AI” is a fad. Anyone who has played around with the AI models knows they aren’t actually thinking, but collating and systemising information.

The current large language model approach is indeed a fad, but it's not the totality of AI research currently happening. It's just getting a lot of attention because it looks superficially impressive.

We simply cannot make human brains out of computers.

There is zero basis for this assertion. The whole point here is that computing power does not develop in a linear fashion. We don't know what will be possible in a decade, let alone in a century. However, given the rate of progress over the past half-century, it's clear that huge leaps are possible.

Also worth noting that we don't need an equivalent of the entire human brain. Much of the brain deals with things like regulating the body and maintaining homeostasis. Furthermore, it turns out that even a small portion of the brain can still exhibit the properties we care about: https://www.rifters.com/crawl/?p=6116

At the end of the day, there is absolutely nothing magical about the human brain. It's a biological computer that evolved through natural selection. There's no reason to think that what it's doing cannot be reverse engineered and implemented on a different substrate.

The key point I'm making is that while timelines of centuries or even millennia might seem long from a human standpoint, they are blinks of an eye from a cosmic point of view.