this post was submitted on 29 Jan 2024
262 points (100.0% liked)

Technology

[–] [email protected] 4 points 9 months ago (2 children)

Anything a human can be trained to do, a neural network can be trained to do.

Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today's neural nets as a space shuttle is to a paper aeroplane.

[–] [email protected] 2 points 9 months ago (1 children)

https://www.youtube.com/watch?v=wXxrmussq4E

Have you not been paying attention to robotics recently? Opening doors is a solved problem with consumer grade hardware and software at this point.

[–] [email protected] 1 points 9 months ago

I wouldn't say $74k is consumer grade, but Spot is very cool. I doubt it's purely a neural net, though; there's probably a fair bit of classical control at work.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

Try getting them to even open a door

For now there's "AI vs. Stairs"; you may need to wait for a future video for "AI vs. Doors" 🤷

BTW, that is a rudimentary neural network.

[–] [email protected] 2 points 9 months ago (1 children)

I've seen a million such demos, but simulations like these are nothing like the real world. Moravec's paradox will make neural nets look like toddlers for a long time to come yet.

[–] [email protected] 1 points 9 months ago

Well, that particular demo is more of a cockroach than a toddler; the network used seems to have fewer than a million weights.
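For a sense of scale (the layer sizes below are made up, purely for illustration): a fully-connected policy with a 32-dimensional observation, two hidden layers of 64 units, and 8 motor outputs has only a few thousand weights, nowhere near a million.

```python
# Parameter count of a hypothetical small locomotion policy:
# 32 inputs -> 64 -> 64 -> 8 outputs, fully connected with biases.
layers = [32, 64, 64, 8]

params = sum(
    n_in * n_out + n_out          # weight matrix + bias vector
    for n_in, n_out in zip(layers, layers[1:])
)

print(params)  # 6792 — well under a million
```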

Moravec's paradox holds true on two fronts:

  1. Computing resources required
  2. Lack of formal description of a behavior

But keep in mind that was in 1988, about 20 years before the first 1024-core, multi-TFLOP GPU was designed, and that by training an NN we're brute-forcing away the lack of a formal description of the algorithm.
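That "brute-forcing" is literal: you never hand the network a formal description of the behavior, just input/output examples, and gradient descent finds the rest. A toy sketch with XOR (layer sizes and hyperparameters are arbitrary choices, not anything canonical):

```python
import numpy as np

# We never write rules for XOR; we only show the net examples
# and let gradient descent brute-force the mapping.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 -> 8 -> 1 network, randomly initialized
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(50_000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output

    d_out = (out - y) * out * (1 - out)  # grad of squared error at output
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # backprop through tanh

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

preds = (out > 0.5).astype(int)
print(preds.ravel())  # should match the XOR targets [0, 1, 1, 0]
```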

We're now looking toward neuromorphic hardware on the trillion-"core" scale, so computing resources will soon become a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler... until you copy the first trained NN to an identical body and re-training costs drop to O(0), which is much less than training even a million toddlers at once.
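And that copy step really is the trivial part: once one network is trained, cloning it to identical bodies is just a memory copy, no matter how expensive the original training run was. A toy sketch (the `trained_policy` numbers are made-up stand-ins for real learned weights):

```python
import copy

# Stand-in for an expensively trained policy: these weights could have
# taken GPU-years to learn, but once trained they're just numbers.
trained_policy = {"W1": [[0.5, -1.2], [0.3, 0.8]], "b1": [0.1, -0.4]}

# "Re-training" the next thousand bodies is a copy, not a training run.
bodies = [copy.deepcopy(trained_policy) for _ in range(1000)]

# Every body behaves identically to the original.
print(len(bodies), all(b == trained_policy for b in bodies))
```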