I've seen a million such demos, but simulations like these are nothing like the real world. Moravec's paradox will make neural nets look like toddlers for a long time to come.
Well, that particular demo is more of a cockroach than a toddler; the neural network used doesn't seem to have even a million weights.
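Back-of-the-envelope, just to show how small that is (a minimal sketch assuming a typical locomotion-policy MLP; the layer sizes here are my guess, not the demo's actual network):

```python
import torch.nn as nn

# Hypothetical policy: ~48-dim proprioceptive observation in,
# 12 joint torques out, two hidden layers of 256 units.
policy = nn.Sequential(
    nn.Linear(48, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 12),
)

n_weights = sum(p.numel() for p in policy.parameters())
print(n_weights)  # 81420 -- an order of magnitude under a million
```

Even a network several times this size would still fit the "cockroach" budget comfortably.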
Moravec's paradox holds true because of two fronts:

- the sheer amount of computation that sensorimotor skills turn out to require, and
- the lack of a formal description of how those skills are actually performed.
But keep in mind that paradox was formulated in 1988, about 20 years before the first 1024-core, multi-TFLOP GPU was designed, and that by training a NN we're brute-forcing our way past the lack of a formal description of the algorithm.
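To illustrate that brute-forcing point with a toy sketch (a 2-link arm stands in for the sensorimotor task; everything here is an assumption for illustration, not from any particular demo): we never write down the kinematics, we just let gradient descent recover the mapping from samples.

```python
import math
import torch
import torch.nn as nn

def arm_endpoint(theta):
    """Ground-truth forward kinematics of a 2-link arm (unit-length links).
    We pretend this formula is unknown and only ever sample from it."""
    x = torch.cos(theta[:, 0]) + torch.cos(theta[:, 0] + theta[:, 1])
    y = torch.sin(theta[:, 0]) + torch.sin(theta[:, 0] + theta[:, 1])
    return torch.stack([x, y], dim=1)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    theta = torch.rand(256, 2) * 2 * math.pi          # random joint angles
    loss = nn.functional.mse_loss(net(theta), arm_endpoint(theta))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The net now approximates the mapping without anyone deriving it formally.
```

That's the whole trick: the "formal description" front gets paid for in FLOPs instead of in mathematics.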
We're now looking towards neuromorphic hardware on the trillion-"core" scale, so computing resources will soon become a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler... right up until you copy the first trained NN to an identical body and re-training costs drop to O(0), which is much less than even training a million toddlers at once.
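And the "copy to an identical body" step really is just a weight copy; a minimal sketch, with made-up layer sizes and filenames:

```python
import copy
import torch
import torch.nn as nn

trained_policy = nn.Sequential(nn.Linear(48, 256), nn.Tanh(), nn.Linear(256, 12))
# ... imagine the expensive training run happening here ...

# Cloning onto an identical body in-process:
second_body = copy.deepcopy(trained_policy)   # costs a memcpy, not a re-training run

# Or across machines: serialize the weights and load them elsewhere.
torch.save(trained_policy.state_dict(), "policy.pt")
second_body.load_state_dict(torch.load("policy.pt"))
```

Toddlers, notoriously, do not support `load_state_dict`.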