this post was submitted on 01 Jun 2024
500 points (97.9% liked)
Technology
If this works, it's noteworthy. I don't know if similar results have been achieved before because I don't follow developments that closely, but I expect biological computing to attract a lot more attention in the near-to-mid-term future. Given its energy efficiency and the increasingly tight environmental constraints we're under, I foresee it eventually eclipsing silicon-based computing.
They sneak that in there as if it's just a cool little fact, but this should be the real headline. I can't believe they just left it at that. Deep learning cannot be the future of AI, because it doesn't facilitate continuous learning. "Active inference" is a term that will probably be thrown around a lot more in the coming months and years, and as all kinds of living things around us demonstrate, wetware architectures are well suited to instantiating agents that do active inference.
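For anyone unfamiliar with the term, here's a toy sketch of the idea in Python (heavily simplified, every name and number below is invented for illustration; not a faithful implementation of the free-energy framework): the agent keeps a running belief about a hidden state, updates it whenever it gets an observation, and picks the action it predicts will bring the world closest to what it prefers. The key point is that the updating never stops, which is exactly what the usual train-then-freeze deep learning workflow lacks.

```python
import random

# Toy "active inference" loop (illustrative only; not a faithful
# implementation of the free-energy framework). The agent keeps a
# belief about a hidden scalar state, updates that belief from each
# observation (perception), and picks the action it predicts will
# bring the next observation closest to its preferred value (action).
# Learning runs continuously; there is no separate training phase.

class ToyActiveInferenceAgent:
    def __init__(self, preference, learning_rate=0.3):
        self.belief = 0.0            # current estimate of the hidden state
        self.preference = preference # the observation the agent "wants"
        self.learning_rate = learning_rate

    def perceive(self, observation):
        # Perception: move the belief toward the observation,
        # shrinking prediction error.
        error = observation - self.belief
        self.belief += self.learning_rate * error
        return error

    def act(self, candidate_actions):
        # Action: choose the action whose predicted outcome
        # (current belief plus the action's effect) lands closest
        # to the preferred observation.
        return min(candidate_actions,
                   key=lambda a: abs((self.belief + a) - self.preference))

# Tiny demo: the agent steers a drifting hidden state toward its preference.
agent = ToyActiveInferenceAgent(preference=5.0)
hidden_state = 0.0
for step in range(15):
    observation = hidden_state + random.gauss(0, 0.2)   # noisy sensing
    error = agent.perceive(observation)
    action = agent.act([-1.0, -0.5, 0.0, 0.5, 1.0])
    hidden_state += action + random.uniform(-0.2, 0.2)  # world responds
    print(f"step {step:2d}  belief={agent.belief:5.2f}  "
          f"error={error:+.2f}  action={action:+.1f}")
```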
tbh this research has been ongoing for a while. this guy has been working on this problem for years in his homelab. it's also well understood that this could be a step toward much better energy efficiency.
this definitely doesn’t spell the end of digital electronics. at the end of the day, we’re still going to want light switches, and it’s not practical to have a butter-spreading robot that can experience an existential crisis. neural networks, both organic and artificial, perform more or less the same function: given some input, predict an output and try to learn from the outcome. the neat part is that when you pile up a trillion of them, you get a being that can efficiently adapt to scenarios it’s not familiar with.
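to make that "predict, then learn from the outcome" loop concrete, here's a minimal single-neuron sketch in python (toy example, nothing to do with the platform in the article):

```python
import math
import random

# A single artificial neuron running the loop described above:
# take an input, predict an output, compare against the actual
# outcome, and nudge the weights to do better next time.

def predict(weights, bias, inputs):
    # Weighted sum squashed to (0, 1) with a sigmoid.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def learn(weights, bias, inputs, target, lr=0.5):
    # One gradient step on the squared error for this example.
    p = predict(weights, bias, inputs)
    grad = (p - target) * p * (1.0 - p)   # d(error)/d(pre-activation)
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * grad
    return new_weights, new_bias

# Teach the neuron the AND function from repeated outcomes.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0
for _ in range(5000):
    inputs, target = random.choice(data)
    weights, bias = learn(weights, bias, inputs, target)

for inputs, target in data:
    print(inputs, "->", round(predict(weights, bias, inputs), 2), "expected", target)
```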
you’ll notice they’re not advertising any experimental results with regard to prediction benchmarks. that’s because 1) this actually isn’t large-scale enough to compete with state-of-the-art ANNs, 2) the relatively low resolution (16 bit) means inputs and outputs will be simple, and 3) this is more of a SaaS product than an introduction to organic computing as a concept.
it looks like a neat API if you want to start messing with these concepts without having to build a lab.
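the article doesn't spell out the interface, so here's a purely hypothetical sketch of what a remote wetware API might look like (every url, endpoint and field name below is invented for illustration): send a stimulation pattern to the cultured network, read back the activity, adjust, repeat.

```python
import requests

# Hypothetical client for a remote biological-computing service.
# None of these URLs, endpoints or field names are real; they only
# illustrate the "stimulate, read back activity" loop you'd expect
# from a SaaS-style wetware API.

API_BASE = "https://example-wetware-provider.test/api/v1"  # made up
API_KEY = "YOUR_API_KEY"                                    # made up

def stimulate_and_read(electrode_pattern, duration_ms=100):
    """Send a stimulation pattern and return the recorded spike counts."""
    response = requests.post(
        f"{API_BASE}/experiments/stimulate",       # invented endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "pattern": electrode_pattern,          # per-electrode amplitudes
            "duration_ms": duration_ms,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["spike_counts"]         # invented response field

# Toy closed loop: stimulate, observe, tweak the pattern, repeat.
pattern = [0.0] * 16                                # say, 16 electrodes
for trial in range(10):
    spikes = stimulate_and_read(pattern)
    # Crude adjustment rule for illustration: stimulate quiet electrodes more.
    pattern = [p + (0.1 if s == 0 else -0.05) for p, s in zip(pattern, spikes)]
    print(f"trial {trial}: total spikes = {sum(spikes)}")
```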
It's been in development for a while: https://ieeexplore.ieee.org/abstract/document/1396377
Even before the above paper, I recall efforts to connect (rat) brains to computers in the late 90s/early 2000s. https://link.springer.com/article/10.1023/A:1012407611130