Still very much unconfirmed, but if a new and reasonably good LLM really was trained without Nvidia chips, it would signal that alternatives exist, at least for smaller models. Hopefully that would also drive down the price of the hardware individuals and small groups need to train or run such models, allowing a lot more specialization of small LLMs for all sorts of tasks.
It's worth noting that there are also distributed efforts like Petals, which avoid the need for a big data centre to train and run models: https://github.com/bigscience-workshop/petals
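For anyone curious, here's a rough sketch of what client-side inference over the Petals swarm looks like, adapted from the project's README. The model name and API details are assumptions and may have changed since, so treat it as illustrative rather than definitive:

```python
# Rough sketch of running inference over the public Petals swarm.
# Based on the Petals README; exact model names and API may differ by version.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model served on the public swarm

# The tokenizer runs locally; the model's layers are served by volunteers
# across the network, so no single big GPU or data centre is required.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

The same library also supports fine-tuning over the swarm, which is what makes it interesting for small groups without their own hardware.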