this post was submitted on 24 Jan 2025

technology

[–] KnilAdlez@hexbear.net 5 points 5 days ago* (last edited 4 days ago) (1 children)

Imagine that every idea is a point on a graph: similar ideas sit close together, and very different ideas sit far apart. An LLM is a predictive model over that space, much like a line of best fit is a predictive model for a simple scatter plot. So in a sense, the model is predicting the information; it's not stored directly or searched for.
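As a toy sketch of the "points on a graph" idea (these 3-D vectors and the "idea" names are made up for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # close to 1.0 = pointing the same way (similar ideas), near 0 = unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# toy 3-D "idea points" (invented numbers, just to show distance)
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
tax_form = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, kitten))    # high: nearby points, similar ideas
print(cosine_similarity(cat, tax_form))  # much lower: distant points
```

The model never looks these points up in a database; it learns a function that maps between regions of this space.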

A locally running LLM is just one of these models, shrunk down and executed on your own computer.

Edit: removed a point about embeddings that wasn't fully accurate

[–] redtea@lemmygrad.ml 3 points 5 days ago (1 children)

Thanks. That helps me understand things better. I'm guessing you need all the data initially to set up the graph (model), and after that you only need the model?

[–] KnilAdlez@hexbear.net 4 points 4 days ago (1 children)

Yep, exactly. Every LLM has a 'cutoff date', which is the last day on which the data used to train the model was updated.

[–] redtea@lemmygrad.ml 3 points 4 days ago (1 children)

How big are the files for the finished model, do you know?

[–] KnilAdlez@hexbear.net 2 points 4 days ago* (last edited 4 days ago) (1 children)

That's a great question! Models come in different sizes: one large 'foundation' model is trained, and that is used to train smaller models. US companies generally do not release their foundation models (I think), but Meta, Microsoft, DeepSeek, and a few others release smaller ones, available on ollama.com. A rule of thumb is that 1 billion parameters takes about 1 gigabyte (at 8-bit precision; 16-bit weights roughly double that). Foundation models run to hundreds of billions, if not trillions, of parameters, but you can get a good model at 7-8 billion parameters, small enough to run on a gaming GPU.
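The rule of thumb above is easy to turn into a back-of-envelope calculator (a sketch assuming one byte per parameter at 8-bit quantization, ignoring runtime overhead like the KV cache):

```python
def model_size_gb(params_billions, bytes_per_param=1):
    # ~1 GB per billion parameters at 8-bit quantization;
    # 16-bit (2 bytes/param) doubles it, 4-bit roughly halves it
    return params_billions * bytes_per_param

print(model_size_gb(7))        # ~7 GB: an 8-bit 7B model fits on a gaming GPU
print(model_size_gb(7, 2))     # ~14 GB: the same model at 16-bit precision
print(model_size_gb(400, 2))   # ~800 GB: a foundation-scale model at 16-bit
```

This is why the 7-8 billion parameter releases are the sweet spot for local use: quantized, they fit in the 8-16 GB of VRAM a consumer card has.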