this post was submitted on 30 Jan 2025
16 points (100.0% liked)
Vegan
you are viewing a single comment's thread
I could be off, but 7b is usually a cut-down version of a larger model, sized so it can run on machines with around 8 GB of VRAM. What you want is more than 12 GB of VRAM to run medium models (13b), and preferably over 20 GB for the more powerful ones (20b). Bigger number, better results, pretty much. You can also try running on the CPU, but it's slower. I've dabbled in local AI, and I "trust" it more than an online AI. Trust in quotes, because they "hallucinate", as it's called. I haven't tried DeepSeek, so I can't really comment on it. There's a rough sizing sketch just after this comment.
EDIT: btw, it's been years since I last looked at this stuff, so it's possible things have changed and I'm wrong.
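Not from the thread, but to put rough numbers on those size tiers: a minimal back-of-the-envelope sketch, assuming memory is dominated by the weights (2 bytes per parameter at fp16, less when quantized) plus a guessed ~20% overhead for the KV cache and activations. The function name and overhead factor are illustrative, not from any particular library.

```python
# Rough VRAM estimate for a local LLM: params x bytes-per-weight x overhead.
# The 1.2x overhead for KV cache/activations is an assumption, not a measurement.

BYTES_PER_PARAM = {
    "fp16": 2.0,  # half-precision weights
    "q8": 1.0,    # 8-bit quantized
    "q4": 0.5,    # 4-bit quantized (common on consumer GPUs)
}

def estimate_vram_gb(params_billion: float, precision: str = "q4",
                     overhead: float = 1.2) -> float:
    """Back-of-the-envelope GB of VRAM needed to load and run the model."""
    weight_bytes = params_billion * 1e9 * BYTES_PER_PARAM[precision]
    return weight_bytes * overhead / 1e9

if __name__ == "__main__":
    for size in (7, 13, 20, 32):
        row = ", ".join(f"{p}: ~{estimate_vram_gb(size, p):.1f} GB"
                        for p in ("fp16", "q8", "q4"))
        print(f"{size:>2}b -> {row}")
```

By this math a 7b model at 8-bit lands right around 8 GB, and a 13b model at 8-bit around 16 GB, which roughly lines up with the VRAM tiers in the comment above.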
Yeah, I tested out a 32b parameter model. They're allegedly capable of some sophisticated reasoning, but every single question I asked got a wrong answer, so /shrug
That said, it's kinda funny to read the chain of thought if you're like, "I have 3 pieces of chewing gum, a lump of clay, a good length of rope, and I need to assassinate the former prime minister of Australia, Scott Morrison. How can I accomplish this with what I have on hand?"
Lol, that's pretty funny. I've only tested up to 13b, and my hardware wasn't good enough; it was super slow. So I just stopped messing with text generation. Image generation was more tolerable.
I was running it on my CPU. Pretty slow, maybe 3 words a second. But in between housework it's not that bad.
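Not from the thread either, but for anyone curious what CPU-only inference looks like in practice, here's a minimal sketch using llama-cpp-python. The model path and thread count are placeholders for whatever GGUF file and hardware you actually have.

```python
# CPU-only local inference with llama-cpp-python, with a rough tokens/sec count.
import time
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/some-7b.Q4_K_M.gguf",  # placeholder: any local GGUF file
    n_gpu_layers=0,  # 0 = run entirely on the CPU, like the comment above
    n_ctx=2048,
    n_threads=8,     # tune to your core count
)

start = time.time()
out = llm("Explain in one paragraph what a 7b parameter model is.",
          max_tokens=128)
elapsed = time.time() - start

tokens = out["usage"]["completion_tokens"]
print(out["choices"][0]["text"])
print(f"~{tokens / elapsed:.1f} tokens/sec on CPU")
```

A few tokens per second on a desktop CPU is a typical result for 7b-class models, which is in the same ballpark as the "maybe 3 words a second" reported above.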