this post was submitted on 14 Feb 2024
481 points (97.4% liked)

Technology

[–] pennomi@lemmy.world 11 points 9 months ago (2 children)

Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.
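A minimal sketch of what "rolling your own" looks like: fetch a GGUF model file from HuggingFace and point Kobold.cpp at it (the repo and file names below are placeholders, not a specific recommendation):

```shell
# Download a quantised GGUF model from HuggingFace.
# Repo and file names are placeholders -- substitute any GGUF release you like.
huggingface-cli download SomeUser/SomeModel-GGUF somemodel.Q4_K_M.gguf --local-dir ./models

# Launch Kobold.cpp with the downloaded weights;
# it serves a local web UI (default port 5001).
python koboldcpp.py ./models/somemodel.Q4_K_M.gguf --contextsize 4096
```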

[–] TipRing@lemmy.world 8 points 9 months ago

Also, for an interface, I'd recommend KoboldAI Lite for writing or assistant use, and SillyTavern for chat/RP.

[–] DarkThoughts@kbin.social 4 points 9 months ago (1 children)

I tried oobabooga, and it basically always crashes when I try to generate anything, no matter which model I use. But honestly, as far as I can tell, all the good models require absurd amounts of VRAM, far more than consumer cards have, so you'd need at least a small GPU server farm to host them locally with any reliability. Unless, of course, you're fine with practically nonexistent context sizes.

[–] exu@feditown.com 4 points 9 months ago

You'll want to use a quantised model on your GPU. You could also use the CPU and offload some parts to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models are in the GGUF format.
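As a rough sketch of the partial-offload approach with llama.cpp (the model path is a placeholder; the layer count depends on your VRAM):

```shell
# Run a quantised GGUF model, splitting it between GPU and CPU.
# -ngl offloads that many transformer layers to GPU VRAM; the rest stay in system RAM.
# -c sets the context window; -p is the prompt. Model path is a placeholder.
./llama-cli -m ./models/somemodel.Q4_K_M.gguf -ngl 20 -c 4096 -p "Hello"
```

Raising `-ngl` until you run out of VRAM is the usual tuning approach; a Q4 quantisation roughly quarters the memory footprint compared to full 16-bit weights.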