this post was submitted on 16 Sep 2024
6 points (87.5% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

 

I just found https://www.arliai.com/, who offer LLM inference for quite cheap, without rate limits and with unlimited token generation. They have a no-logging policy and an OpenAI-compatible API.
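
(For anyone unfamiliar, "OpenAI-compatible" just means you can point the stock openai Python client at their endpoint instead of OpenAI's. The base URL and model name below are placeholders I haven't checked against their docs; it's only a sketch of the idea.)

```python
# Sketch only: the standard openai client pointed at a non-OpenAI endpoint.
# Base URL and model id are placeholders, not taken from ArliAI's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.arliai.com/v1",  # assumed endpoint path
    api_key="YOUR_ARLIAI_API_KEY",
)

resp = client.chat.completions.create(
    model="Mistral-Nemo-12B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```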

I've been using runpod.io, but that's a whole different kind of service: they sell compute, and customers have to build their own Docker images and run them in their cloud, billed by the hour/second.

Should I switch to ArliAI? Does anyone have experience with them? Or can you recommend another good inference service? I still refuse to pay $1,000 for a GPU and then also pay for electricity when I can use some $5/month cloud service instead; it'd take about 16 years ($1,000 / $5 per month = 200 months) before I reach the price of buying a decent GPU...

Edit: I saw that their $5 tier only includes models up to 12B parameters, so I'm not sure anymore. For larger models I'd have to pay close to what other inference services charge.

Edit 2: I've discarded the idea. A few 7B-parameter models plus a single 12B one is a bit too little to pay for. I can run those at home thanks to llama.cpp.
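
For reference, "doing that at home" boils down to something like the following with the llama-cpp-python bindings. The GGUF file name is a placeholder for whatever quant you download.

```python
# Rough sketch of local inference with the llama-cpp-python bindings.
# The GGUF path is a placeholder for whatever quant you actually download.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-nemo-12b-instruct-Q4_K_M.gguf",  # placeholder file
    n_ctx=4096,    # context window
    n_threads=8,   # CPU threads; tune to your machine
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF quant is."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```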

[–] tpWinthropeIII@lemmy.world 2 points 1 month ago (1 children)

I know people are using P40 and P100 GPUs. These are outdated but still work with some software stacks/applications. The P40, once very cheap for the amount of VRAM, is no longer as cheap as it was, probably because folks have been picking them up for inference.

I'm getting a lot done with an NVidia GTX 1080, which only has 8GB of VRAM. I can run a quant of Dolphin Mixtral 8x7B and it works well enough. It takes minutes to load, almost too long for me, but after that I get 3-5 TPS with an acceptable delay between questions.

I can even run Miqu quants at 2 or 3 bits. It's super smart even at these low quant levels.

Llama 3.1 8B runs great on this 8GB 1080 at Q4_K_M, and also at Q5_K_M or Q6_K_M. I believe I can run Gemma 9B at 8 bpw (Q8).
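
In case anyone wants to reproduce this kind of setup: with the llama-cpp-python bindings you can offload as many layers as fit into the 8GB and leave the rest on the CPU, roughly like below. The layer count and file name are example values, not tuned numbers from my machine.

```python
# Partial GPU offload: push as many layers as fit into the 8GB card and
# keep the rest on the CPU. Layer count and file name are example values.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-instruct-Q4_K_M.gguf",  # placeholder file
    n_gpu_layers=24,  # raise until you run out of VRAM
    n_ctx=4096,
)

out = llm("Q: What does Q4_K_M mean?\nA:", max_tokens=100)
print(out["choices"][0]["text"])
```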

[–] hendrik@palaver.p3x.de 1 points 1 month ago* (last edited 1 month ago) (1 children)

Hmmh, lately I use Mistral-Nemo, which is 12B parameters. Since I'm more of a programmer than a gamer, I didn't put a graphics card into my PC, and I believe it's too old to accommodate any recent one (older PCIe generation, only x8 ports). I'd have to replace everything, and then I might as well go for a Radeon RX 7900 XTX or something. That's $1,000(?) but has 24GB of VRAM.

I don't think buying an entire PC and then pairing it with an old GPU would make me happy. And thanks to llama.cpp I get about 2 tokens per second just on the CPU, so it'd have to be a considerable step up to be worth it. Last time I checked, even a P40 was $300+, and it's super old and unclear whether it'll continue to be supported in the major frameworks. I'm not sure. I still lean towards paying for cloud GPU compute.

Thanks for the numbers on your setup. That certainly helps me weigh my options. Maybe some of my friends have upgrades planned and will want to give me their older 8GB NVidia cards...

[–] tpWinthropeIII@lemmy.world 2 points 1 month ago (1 children)

I tried Mistral Nemo 12B Instruct this morning. It's actually quite good. I'd say it's close to Dolphin Mixtral 8x7B, which is a monster in size (about 45 or 50GB) and very smart. So I'd say Arli is a good deal: Mistral Nemo 12B for $4 or $5 per month, plus privacy, or so they claim.

If you don't mind logging for some questions, you can get access to very good, if not the best, models at lmsys.org at no monetary cost. Just go to the "Arena". That's where you contribute a blind evaluation by voting which of two answers is better. I often get models like GPT-4o, Claude 3.5 Sonnet by Anthropic, Google's best, etc., and at other times many good 70B models. You see two answers at once and vote for your favorite between the two. In return, you get "free" access.

Be careful with AMD GPUs, as they are not as well supported for local AI. Support is gaining ground, though. Some people are doing it, but it takes effort and hassle, from what I've read.

[–] hendrik@palaver.p3x.de 1 points 1 month ago

Thanks. I'll try lmsys, but ultimately I do care about privacy. Then again, I also just fool around.

Yeah, I know about AMD GPUs. Nvidia has quite a monopoly on AI, and since everyone uses their hardware and software frameworks, that's what's supported best, at least currently. My prediction is that's about to change, but their competitors haven't done a great job so far. Still, I've been annoyed with Nvidia's stupid Linux drivers for so long (I mean, that has also changed) that I'd like to give my money to someone else and swallow that pill, if I decide to do it at all.

Thanks for the info, I think I can do something with that. Mistral-Nemo is pretty awesome for its size: intelligent, can write prose and dialogue or answer questions, and it's completely uncensored out of the box...