[–] doodledup@lemmy.world 22 points 1 month ago (4 children)

Alexa and LLMs aren't fundamentally that different from each other. It's a slightly different architecture and, most importantly, a much larger network.

The problem with LLMs is that they require immense compute power.

I don't see how LLMs will get into households any time soon. It's not economical.

[–] admin@lemmy.my-box.dev 15 points 1 month ago

The problem with LLMs is that they require immense compute power.

To train. But you can run a relatively simple one like phi-3 on quite modest hardware.
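
For a sense of what that looks like in practice, here's a rough sketch using llama-cpp-python (the GGUF file name is just an example - point it at whichever quantized phi-3 build you actually downloaded):

```python
# Rough sketch: running a quantized phi-3 locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./Phi-3-mini-4k-instruct-q4.gguf",  # example file name, a few GB on disk
    n_ctx=4096,   # context window
    n_threads=4,  # runs on a modest CPU, just slower than on a GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Turn off the living room lights."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```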

[–] hedgehog@ttrpg.network 4 points 1 month ago (1 children)

I don't see how LLMs will get into households any time soon. It's not economical.

I can run an LLM on my phone, on my tablet, on my laptop, on my desktop, or on my server. Heck, I could run a small model on a Raspberry Pi 5 if I wanted. And none of those devices has a dedicated AI chip.

The problem with LLMs is that they require immense compute power.

Not really, particularly if you're talking about smaller models. Running an LLM on your GPU and sending it queries isn't going to use more energy than gaming on that same GPU for the same amount of time would.

[–] doodledup@lemmy.world 3 points 1 month ago (1 children)

I think when people talk about LLMs replacing Alexa, they mean the much more capable models with billions of parameters. The small models that a Raspberry Pi can run aren't really useful.

[–] hedgehog@ttrpg.network 5 points 1 month ago (1 children)

The models I’m talking about that a Pi 5 can run have billions of parameters, though. For example, Mistral 7B (here’s a guide to running it on the Pi 5) has roughly 7 billion parameters. By quantizing each parameter to 4 bits, it only takes up 3.5 GB of RAM, making it fit easily in the 8 GB model’s memory. If you have a GPU with 8+ GB of VRAM (most cards from the past few years have 8 GB or more - the 1070, 2060 Super, and 3050, and every better card in those generations, hit that mark), you have enough VRAM and more than enough speed to run Q4 versions of the 13B models (which have roughly 13 billion parameters), and if you have one with 24 GB of VRAM, like the 3090, then you can run Q4 versions of the 30B models.
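
The back-of-the-envelope math, assuming roughly half a byte per weight at Q4 and ignoring the extra overhead of the KV cache and activations, looks like this (a rough sketch, not exact figures):

```python
# Rough weights-only memory estimate for Q4-quantized models.
# KV cache and activations add some overhead on top of this.
def q4_size_gb(params_billion: float) -> float:
    bytes_per_param = 0.5  # 4 bits = half a byte per weight
    return params_billion * 1e9 * bytes_per_param / 1e9

print(q4_size_gb(7))   # Mistral 7B -> ~3.5 GB, fits the 8 GB Pi 5
print(q4_size_gb(13))  # 13B models -> ~6.5 GB, fits an 8 GB GPU
print(q4_size_gb(30))  # 30B models -> ~15 GB, fits a 24 GB card like the 3090
```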

Apple Silicon Macs can also competently run inference for these models, though for them the limiting factor is system RAM, not VRAM. And it’s not like you’ll need a Mac, as even Microsoft is investing in ARM CPUs with dedicated AI chips.

[–] doodledup@lemmy.world 2 points 1 month ago

Thanks for sharing that. I have a Raspberry Pi 4B lying around gathering dust. I might try this.

[–] Halcyon@discuss.tchncs.de 1 points 1 month ago

The immense computing power for AI is needed for training LLMs; far less is needed to run a pre-trained model on a local machine.

[–] helenslunch@feddit.nl -1 points 1 month ago (2 children)

The problem with LLMs is that they require immense compute power. I don't see how LLMs will get into households any time soon. It's not economical.

You realize the current systems run in the cloud?

[–] doodledup@lemmy.world 1 points 1 month ago

Well, yeah. You could slap Gemini onto Google Home today. You probably wouldn't even need a new device for that. The reason they don't do that is economics.

My point is that LLMs aren't replacing those devices. They're essentially the same thing; one is just a trimmed-down version of the other for economic reasons.