this post was submitted on 14 Feb 2024
[–] ikidd@lemmy.world 16 points 5 months ago (2 children)

Using the Voice Assist pipeline via the HASS cloud subscription works a heck of a lot better than running it locally. Locally it takes about 15 seconds to respond; via the Nabu Casa server it's about 1 second. I've considered dedicating a box to the containers it spins up for this, just to get a faster local response.
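A rough way to see where that time goes is to drive the pipeline over Home Assistant's WebSocket API and time the round trip. A minimal sketch, assuming the `assist_pipeline/run` command from the HA developer docs; the host and token here are placeholders:

```python
import asyncio
import json
import time

import websockets  # pip install websockets

HA_URL = "ws://homeassistant.local:8123/api/websocket"  # placeholder host
TOKEN = "LONG_LIVED_ACCESS_TOKEN"  # create one under your HA user profile

async def time_pipeline(text: str) -> None:
    async with websockets.connect(HA_URL) as ws:
        await ws.recv()  # server sends auth_required
        await ws.send(json.dumps({"type": "auth", "access_token": TOKEN}))
        await ws.recv()  # auth_ok

        start = time.perf_counter()
        await ws.send(json.dumps({
            "id": 1,
            "type": "assist_pipeline/run",
            "start_stage": "intent",   # feed text in, skipping speech-to-text
            "end_stage": "intent",     # stop before text-to-speech
            "input": {"text": text},
        }))
        # Stream pipeline events until the run reports it is finished.
        while True:
            msg = json.loads(await ws.recv())
            if msg.get("event", {}).get("type") == "run-end":
                break
        print(f"round trip: {time.perf_counter() - start:.2f}s")

asyncio.run(time_pipeline("turn on the kitchen lights"))
```

Running the same command against a local pipeline and a Nabu Casa-backed one should make the 15-second-versus-1-second gap visible directly.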

[–] Blackmist@feddit.uk 2 points 5 months ago (1 children)

What hardware is it running on that takes 15 seconds? I've not actually tried it myself as I've got a poor little RPi 3, and I don't want to scare it.

[–] ikidd@lemmy.world 2 points 5 months ago

The M5Stack ATOM Echo. The hardware is the same either way; the delay comes from which pipeline you select on the back end. You can run the Whisper stack locally, or on another box on your network, but I think you'd want a good GPU on it to offload the speech and language processing to. That's presumably what happens when you use the Nabu Casa pipeline.
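As a concrete example, here's a minimal sketch of the local Whisper leg using the `faster-whisper` library, where swapping `device` between `"cpu"` and `"cuda"` is exactly the GPU offload being described (the model size and audio file name are placeholders):

```python
from faster_whisper import WhisperModel  # pip install faster-whisper

# On CPU: device="cpu" with compute_type="int8" keeps memory modest but is slow.
# With a CUDA GPU: device="cuda" with compute_type="float16" is much faster.
model = WhisperModel("small", device="cpu", compute_type="int8")

# Transcribe a recorded voice command (placeholder file name).
segments, info = model.transcribe("command.wav", beam_size=5)
print(f"detected language: {info.language}")
for segment in segments:
    print(segment.text)
```

Timing that one call on CPU versus GPU gives a good proxy for most of the local pipeline's latency, since speech-to-text dominates it.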

[–] bitwolf@lemmy.one 1 points 5 months ago* (last edited 5 months ago) (1 children)

Do you think throwing a Coral TPU on there would help?

I saw it helps a ton with object detection in Frigate.
I was planning to do that on my Yellow once I can get the display thing that's pictured in the article.

[–] ikidd@lemmy.world 1 points 5 months ago

Idk if any LLMs are set up to run on anything except GPUs; it's an interesting question.
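For what it's worth, the Coral Edge TPU runs small, int8-quantized TensorFlow Lite models compiled for the accelerator, which is why it suits Frigate's object detection rather than LLM- or Whisper-sized networks. A minimal sketch of the kind of workload it does handle, using Google's `pycoral` library (model and image file names are placeholders):

```python
from pycoral.utils.edgetpu import make_interpreter  # pip install pycoral
from pycoral.adapters import common, classify
from PIL import Image

# Edge TPU models are small .tflite files compiled for the accelerator.
interpreter = make_interpreter("mobilenet_v2_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the frame to the input shape the model expects.
image = Image.open("frame.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)

interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=1):
    print(f"class {c.id}: score {c.score:.2f}")
```

A multi-gigabyte language model doesn't fit that mold, so a Coral likely wouldn't stand in for a GPU on the voice pipeline.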