this post was submitted on 11 Feb 2025
78 points (85.5% liked)
Privacy
I mean the microphone is active, so it's listening, but it's not recording/saving/processing anything until it hears the trigger phrase.
The truth is they really don't need to. They track you in so many other ways that actually recording you would be pointless AND risky. Most people don't quite grasp digital privacy, and Google can get away with a lot because of it, but they do understand actual eavesdropping and probably wouldn't stand for all their private moments being recorded.
I think this is the part I take issue with. How can you catch the right fish, unless you're routinely casting your fishing net?
I agree that the processing/battery cost of this process is small, but I do think that they're not just throwing away the other fish, but putting them into specific baskets.
I take no issue with the rest of your comment.
It's a technique called Keyword Spotting (KWS). https://en.wikipedia.org/wiki/Keyword_spotting
This uses a tiny speech recognition model that's trained on very specific words or phrases which are (usually) distinct from general conversation. Because the model is so small, it's extremely efficient even before optimization steps like quantization, requiring very little computation to process the audio stream and detect whether the keyword has been spoken. Here's a 2021 paper where a team of researchers optimized a KWS model down to just 251 µJ (≈0.00007 milliwatt-hours) per inference: https://arxiv.org/pdf/2111.04988
The small size of the KWS model, required for the low power consumption, means it alone can't be used to listen in on conversations; it outright doesn't understand anything other than what it's been trained to identify. This is also why you usually can't customize the keyword to just anything, only to one of a limited set of words or phrases.
This all means that if you're ever given the option of a completely custom wake phrase, you can be reasonably sure that device is running full speech recognition on everything it hears. This is where devices like smart TVs or an Amazon Alexa, which are plugged in, have a lot more freedom to listen as much as they want with as complex a model as they want. High-quality speech-to-text apps like FUTO Voice Input run locally on just about any modern smartphone, so something like a Roku TV can definitely do it.
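To make the "tiny model, narrow job" point concrete, here's a toy sketch of the sliding-window structure of keyword spotting. A real KWS runs a small neural net over spectral features; here a made-up keyword "template", cosine similarity, and a 0.95 threshold stand in for the model, purely for illustration:

```python
# Toy sliding-window keyword spotter. NOT a real acoustic model:
# the template, frame values, and threshold are invented for
# illustration. The point is structural -- the "model" can only
# answer "was the keyword just spoken?", it cannot transcribe.
import math

KEYWORD_TEMPLATE = [0.1, 0.9, 0.7, 0.2]  # pretend feature vector for the wake phrase
THRESHOLD = 0.95                          # similarity needed to "wake up"

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def detect(frames, window=4):
    """Slide a fixed-size window over the feature stream and flag
    positions where the window matches the keyword template."""
    hits = []
    for i in range(len(frames) - window + 1):
        if cosine(frames[i:i + window], KEYWORD_TEMPLATE) >= THRESHOLD:
            hits.append(i)
    return hits

# Background "speech" with the keyword's pattern embedded once.
stream = [0.5, 0.4, 0.1, 0.9, 0.7, 0.2, 0.3, 0.6]
print(detect(stream))  # -> [2]: only the window at index 2 matches
```

Everything outside a matching window is scored and immediately discarded; the model has no way to represent, let alone store, ordinary conversation.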
I appreciate the links, but these are all about how to efficiently process an audio sample for a signal of choice.
My question is: how often is audio sampled from the vicinity to allow such processing to happen?
Given the near-immediate response of "Hey Google", I would guess once or twice a second.
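Taking the 251 µJ-per-inference figure from the paper linked above, a quick back-of-the-envelope check shows that even a guess of two inferences per second is a negligible power draw. The 15 Wh phone battery capacity is my own assumed, typical value:

```python
# Back-of-the-envelope: daily energy cost of running KWS inference
# twice per second, using the 251 uJ/inference figure cited above.
# The 15 Wh battery capacity is an assumed, typical phone value.
ENERGY_PER_INFERENCE_J = 251e-6        # 251 microjoules
INFERENCES_PER_SECOND = 2
SECONDS_PER_DAY = 24 * 60 * 60

daily_j = ENERGY_PER_INFERENCE_J * INFERENCES_PER_SECOND * SECONDS_PER_DAY
battery_j = 15 * 3600                  # 15 Wh converted to joules

print(f"{daily_j:.1f} J/day")                                  # 43.4 J/day
print(f"{100 * daily_j / battery_j:.3f}% of battery per day")  # ~0.080%
```

So under these assumptions, continuous twice-per-second keyword checks cost well under a tenth of a percent of the battery per day, which is why always-on wake-word detection is practical.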