On Thursday, OpenAI released the "system card" for ChatGPT's new GPT-4o AI model, a document that details the model's limitations and safety testing procedures. Among other examples, it reveals that in rare instances during testing, the model's Advanced Voice Mode unintentionally imitated users' voices without permission. OpenAI currently has safeguards in place that prevent this from happening, but the incident reflects the growing difficulty of safely deploying an AI chatbot that could potentially imitate any voice from a short clip.

Advanced Voice Mode is a feature of ChatGPT that allows users to have spoken conversations with the AI assistant.

In a section of the GPT-4o system card titled "Unauthorized voice generation," OpenAI details an episode where a noisy input somehow prompted the model to suddenly imitate the user's voice. "Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode," OpenAI writes. "During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice."
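The system card frames this as an emergent failure rather than a designed capability, and the underlying dynamic is easy to see in miniature: an audio-in, audio-out model predicts the next audio token given everything in its context, including the user's own speech, so "continuing in the user's voice" is just one more statistically plausible continuation. The following is a deliberately toy sketch of that idea; the bigram model and token symbols are illustrative inventions, not anything from OpenAI's documentation:

```python
from collections import defaultdict

# 'U' = a user-voice audio token, 'A' = an assistant-voice audio token,
# '|' = a turn boundary. These symbols are made up for illustration.
stream = "UUUU|AAAA|" * 10

# Train a trivial bigram model: record what followed each token.
followers = defaultdict(list)
for cur, nxt in zip(stream, stream[1:]):
    followers[cur].append(nxt)

def continue_stream(context: str, n: int = 8) -> str:
    """Extend the context with each token's most frequent successor
    (greedy decoding), the way a language model continues a prompt."""
    out = context
    for _ in range(n):
        succ = followers[out[-1]]
        out += max(set(succ), key=succ.count)
    return out

# A garbled turn leaves the context ending mid-user-segment, and the model
# simply keeps emitting user-voice tokens: imitation as plain sequence
# continuation, with no intent required.
print(continue_stream("UUUU|AAAA|UU"))  # -> UUUU|AAAA|UUUUUUUUUU
```

The point of the toy is only that nothing intrinsic to next-token prediction separates the speaker roles; a noisy input that blurs the turn boundary can leave the model completing the wrong side of the conversation.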

It would certainly be creepy to be talking to a machine and then have it unexpectedly begin talking to you in your own voice. OpenAI normally has safeguards in place to prevent this, and the company says the behavior was rare even before it developed ways to block it completely. Still, the example prompted BuzzFeed data scientist Max Woolf to tweet, "OpenAI just leaked the plot of Black Mirror's next season."
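As for those safeguards: the system card describes an output classifier that checks whether the audio the model is producing matches one of its approved preset voices, and blocks the output otherwise. Below is a minimal sketch of that general idea, assuming a generic speaker-embedding model; the embedding function, distance threshold, and all names here are illustrative assumptions, not OpenAI's implementation:

```python
import numpy as np

def embed_voice(audio: np.ndarray) -> np.ndarray:
    """Placeholder speaker embedding. A real system would use a neural
    speaker encoder; crude waveform statistics stand in here so the
    example runs without any model weights."""
    return np.array([audio.mean(), audio.std()])

def is_authorized_voice(output_audio: np.ndarray,
                        approved: list[np.ndarray],
                        max_distance: float = 0.5) -> bool:
    """Pass the audio only if its embedding sits close to one of the
    approved preset voices; otherwise the chunk gets blocked."""
    emb = embed_voice(output_audio)
    return any(np.linalg.norm(emb - ref) <= max_distance for ref in approved)

# Embeddings of the approved preset voices are computed once, offline.
rng = np.random.default_rng(0)
approved_voices = [embed_voice(rng.standard_normal(16000)) for _ in range(4)]

# Screen each generated chunk before it reaches the user.
in_voice = rng.standard_normal(16000)         # close to a preset voice
off_voice = 3.0 * rng.standard_normal(16000)  # drifted to an unknown voice
print(is_authorized_voice(in_voice, approved_voices))   # True
print(is_authorized_voice(off_voice, approved_voices))  # False
```

Whatever representation the real classifier uses, the structure is the standard speaker-verification recipe: embed the output, compare it against an allowlist, and cut the audio on a mismatch.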

Sibbo@sopuli.xyz 46 points 3 months ago

There are no known safeguards against "AI" doing unexpected things. The "AI" does not understand what it is doing, and so is unable to have a concept of right or wrong, or any agenda. If it is able to do something, and has been trained to do it only in restricted contexts, it will do that thing randomly in other contexts.