this post was submitted on 24 Sep 2024
26 points (96.4% liked)

Pulse of Truth

407 readers
134 users here now

Cyber Security news and links to cyber security stories that could make you go hmmm. The content is exactly as it is consumed through RSS feeds and wont be edited (except for the occasional encoding errors).

This community is automagically fed by an instance of Dittybopper.

founded 11 months ago
MODERATORS
 

Emails, documents, and other untrusted content can plant malicious memories.

top 1 comments
sorted by: hot top controversial new old
[–] pelespirit@sh.itjust.works 13 points 4 days ago

Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.

Rehberger privately reported the finding to OpenAI in May. That same month, the company closed the report ticket. A month later, the researcher submitted a new disclosure statement. This time, he included a PoC that caused the ChatGPT app for macOS to send a verbatim copy of all user input and ChatGPT output to a server of his choice. All a target needed to do was instruct the LLM to view a web link that hosted a malicious image. From then on, all input and output to and from ChatGPT was sent to the attacker's website.