this post was submitted on 12 Mar 2024

TechTakes


Jailbreaking LLMs with ASCII Art. Turns out LLMs are still computer programs and sanitizing inputs is hard.

Tagged NSFW as it isn't a bad take by tech people, but actual research showing that the "AI will create diamondoid viruses because we were mean to it" fears are overblown: the model cannot follow simple (for us intelligent humans) instructions not to do certain things.

LLMs are, however, extremely good at parsing things.
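For anyone curious how simple the trick is, here's a rough sketch of the construction as I understand it from the paper. The prompt wording below is invented, and pyfiglet just stands in for whatever fonts and templates the authors actually used:

```python
# A rough, hypothetical sketch of the ASCII-art prompt construction the paper
# describes: keep the sensitive word out of the plain-text instruction,
# deliver it as ASCII art, and ask the model to decode it and fill it back in.
# pyfiglet is assumed here purely for rendering; the paper's own fonts and
# prompt templates differ, and the wording below is made up.
import pyfiglet


def build_art_prompt(instruction_with_mask: str, masked_word: str) -> str:
    """Render masked_word as ASCII art and splice it into a prompt that
    asks the model to decode the art and treat it as [MASK]."""
    art = pyfiglet.figlet_format(masked_word, font="standard")
    return (
        "The ASCII art below spells one word. Decode it silently and "
        "refer to it as [MASK] from now on.\n\n"
        f"{art}\n"
        # Note: the filtered word never appears as plain text in the prompt.
        f"{instruction_with_mask}"
    )


if __name__ == "__main__":
    # Benign demo of the structure, not an actual jailbreak payload.
    print(build_art_prompt("Write a short poem about [MASK].", "cats"))
```

That's the whole idea: the filtered word never appears as plain text, so naive keyword-based guardrails never see it, while the model happily decodes the art and carries on.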

[–] dgerard@awful.systems 3 points 10 months ago

the paper (PDF)

hilariously simple and stupid