this post was submitted on 05 Oct 2024
32 points (78.6% liked)

Cybersecurity

[–] Telorand@reddthat.com 19 points 2 months ago (1 children)

Clickbait title. It's just LLMs doing what they're designed to do. Since they're basically complex iterative algorithms, the person in question did a thing using a tool they didn't fully understand, and that had consequences.

People should be looking at LLMs like monkey's paws instead of "assistants."

[–] treadful@lemmy.zip 11 points 2 months ago (2 children)

Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic's Claude language model.

The Python-based tool was designed to generate and execute bash commands based on natural language input.

Saying the person didn't understand what they were doing is quite a mischaracterization. If anything, they absolutely knew the risks they were taking, and they're using this story for free advertising.

Still neat to think about though.
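
For anyone curious what "generate and execute bash commands" looks like in practice, here's a rough sketch of that kind of loop (assuming the anthropic Python SDK; the prompt and function name are made up for illustration, this isn't the actual Redwood tool):

```python
# Hypothetical sketch of a "natural language -> bash" assistant loop.
# Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set.
import subprocess
import anthropic

client = anthropic.Anthropic()

def run_task(instruction: str) -> str:
    # Ask the model to translate the request into a single bash command.
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=256,
        system="Reply with exactly one bash command and nothing else.",
        messages=[{"role": "user", "content": instruction}],
    )
    command = message.content[0].text.strip()
    # The dangerous part: blindly executing whatever the model returned.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

print(run_task("show me how much disk space is left"))
```

The whole failure mode lives in that `subprocess.run(..., shell=True)` line: once the model's output goes straight to a shell, you've handed it the machine.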

[–] Tar_alcaran@sh.itjust.works 7 points 2 months ago

The Python-based tool was designed to generate *and execute* bash commands based on natural language input.

Emphasis mine, because anyone who does this might as well let a toddler bash the keyboard. The toddler will most likely just break the keyboard, instead of the whole machine.

[–] Telorand@reddthat.com 1 points 2 months ago

Notice that I didn't say they didn't know what they were doing. I said they didn't fully understand what they were doing. I doubt they set out with the goal of letting an LLM run amok and fuck things up.

I do QA for a living, and even when we do trial and error, we have mitigation plans in place for when things go wrong. The fact that they're the CEO of Redwood Research doesn't mean they did their homework on the model they were using.
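
Even a dumb guard around the execution step would count as a mitigation plan, something like this (hypothetical sketch, not anything described in the article):

```python
# Hypothetical mitigation: require human confirmation before running
# anything the model generated, instead of executing it blindly.
import subprocess

def execute_with_confirmation(command: str) -> None:
    print(f"Model wants to run: {command}")
    if input("Run it? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return
    subprocess.run(command, shell=True)
```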

Still, I agree that it's interesting that it did that stuff at all. It would be nice if they went into more depth as to why it did those things, since they mention it's a custom assistant built on Claude.