Instead of making its code more efficient, the system tried to modify its code to extend beyond the timeout period.
doing the "stupid", "easy" thing. pack it up, bois. been a good run but we finally made a better human.
Clickbait title. It's just an LLM doing what it's designed to do. These things are basically complex iterative algorithms; the person in question used a tool they didn't fully understand, and that had consequences.
People should be looking at LLMs like monkey's paws instead of "assistants."
Buck Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic's Claude language model.
The Python-based tool was designed to generate and execute bash commands based on natural language input.
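For reference, the pattern in question is roughly an LLM-to-shell loop. A minimal sketch of that pattern, assuming the official anthropic Python SDK (the model name, prompt, and function name here are placeholders, not the actual tool's code):

```python
import subprocess

import anthropic  # assumed: the official Anthropic Python SDK

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def run_task(instruction: str) -> None:
    """Ask the model for a bash command, then execute it. Dangerous as written."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # placeholder model name
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Reply with a single bash command that accomplishes: {instruction}",
        }],
    )
    command = response.content[0].text.strip()
    print(f"Model proposed: {command}")
    # The dangerous part: executing model output with no human review or sandbox.
    subprocess.run(command, shell=True, check=False)

run_task("find the machine on the local network that accepts SSH connections")
```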
Saying the person didn't understand what they were doing is quite a mischaracterization. That said, they absolutely knew the risks they were taking and are using this story for free advertising.
Still neat to think about though.
The Python-based tool was designed to generate and **execute bash commands** based on natural language input.
Emphasis mine, because anyone who does this might as well let a toddler bash the keyboard. The toddler will most likely just break the keyboard, instead of the whole machine.
Notice that I didn't say they didn't know what they were doing. I said they didn't fully understand what they were doing. I doubt they set out with the goal of letting an LLM run amok and fuck things up.
I do QA for a living, and even when we do trial and error, we have mitigation plans in place for when things go wrong. The fact that they're the CEO of Redwood Research doesn't mean they did their homework on the tool they built.
Still, I agree that it's interesting that it did that stuff at all. It would be nice if they went into more depth on why it did those things, since they mention it's a custom agent built on Claude.
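To make the mitigation point concrete, a hypothetical minimum viable guardrail (names are illustrative, not from the article) is to gate every model-proposed command behind human confirmation:

```python
import subprocess

def execute_with_confirmation(command: str) -> None:
    """Show a model-proposed command and run it only if a human approves."""
    print(f"Proposed command:\n  {command}")
    if input("Run this? [y/N] ").strip().lower() == "y":
        subprocess.run(command, shell=True, check=False)
    else:
        print("Skipped.")
```

Even that only helps if someone is watching; running the agent unattended, as happened here, defeats it, which is where sandboxes or throwaway VMs come in.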
Is the computer really "bricked"? Or will repairing GRUB fix it? I get the main message of unexpected access / consequences...
Can you truly brick a computer just by adjusting GRUB? Seems like a very fixable problem for someone who can make an LLM run bash commands. Then again, that is a supremely dumb thing to do.
GRUB works so well that the average Linux user likely never has to think about its inner workings. Even installing Linux has become extremely easy, unless you use something like Arch Linux. So it's actually quite likely that somebody who writes a program that runs bash commands would not know how to repair GRUB.
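For anyone wondering, the usual fix is to boot a live USB and reinstall GRUB from a chroot. On a Debian-family system it looks something like this (the device names and the EFI mount are assumptions that vary per machine):

```bash
sudo mount /dev/sda2 /mnt                # root partition (yours may differ)
sudo mount /dev/sda1 /mnt/boot/efi       # EFI partition, if booting UEFI
for d in /dev /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
sudo chroot /mnt grub-install /dev/sda   # reinstall the bootloader
sudo chroot /mnt update-grub             # regenerate the config
```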
It's a minor inconvenience at worst.