this post was submitted on 07 May 2025
439 points (99.8% liked)
Linux
A community for everything relating to the GNU/Linux operating system
you are viewing a single comment's thread
I still don't get it, like, why tf would you use AI for this kind of thing? It can barely write a basic Python script, let alone actually handle a proper codebase or detect a vulnerability, even if it's the most obvious vulnerability ever.
It's simple, actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540.
What are the odds that you're actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.
Maybe the people doing this are much more optimistic than I am about how useful LLMs are for this, but it's possible there's a more malicious idea behind it.
The user who submitted the report that Stenberg considered the "last straw" seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it's possible that by using an LLM to automate making reports, they're making some money despite having a low success rate.
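Even a low success rate can pencil out when submissions cost almost nothing to generate. A rough back-of-the-envelope sketch, using the $540 minimum mentioned above and purely made-up numbers for success rate, report volume, and per-report cost:

```python
# Back-of-the-envelope expected value of spamming LLM-generated reports.
# The $540 minimum bounty is from the comment above; the success rate,
# report volume, and per-report cost are hypothetical assumptions.

MIN_BOUNTY = 540          # curl's minimum payout (USD), per the comment above
SUCCESS_RATE = 0.02       # hypothetical: 1 in 50 reports gets paid
REPORTS_PER_MONTH = 100   # hypothetical volume of automated submissions
COST_PER_REPORT = 0.50    # hypothetical LLM/API cost per report (USD)

expected_income = REPORTS_PER_MONTH * SUCCESS_RATE * MIN_BOUNTY
total_cost = REPORTS_PER_MONTH * COST_PER_REPORT

print(f"Expected income: ${expected_income:.2f}")                # $1080.00
print(f"Submission cost: ${total_cost:.2f}")                     # $50.00
print(f"Expected profit: ${expected_income - total_cost:.2f}")   # $1030.00
```

Under those made-up numbers the spammer comes out ahead, which is exactly why this kind of low-effort reporting is attractive; the real cost lands on the maintainers triaging the noise.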