this post was submitted on 05 Sep 2024
150 points (99.3% liked)


https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line, because of the amount of fact-checking they require. If that's the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

[–] SkingradGuard@hexbear.net 16 points 2 months ago (2 children)

Who would've guessed that inflated predictive algorithms can't perform well because they're just unable to understand anything shocked-pikachu

[–] UlyssesT@hexbear.net 10 points 2 months ago

But if enough rain forest is burned and enough waste carbon is dumped into the air, those predictive algorithms are that much closer to understanding everything! morshupls

[–] invalidusernamelol@hexbear.net 3 points 2 months ago

I still think in development environments, limited LLM systems can be used in tandem with other systems like linters and OG snippets to help maintain style and simplify boilerplate.
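That kind of gating can be sketched in a few lines. This is a hypothetical `vet_snippet` helper (not any real tool), using Python's stdlib `ast` as a stand-in for a proper linter pass over LLM output:

```python
import ast

def vet_snippet(snippet: str, banned_names=("eval", "exec")) -> bool:
    """Gate a generated snippet before it touches the codebase.

    Hypothetical sketch: syntax-check with ast.parse, then reject any
    call to disallowed names (a stand-in for a real linter/style pass).
    """
    try:
        tree = ast.parse(snippet)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in banned_names:
                return False
    return True

print(vet_snippet("x = 1 + 1"))      # valid syntax, nothing banned -> True
print(vet_snippet("eval('2 + 2')"))  # banned call -> False
print(vet_snippet("def f(:"))        # syntax error -> False
```

In a real setup you'd swap the `ast` walk for your actual linter config, but the shape is the same: the LLM proposes, deterministic tooling disposes.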

I use Co-Pilot at work because I do development on the side and need something to help me bash out simple scripts really fast that use our APIs. The codebase we have is big enough now (50,000-ish lines and hundreds of files) that it tends to pick up primarily on the context of the codebase. It does still fall back to the general context pretty often though, and that's a fucking pain.

Having the benefits of an LLM trained on your own code and examples without the drawbacks of it occasionally just injecting random bullshit from its training data would be great.