this post was submitted on 10 Apr 2024
123 points (95.6% liked)

The increasing power of the latest artificial intelligence systems is stretching traditional evaluation methods to breaking point, posing a challenge to businesses and public bodies over how best to work with the fast-evolving technology.

Flaws in the evaluation criteria commonly used to gauge performance, accuracy and safety are being exposed as more models come to market, according to people who build, test and invest in AI tools. The traditional tools are easy to manipulate and too narrow for the complexity of the latest models, they said.

The accelerating technology race sparked by the 2022 release of OpenAI’s chatbot ChatGPT and fed by tens of billions of dollars from venture capitalists and big tech companies, such as Microsoft, Google and Amazon, has obliterated many older yardsticks for assessing AI’s progress.

eleitl@lemmy.ml 25 points 5 months ago

There is no reliable risk assessment for truly intelligent, autonomous systems. Let's stop pretending that it can exist.

webghost0101@sopuli.xyz 7 points 5 months ago

We’re still a way off, though.