this post was submitted on 24 Jan 2025
[–] peppersky@hexbear.net 17 points 2 days ago (1 children)

They use synthetic AI generated benchmarks

It's computer silicon blowing itself basically

[–] Devorlon@lemmy.zip 5 points 2 days ago* (last edited 2 days ago)

I've been researching this for uni and you're not too far off. There are a bunch of benchmarks out there; LLMs are run against a set of questions and given a score based on their responses.

The questions can be multiple choice or open-ended. If they're open-ended, the response is marked by another LLM.
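To make that concrete, here's a minimal sketch of what a benchmark harness does: exact-match grading for multiple choice, and a judge model for open-ended answers. `ask_model` and `judge_model` are invented stand-ins for real LLM calls — actual harnesses send the question (and, for judging, the answer) to an API instead.

```python
def ask_model(question: str) -> str:
    # Stand-in for a real LLM call; returns canned answers here.
    canned = {
        "2 + 2 = ?": "B",
        "Explain recursion.": "A function that calls itself.",
    }
    return canned.get(question, "")

def judge_model(question: str, answer: str) -> float:
    # Stand-in for the second LLM that marks open-ended responses.
    # Real judges are prompted to rate the answer; here any non-empty
    # answer scores full marks.
    return 1.0 if answer.strip() else 0.0

def score_benchmark(items: list[dict]) -> float:
    # Run the model over every question and average the scores.
    total = 0.0
    for item in items:
        answer = ask_model(item["question"])
        if item["type"] == "multiple_choice":
            total += 1.0 if answer == item["correct"] else 0.0
        else:  # open-ended: marked by another LLM
            total += judge_model(item["question"], answer)
    return total / len(items)

benchmark = [
    {"type": "multiple_choice", "question": "2 + 2 = ?", "correct": "B"},
    {"type": "open_ended", "question": "Explain recursion."},
]
print(score_benchmark(benchmark))  # 1.0
```

The multiple-choice branch is cheap and objective; the open-ended branch is where the "silicon marking silicon" problem comes in, since the judge model has its own biases.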

There are a couple of initiatives to create benchmarks with known answers that are updated frequently, so they don't need to be marked by another LLM, but where the questions aren't in the tested LLM's training dataset. This matters because a lot of the apparent advancement of LLMs on these benchmarks is just the creators including the test questions and answers in the training data.
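The contamination problem above is usually diagnosed by checking whether benchmark text appears verbatim in the training corpus. A rough sketch, using word n-gram overlap (the function names and thresholds are invented for illustration; real checks run over terabytes of data with smarter matching):

```python
def ngrams(text: str, n: int = 5) -> set[tuple]:
    # Lowercased word n-grams of the text.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(question: str, corpus: str, n: int = 5) -> bool:
    # Flag the question if any of its n-grams appears verbatim
    # in the training corpus.
    return bool(ngrams(question, n) & ngrams(corpus, n))

corpus = "the quick brown fox jumps over the lazy dog again and again"
print(is_contaminated("quick brown fox jumps over the lazy", corpus))  # True
print(is_contaminated("name every planet in the solar system", corpus))  # False
```

Frequently refreshed benchmarks sidestep this entirely: if the questions are newer than the model's training cutoff, they can't have leaked in.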