Archived

Here is the data at Hugging Face.
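For anyone who wants to poke at it, here is a minimal sketch of pulling the released data with the `datasets` library. The repo id below is an assumption based on the Open Thoughts consortium's naming, so check the link above for the exact name.

```python
# Minimal sketch: load the Open Thoughts reasoning data from Hugging Face.
# NOTE: the repo id "open-thoughts/OpenThoughts-114k" is an assumption;
# verify the actual dataset name on the linked Hugging Face page.
from datasets import load_dataset

dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

print(dataset)      # column names and row count
print(dataset[0])   # inspect one reasoning example (prompt + chain of thought)
```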

A team of international researchers from leading academic institutions and tech companies upended the AI reasoning landscape on Wednesday with a new model that matched—and occasionally surpassed—one of China's most sophisticated AI systems: DeepSeek.

OpenThinker-32B, developed by the Open Thoughts consortium, achieved a 90.6% accuracy score on the MATH500 benchmark, edging past DeepSeek's 89.4%.

The model also outperformed DeepSeek on general problem-solving tasks, scoring 61.6 on the GPQA-Diamond benchmark compared to DeepSeek's 57.6. On the LCBv2 benchmark, it hit a solid 68.9, showing strong performance across diverse testing scenarios.

...

notfromhere@lemmy.ml 3 points 6 days ago

This is probably referring to the Qwen 32B R1 Distill, which is DeepSeek's fine-tune of Qwen 32B. It is not referring to the full R1 671B.
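If that reading is right, the 32B comparison point would be the distilled checkpoint rather than the full 671B model. A rough sketch of loading that distill with `transformers` is below; the repo id is an assumption, so confirm it on Hugging Face before use.

```python
# Rough sketch: load the 32B distill (not the 671B R1) for local inference.
# NOTE: the repo id is assumed; confirm it on Hugging Face before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs (requires accelerate)
    torch_dtype="auto",  # use the checkpoint's native precision
)
```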