mal099@kbin.social 4 points 1 year ago (3 children)

@rastilin is making some unproven assumptions here. But it is true that the "math question" dataset consists only of prime numbers, so if the first version thought every number was prime and the second thought no numbers were prime, we would see this exact behavior. Source:

"For this dataset, we query the primality of 500 randomly chosen primes between 1,000 and 20,000; the correct answer is always Yes."

From Zhang et al. (2023), the paper they took the dataset from.
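
For anyone curious, here's a minimal sketch of how a dataset like that could be generated. This is my own illustration, not the authors' code: the seed and prompt wording are made up, and I'm assuming sympy for the primality test.

```python
# Sketch only: construct a primes-only question set like the one
# described in Zhang et al. (2023). Not the authors' actual code.
import random
from sympy import isprime

random.seed(0)  # arbitrary; the real study's sampling is unknown

# All primes in the stated range, then 500 sampled at random.
primes = [n for n in range(1_000, 20_000) if isprime(n)]
dataset = random.sample(primes, 500)

# Every question in this set has the same correct answer: "Yes".
questions = [f"Is {n} a prime number? Answer Yes or No." for n in dataset]
```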

mal099@kbin.social 3 points 1 year ago

Damn, you're right. According to the article, the study has not been peer reviewed yet, and in my opinion it really shows. For anyone who doesn't want to read the study themselves:

They took the set of questions from a different study (which is fine). That original study took 500 randomly chosen prime numbers, asked ChatGPT whether each was prime, and asked it to explain its reasoning. The goal was to see whether, in the cases where ChatGPT got the question wrong, it would back up the wrong answer with more faulty reasoning. A dataset containing only prime numbers is perfectly fine for that initial question.

The study in the article appears to be trying to answer two questions: is there significant drift in the answers ChatGPT gives, and is ChatGPT getting better or worse at answering questions? The dataset is perfectly fine for the first question, but completely inadequate for the second, since a model that simply declares every number prime would be scored as perfectly accurate! Good peer review would never let that kind of thing slide.
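
To make that concrete, here's a toy sketch (mine, not from either paper, assuming sympy for the ground-truth primality check):

```python
# Toy illustration of the evaluation flaw: on a primes-only benchmark,
# a "model" that calls every number prime looks perfect.
from sympy import isprime

primes_only = [n for n in range(1_000, 20_000) if isprime(n)][:500]
balanced = list(range(1_000, 2_000))  # primes and composites mixed

def always_yes(n: int) -> bool:
    """Answers 'Yes, it's prime' to every question."""
    return True

def accuracy(model, numbers):
    return sum(model(n) == isprime(n) for n in numbers) / len(numbers)

print(accuracy(always_yes, primes_only))  # 1.0, looks perfect
print(accuracy(always_yes, balanced))     # ~0.14, the real picture
```

On the primes-only set the trivial model is indistinguishable from a perfect one, which is exactly why accuracy on that dataset can't tell you whether the model got better or worse.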

mal099@kbin.social 2 points 1 year ago

I would steal this argument, but if it can be reposted here for free, then I don't think anybody really owns it. 🤔
