this post was submitted on 08 Nov 2023
13 points (78.3% liked)

top 9 comments
[–] Doombot1@lemmy.one 11 points 10 months ago (1 children)

Interesting that the article ends with “The new ChatGPT catcher even performed well with introductions from journals it wasn’t trained on”. Isn’t that the whole point? If you just judge a model based on what it was trained on, you just get a biased model. I can’t remember the exact word for it but it’s essentially over-relying on your own dataset. So of course it will get near-100% accuracy on what it was trained with. I’d be curious to see what the accuracy on other papers is.
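
The term being reached for here is overfitting: a detector evaluated on its own training data can score near-perfectly without having learned anything that generalises. A minimal toy sketch of that gap, with made-up features and labels rather than anything from the paper:

```python
# Toy illustration of the train/test gap (random features, random labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # stand-in for text features
y = rng.integers(0, 2, size=500)      # labels: 0 = human, 1 = AI (random on purpose)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("train accuracy:", accuracy_score(y_train, clf.predict(X_train)))  # close to 1.0: memorised
print("test accuracy: ", accuracy_score(y_test, clf.predict(X_test)))    # around 0.5: no real signal
```

With random labels the classifier can only memorise, so training accuracy looks near-perfect while held-out accuracy sits at chance, which is why accuracy on journals outside the training set is the number that actually matters.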

[–] MagosInformaticus@sopuli.xyz 4 points 10 months ago (1 children)

Overfitting.

[–] Doombot1@lemmy.one 1 points 10 months ago

There we go, thanks for the addition! I did a lot of ML/DL stuff about 2 years ago but just couldn’t remember the term.

[–] Aurenkin@sh.itjust.works 8 points 10 months ago* (last edited 10 months ago) (2 children)

Smells like bullshit. The graphs they showed in the source paper with their accuracy at like 100% for every test seem even more like bullshit. Did they run the model over the training data or what?

Maybe I'm wrong, but the signal-to-noise ratio in text just seems too low to reliably tell whether it was written by an AI. The false positives would be high enough that it's effectively useless. Does anyone have another perspective on this? If I'm missing some nuance here I'd love to understand more.

[–] Shdwdrgn@mander.xyz 4 points 10 months ago (1 children)

I'm just guessing here, but if you have a way to train your model on the same or similar data as the model you are comparing against, then you have a high probability of detecting when the output is similar. To put it another way, if I ask ChatGPT its favorite red condiment and it answers "mustard", and my model is trained to give the same answer, it's easy to flag that text as likely AI-generated. If the answer doesn't match what my model spits out, then it has a higher chance of being human-generated. Does that make sense?
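
One common way to turn that intuition into a detector, though not necessarily what this paper does, is to score how predictable a text is under a reference language model: text the model finds unusually easy to predict is weak evidence it came from a similar model. A rough sketch using GPT-2 perplexity, with an entirely illustrative threshold:

```python
# Rough sketch: score text by perplexity under a reference LM (GPT-2 here).
# Low perplexity means the reference model finds the text very predictable,
# which is weak evidence of machine generation. The threshold is made up.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

def looks_machine_generated(text: str, threshold: float = 40.0) -> bool:
    return perplexity(text) < threshold  # illustrative cut-off, would need calibration

print(looks_machine_generated("Mustard is a condiment made from the seeds of the mustard plant."))
```

In practice the threshold would have to be calibrated on known human and machine text, and short passages give very noisy perplexity estimates.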

[–] Aurenkin@sh.itjust.works 3 points 10 months ago (1 children)

Yeah, that makes sense. I'm still very sceptical though, because as your example illustrates, it's perfectly valid for a human to answer "mustard" as well, plus there is an element of randomness in the model's output. Maybe it's doable, but I'm unconvinced that you can meaningfully distinguish between human- and AI-written text. Unless you make a detector that looks for "As a large language model...", then maybe it can detect ChatGPT specifically.
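
That last idea is easy to implement, though it only catches unedited boilerplate; something along these lines:

```python
import re

# Naive giveaway-phrase check: only flags careless, unedited ChatGPT output.
GIVEAWAY = re.compile(r"as an? (ai|large language model)", re.IGNORECASE)

def obvious_chatgpt(text: str) -> bool:
    return bool(GIVEAWAY.search(text))
```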

[–] Shdwdrgn@mander.xyz 2 points 10 months ago (1 children)

Agreed, even a perfectly trained clone of ChatGPT wouldn't get that high a hit rate, although I do think that the longer the article being compared, the better its chances of making an accurate prediction. The thing is that we soon won't actually be able to tell the difference as computers get smarter. Seems like right now the only practical application is kids cheating on their homework, but what happens when it gets smart enough to write actual research papers with unique proofs?

[–] Hanabie@sh.itjust.works 1 points 10 months ago* (last edited 10 months ago)

If it writes research papers, that research still has to come from somewhere. Even if the whole study was performed by AI itself, how would that delegitimise the research? Science isn't art; it's irrelevant who the performing agent is (as long as it's not stolen).

[–] Communist@lemmy.ml 4 points 10 months ago* (last edited 10 months ago)

It is very easy to get to those numbers if you don't include the rate of false positives. That is all there is to this, really.
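
A quick back-of-the-envelope calculation with invented numbers shows why: if AI-written submissions are rare, even a small false-positive rate means most flagged papers are actually human-written.

```python
# Hypothetical numbers, purely to illustrate the base-rate problem.
def precision(sensitivity: float, false_positive_rate: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# If only 2% of submissions are AI-written, a detector that catches 99% of them
# but mislabels 5% of human papers is wrong about 7 times out of 10 when it flags something:
print(precision(sensitivity=0.99, false_positive_rate=0.05, prevalence=0.02))  # ~0.29
```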