this post was submitted on 07 Oct 2023
988 points (97.7% liked)

Previous posts: https://programming.dev/post/3974121 and https://programming.dev/post/3974080

Original survey link: https://forms.gle/7Bu3Tyi5fufmY8Vc8

Thanks for all the answers; here are the results of the survey, in case you were wondering how you did!

Edit: People working in CS or a related field have a 9.59 avg score, while people who aren't have a 9.61 avg.

People who have used AI image generators before got a 9.70 avg, while people who haven't have a 9.39 avg score.

Edit 2: The data has changed slightly! Over 1,000 people have submitted results since this image was posted; check the dataset to see live results. Be aware that many people saw the image and comments before submitting, so they've been spoiled on some results, which may be pushing the recent average higher: https://docs.google.com/spreadsheets/d/1MkuZG2MiGj-77PGkuCAM3Btb1_Lb4TFEx8tTZKiOoYI

[–] ilinamorato@lemmy.world 29 points 9 months ago (2 children)

And this is why AI detector software is probably impossible.

Just about everything we make computers do is something we're also capable of; slower, yes, and probably less accurately or with some other downside, but we can do it. We at least know how. We can't program software or train neural networks to do something that we have no idea how to do.

If this problem is ever solved, it's probably going to require a whole new form of software engineering.

[–] Spzi@lemm.ee 1 points 9 months ago (1 children)

And this is why AI detector software is probably impossible.

What exactly is "this"?

Just about everything we make computers do is something we’re also capable of; slower, yes, and probably less accurately or with some other downside, but we can do it. We at least know how.

There are things computers can do better than humans, like memorization or precision (or both combined). For all the rest, while I agree in theory we could be on par, in practice it matters a lot that things happen in reality: there is often only a finite window to analyze and react, and if you're slower, it's as good as if you knew nothing. Being good at something often means doing it in time.

We can't program software or train neural networks to do something that we have no idea how to do.

Machine learning does that. We don't know how all those layers and neurons work; we could not build the network by hand. We cannot directly engineer the correct weights, but we can approximate them through training.
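
A minimal sketch of that idea in Python (a toy example; every name and number here is made up for illustration): nobody writes the "correct" weights down, random ones just get nudged toward lower error.

```python
# Toy illustration: nobody writes the "correct" weights down;
# random ones are nudged toward lower error until they work.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # toy inputs
true_w = np.array([2.0, -1.0, 0.5])  # unknown to the engineer
y = X @ true_w                       # toy targets

w = rng.normal(size=3)               # start from random weights
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= 0.1 * grad                        # small step toward lower error

print(w)  # ends up close to true_w, found by training, not by design
```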

Also look at Generative Adversarial Networks (GANs). The adversarial part is literally a network trained to detect AI-generated output, with the generative part tweaked based on that error to produce better output; rinse and repeat. Note this by definition includes a (specific) AI detector; the whole setup requires one to work.
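
Roughly, that loop looks like this (a hypothetical PyTorch-style sketch on toy 1-D data, not any particular system's implementation):

```python
# Skeleton of the adversarial loop: the discriminator IS an AI detector,
# and its error signal is what improves the generator.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: samples from N(3, 0.5)
    fake = G(torch.randn(64, 8))           # generated data

    # 1) Train the detector to tell real from generated.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the just-updated detector.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The discriminator D here is exactly the "AI detector" in question, and step 2 is why its advantage never lasts: its own feedback teaches the generator to beat it.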

[–] ilinamorato@lemmy.world 1 points 9 months ago (1 children)

What exactly is "this"?

The results of this survey showing that humans are no better than a coin flip.

while I agree in theory we could be on par, in practice it matters a lot that things happen in reality.

I didn't say "on par." I said we know how. I didn't say we were capable, but we know how it would be done. With AI detection, we have no idea how it would be done.

Machine learning does that.

No, it doesn't. It speedruns the tedious parts of writing algorithms, but we still have to compose the problem and tell the network what an acceptable solution would be.

Also look at Generative Adversarial Networks (GANs). [...] this by definition includes a (specific) AI detector; the whole setup requires one to work.

Several startups, existing tech giants, AI companies, and university research departments have tried. There are literally millions on the line. All they've managed to do is get students incorrectly suspended from school, misidentify the US Constitution as AI output, and produce networks that are really good at identifying their training data and absolutely useless on real-world data.

Note that I said this is probably impossible only because we've never done it before, and the experiments undertaken so far by some of the most brilliant people in the world have yielded useless results. I could be wrong. But the evidence so far seems to indicate otherwise.

[–] Spzi@lemm.ee 2 points 9 months ago (1 children)

Right, thanks for the corrections.

In the case of GANs, it's stupidly simple why AI detection doesn't take off: the detector can only ever be half a cycle ahead (or behind).

Better AI detectors train better AI generators. So while the advantage technically exists for a brief moment, the gap is immediately closed again by the other side; they train in tandem.

This doesn't tell us anything about non-GAN models, though, I think. And most AI is not GAN-based, right?

[–] ilinamorato@lemmy.world 2 points 9 months ago

True, at least currently. Image generators are mostly diffusion models, and LLMs are largely GPTs.

[–] Plopp@lemmy.world 1 points 9 months ago

I don't know... My computer can do crazy math like 13+64 and other impossible calculations like that.