416 points · submitted 10 months ago by L4s@lemmy.world to c/technology@lemmy.world

Police in England installed an AI camera system along a major road. It caught almost 300 drivers in its first 3 days.::An AI camera system installed along a major road in England caught almost 300 offenses in its first 3 days. There were 180 seat belt offenses and 117 mobile phone offenses.

[-] MaxPower@feddit.de 146 points 10 months ago* (last edited 10 months ago)

> Photos flagged by the AI are then sent to a person for review.

> If an offense was correctly identified, the driver is then sent either a notice of warning or intended prosecution, depending on the severity of the offense.

The AI just "identifying" offenses is the easy part. It would be interesting to know whether the AI indeed correctly identified 300 offenses or if the person reviewing the AI's images acted on 300 offenses. That's potentially a huge difference and would have been the relevant part of the news.
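
To make that distinction concrete, here's a minimal sketch with invented numbers (the article only gives the ~300 figure; the split between raw AI flags and human-confirmed offenses below is purely hypothetical):

```python
# Hypothetical numbers -- the article doesn't say how many raw AI
# flags there were, only that roughly 300 offenses were recorded.
ai_flags = 450    # photos the AI flagged for human review (invented)
confirmed = 297   # offenses a human reviewer actually acted on

# Precision of the AI stage: the fraction of flags that survived review.
precision = confirmed / ai_flags
print(f"AI precision: {precision:.1%}")  # 66.0%
```

If the reported 300 were raw AI flags, the number that matters is how many survived review; if they were reviewed offenses, the AI's raw flag count (and thus its precision) is still unknown.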

[-] tmRgwnM9b87eJUPq@lemmy.world 37 points 10 months ago

The system we use in NL is called “monocam”. A few years ago it caught 95% of all offenders.

This means that the AI had at most 5% false negatives.

I wonder if they have improved the system in the mean time.

https://nos.nl/artikel/2481555-nieuwe-slimme-camera-s-aangeschaft-om-appende-bestuurders-te-betrappen
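
For the arithmetic behind that: if the system catches 95% of all offenders, the false negative rate is simply the complement of the catch rate. A minimal sketch (only the 95% figure comes from the article; the offender count is invented):

```python
catch_rate = 0.95                      # from the NOS article above
false_negative_rate = 1 - catch_rate   # complement of the catch rate
print(f"False negative rate: {false_negative_rate:.0%}")  # 5%

# With a hypothetical 1,000 offenders passing the camera:
offenders = 1_000
missed = round(offenders * false_negative_rate)
print(f"Offenders missed: {missed}")  # 50
```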

[-] zephr_c@lemm.ee 45 points 10 months ago

Nobody cares about false negatives. As long as the number isn't so massive that the system is completely useless, false negatives in an automatic system are not a problem.

What are the false positives? Every single false positive is a gross injustice. If you can't come up with a number for that, then you haven't even evaluated your system.
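
The reason this is the number that matters is base rates: far more innocent drivers pass the camera than offenders, so even a tiny false positive rate produces a steady stream of unjust fines. A rough sketch, with all traffic figures invented for illustration:

```python
# All numbers below are hypothetical.
daily_traffic = 20_000    # cars passing the camera per day
offender_share = 0.01     # 1% of drivers actually on their phones
fpr = 0.001               # 0.1% of innocent drivers wrongly flagged

innocent = daily_traffic * (1 - offender_share)
wrongly_flagged = innocent * fpr
print(f"Wrongly flagged per day: {wrongly_flagged:.0f}")  # ~20
```

Even a seemingly tiny 0.1% false positive rate would mean roughly 20 unjust flags a day at that volume, which is why the false positive rate, not the false negative rate, is the headline number.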

[-] tmRgwnM9b87eJUPq@lemmy.world 18 points 10 months ago* (last edited 10 months ago)

The system works by having the AI flag phone usage while driving.

Then a human will verify the photo.

The AI is used to respect people's privacy: only the photos it flags are ever seen by a human.

The combination of AI detection and human review leads to a 5% false negative rate, and most probably a 0% false positive rate.

This means that the AI itself missed at most 5% of positives, and probably fewer, since some of the misses come from the human reviewer not being 100% sure there was an offense.
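
To see how the two stages combine, here's a minimal simulation of that pipeline; the stage rates are assumptions picked to be consistent with the 95% figure, not measured values:

```python
import random

random.seed(0)

# Assumed stage rates -- not from the article.
AI_RECALL = 0.97         # AI flags 97% of real offenses
AI_FPR = 0.02            # AI wrongly flags 2% of innocent drivers
REVIEWER_CONFIRM = 0.98  # reviewer confirms 98% of true flags

def fined(is_offender: bool) -> bool:
    """Return True if this driver ends up being fined."""
    flagged = random.random() < (AI_RECALL if is_offender else AI_FPR)
    if not flagged:
        return False
    # Human review: false flags are rejected outright; true flags are
    # confirmed unless the photo leaves room for doubt.
    return is_offender and random.random() < REVIEWER_CONFIRM

offenders = 10_000
caught = sum(fined(True) for _ in range(offenders))
print(f"Overall catch rate: {caught / offenders:.1%}")  # ~95%
```

Note how the overall 95% splits between the AI missing offenses outright and the reviewer declining borderline ones, while review pushes the false positive rate toward zero.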

[-] zephr_c@lemm.ee 10 points 10 months ago

Look, I'm not saying it's a bad system. Maybe it's great. "Most probably 0%" is meaningless though. If all you've got is gut feelings about it, then you don't know anything about it. Humans make mistakes in the best of circumstances, and they get way, way worse when you're telling them that they're evaluating something that's already pretty reliable. You need to know it's not giving false positives, not have a warm fuzzy feeling about it.

Again, I don't know if someone else has already done that. Maybe they have. I don't live in the Netherlands. I don't trust it until I see the numbers that matter though, and the more numbers that don't matter I see without the ones that do, the less I trust it.

[-] tmRgwnM9b87eJUPq@lemmy.world 1 points 10 months ago

The fine comes with a letter, a picture, and payment information. If the person really wasn't using their phone, they can file a complaint and the fine will be dismissed. Seems pretty simple to me.

However, I have not heard any complaints about it in the news, and an embarrassing number of fines have been issued for this offense.

[-] zephr_c@lemm.ee 1 points 10 months ago

For a post on a site like this, that kind of anecdote is plenty to add to a conversation, and it does actually make me feel a tiny bit better about the whole thing. But when you lead with statistics, you're implying a level of research and knowledge that goes beyond the anecdotal. It's not really fair to you or any of us, but using the numbers that sound good while avoiding the ones that reveal flaws is one of the most popular ways for marketing teams and governments to deceive people. You should always be skeptical of that kind of thing.

[-] CalvinCopyright@lemmy.world 0 points 10 months ago

Heh. Heh heh. You think that you can... file a complaint, and get a fine dismissed just like that. Heh heh heh. God, you're naive. Or stupid. Or a paid propagandist. Or just plain rich enough for your reaction to a fine to be 'meh'.

Criminality is predicated on convenience. If it's easy for an authority to hand out fines and hard for the populace to get those fines dismissed, guess what's going to happen? There are going to be fines applied that shouldn't have been, but that the people who are getting fined literally can't put in the effort to get dismissed. And that's not justice in the slightest. 'Innocent until proven guilty', you troll. Heard that phrase before??

[-] tmRgwnM9b87eJUPq@lemmy.world 1 points 10 months ago

Just wow.

I bet you do not live in The Netherlands. We have a standardized process to complain against a fine.

If the picture doesn’t prove with certainty that you were holding a phone, complain to the address in the letter or just don’t pay the €359 fine and talk to a judge about it.

[-] Tywele@lemmy.dbzer0.com 18 points 10 months ago

How do they know that they caught 95% of all offenders if they didn't catch the remaining 5%? Wouldn't that be unknowable?

[-] lasagna@programming.dev 20 points 10 months ago* (last edited 10 months ago)

Welcome to the world of training datasets.

There are many ways to go about it, but for a limited sample they'd probably use human analysts.

But in general, they'd put a lot more effort into a chunk of data and use that as the ground truth. It's not a perfect method, but it's good enough.

[-] Hamartiogonic@sopuli.xyz 7 points 10 months ago* (last edited 10 months ago)

The article didn't really clarify that part, so it's impossible to tell. My guess is that they tested the system by intentionally driving under it with a phone in hand 100 times. If the camera caught 95 of those, that's how you would get the 95% catch rate. That setup has a priori information about the true state of the driver, but testing takes a while.

However, that's not the only way to test a system like this. They could have tested it with normal drivers instead. To borrow a medical term, you could say that this is an "in vivo" test. If they did that, there was no a priori information about the true state of each driver. They could still report a different 95% value, though: what if 95% of the positives were human-verified to be true positives and the remaining 5% were false positives? In a setup like that we have no information about true or false negatives, so this kind of test setup has some limitations. I guess you could count the number of cars labeled negative, but we just can't know how many of them were true negatives unless you get a bunch of humans to review an inordinate amount of footage. Even then you still wouldn't know for sure, because humans make mistakes too.

In practical terms, it would still be a really good test, because you can easily have thousands of people drive under the camera within a very short period of time. You don't know anything about the negatives, but do you really need to? This isn't a diagnostic test where you need to calculate sensitivity, specificity, positive predictive value, and negative predictive value. I mean, it would be really nice if you did, but do you really have to?
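
For reference, the four metrics mentioned above fall straight out of a confusion matrix. A minimal sketch with invented counts:

```python
# Hypothetical confusion matrix for one day of footage.
tp = 95     # offenders correctly flagged
fn = 5      # offenders missed
fp = 2      # innocent drivers wrongly flagged
tn = 9_898  # innocent drivers correctly ignored

sensitivity = tp / (tp + fn)  # a.k.a. recall, the "catch rate"
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)          # positive predictive value (precision)
npv = tn / (tn + fn)          # negative predictive value

print(f"sensitivity={sensitivity:.1%}  specificity={specificity:.2%}")
print(f"PPV={ppv:.1%}  NPV={npv:.2%}")
```

The "in vivo" setup described above pins down the PPV, but sensitivity and specificity stay unknown, because nobody reviews the negatives.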

[-] tmRgwnM9b87eJUPq@lemmy.world 4 points 10 months ago

Just to clarify the result: the article states that AI plus human review leads to 95%.

It could also be that the human is flagging actual positives, found by the AI, as false positives.

[-] echodot@feddit.uk 3 points 10 months ago

You wouldn't need people to actually drive past the camera; you could just do that in testing, while the AI was still in development, in software. You wouldn't need the physical hardware.

You could just get CCTV footage from traffic cameras and feed it into the AI system. Then you could have humans go through it independently of the AI and tag any infraction they saw. If the AI system gets 95% of the human-spotted infractions, then the system is 95% accurate. Of course, this ignores the possibility that both the human and the AI miss something, but that would be impossible to account for.
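
A minimal sketch of that offline evaluation, assuming both the AI's flags and the human tags come in as sets of clip IDs (the data format and names here are invented):

```python
def catch_rate(ai_flags: set[str], human_tags: set[str]) -> float:
    """Fraction of human-spotted infractions the AI also flagged."""
    if not human_tags:
        raise ValueError("need at least one human-tagged infraction")
    return len(ai_flags & human_tags) / len(human_tags)

# Hypothetical clip IDs from archived CCTV footage.
human = {"clip_012", "clip_045", "clip_101", "clip_230"}
ai = {"clip_012", "clip_045", "clip_101", "clip_999"}

print(f"Catch rate vs. human labels: {catch_rate(ai, human):.0%}")  # 75%
```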

[-] Hamartiogonic@sopuli.xyz 1 points 10 months ago

That's the sensible way to do it in the early stages of development. Once you're reasonably happy with the trained model, you need to test the entire system to see if the parts actually work together. At that point, it could be sensible to run the two types of experiments I outlined. Different tests for different stages.

[-] jopepa@lemmy.world 3 points 10 months ago

I think what they mean is that 95% of reports were correct. There could be a massive population of other offenders who continue sexting and driving, or worse. One monocam won't ever be enough; we need many monocams. Polymonocams.

[-] tmRgwnM9b87eJUPq@lemmy.world 1 points 10 months ago

I suspect they sent through a controlled set of cars where they tested all kinds of scenarios.

Another option would be to do a human review after installing it for a day.
