this post was submitted on 03 Apr 2024
961 points (99.4% liked)


A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can reveal secret visual data.

[–] emptyother@programming.dev 95 points 7 months ago (8 children)

How long until we get upscalers of various sorts built into tech that shouldn't have them? For bandwidth reduction, for storage compression, or for cost savings. Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges' screens? Cheap security cams with "enhanced night vision" might get somebody jailed.

I love the AI tech. But its future worries me.

[–] someguy3@lemmy.world 28 points 7 months ago* (last edited 7 months ago) (1 children)

Dehance! [Click click click.]

[–] aeronmelon@lemmy.world 4 points 7 months ago

That scene gets replayed in my mind three or four times a month.

[–] Jimmycakes@lemmy.world 18 points 7 months ago (1 children)

It will run wild for the foreseeable future, until the masses stop falling for AI gimmicks; then it will be reserved for the actual use cases where it's beneficial, once the bullshit AI stops making money.

Lol, you think the masses will stop falling for it in gimmicks? Just look at the state of the world.

[–] GenderNeutralBro@lemmy.sdf.org 16 points 7 months ago (3 children)

AI-based video codecs are on the way. This isn't necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That's a good thing for film and TV, but a bad thing for, say, security cameras.

The devil's in the details, and "AI" is way too broad a term. There are a lot of ways this could be implemented.

[–] jeeva@lemmy.world 13 points 7 months ago (2 children)

I don't think loss is what people are worried about, really - more injecting details that fit the training data but don't exist in the source.

Given the hoopla Hollywood and directors made about frame-interpolation, do you think generated frames will be any better/more popular?

[–] GenderNeutralBro@lemmy.sdf.org 1 points 7 months ago

In the context of video encoding, any manufactured/hallucinated detail would count as "loss". Loss is anything that's not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle in the background, that would also be a lossy compression artifact in context.

As for frame interpolation, it could definitely be better, because the current algorithms out there are not good. It will likely not be more popular, though, since this is generally viewed as an artistic matter rather than a technical one. For example, a lot of people hated the high frame rate in the Hobbit films despite the fact that it was a naturally high frame rate, filmed with high-frame-rate cameras, not the product of a kind-of-shitty algorithm applied after the fact.
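The "loss is anything not in the source" definition is easy to make concrete. A toy error metric (made-up pixel values, not a real codec) scores invented detail exactly like blur:

```python
def reconstruction_error(source, decoded):
    """Mean squared error between original and decoded frames (flat pixel lists).

    Any deviation from the source counts as loss in a codec: block noise,
    smearing, or an AI encoder painting in detail that was never there.
    The metric does not care *why* the pixels differ.
    """
    return sum((s - d) ** 2 for s, d in zip(source, decoded)) / len(source)

source = [10, 200, 30, 90, 120, 250, 60, 5]       # toy 8-pixel "frame"
quantized = [p // 16 * 16 for p in source]        # crude quantization: classic lossy artifact
hallucinated = source[:4] + [255, 255, 255, 255]  # encoder "invents" bright detail

print(reconstruction_error(source, source))        # 0.0 -> lossless
print(reconstruction_error(source, quantized))     # small, boring loss
print(reconstruction_error(source, hallucinated))  # large loss from invented pixels
```

The hallucinated frame may look sharper to a human, but measured against the source it is simply a worse reconstruction than the blurry one.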

[–] DarkenLM@kbin.social 6 points 7 months ago (4 children)

I don't think AI codecs will be anything revolutionary. There are plenty of lossless codecs already, but if you want more detail, you'll need a better physical sensor, and I doubt there's anything that can be done to get around that (anything that actually represents what exists, not a hallucination).

[–] foggenbooty@lemmy.world 2 points 7 months ago

It's an interesting thought experiment, but we don't actually see what really exists; our brains essentially do AI-style vision, filling in things we don't actually perceive. Examples are movement while we're blinking, objects and colors in our peripheral vision, the state of objects when our eyes dart around, etc.

The difference is we can't go back frame by frame and analyze these "hallucinations" since they're not recorded. I think AI enhanced video will actually bring us closer to what humans see even if some of the data doesn't "exist", but the article is correct that it should never be used as evidence.

[–] Natanael@slrpnk.net 1 points 7 months ago

I think there's a possibility for long-format video of stable scenes to use ML for higher compression ratios, by deriving a video-specific model of the objects in the frame and then describing their movements (essentially reducing the actual frames to wireframe models instead of image frames, then painting them in from the model).

But that's a very specific approach that probably only works well for certain types of video content (think animated stuff).
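That idea can be sketched in miniature. Here's a hypothetical encoder that reduces a one-object scene to a per-frame position list and repaints frames on decode (a stand-in for a fitted scene model, nothing like a real learned codec):

```python
def render(width, pos):
    """Paint a 1-D frame: background dots with the object at `pos`."""
    return "".join("X" if i == pos else "." for i in range(width))

def encode(frames):
    """Reduce frames of one moving object to (width, positions).

    A real ML encoder would fit an object/scene model; here the "model"
    is just the object's position in each frame.
    """
    return len(frames[0]), [frame.index("X") for frame in frames]

def decode(width, positions):
    """Repaint every frame from the model description."""
    return [render(width, p) for p in positions]

frames = [render(40, p) for p in range(10)]  # 10 frames x 40 chars = 400 chars stored naively
width, positions = encode(frames)            # model: 11 small ints

assert decode(width, positions) == frames    # perfect reconstruction here...
# ...but only because the scene really is "one object on a static
# background". Anything the model can't represent is simply lost.
```

Which is exactly why it would suit stable, animation-like content and fail on messy real-world footage.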

[–] Hexarei@programming.dev 1 points 7 months ago

Nvidia's RTX video upscaling is trying to be just that: DLSS, but run on a video stream instead of a game running on your own hardware. They've posited the idea of game streaming dropping to lower bitrates just so you can upscale it locally, which to me sounds like complete garbage.

[–] GenderNeutralBro@lemmy.sdf.org 1 points 7 months ago

There are plenty of lossless codecs already

It remains to be seen, of course, but I expect to be able to get lossless (or nearly-lossless) video at a much lower bitrate, at the expense of a much larger and more compute/memory-intensive codec.

The way I see it working is that the codec would include a general-purpose model, and video files would be encoded for that model + a file-level plugin model (like a LoRA) that's fitted for that specific video.
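A toy sketch of that "shared model + per-file plugin" idea (purely hypothetical names and string "patches"; a real learned codec would carry fitted weights, not a lookup table):

```python
# A general-purpose codebook ships with the decoder; each file carries a
# small set of extra entries fitted to that specific video (the
# "LoRA-like" plugin), and frames are stored as codebook indices.

GENERAL_CODEBOOK = ["sky", "grass", "road", "wall"]   # ships with the codec

def encode(frame_patches, extra_entries):
    """Store each patch as an index into general + per-file codebook."""
    codebook = GENERAL_CODEBOOK + extra_entries
    return [codebook.index(p) for p in frame_patches]

def decode(indices, extra_entries):
    """Rebuild patches from the shared model plus the file's plugin."""
    codebook = GENERAL_CODEBOOK + extra_entries
    return [codebook[i] for i in indices]

extra = ["bear_on_tricycle"]                  # per-file plugin entries
patches = ["sky", "grass", "bear_on_tricycle", "sky"]
bitstream = encode(patches, extra)            # [0, 1, 4, 0]

assert decode(bitstream, extra) == patches    # lossless for covered content
```

The bitstream is tiny because the heavy lifting lives in the decoder-side model; the per-file plugin only has to cover what the general model doesn't.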

[–] Buelldozer 2 points 7 months ago

AI-based video codecs are on the way.

Arguably already here.

Look at this description of Samsung's mobile AI for their S24 phone and newer tablets:

AI-powered image and video editing

Galaxy AI also features various image and video editing features. If you have an image that is not level (horizontally or vertically) with respect to the object, scene, or subject, you can correct its angle without losing other parts of the image. The blank parts of that angle-corrected image are filled with Generative AI-powered content. The image editor tries to fill in the blank parts of the image with AI-generated content that suits the best. You can also erase objects or subjects in an image. Another feature lets you select an object/subject in an image and change its position, angle, or size.

It can also turn normal videos into slow-motion videos. While a video is playing, you need to hold the screen for the duration of the video that you want to be converted into slow-motion, and AI will generate frames and insert them between real frames to create a slow-motion effect.

[–] MudMan@fedia.io 13 points 7 months ago* (last edited 7 months ago)

Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than "dumb" compression in terms of loss of data or detail, but you don't want to treat a simple upscale of an image as a photographic image for evidence in a trial. Sports replays and Hawk-Eye technology don't really rely on upscaling; we have ways to track things in an enclosed volume very accurately now that are demonstrably more precise than a human ref looking at them. Whether that's better or worse for the game's pace and excitement is a different question.

The thing is, ML tech isn't a single thing. The tech itself can be used very rigorously; pretty much every scientific study you get these days uses ML to compile or process images or data, and that's not a problem if done correctly. The issue is that everybody assumes "generative AI" chatbots, upscalers, and image processors are what ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking it's basically magic.

I'm not particularly afraid of "AI tech", but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.

[–] elephantium@lemmy.world 3 points 7 months ago

Cheap security cams with “enhanced night vision” might get somebody jailed.

Might? We've been arresting the wrong people based on shitty facial recognition for at least 5 years now. This article has examples from 2019.

On one hand, the potential of this type of technology is impressive. OTOH, the failures are super disturbing.

[–] CileTheSane@lemmy.ca 2 points 7 months ago

It's already being used for things it shouldn't be.

[–] dojan@lemmy.world 2 points 7 months ago (1 children)

Probably not far. Nvidia has had machine-learning-enhanced upscaling of video games for years at this point, and now they've also implemented similar tech for frame interpolation. The rendered output might be 720p at 20 FPS but will be presented at 1080p 60 FPS.

It's not a stretch to assume you could apply similar tech elsewhere. Non-ML enhanced, yet still decently sophisticated frame interpolation and upscaling has been around for ages.
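For reference, the non-ML baseline really is simple: a linear blend between adjacent frames (which produces ghosting on motion rather than inventing plausible in-between detail the way learned interpolators try to). A minimal sketch with frames as flat pixel lists:

```python
def blend(frame_a, frame_b, t):
    """Linearly interpolate pixel values between two frames (0 <= t <= 1).

    The crudest non-ML interpolation: fast-moving objects show up as
    semi-transparent ghosts in the blended frames.
    """
    return [round(a * (1 - t) + b * t) for a, b in zip(frame_a, frame_b)]

def interpolate_stream(frames, factor):
    """Turn e.g. 20 FPS into 60 FPS by inserting blended frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.extend(blend(a, b, i / factor) for i in range(1, factor))
    out.append(frames[-1])
    return out

frames = [[0, 0, 0], [30, 60, 90]]  # two tiny 3-pixel frames
print(interpolate_stream(frames, 3))
# [[0, 0, 0], [10, 20, 30], [20, 40, 60], [30, 60, 90]]
```

ML-based interpolators replace the `blend` step with a model that predicts what the in-between frame should look like, which is where the made-up detail comes in.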

[–] MrPoopbutt@lemmy.world 6 points 7 months ago (2 children)

Nvidia's game upscaling has access to game data, as well as training data generated by gameplay, to make footage that is appealing to the gamer's eye and not necessarily accurate. Security (or other) cameras don't have access to this extra data, and the use case for video in courts is to be accurate, not pleasing.

Your comparison is apples to oranges.

[–] dojan@lemmy.world 8 points 7 months ago (1 children)

No, I think you misunderstood what I'm trying to say. We already have tech that uses machine learning to upscale stuff in real time, but I'm not saying it's accurate enough for things like court videos. I don't think we'll ever get to a point where it can be accurate as evidence, because by the very nature of the tech it's making up detail, not enhancing it. You can't enhance what isn't there. It's not turning nothing into accurate data; it's guessing based on input and what it's been trained on.

Prime example right here: this is the objectively best version of Alice in Wonderland, produced by the BBC in 1999 and released on VHS. As far as I can tell there was never a high-quality version available. Someone used machine learning to upscale it, and overall it looks great, but there are scenes (such as the one that's linked) where you can clearly see the flaws. Tina Majorino has no face, because in the original data there wasn't enough detail to discern one.

Now, we could obviously train a model to recognise "criminal activity": stabbing, shooting, what have you. Then, however, you end up with models that mistake one thing for another, like scratching your temple being read as driving while on the phone. And if, instead of detecting something, the model's job is to fill in missing data, we have a recipe for disaster.

Any evidence that has had machine learning involved should be treated with at least as much scrutiny as a forensic sketch, which, while useful in investigations, generally doesn't carry much weight as evidence. That said, a forensic sketch is created through collaboration between an artist and a witness, so there is intent behind it. Machine-generated artwork lacks intent; you can tweak the parameters until it generates roughly what you want, but it's honestly better to just hire an artist and get exactly what you want.


[–] Buelldozer 3 points 7 months ago* (last edited 7 months ago) (1 children)

Security (or other) cameras don’t have access to this extra data

Samsung's AI on their latest phones and tablets does EXACTLY what @MrPoopbutt@lemmy.world is describing. It will literally create data, including parts of scenes and even full frames, in order to make video look better.

So while a true security camera may not be able to do it, there are now widely available consumer products that WILL. You're also forgetting that even security camera footage can be processed through software, so footage from those isn't immune to AI fiddling either.

[–] MrPoopbutt@lemmy.world 2 points 7 months ago

Would that not fall under the "enhanced" evidence that is banned by this court decision?

[–] Bread@sh.itjust.works 1 points 7 months ago (1 children)

The real question is: could we ever really trust photographs before AI? Image manipulation has been a thing since long before the digital camera and Photoshop. What makes the images we see actually real? Cameras have been miscapturing image data for as long as they have existed. Do the light levels in a photo match what was actually there according to the human eye? Usually not. What makes a photo real?

[–] emptyother@programming.dev 1 points 7 months ago

They can. But there's a reasonable level of trust that a security feed has been kept secure and not tampered with by the owner, if he doesn't have a motive. But what if not even the owner knows that somewhere in their tech chain (maybe the camera, maybe the screen, maybe the storage device, maybe all three) the image was "improved"? No evidence of tampering. We'll have the police blaming Count Rugen for a bank robbery he didn't do, because the camera clearly shows a six-fingered man!