
A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

[–] TheBest@midwest.social 21 points 7 months ago* (last edited 7 months ago) (5 children)

This actually opens an interesting debate.

Every photo you take with your phone is post-processed. Saturation can be boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what's happening.

Typically people are okay with it because it makes for a better photo - but is it a true representation of the reality it tried to capture? Where do we draw the line in defining an AI-enhanced photo/video?

We can currently make the judgement call that a phone's camera is still a fair representation of the truth, but what about when the 4K AI-Powered Night Sight Camera does the same?

My post is only tangentially related to the original article, but I'm still curious what the common consensus is.

[–] GamingChairModel@lemmy.world 13 points 7 months ago (2 children)

Every photo you take with your phone is post-processed.

Years ago, I remember looking at satellite photos of some city, and there was a rainbow-colored airplane trail in one of the photos. It was explained that a lot of satellites just use a black-and-white imaging sensor, take 3 photos while rotating a red/green/blue filter over that sensor, and then combine the images digitally into RGB data for a color image. For most things, the process works pretty seamlessly. But for rapidly moving objects, like white airplanes, the delay between the capture of the red, green, and blue channels created artifacts in the image that weren't present in the actual truth of the reality being recorded. Is that specific satellite method all that different from how modern camera sensors process color, through tiny physical RGB filters over specific subpixels?
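
To make the mechanics concrete, here's a toy sketch (assuming numpy; the arrays are hypothetical stand-ins for the three filtered exposures):

```python
import numpy as np

# Stand-ins for three grayscale exposures taken a moment apart,
# one per color filter, as 2-D arrays of intensities in [0, 1].
red_pass = np.random.rand(480, 640)
green_pass = np.random.rand(480, 640)
blue_pass = np.random.rand(480, 640)

# Stack the single-channel exposures into one H x W x 3 RGB image.
# Anything that moved between exposures lands in a different spot in
# each channel, producing exactly the rainbow-trail artifact above.
rgb = np.dstack([red_pass, green_pass, blue_pass])
print(rgb.shape)  # (480, 640, 3)
```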

Even with conventional photography, even analog film, there are image artifacts that derive from how the photo is taken rather than from what is true of the subject. Bokeh/depth of field, motion blur, rolling shutter, and physical filters all change the resulting image in ways caused by the camera, not the appearance of the subject. Sometimes that makes for interesting artistic effects. But it isn't truth in itself; it's evidence of some truth that needs to be filtered through an understanding of how the image was captured.

Like the Mitch Hedberg joke:

I think Bigfoot is blurry, that's the problem. It's not the photographer's fault. Bigfoot is blurry, and that's extra scary to me.

So yeah, at a certain point, for evidentiary proof in court, someone will need to prove some kind of chain of custody: that the image being shown in court is derived from some reliable and truthful method of capturing what actually happened at a particular time and place. For the most part, it's simple today: I took a picture with a normal camera, and I can testify that it came out of the camera like this, without any further editing. As the chain of image creation starts to include more processing between photons hitting the sensor and a digital file being displayed on a screen or printed onto paper, we'll need to remain mindful of the areas where that can be tripped up.
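
One well-understood building block for that kind of chain of custody is cryptographic hashing. A minimal sketch using only Python's standard library (the file path is hypothetical):

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest the moment the photo is captured; any later edit,
# "enhancement" included, changes the digest and flags the break in
# the chain. (This proves integrity, not that the capture was truthful.)
# print(file_digest("evidence_photo.jpg"))  # hypothetical path
```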

[–] TheBest@midwest.social 2 points 7 months ago

Fantastic expansion of my thought. This is something that isn't going to be answered with an exact scientific value but will have to be decided based on our human experience with the tech. Interesting times ahead.

[–] NoRodent@lemmy.world 2 points 7 months ago

The crazy part is that your brain is doing similar processing all the time too. Ever heard of the blind spot? Your brain has literally zero data there but uses "content-aware fill" to hide it from you. Or the fact that your eyes are constantly scanning across objects and your brain is merging the views into a panorama on the fly, because only a small part of your field of vision has high enough fidelity. It will also create fake "frames" (look up the stopped-clock illusion) for the time your eyes are moving, where you should see a blur instead. There's more stuff like this; a lot of it manifests in various optical illusions. So not even our own eyes capture the "truth". And then there's the (in)accuracy of memory when trying to recall what we've seen, but that's an entirely different can of worms.

[–] Buelldozer 5 points 7 months ago

We can currently make the judgement call that a phone's camera is still a fair representation of the truth

No, you can't. Samsung's AI is out there now, and it absolutely will add data to images and video in order to make them look better. Not just adjust an image, but actually add data... on its own. If you take an off-angle photo and then tell it to straighten it, it will take your photo, re-orient it, and then "make up" what should have been in the corner. It will do the same thing for video. With video it can also flat-out add frames to create a slow-motion effect or smooth out playback if the recording was janky.
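
Samsung's actual pipeline isn't public, but even the simplest possible form of frame synthesis makes the point. A toy sketch, assuming numpy (real slow-motion features use motion estimation or generative models, not plain averaging):

```python
import numpy as np

def blend_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Synthesize an in-between frame by averaging two real frames.

    The result's pixels were never captured by any sensor; even this
    crude interpolation manufactures a frame rather than recording one.
    """
    mid = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return mid.astype(np.uint8)

frame1 = np.zeros((720, 1280, 3), dtype=np.uint8)        # hypothetical black frame
frame2 = np.full((720, 1280, 3), 255, dtype=np.uint8)    # hypothetical white frame
mid = blend_midframe(frame1, frame2)                     # gray: data no camera saw
```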

Samsung has it out there now, so Apple and the rest of the horde will surely be quick to roll it out too.

[–] ricecake@sh.itjust.works 4 points 7 months ago (2 children)

Computational photography in general gets tricky because it relies on your answer to the question "Is a photograph supposed to reflect reality, or should it reflect human perception?"

We like to think those are the same, but they're not. Your brain only has a loose interest in reality and is much more focused on utility: deleting the irrelevant, making important things literally bigger, enhancing contrast and color to make details stand out more.
You "see" a reconstruction of reality continuously updated by your eyes, which work fundamentally differently than a camera.

Applying different exposure settings to different parts of an image, or reconstructing a video scene based on optic data captured over the entire video, doesn't capture what the sensor captured, but it can come much closer to representing what the human holding the camera perceived.
Low light photography is a great illustration of this: when we see a person walk from light to dark, our brains will shamelessly remember what color their shirt was and that grass is green, and update our perception accordingly, as well as using a much longer "exposure" time to gather more light data and maintain color perception in low light, even though we might not have enough actual light to make those determinations without clues.
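
A concrete instance of "different exposure settings for different parts of an image" is exposure fusion. A minimal sketch using OpenCV's Mertens fusion (the file names are hypothetical, and this is one method among many):

```python
import cv2
import numpy as np

# Hypothetical bracketed shots of one scene: under-, normally, and over-exposed.
exposures = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, then blends the brackets. No single capture looked
# like the output, yet it's closer to what the photographer perceived.
fused = cv2.createMergeMertens().process(exposures)
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```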

I think most people want a snapshot of what they perceived at the moment.
I like the trend of the camera capturing the processed image while also storing the "plain" image. There's also capturing the raw image data, which is basically a dump of the camera's optical sensor readings. It's what the automatic post-processing is tweaking, and what human photographers use to correct light balance and the like.
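
If you want to poke at the "plain" side yourself, raw files are easy to inspect. A minimal sketch, assuming the third-party rawpy library and a hypothetical file:

```python
import rawpy

# A raw file is close to the sensor dump described above: one intensity
# per photosite, still under the Bayer color-filter mosaic.
with rawpy.imread("photo.dng") as raw:   # hypothetical file
    mosaic = raw.raw_image               # unprocessed sensor values
    # Demosaic + white balance + gamma: the "development" every phone
    # performs automatically before you ever see a picture.
    rgb = raw.postprocess()
    print(mosaic.shape, rgb.shape)
```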

[–] TheBest@midwest.social 1 points 7 months ago (1 children)

Great points! Thanks for expanding. I agree that people most often want a recreation of what was perceived. It's going to make this whole AI-enhanced evidence question even more nuanced as the tech improves.

[–] ricecake@sh.itjust.works 1 points 7 months ago

I think the "best" possible outcome is that AI images are essentially treated as witness data, as opposed to direct evidence. (Best is meant in terms of how we treat AI enhanced images, not justice outcomes. I don't think we should use them for such things until they're significantly better developed, if ever)

Because the image at that point is essentially a neural network's interpretation of what it captured, which is functionally similar to a human testifying to what they believe they saw in an image.

I think it could have a use if presented in conjunction with the original or raw image, and if the network can explain what drove its interpretation, which is a tricky thing for a lot of neural-network-based systems.
That brings it much closer to how doctors are using them for imaging analysis: the output doesn't supplant the original, but points to part of it with an interpretation and a synopsis of why it thinks that blob is a tumor/gun.
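
One simple, model-agnostic way to get that kind of "why" is occlusion sensitivity: hide one patch at a time and watch how the model's confidence moves. A toy sketch, where classify() is a hypothetical stand-in for the network:

```python
import numpy as np

def classify(image: np.ndarray) -> float:
    """Hypothetical stand-in for a network's confidence score."""
    return float(image.mean()) / 255.0

def occlusion_map(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Score each patch by how much hiding it drops the confidence.

    Patches whose occlusion hurts the score most are the regions the
    model "pointed at": the synopsis described above.
    """
    base = classify(image)
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0  # hide one patch
            heat[i // patch, j // patch] = base - classify(masked)
    return heat

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # hypothetical input
print(occlusion_map(img).round(3))
```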

[–] Natanael@slrpnk.net 1 points 7 months ago

There are different types of computational photography. The kinds that make sure to capture enough sensor data to then interpolate in a way that accurately simulates a different camera/lighting setup are in a sense "more realistic" than the kinds that rely heavily on complex algorithms to do things like deblurring. My point is essentially that the calculations have to be grounded in physics rather than in just trying to produce something artistic.

[–] fuzzzerd@programming.dev 4 points 7 months ago (1 children)

This is what I was wondering about as I read the article. At what point does the post-processing on the device become too much?

[–] Natanael@slrpnk.net 1 points 7 months ago (1 children)

When it generates additional data instead of just interpolating captured data.
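
To put that distinction in concrete terms, a toy sketch: interpolation derives new values from captured neighbors, while generation invents values from a model's prior.

```python
import numpy as np

captured = np.array([10.0, 30.0])   # two real sensor samples

# Interpolation: the new value is a function of captured data only,
# and stays bounded by what was actually measured.
interpolated = captured.mean()      # 20.0

# Generation: the new value comes from a model's prior, not the scene.
# It can look plausible while corresponding to nothing that was captured.
rng = np.random.default_rng(0)
generated = rng.normal(loc=128.0, scale=32.0)
```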

[–] fuzzzerd@programming.dev 1 points 7 months ago (1 children)

What would you classify Google's or Apple's portrait mode as? It's definitely doing something. We can probably agree that, at this point, it's still a reasonably enhanced version of what was really there, while a Snapchat filter that turns you into a dog is obviously too much. The question is: where on that spectrum is the AI or algorithm too much?

[–] Natanael@slrpnk.net 1 points 7 months ago* (last edited 7 months ago)

It varies; there are definitely generative pieces involved, but they try not to make it blatant.

If we're talking about evidence in court, then practically speaking it matters more whether the photographer themselves can testify to how accurate they think the photo is and how well it corresponds to what they saw. Any significantly AI-edited photo effectively becomes as strong as a diary entry written by a person on the scene: it backs up their testimony to a degree by checking the witness's consistency over time, rather than being trusted directly. The photo can lie just as much as the diary entry can, so it's a test of credibility instead.

If you use face swap, then those photos are likely nearly unusable. Editing for color and contrast, etc., still usable. Upscaling depends entirely on what the testimony is about. Identifying a person who is just a pixelated blob? Nope, won't do. Same with verifying what a scene looked like, such as identifying very pixelated objects: not OK. But upscaling a clear photo that you just wanted to be larger, where the photographer can attest to who the subject is? Still usable.

[–] jballs@sh.itjust.works 3 points 7 months ago

I was wondering that exact same thing. If I take a portrait photo on my Android phone, it instantly applies a ton of filters. If I took a picture of two people, and one of them murdered the other shortly afterwards, could my picture be used as evidence to show they were together just before the murder? Or would it be inadmissible because it's an AI-doctored photo?