this post was submitted on 01 Dec 2023
259 points (82.1% liked)

Technology
A U.K. woman was photographed standing in a mirror where her reflections didn't match, but not because of a glitch in the Matrix. Instead, it's a simple iPhone computational photography mistake.

[–] xantoxis@lemmy.world 36 points 11 months ago (2 children)

Program against it? It's a camera. Put what's on the light sensor into the file and you're done. They programmed it to make this happen, by pretending that multiple images are the same image.

[–] ninekeysdown@lemmy.world 3 points 11 months ago (3 children)

That’s oversimplified. There’s only so much light you can gather on a sensor at the sizes used in mobile devices. To compensate, there’s A LOT of processing that goes on. Even higher-end DSLR cameras do post-processing.

Even shooting RAW like you’re suggesting involves some amount of post processing for things like lens corrections.

It’s all that post processing that allows us to have things like HDR images for example. It also allows us to compensate for various lighting and motion changes.
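One way to picture the HDR part: blend several exposures, weighting each pixel by how well exposed it is. This is a deliberately minimal toy sketch in the spirit of exposure fusion, not what any phone actually ships:

```python
import numpy as np

def fuse_exposures(frames):
    """Toy exposure fusion: weight each frame's pixels by how close
    they are to mid-gray (0.5), then blend. Real HDR pipelines align
    frames and use far more sophisticated weighting."""
    frames = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    # "Well-exposedness" weight: Gaussian centered on mid-gray
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * frames).sum(axis=0)

# Two synthetic "exposures" of a flat scene: one dark, one bright.
# Both are equally far from mid-gray, so they blend to 0.5.
dark = np.full((4, 4), 0.1)
bright = np.full((4, 4), 0.9)
fused = fuse_exposures([dark, bright])
```

The point is just that the output pixel never existed on the sensor in any single capture; it is computed across frames.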

Mobile phone cameras are more about the software than the hardware these days.

[–] cmnybo@discuss.tchncs.de 11 points 11 months ago (2 children)

With a DSLR, the person editing the pictures has full control over what post processing is done to the RAW files.

[–] ninekeysdown@lemmy.world 1 points 11 months ago (1 children)

Correct, I was referring to RAW shot on mobile, not a proper DSLR. I guess I should have been clearer about that. Sorry!

[–] uzay@infosec.pub 2 points 11 months ago

You might be conflating a RAW photo file with the way it is displayed. A RAW file isn't actually an image file; it's a container holding the sensor pixel data, metadata, and a pre-generated JPEG thumbnail. To display an image, the viewer application either has to interpret the sensor data into an image (possibly with changes of its own) or just show the embedded JPEG. On mobile phones, I think it's most likely that the JPEG is generated with post-processing pre-applied and displayed that way. That doesn't mean the RAW data itself has any post-processing applied to it, though.
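That "interpret the sensor data" step includes demosaicing: the sensor records one color per photosite behind a Bayer filter, and the converter has to reconstruct full RGB. A deliberately crude illustration (real converters use edge-aware interpolation, white balance, and a color matrix):

```python
import numpy as np

def demosaic_nearest(bayer):
    """Toy demosaic of an RGGB Bayer mosaic: each 2x2 block of
    photosites becomes a flat RGB patch. Purely illustrative."""
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = bayer[y, x]                              # red site
            g = (bayer[y, x + 1] + bayer[y + 1, x]) / 2  # two green sites
            b = bayer[y + 1, x + 1]                      # blue site
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb

# A uniformly lit sensor should demosaic to a uniform gray image
flat = demosaic_nearest(np.full((4, 4), 0.5))
```

So even a "straight" rendering of a RAW file involves interpretation; the question is only who does it and when.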

[–] falkerie71@sh.itjust.works 1 points 11 months ago

Regular people aren't going to edit their photos or deal with huge RAW files. They want good pictures straight out of the camera that are easy to share.

[–] randombullet@feddit.de 1 points 11 months ago (1 children)

RAW files from cameras have metadata that tells RAW converters which color profile and lens the shot was taken with, but any camera worth using professionally doesn't apply native corrections to RAW files. In special cases, though, such as lenses with high distortion, the RAW files have a distortion profile enabled by default.
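For context on what such a distortion profile encodes: a common simplification is a one-term radial model, where a point's distance from the optical axis is scaled by a lens-specific coefficient. A hypothetical sketch (the coefficient name and values are made up for illustration):

```python
import numpy as np

def apply_radial_distortion(pts, k1):
    """One-term radial model: x' = x * (1 + k1 * r^2), with points in
    normalized coordinates centered on the optical axis. A converter
    inverts this mapping using coefficients from the lens profile."""
    pts = np.asarray(pts, dtype=np.float64)
    r2 = (pts ** 2).sum(axis=-1, keepdims=True)  # squared radius per point
    return pts * (1 + k1 * r2)

# k1 > 0 pushes points outward with radius (pincushion distortion);
# the image center is unaffected.
distorted = apply_radial_distortion([[0.0, 0.0], [1.0, 0.0]], k1=0.1)
```

Real profiles (e.g. in DNG metadata) carry more terms, but the idea is the same: the correction is math applied after capture, not something the sensor sees.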

[–] ninekeysdown@lemmy.world -1 points 11 months ago

Correct, I was referring to RAW shot on mobile devices, not a proper DSLR. That was my observation based on using the iPhone and Android RAW formats.

This isn’t my area of expertise so if I’m wrong about that aspect too let me know! 😃

[–] ricecake@sh.itjust.works 2 points 11 months ago

What's on the light sensor when? There's no shutter; it can capture a continuous stream of light indefinitely.

Most people want a rough representation of what's hitting the sensor when they push the button. But they don't actually care about the sensor; they care about what they can see, which doesn't include the blur from the camera wobbling or the slight blur of the subject moving.
They want the lighting to match how they perceived the scene, even though that isn't what the sensor picked up, because your brain edits what you see before you comprehend the image.

Doing those corrections is a small step to incorporating discontinuities in the capture window for better results.
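The simplest version of "using the capture window" is burst stacking: average several frames of a static scene and sensor noise drops roughly with the square root of the frame count. A toy simulation (synthetic noise, no motion handling, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_scene = np.full((32, 32), 0.6)

# Simulate a burst of noisy captures pulled from the continuous stream
frames = [true_scene + rng.normal(0.0, 0.1, true_scene.shape)
          for _ in range(16)]

single = frames[0]
stacked = np.mean(frames, axis=0)  # naive merge: assumes a static scene

# Averaging N frames cuts noise by roughly sqrt(N)
err_single = np.abs(single - true_scene).mean()
err_stacked = np.abs(stacked - true_scene).mean()
```

Real phone pipelines additionally align frames and reject moving pixels before merging, which is exactly where mismatched regions, like the mirror reflections in the article, can slip through.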