Focus AFTER the picture is taken?

As I understand it, the camera somehow records information from all parts of the scene at once, including the direction of the light, and then the software inside assembles that data into an image. But because all of the information is recorded, the image can be re-rendered in a different way.

At least, that’s what I get from their explanation.

It sounds very sci-fi, but it seems like it makes sense.

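Roughly, the refocusing trick (as I understand it, and this is just my own toy interpretation, not Lytro’s actual method) would treat the capture as a 4D “light field”: lots of little views of the scene taken through slightly different parts of the lens. To refocus, you shift each view in proportion to its position in the aperture and average them all. A minimal numpy sketch of that shift-and-sum idea, with the array layout and names invented purely for illustration:

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy shift-and-sum refocus.

    light_field: 4D array shaped (n_u, n_v, height, width) -- one small
        sub-aperture view per (u, v) position across the lens.
    alpha: refocus parameter; 0 keeps the captured focal plane, other
        values shift each view by an amount proportional to its (u, v)
        offset before averaging, bringing a different depth into focus.
    """
    n_u, n_v, height, width = light_field.shape
    out = np.zeros((height, width))
    for u in range(n_u):
        for v in range(n_v):
            # How far this viewpoint sits from the centre of the aperture.
            du = (u - (n_u - 1) / 2.0) * alpha
            dv = (v - (n_v - 1) / 2.0) * alpha
            # Shift this view accordingly, then accumulate it.
            out += np.roll(light_field[u, v],
                           shift=(int(round(du)), int(round(dv))),
                           axis=(0, 1))
    return out / (n_u * n_v)

# Fake data: a 9x9 grid of 200x300-pixel views.
lf = np.random.rand(9, 9, 200, 300)
near_focus = refocus(lf, alpha=2.0)
far_focus = refocus(lf, alpha=-2.0)
```

The real processing is obviously far more sophisticated, but that’s the general shape of the idea as I read it.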

It will be awesome to see pictures in 3D.

If you think about it, when a normal camera (or even your eye) “focuses” on something, it adjusts the shape/angle of the lens to gather light in a certain way, so that one particular point comes into focus.

If you can “focus” on all points at once, you can pull this off. Just those examples would be VERY difficult to fake, because you’d have to be able to take the picture from the same point while focusing on all those different things.

Can you still adjust your aperture? Or is aperture not important anymore?
For example, if you have a cheap lens that only goes to f/3.5, can you shoot an image and later (after the picture is taken) choose what it would look like if it had been shot at f/1.8?

Can you still adjust shutter speed and the other settings on these cameras, to shoot fully manually?
As I understand from their video here: Lytro’s “Light Field” Camera Adds Fabled Third Dimension to Photography - Core77
In the YouTube video, those pictures are moving, like little movies, but only for a few seconds. So can you only use it with a tripod, to avoid blurred images if the shot takes a few seconds?

In the future, the next generation will no longer understand what focal length, aperture, … are all about.

Another disadvantage is the image size, which is around 100 MB per picture, if I understood their information correctly.

Such a huge camera just doesn’t seem that user-friendly.

But that’s just my opinion…

Tripod!

I guess you could also fake it by taking a small-aperture photo so everything’s sharp, then selectively blurring; but how time-consuming.

As I understand it, the value would have to be that you can get all the information about a scene without using small apertures. Because if not, SLRs would already be able to do what they demonstrated.

I suppose if the camera really can somehow know how deep subjects are in a scene, it’d make the post-production much, much simpler. But what a massive raw file it would be!

If that’s really the case, it doesn’t sound like it would be practical for regular consumers. The people who take snapshots don’t (usually) want to take time to post-process, not for a Facebook pic or whatever their usual use is.

In my head I’m imagining this thing as the digital, modern version of a large format camera. Like, it’d be big and heavy. The files would be too large to shoot like mad (like a usual photo shoot), so you’d carefully plan your shots, set up your tripod and your scene, then take a few dozen photos to fill up your memory card before you have to stop. Process them back at your computer and you’d get the best quality photos of the age, because they’d be in 3D.

But I figure I’m totally wrong.

If their selling point is that you could focus after taking pictures, I guess they really are making this thing for regular consumers.


Yeah, you could use a tripod… but it’d be hard to get the skateboarder to hover in mid air while you refocus all the shots. :wink: (Granted, most of us on here could have easily photoshopped him in, but still… gotta trust something =p).

But which aperture is that camera using? I guess all those lenses that blend all those images together into one image will shoot at a specific aperture? Or is it like the Adobe front multi-lens adapter (google ‘adobe light field camera lens’), where it’s just a screw-on attachment and you can still set your aperture?

So then I guess you won’t be able to adjust your image settings via the post-processing software, with options such as:
“show me how the image would be at focal length 50mm”
“show me how the image would be at focal length 85mm”, …
“show me how the image would be at aperture f/1.8”
“show me how the image would be at aperture f/16”
Because your image is already set at a specific focal length and aperture when shot.

It uses a different type of sensing, I believe, in that it detects the incoming light rays (including their direction) rather than just light intensities across a CCD. Something like that, anyway!
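If that’s right, then in principle the aperture question above is partly a software question too: averaging rays from the whole lens would behave like shooting wide open (shallow depth of field), while keeping only the rays near the centre of the lens would mimic stopping down, though you could never go wider than the physical aperture the shot was actually taken at. A rough numpy sketch of that idea, with the 4D array layout and names invented purely for illustration (not Lytro’s software):

```python
import numpy as np

def synthetic_aperture(light_field, keep_radius):
    """Toy synthetic aperture from a captured light field.

    light_field: 4D array shaped (n_u, n_v, height, width) of views
        taken through different parts of the lens.
    keep_radius: how much of the lens to use. A large radius averages
        rays from the whole aperture (wide open, blurrier background);
        a small radius uses only the central rays (like stopping down).
    """
    n_u, n_v, height, width = light_field.shape
    centre_u, centre_v = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((height, width))
    used = 0
    for u in range(n_u):
        for v in range(n_v):
            # Only keep views that fall inside the chosen aperture circle.
            if (u - centre_u) ** 2 + (v - centre_v) ** 2 <= keep_radius ** 2:
                out += light_field[u, v]
                used += 1
    return out / max(used, 1)

# Same capture, two different "apertures" after the fact.
lf = np.random.rand(9, 9, 200, 300)
wide_open = synthetic_aperture(lf, keep_radius=4.5)     # whole lens
stopped_down = synthetic_aperture(lf, keep_radius=1.0)  # centre only
```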

Your guess is as good as mine, man.

We’ll see what this is all about when they release it I guess.


Focusing through different angles is a great idea.
I wanna buy one like that.
I love 3D images.

That would be super cool! :slight_smile:

Good point samanime! :lol:

The camera is supposed to be priced for the average consumer. They haven’t set a price yet, but they did say it would be between $1 and $10,000. So who knows. The interesting thing is that the sensor is completely different from any current camera sensor; it’s supposed to gather light from every angle and then use that to allow us to refocus the image. The general idea behind it seems solid. I’m excited to see how all this will work out.

I think this is something I’ll believe when I see it. If it were Canon or Nikon announcing this breakthrough, then I’d have some confidence in the announcement.

But in this case it’s some unknown start-up company; if this were software, it would be known as vapourware.

The 3D camera is out of reach for the average person, so they should consider making it available to everyone, so that everybody can get a taste of 3D pictures.

Thread closed due to old age.