
More info to know beforehand:

Light Field Volume (LFV) --- A fully sampled LFV includes ray data (the direction and intensity of light rays) from all directions at any location within the captured volume.
-- Within an LFV, virtual views may be generated from any point, facing any direction, with any field of
view.
-- Allows for a breakthrough in the sense of presence in VR.
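A fully sampled LFV is often described with the two-plane parameterization L(u, v, s, t), where (u, v) indexes the viewpoint (angular sample) and (s, t) the spatial pixel. As a minimal sketch (hypothetical shapes and random data, NumPy assumed), generating a virtual view from one point in the volume is just taking one angular slice:

```python
import numpy as np

rng = np.random.default_rng(0)
U, V, S, T = 9, 9, 64, 64          # 9x9 viewpoints, 64x64 pixels each
lf = rng.random((U, V, S, T))      # stand-in for captured ray intensities

# A "virtual view" facing the capture direction is one angular slice
# of the 4D volume; interpolating between slices gives views from
# positions that were never physically occupied by a camera.
center_view = lf[U // 2, V // 2]
print(center_view.shape)           # (64, 64)
```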

How a Lytro Camera Works -- MLA: Micro Lens Array


-- Hundreds of thousands of lenses that are added behind the main lens of the camera for the purpose of separating the incoming light by the direction it arrives from.
-- With this, the camera is able to record a more detailed representation of the image -- one that can be manipulated even further in postproduction.
-- This also allows you to capture the 4D image, including directionality.
-- At the entrance pupil (see Img_2), you capture what's called a subaperture, which can be thought of as a camera at each microlens position, because each entrance pupil collects the full-resolution image.

-- This allows for stereoscopic imaging from a single lens (because it is really a ton of separate cameras).
-- The subaperture method allows for a restructuring of f-stop after the fact, because you can increase the number of apertures that the program considers when generating an image.
-- The space between subapertures is what allows for the camera to simulate head movement after the
fact.
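The "camera at each microlens position" idea can be sketched in NumPy. Assuming a hypothetical plenoptic raw frame where each microlens footprint aligns to an N x N block of sensor pixels (real sensors need calibration and resampling), the sub-aperture views fall out of simple strided slicing:

```python
import numpy as np

# Hypothetical raw frame: each microlens covers an N x N pixel block;
# pixel (u, v) inside every block sees the same region of the entrance
# pupil, so collecting it across all microlenses forms one virtual camera.
N = 5                                   # angular samples per microlens
H, W = 40, 60                           # microlens grid (spatial resolution)
raw = np.arange(H * N * W * N, dtype=float).reshape(H * N, W * N)

def subaperture(raw, u, v, n=N):
    """Take pixel (u, v) under every microlens -> one virtual camera."""
    return raw[u::n, v::n]              # shape (H, W)

# Two horizontally separated subapertures form a stereo pair
# from a single main lens:
left, right = subaperture(raw, 2, 0), subaperture(raw, 2, 4)
print(left.shape, right.shape)          # (40, 60) (40, 60)
```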
-- Since the camera captures the state of light at the moment of capture, it gives you the ability to
refocus afterwards (sorry focus-pullers).
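After-the-fact refocusing is classically done by shift-and-add over the sub-aperture views: shift each view in proportion to its angular offset, then average. This is a simplified sketch with integer-pixel shifts and a hypothetical stack of views (real pipelines use sub-pixel interpolation):

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocus over a (U, V, S, T) sub-aperture stack.
    `alpha` picks the virtual focal plane (0 = no shift); objects whose
    parallax matches the shift come into focus, the rest blur out."""
    U, V, S, T = views.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

views = np.random.default_rng(1).random((5, 5, 32, 32))
img = refocus(views, alpha=1.5)
print(img.shape)  # (32, 32)
```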
-- You can also track the focus to an object in the three-dimensional space.
-- You can also computationally alter the frame-rate and shutter-angle without any artifacts.

-- An oversampling of a scene allows for increased manipulation of dynamic range (the span between the brightest whites and the darkest blacks) -- measured in f-stops.
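Measuring dynamic range in stops just means taking the base-2 logarithm of the ratio between the brightest and darkest usable luminance. A tiny illustration with made-up luminance values:

```python
import math

# Illustrative values only: a 16000:1 contrast ratio between the
# brightest whites and darkest blacks the sensor can distinguish.
brightest, darkest = 16000.0, 1.0
stops = math.log2(brightest / darkest)   # each stop doubles the light
print(stops)                             # ~14 stops
```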

-- This camera can capture a larger dynamic range than any traditional camera -- or, if you prefer, a smaller one.
-- Extreme low-light capture capabilities.
-- No more exposure triangle.
- No longer bound to a relationship between shutter speed, ISO, and aperture.
- Your aperture is always completely open (to be manipulated in post).
- For ISO, you just need to make sure that the sensor is properly exposed, because the dynamic range of a properly exposed video can be adjusted in post.
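For context, the coupling being removed is the standard exposure-value relation, EV = log2(N²/t) − log2(S/100), tying f-number N, shutter time t, and ISO S together. A quick worked example (illustrative numbers only; the notes' point is that light-field capture decouples aperture from this equation, since depth of field is chosen in post):

```python
import math

def exposure_value(n_stop, shutter_s, iso=100):
    """Standard EV at a given f-number, shutter time, and ISO."""
    return math.log2(n_stop ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125 s, ISO 100 -- a typical daylight exposure:
print(round(exposure_value(8.0, 1 / 125), 1))  # 13.0
```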

-- LUT-like preloads can be imported to the Lytro software for manipulation of lens-matching looks.
- If you want a Cooke lens, you can import a preset that manipulates the image to look like a Cooke lens.

-- VFX:
-- Depth Screen Removal -- You have a new relationship with the image: X, Y, and Z.
- You have depth-mapping, and can tell the computer where all the pixels are in a three-dimensional space.
- You can look around objects and isolate them in a space.
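Depth-screen removal boils down to keying on a range of Z values instead of a chroma color. A minimal sketch with hypothetical image and depth arrays (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((4, 4, 3))            # RGB frame
depth = rng.uniform(0.5, 10.0, (4, 4))   # per-pixel Z from the light field

# The "key" is a depth band, not a green screen: keep only pixels
# whose Z falls between near and far, zeroing out everything else.
near, far = 1.0, 3.0
mask = (depth >= near) & (depth <= far)  # boolean matte
isolated = image * mask[..., None]       # background removed
print(mask.dtype, isolated.shape)        # bool (4, 4, 3)
```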

-- You can add additional info to specific X,Y,Z positions in the captured space.
-- Motion-mapping -- You have the ability to triangulate objects in a three-dimensional space, so taking this information into 3D modeling software will allow you to recreate the scene with ease.
-- UV coordinates and light-field data will allow you to remodel the light by uploading LUT lighting
schemes.
