

Hidayet Gozeten

Roles and jobs of a production crew for digital video projects

Typically, the executive producer is someone who finances a film but isn't 100% involved in
the day-to-day creative process during production. For short films commissioned by
businesses, this may be the CEO who signs off on the project.
The producer is the initial contact for the project. They talk with the client to
arrange the high-level goals and expectations. It's their responsibility to assemble
the production team. The director typically comes first. From there, they put
together the necessary crew members. The producer usually stays involved
throughout the project lifecycle: pre-production, production, and post-production.

The director is typically the most involved person on and off set. They assist with assembling the right
crew to get the job done. They make adjustments to the script to keep the video on budget and on time.
They oversee all parts of the production. Questions get funneled up to them.
And, when dealing with talent, the director should be the only one directing them. They oversee technical
details as well, such as camera position, use of lighting and anything that affects the final product.

Reports To: Manager, Engineering Maintenance. Summary: Provide engineering
support to a growing state-of-the-art HD television facility. Responsibilities:
Support and maintain multiple studio production environments. Support and
maintain large post-production and SAN systems. Support and maintain
advanced television graphics systems. Support and maintain transmission
systems and the earth station. Experience: Advanced troubleshooting and
problem-solving skills with a willingness to learn. Advanced knowledge of
current television systems and formats. Understanding of fiber-optic and copper
networking technologies. Candidate should be a self-starter and a team player.
The Script Writer works with the Researcher and the team to provide the exact wording,
the script, to be used for the video project. The role of the Script Writer involves
reviewing the research information to determine what facts might best convey the
video's message, then paraphrasing the research materials. In many cases, the Script
Writer creates original writing that may not be directly related to the factual
information, such as dialogue between characters or narration.
The Film Editor will facilitate the process of viewing the footage with the team, deciding
what shots should be used, and making the final edits.
The Film Editor will add music,
consistent transitions, correct titles, text, and credits, while providing the video with an
overall look and feel that meets the project objectives. The Film Editor may work closely
with the Producer(s), Writers, and Set Designer to guide the overall effects of the film and
lead the review of progress to the team during meetings. The Film Editor should be able to
select the best scenes and combine these with appropriate and effective special effects.
Your video may require detailed, time-consuming art that needs to be produced using
another application such as Photoshop or Flash. If so, a graphic artist may be required
who will work closely with the Film Editor. The Film Editor may complete basic editing
transitions and also may supply special graphical enhancements to the video.

On larger budget productions, it is common to have a Camera Operator working under
the direction of the DP. In a multi-camera shoot, there will typically be a Camera
Operator for each camera.

Looking for a smart and solid researcher for a TV show
shooting in Atlanta, GA. It is a legal television show and we
need someone who can help gather data, track down and
contact witnesses, and conduct pre-interviews with them.


A presenter, host, or hostess is a person or organization responsible for the running of a
public event. A museum or university, for example, may be the presenter or host of an
exhibit. In films, a presenter (but not a host) is usually a well-known executive producer
credited with introducing a film or filmmaker to a larger audience. For example: "Presented
by Cecil B. DeMille".

We are seeking a qualified Project Manager to be the point person between client and the
production and post-production teams. The Project Manager will work with the executive
producer, creative director, line producer, and production manager, as well as editorial. The
Project Manager will oversee the planning, implementation, and tracking of specific short-form
projects (TV commercials, digital videos and corporate videos). Productions range between
$20k to $200k. We are looking for a self-motivated, confident individual who will anticipate
needs and is detail-oriented. Excellent oral and written communication skills are a must. An
understanding of post-production is desired. Desired skills: leadership.

A lead programmer is a software engineer
in charge of one or more software
projects. Alternative titles include development lead, technical lead, lead software
engineer, software design engineer lead (SDE lead), software development
manager, software manager, or lead application developer. When primarily
contributing in a high-level enterprise software design role, the title software
architect (or similar) is often used. All of these titles can have different meanings
depending on the context.
Us: The Security Awareness team seeks an active, inventive mind to help build on our
existing stable of online video, and to help inspire future shows and productions. Our
content is focused mainly on security awareness: teaching a company of 300,000 the
best ways to stay safe online. A very small sample of what we produce can be found
here. You: A writer and producer who can knock out a three-page script in the morning
and manage a shoot in the afternoon. Regardless of your background, you're a quick
thinker who has a knack for producing off-the-wall ideas that end up as brilliant,
original content. You have an existing portfolio of engaging, unique videos, and you're
never short on ideas. A love of the subject matter is a plus.

We are seeking a talented and experienced non-union Art Director to work with us on
an upcoming project. In mid September we will be shooting a series of comedic
commercial spots for which scenic design and art direction will be absolutely key.
The spot will contain up to 4 scenarios that will be shot both at our studio and on
location. These scenarios will require set dressing in environments such as a barber
shop and doctor's office. Please submit a cover letter and portfolio to apply. This is a
great opportunity as we are seeking a long term connection.

Shortly after shooting begins, the editor begins to organize the footage and
arranges individual shots into one continuous sequence. Even in a single
scene, dozens of different shots have to be chosen and assembled from
hundreds of feet of film. The editor's choices about which shots to use, and
the order in which to place them, have a profound effect on the appearance
of the final film.

Sound engineer needed for an infomercial. First half of the day will be filming
reaction shots at a Dance studio. Second half will be impromptu soundbite
testimonials from people on the street. You will be approaching people on the street
and asking several questions and recording responses. Please send resume, gear list,
and availability.

The Researcher is responsible for finding, analyzing, and compiling the information for
the video project. Research may include, but is not limited to, the following: interviews,
surveys, primary source materials (documents, photos, music, etc.), and fact-finding.
All research should be conducted using credible library resources. Depending on the size
of the project, the Researcher may also assume the role of Script Writer.
We are looking for full-time Designer/Animators. We are looking for an artist with
strong design, animation, and typography skills. We need organized team players
able to work well with tight deadlines. Must include a reel. Candidates should be
well-versed in Adobe After Effects, Photoshop, and Illustrator, as well as Cinema 4D.
Must be able to commute to our Los Angeles office.


Rule of Thirds
The Rule of Thirds is the basic composition principle to follow for well-balanced and
interesting shots. The concept is to imagine breaking an image down into thirds
horizontally and vertically. You will then want to frame your shot so that your elements
are along the lines, and preferably at one of the four points where the lines intersect,
to make your shot more interesting.
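As a quick sketch (not tied to any particular camera or editing package), the grid lines and the four intersection points for a given frame size can be computed like this:

```python
# Sketch: compute the rule-of-thirds grid and its four intersection
# points for a frame of a given size (illustrative only).

def thirds_points(width, height):
    """Return the four rule-of-thirds intersection points as (x, y)."""
    xs = [width / 3, 2 * width / 3]    # vertical grid lines
    ys = [height / 3, 2 * height / 3]  # horizontal grid lines
    return [(x, y) for y in ys for x in xs]

# For a 1920x1080 frame, the points of interest are:
print(thirds_points(1920, 1080))
# [(640.0, 360.0), (1280.0, 360.0), (640.0, 720.0), (1280.0, 720.0)]
```

Placing your subject near any of these four points, rather than dead center, usually gives the more interesting framing described above.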

Lead Room
For balance within your frames when shooting people and objects that are not facing the
camera, you will want to leave space in front of them.

Head Room
When a person is the main subject of a shot, make sure that there is a small amount
of space above their head. You don't want too much space, but you don't want to cut
off their head either.


The aspect ratio of an image describes the proportional relationship
between its width and its height. It is commonly expressed as two
numbers separated by a colon, as in 16:9. For an x:y aspect ratio, no
matter how big or small the image is, if the width is divided into x units of
equal length and the height is measured using this same length unit, the
height will be measured to be y units. In, for example, a group of images
that all have an aspect ratio of 16:9, one image might be 16 inches wide
and 9 inches high, another 16 centimeters wide and 9 centimeters high,
and a third might be 8 yards wide and 4.5 yards high.
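The x:y relationship above is just a ratio, so given a width in any unit the height follows directly. A minimal sketch:

```python
# Sketch: given an x:y aspect ratio and a width, the height follows,
# whatever the physical units (inches, centimeters, yards, pixels).

def height_for_width(width, x, y):
    """Height of an image with aspect ratio x:y at the given width."""
    return width * y / x

# The 16:9 examples from the text:
print(height_for_width(16, 16, 9))  # 9.0 (16 inches wide -> 9 inches high)
print(height_for_width(8, 16, 9))   # 4.5 (8 yards wide -> 4.5 yards high)
```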

A point along a timeline or path that defines where and how the settings
for an effect will change. One or more settings can then be interpolated
from keyframe to keyframe to create the appearance of a smooth change
over a series of frames or along a motion path.
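The keyframe idea can be sketched with simple linear interpolation; the frame numbers and the opacity setting here are made-up values for illustration, not from any particular editor:

```python
# Sketch: linear interpolation of one effect setting between two
# keyframes, evaluated for an arbitrary in-between frame.

def interpolate(kf_a, kf_b, frame):
    """Interpolate a setting between keyframes given as (frame, value)."""
    (fa, va), (fb, vb) = kf_a, kf_b
    t = (frame - fa) / (fb - fa)  # 0.0 at kf_a, 1.0 at kf_b
    return va + t * (vb - va)

# Opacity animated from 0% at frame 10 to 100% at frame 30:
print(interpolate((10, 0.0), (30, 100.0), 20))  # 50.0 at frame 20
```

Real editors also offer eased (non-linear) interpolation curves, but the per-frame evaluation principle is the same.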

Precise real-time pointing of any payload: FLIR Motion Control
Systems, Inc. (MCS) offers a complete line of high-performance
pan-tilt devices for real-time, computer-controlled positioning of
virtually any payload, including thermal cameras, video cameras, IP
cameras, laser rangefinders, and microwave antennas. Whatever your
motion control device needs, MCS's innovative technology and years of
application experience can help you create the optimal solution.
FLIR Motion Control Systems pan-and-tilt mounts help lower your
development risk and increase your first-time application success
through innovation, adaptability, quality, and durability.

Movement across frames: left to right, up, down, nose room/following/leading, match
action (cut on action), and light to dark.
Perspective (from Latin: perspicere to see through) in the graphic arts is an
approximate representation, on a flat surface (such as paper), of an image as it is
seen by the eye. The two most characteristic features of perspective are that
objects are smaller as their distance from the observer increases; and that they are
subject to foreshortening, meaning that an object's dimensions along the line of
sight are shorter than its dimensions across the line of sight. Italian Renaissance
painters and architects including Filippo Brunelleschi, Masaccio, Paolo Uccello, Piero
della Francesca and Luca Pacioli studied linear perspective, wrote treatises on it,
and incorporated it into their artworks, thus contributing to the mathematics of art.

Basic camera techniques

Zooming is one camera move that most people are probably familiar with. It involves
changing the focal length of the lens to make the subject appear closer or further away in
the frame. Most video cameras today have built-in zoom features. Some have manual
zooms as well, and many have several zoom speeds. Zooming is one of the most
frequently used camera moves, and one of the most overused. Use it carefully.

By using a wide aperture (a small f-number such as f/1.4 or f/2.0) you will be able to
create a shallow depth of field. This effectively leaves one part of the frame in focus
while blurring others, such as the foreground or background. When you change the focus
in the shot from the foreground to the background, you're doing another advanced
camera shot called a rack focus.
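The f-number is the focal length divided by the physical aperture diameter, which is why a smaller f-number means a wider opening and a shallower depth of field. A minimal sketch with illustrative lens values:

```python
# Sketch: f-number = focal length / aperture diameter, so the
# physical opening for a given lens and f-number is:

def aperture_diameter(focal_length_mm, f_number):
    """Physical aperture diameter in mm for a given f-number."""
    return focal_length_mm / f_number

# A 50mm lens: f/1.4 opens far wider than f/8.0.
print(round(aperture_diameter(50, 1.4), 1))  # 35.7 mm -> shallow depth of field
print(aperture_diameter(50, 8.0))            # 6.25 mm -> deep depth of field
```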

An editorial transition popular during the silent period utilizing
a diaphragm placed in front of the lens and which, when
opened (iris in) or closed (iris out), functions like a fade in or
fade out. A partially opened iris can also be used to focus
attention on a detail of the scene in the manner of vignetting.

One of the most common problems that I see in beginner photographer
images are shots with incorrect color. Weve all seen them portraits
where your subjects teeth and eyeballs (and everything else) has a
yellowish tinge. Learn what causes this and how to combat it with this
tutorial on White Balance.
Make it a circular polariser. This is the perfect beginner's filter, and one that will
have the biggest effect on your day to day photography, giving holiday skies a
vibrant blue tone and accentuating the contrast between the sky and passing clouds
to afford your images greater texture. Although you can add blue to your images in
Photoshop or a similar post-production editing tool, the effect is never as believable
when done that way as it is when shot using a lens.

Lighting techniques
The key light is vital for video production lighting: it is placed about 45 degrees to
the subject, either left or right, usually above and aimed down at between 30 and 45
degrees. It is the dominant light. Position this light as you would if it were the only
light you had. From this, you'll have defining shadows on the face which would be
lost if the light were on a similar axis to the camera, but you'll notice that, in a room
with no other lighting, it will create deep, dark shadows. Toning down those shadows
is the job of the next light.
The fill is usually two or three stops dimmer than the key light, and its placement is at a
near 45-degree angle on the opposite side of the camera, often on a level with the subject's
face. The fill light is a reaction to the key light, and its ultimate placement depends on the
function of the fill: what shadows does it create? Where do you need to reduce them for
better video production lighting? The fill light can be the same size as the key light in
wattage and bulb size, but you might then place it further away than the key. Watch as the
fill drives back the shadows; though the lighting is not nearly as harsh, these two together
still present a very two-dimensional view. The job of the third and final light is to create a
sense of distance between the subject and the background, giving an illusion of a third
dimension on the screen. The back light, also called a rim or shoulder light, is aimed at the
subject's back and, like the key light, is usually 45 degrees off the axis and shines down
upon the subject. This creates a bright rim around part of the subject, creating an outline
which then appears to separate the shoulders from the background. The back light should
be at least as bright as the key, often brighter.
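"Two or three stops dimmer" has a concrete meaning: each stop halves the light. A small sketch of the arithmetic:

```python
# Sketch: each stop halves the light, so a fill light N stops dimmer
# than the key delivers 1 / 2**N of the key's intensity.

def relative_intensity(stops_dimmer):
    """Fraction of key-light intensity after dimming by N stops."""
    return 1.0 / (2 ** stops_dimmer)

print(relative_intensity(2))  # 0.25  -> two stops down is 1/4 of the key
print(relative_intensity(3))  # 0.125 -> three stops down is 1/8 of the key
```

So a fill set two to three stops down contributes only a quarter to an eighth of the key's light, which is why it softens shadows without erasing them.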

Basic video-shot vocabulary

A Long Shot is even further back, showing whole buildings. People are smaller than
their surroundings. One effective use of the long shot is in the training scene in the
first Rocky film. Sylvester Stallone jogs through the streets of Philadelphia in a series
of medium shots and medium closeups, then runs up the steps of the art museum.
As the sun rises, the director cuts to a long shot, showing Stallone against the
background of the city, leaping victoriously into the air. What does this scene tell
us? That Rocky is one little man, dwarfed by his surroundings, but that he is
tenacious and will overcome.

A Medium Long Shot usually shows an entire person, head to foot. This gives you the opportunity to
show much more of the environment: a street, for example, or a cityscape. Multiple people can
typically interact in a medium shot, being seen from head to foot. An example is the opening scene of
Star Wars Episode IV: A New Hope, where storm troopers have blown up the door of Princess Leia's
space ship and the menacing figure of Darth Vader stalks through the gaping hole into the
smoke-filled corridor. The medium shot allows us to see not only the evil Vader, but also dead rebel
soldiers and storm troopers that snap to attention as he enters. All of these elements contain
valuable information for the visual story we are being told.


An Extreme Long Shot doesn't have people as its focus, but rather
the surrounding environment. Think of John Ford's extreme long
shots of Monument Valley with tiny little stage coaches running
across them in the 1939 epic Stagecoach. The hugeness of nature
fills the frame with people shown as very small. People are usually
indistinguishable from one another or distinguishable only by their
gait and clothes.

MPEG stands for the Moving Picture Experts Group, the body in charge of
developing standards for the coded representation of digital audio and
video. Several audio/video formats bear this group's name, such as
MPEG1 and MPEG2.
The MPEG1 format is often used in digital cameras and camcorders to
capture small, easily transferable motion video clips. It is also the
compression format used to create Video CDs. In addition, the
well-known MP3 audio format is part of the MPEG1 codec.
The MPEG2 format, a video standard developed by the MPEG group, is
often used in digital TVs, DVD movies and SVCDs. It is not a successor
to MPEG1 but an addition; both formats have their own purposes.
MPEG4, the latest compression method standardized by the MPEG group, is used for
both streaming and downloadable web content, and is also the video format employed
by a growing number of portable video recorders. One of the best-known MPEG4
encoders is DivX, which since version 5 has been a fully standard-compliant MPEG4
encoder.
MPEG7 does not itself offer any new encoding features, and it is not meant for
representing audio/video content, unlike its siblings MPEG1, MPEG2 and MPEG4.
Instead, it offers metadata information for audio and video files, allowing searching
and indexing of audio/video data based on information about the content instead of
searching the actual content bitstream.
MPEG7 is based on XML and is therefore universal; all existing tools that support XML
parsing should be able to read the data as well, provided that they can ignore the
binary parts of the stream. MPEG7 is not in use at the moment, but it is under serious
development and standardization, and we will hopefully see the first fully featured
MPEG7 tools within a few years.

AVI stands for Audio Video Interleaved and was developed by Microsoft.
An AVI file can use different codecs and formats, so there is no set format for
an AVI file, unlike, for example, standard VCD video, which sets a standard for
resolution, bitrates, and codecs. The most commonly used video codecs in AVI
containers are M-JPEG and DivX.

MOV is a file extension used by the QuickTime-wrapped files.
QuickTime Content (.mov, .qt), developed by Apple Computer, is a file
format for storing and playing back movies with sound. This flexible
format isn't limited to Macintosh operating systems. It's also commonly
used in Windows systems, and other types of computing platforms.

The MP4 File Extension type is primarily associated with 'MPEG-4'. You often
see BIFS associated with MPEG-4. BIFS stands for "Binary Format for Scenes".
BIFS provides a complete framework for the presentation engine of MPEG-4
terminals. Play MP4 files with various players; just download the correct
CODEC from the player's Website. For a summary of how to view an MPEG-4
movie on a Sony PSP, see the FAQ.
Because both the MOV and MP4 container formats can use the same MPEG-4
formats, they are mostly interchangeable in a QuickTime-only environment.

Analog video is a video signal transferred as an analog signal. An analog color video signal
contains the luminance (brightness, Y) and chrominance (color, C) of an analog television
image. When combined into one channel, it is called composite video, as is the case with
NTSC, PAL and SECAM, among others. Analog video may also be carried in separate channels,
as in two-channel S-Video (YC) and multi-channel component video formats. Analog video is
used in both consumer and professional television production applications.
Digital video signal formats with higher quality have been adopted, including the serial
digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface
(HDMI) and DisplayPort, though analog video interfaces are still used and widely available,
and various adapters exist for converting between them.
In many cases, the software that comes with your capture card will work fine. It is the
software supported by the card, and what was tested to give optimal results. However, other
alternatives include VirtualDub, VirtualVCR, WinDV, DScaler and iuVCR, just to name a few.
Try a freebie or a trial edition of any of these packages to see if you can get better results.
If capturing AVI, consider the HuffYUV codec or the Lagarith codec. Capturing MPEG-1 or
MPEG-2 works great on some cards that are designed for it, but not all. Not all cards will
cooperate with all software and all codecs, at least from a dropped-frames point of view.
The current generation of video coding is known by three names: H.264, MPEG4 Part 10, and
Advanced Video Coding (AVC). As with earlier codecs, H.264 employs spatial and temporal
compression. Spatial compression is used on a single frame of video as described previously.
These types of frames are known as I-frames; an I-frame is the first picture in a GOP.
Temporal compression takes advantage of the fact that little information changes between
subsequent frames. Changes are a result of motion, although changes in zoom or camera
movement can result in almost every pixel changing. Vectors are used to describe this motion
and are applied to a block. A global vector is used if the encoder determines all pixels moved
together, as is the case with camera panning. In addition, a difference signal is used to
fine-tune any error that results. H.264 allows variable block sizes and is able to code motion
as fine as a pixel. The decoder uses this information to determine how the current frame
should look based on the reference frame and the motion vectors.
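The temporal-compression step can be sketched in a few lines: a block in the current frame is rebuilt from a block of the previous frame shifted by a motion vector, plus a residual (difference signal). The tiny frame and pixel values below are made up for illustration; this is the general idea, not an actual H.264 decoder:

```python
# Sketch of motion-compensated prediction: a decoded block is a
# reference block from the previous frame, shifted by a motion
# vector, with a residual added to correct any remaining error.

def predict_block(prev_frame, x, y, size, motion_vec, residual):
    """Reconstruct a size x size block at (x, y) from prev_frame."""
    dx, dy = motion_vec
    return [
        [prev_frame[y + dy + r][x + dx + c] + residual[r][c]
         for c in range(size)]
        for r in range(size)
    ]

# A tiny 4x4 "frame" of pixel values; the decoder rebuilds a 2x2 block
# that moved one pixel right and one pixel down, with zero residual.
prev = [[10, 20, 30, 40],
        [50, 60, 70, 80],
        [90, 100, 110, 120],
        [130, 140, 150, 160]]
zero_residual = [[0, 0], [0, 0]]
print(predict_block(prev, 0, 0, 2, (1, 1), zero_residual))
# [[60, 70], [100, 110]]
```

A real encoder searches for the vector that minimizes the residual, so only the vector and a small correction need to be transmitted.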

The sending station determines the video's resolution and, consequently, the load
on the network. This is irrespective of the size of the monitor used to display the
video; observing the video is not a reliable method to estimate load. Common
high-definition formats are 720p, 1080i, 1080p, etc. The numerical value of the
format represents the number of rows in the frame. The aspect ratio of high
definition is 16:9, which results in 1920 columns for 1080 rows. There is work
underway on 2160p resolution and UHDV (7680 x 4320). This format was first
demonstrated by NHK over an IP network and used 600 Mb/s of bandwidth. Video
load on the network is likely to increase over time due to the demand for
high-quality images. In addition to high resolution, there is also a proliferation of
lower-quality video that is often tunneled in HTTP or, in some cases, HTTPS and SSL.
Typical resolutions include CIF (352x288) and 4CIF (704x576). These numbers are
integer multiples of the 16x16 macroblock used by the DCT: 22x18 and 44x36
macroblocks, respectively.
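The arithmetic above can be checked directly (a sketch, assuming an exact 16:9 ratio and whole 16x16 macroblocks):

```python
# Sketch: derive the column count from the row count and a 16:9
# aspect ratio, and the macroblock grid for CIF-family formats.

def columns_for_rows(rows, aspect=(16, 9)):
    """Columns implied by the row count and the aspect ratio."""
    w, h = aspect
    return rows * w // h

def macroblocks(width, height, mb=16):
    """Macroblock grid (cols, rows) for a frame of 16x16 blocks."""
    return (width // mb, height // mb)

print(columns_for_rows(1080))  # 1920 -> 1080 rows at 16:9
print(macroblocks(352, 288))   # (22, 18) -> CIF
print(macroblocks(704, 576))   # (44, 36) -> 4CIF
```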


Typical bandwidth (BW) by format:

QCIF (1/4 CIF)     128 kb/s - 512 kb/s+
Analog, 4.2 MHz    ~1 Mb/s (camera limits: 640x480 max)
720 HD             1-8 Mb/s
1080 HD            5-8 Mb/s (H.264); 12+ Mb/s (MPEG-2)

What's the difference between linear and non-linear video editing?

There are two kinds of video editing software available: linear video editing software and
non-linear video editing software. When it comes to choosing editing software, many people
are confused about the difference between the two kinds of video editing program and
which is better for their video editing. First, we should know what linear and non-linear
editing are.
What's linear video editing?
According to Wikipedia, linear video editing is a video editing post-production process of
selecting, arranging and modifying images and sound in a predetermined, ordered sequence.
Regardless of whether the content was captured by a video camera or tapeless camcorder,
or recorded in a television studio on a video tape recorder (VTR), it must be accessed
sequentially. For the most part, video editing software has replaced linear editing.
What's non-linear video editing?
Non-linear editing is a method that allows you to access any frame in a digital video clip
regardless of its sequence in the clip. The freedom to access any frame and use a
cut-and-paste method, similar to the ease of cutting and pasting text in a word processor,
allows you to easily include fades, transitions, and other effects that cannot be achieved
with linear editing. Currently, most editing software is non-linear due to the high
demands of editing.
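The cut-and-paste idea above can be sketched with a clip modeled as a simple list of frame labels (illustrative only, not any real editor's API):

```python
# Sketch: non-linear editing as cut-and-paste over frames. With
# random access to a digital clip, rearranging shots is just slicing.

clip = [f"frame{i}" for i in range(10)]

# Pull a shot from the middle and paste it at the front -- no need
# to shuttle through the footage sequentially, as with tape.
shot = clip[6:9]
edit = shot + clip[0:3]
print(edit)
# ['frame6', 'frame7', 'frame8', 'frame0', 'frame1', 'frame2']
```

Linear (tape-based) editing has no equivalent of this: to reach frame 6 you must wind past frames 0 through 5 first.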

How to output digital video to analog and analog video to digital

In electronics, a digital-to-analog converter (DAC, D/A, DA, D2A, or D-to-A)
is a device that converts a digital signal into an analog signal. An
analog-to-digital converter (ADC) performs the reverse function.
There are several DAC architectures; the suitability of a DAC for a
particular application is determined by six main parameters: physical
size, power consumption, resolution, maximum sampling frequency,
accuracy and cost. Due to the complexity and the need for precisely
matched components, all but the most specialized DACs are implemented
as integrated circuits (ICs). Digital-to-analog conversion can degrade a
signal, so a DAC should be specified that has insignificant errors in
terms of the application.
DACs are commonly used in music players to convert digital data streams
into analog audio signals. They are also used in televisions and mobile
phones to convert digital video data into analog video signals which
connect to the screen drivers to display monochrome or color images.
These two applications use DACs at opposite ends of the speed/resolution
trade-off. The audio DAC is a low speed high resolution type while the
video DAC is a high speed low to medium resolution type. Discrete DACs
would typically be extremely high speed low resolution power hungry
types, as used in military radar systems. Very high speed test equipment,
especially sampling oscilloscopes, may also use discrete DACs.
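The resolution trade-off above has a simple numeric core: an ideal N-bit DAC maps each digital code onto a reference voltage scale. A minimal sketch with illustrative values:

```python
# Sketch: an ideal N-bit DAC maps an integer code onto a reference
# voltage scale; more bits means finer steps (higher resolution).

def dac_output(code, bits=8, vref=1.0):
    """Ideal DAC: convert a digital code to an analog voltage."""
    levels = 2 ** bits
    return vref * code / (levels - 1)

# An 8-bit DAC with a 1.0 V reference:
print(dac_output(0))              # 0.0 V (minimum code)
print(dac_output(255))            # 1.0 V (maximum code)
print(round(dac_output(128), 4))  # 0.502 V (roughly mid-scale)
```

A 16-bit audio DAC divides the same reference into 65,536 steps instead of 256, which is the "high resolution" end of the speed/resolution trade-off described above.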

Chroma key or Color key

Chroma key compositing, or chroma keying, is a special effects /
post-production technique for compositing (layering) two images or video
streams together based on color hues (chroma range). The technique has
been used heavily in many fields to remove a background from the
subject of a photo or video, particularly in the newscasting, motion
picture and video game industries. A color range in the foreground
footage is made transparent, allowing separately filmed background
footage or a static image to be inserted into the scene. The chroma
keying technique is commonly used in video production and
post-production. It is also referred to as color keying,
colour-separation overlay (CSO; primarily by the BBC), or by various
terms for specific color-related variants such as green screen and blue
screen. Chroma keying can be done with backgrounds of any color that are
uniform and distinct, but green and blue backgrounds are more commonly
used because they differ most distinctly in hue from most human skin
colors. No part of the subject being filmed or photographed may
duplicate the color used as the background.
It is commonly used for weather forecast broadcasts, wherein a news
presenter is usually seen standing in front of a large CGI map during
live television newscasts, though in actuality it is a large blue or
green background. When using a blue screen, different weather maps are
added on the parts of the image where the color is blue. If the news
presenter wears blue clothes, his or her clothes will also be replaced
with the background video. Chroma keying is also common in the
entertainment industry.
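The per-pixel decision behind keying can be sketched as follows. Real keyers match a calibrated chroma range with soft edges; here a simple "green channel dominates" test stands in for that, and all pixel values are made up for illustration:

```python
# Sketch of chroma keying on one pixel: if the foreground pixel falls
# in the key range (here, approximated as "mostly green"), substitute
# the background pixel; otherwise keep the foreground.

def key_pixel(fg, bg, threshold=1.3):
    """Return bg where fg is dominantly green, else fg. Pixels are (r, g, b)."""
    r, g, b = fg
    is_green = g > threshold * max(r, b, 1)
    return bg if is_green else fg

weather_map = (30, 60, 200)  # a background (weather map) pixel
print(key_pixel((20, 240, 30), weather_map))    # green screen -> (30, 60, 200)
print(key_pixel((210, 180, 160), weather_map))  # skin tone    -> (210, 180, 160)
```

This also shows why a presenter in green clothes vanishes: those pixels pass the key test and get replaced by the background, exactly as described above for blue.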

Frame and Fps

Frame rate, also known as frame frequency, is the frequency (rate) at
which an imaging device displays consecutive images called frames. The
term applies equally to film and video cameras, computer graphics, and
motion capture systems. Frame rate is usually expressed in frames per
second (FPS).
One of the most common ways to provide a simple measure of graphics
performance in game titles is frame rate, expressed in frames per
second. However, this measure can be quite deceiving, especially with
today's faster video hardware. While it may provide some measure of
performance, when it comes to making judgments regarding
optimization, FPS is a very poor means of measuring application
performance.
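One reason FPS misleads is that it is the reciprocal of the quantity that actually matters, per-frame time. A sketch of the conversion:

```python
# Sketch: frame rate vs per-frame time budget. Equal-looking FPS
# gains correspond to very different frame-time savings, which is
# why frame time is the steadier unit for judging optimizations.

def frame_time_ms(fps):
    """Per-frame time budget in milliseconds at a given frame rate."""
    return 1000.0 / fps

print(round(frame_time_ms(30), 2))   # 33.33 ms per frame
print(round(frame_time_ms(60), 2))   # 16.67 ms per frame
print(round(frame_time_ms(300), 2))  # 3.33 ms per frame
```

Going from 30 to 60 FPS saves about 16.7 ms per frame; going from 300 to 600 FPS saves under 2 ms, even though both "double the frame rate."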

Cutaway and Jump Cut

In film and video, a cutaway shot is the interruption of a continuously filmed action
by inserting a view of something else.[1] It is usually, although not always, followed
by a cut back to the first shot, when the cutaway avoids a jump cut.[2] The cutaway
shot does not necessarily contribute any dramatic content of its own, but is used to
help the editor assemble a longer sequence.[3] For this reason, editors choose
cutaway shots related to the main action, such as another action or object in the
same location.[4] For example, if the main shot is of a man walking down an alley,
possible cutaways may include a shot of a cat on a nearby dumpster or a shot of a
person watching from a window overhead.

Similarly, a cutaway scene is the interruption of a scene with the insertion of
another scene, generally unrelated or only peripherally related to the original scene.
The interruption is usually quick, and is usually, although not always, ended by a
return to the original scene. The effect is of commentary on the original scene,
frequently comic.
A jump cut results from cutting between two shots made from a similar camera angle,
at a similar distance from the subject, and with similar image composition.
A jump cut looks like a technical mistake. You can get a typical jump cut by removing
several frames from a single shot, and the audience will think exactly the same way:
something is missing.
The jump cut is generally not acceptable, but as you will find later, one segment of
this book is devoted to doing jump cuts intentionally.
Because editing is creating!

Persistence of Vision and Rule of Thirds

The basic principle behind the rule of thirds is to imagine breaking an
image down into thirds (both horizontally and vertically) so that you
have 9 parts. As you're taking an image, you would have done this in
your mind through your viewfinder or in the LCD display that you use to
frame your shot.
With this grid in mind, the rule of thirds identifies four important
parts of the image that you should consider placing points of interest
in as you frame your image.
Not only this, but it also gives you four lines that are also useful
positions for elements in your photo.

The theory is that if you place points of interest at the intersections or along the lines,
your photo becomes more balanced, and a viewer of the image will be able to interact with
it more naturally. Studies have shown that, when viewing images, people's eyes usually go
to one of the intersection points rather than the center of the shot; using the rule of
thirds works with this natural way of viewing an image rather than against it.

One example is a picture of a bee in which the bee's eye, placed at one of
the intersections, becomes the point of focus.

Persistence of vision refers to the optical illusion whereby multiple discrete images
blend into a single image in the human mind; it is believed to be the explanation for
motion perception in cinema and animated films. Like other illusions of visual
perception, it is produced by certain characteristics of the visual system.