
1.a Still Images
Still images are visual representations that do not move. Text is ideal for
transmitting information in a highly articulate manner that can be consistently
interpreted irrespective of the user. Still images, however, allow the content
creator to convey information which can be more freely interpreted by the user.
A picture does indeed paint a thousand words, but the meaning of the picture
will vary from user to user.

1.b Image Quality


Image quality (often assessed through Image Quality Assessment, IQA) is a
characteristic of an image that measures the perceived image degradation
(typically compared to an ideal or perfect image). Imaging systems may
introduce some amount of distortion or artefacts into the signal – for example,
by transcoding – which affects the subjectively experienced quality, or Quality
of Experience, for end users.

There are several techniques and metrics that can be measured objectively and
automatically evaluated by a computer program. They can be classified
depending on the availability of a reference image or features from a reference
image:

 Full-reference (FR) methods – FR metrics try to assess the quality of a test
image by comparing it with a reference image that is assumed to have
perfect quality, e.g. the original of an image versus a JPEG-compressed
version of the image.
 Reduced-reference (RR) methods – RR metrics assess the quality of a test
and reference image based on a comparison of features extracted from both
images.
 No-reference (NR) methods – NR metrics try to assess the quality of a test
image without any reference to the original one.
Image quality metrics can also be classified in terms of measuring only one
specific type of degradation (e.g., blurring, blocking, or ringing), or taking into
account all possible signal distortions, that is, multiple kinds of artefacts.
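
As a concrete illustration of a full-reference metric, here is a minimal
Python sketch of the peak signal-to-noise ratio (PSNR), one of the simplest FR
measures. It assumes NumPy and two equally sized 8-bit images; it is a sketch
of the idea, not the implementation of any particular IQA tool.

    import numpy as np

    def psnr(reference, test, max_value=255.0):
        # Mean squared error between the reference and the test image.
        err = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        if err == 0:
            return float("inf")  # the images are identical
        # A higher PSNR (in decibels) means less measured degradation.
        return 10.0 * np.log10(max_value ** 2 / err)
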
1.c Types of Images
Binary Image – Binary images are the simplest type of images and can take on
two values, typically black and white, or 0 and 1. A binary image is referred to
as a 1-bit image because it takes only 1 binary digit to represent each pixel.
These types of images are frequently used in applications where the only
information required is general shape or outline, for example optical character
recognition (OCR).
Binary images are often created from gray-scale images via a threshold
operation, where every pixel above the threshold value is turned white ('1')
and those below it are turned black ('0').
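
A minimal sketch of this threshold operation in Python, assuming NumPy and an
8-bit gray-scale array; the threshold of 128 is simply an illustrative midpoint
(real editors often choose it automatically, e.g. with Otsu's method):

    import numpy as np

    def binarize(gray, threshold=128):
        # Pixels above the threshold become 1 (white); the rest become 0 (black).
        return (gray > threshold).astype(np.uint8)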

Gray-scale Images – Gray-scale images are referred to as monochrome (one-
colour) images. They contain gray-level information, no colour information. The
number of bits used for each pixel determines the number of different gray
levels available. The typical gray-scale image contains 8 bits/pixel of data,
which allows us to have 256 different gray levels.

Colour Images – Colour images can be modelled as three-band monochrome
image data, where each band of data corresponds to a different colour. The
actual information stored in the digital image data is the gray-level information
in each spectral band.
Typical colour images are represented as Red, Green and Blue (RGB).

Multispectral Images – Multispectral images typically contain information
outside the normal human perceptual range. This may include infrared,
ultraviolet, X-ray, acoustic or radar data. These are not images in the usual
sense, because the information represented is not directly visible to the human
visual system. However, the information is often represented in visual form by
mapping the different spectral bands to RGB components.
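
A small sketch of such a band-to-RGB mapping, assuming NumPy and three equally
sized 2-D band arrays; the per-band min/max stretch used here is one simple
choice among many:

    import numpy as np

    def bands_to_rgb(band_r, band_g, band_b):
        # Stretch each band independently to 0-255 so that non-visible data
        # (e.g. an infrared band) becomes displayable, then stack as RGB.
        def stretch(band):
            band = band.astype(np.float64)
            band -= band.min()
            peak = band.max()
            if peak == 0:
                return band.astype(np.uint8)  # constant band maps to black
            return (band / peak * 255).astype(np.uint8)
        return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
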
2. Features of Image Editing Applications
Selection
One of the prerequisites for many of the applications mentioned below is a
method of selecting part(s) of an image, thus applying a change selectively
without affecting the entire picture. Most graphics programs have several means
of accomplishing this, such as:

 a marquee tool for selecting rectangular or other regular polygon-shaped
regions,
 a lasso tool for freehand selection of a region,
 a magic wand tool that selects objects or regions in the image defined by
proximity of colour or luminance,
 vector-based pen tools,
as well as more advanced facilities such as edge detection, masking, alpha
compositing, and colour and channel-based extraction. The border of a selected
area in an image is often animated with the marching ants effect to help the user
to distinguish the selection border from the image background.
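
To make the colour-proximity idea concrete, here is a heavily simplified
magic-wand sketch in Python with NumPy. Real tools normally also require the
selected pixels to be spatially connected to the clicked seed point, which
this version deliberately omits:

    import numpy as np

    def magic_wand(img, seed_row, seed_col, tolerance=30.0):
        # Measure how far every pixel's colour is from the seed pixel's colour.
        seed = img[seed_row, seed_col].astype(np.float64)
        dist = np.linalg.norm(img.astype(np.float64) - seed, axis=-1)
        return dist <= tolerance  # boolean selection mask
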
Layers
Another feature common to many graphics applications is that of Layers, which
are analogous to sheets of transparent acetate (each containing separate
elements that make up a combined picture), stacked on top of each other, each
capable of being individually positioned, altered and blended with the layers
below, without affecting any of the elements on the other layers. This is a
fundamental workflow which has become the norm for the majority of
programs on the market today, and enables maximum flexibility for the user
while maintaining non-destructive editing principles and ease of use.
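
The core of blending a layer with the layers below is alpha compositing; a
minimal "normal blend" sketch with NumPy, where alpha is the top layer's
opacity between 0 and 1:

    import numpy as np

    def blend_over(top, bottom, alpha):
        # Weighted mix of the top layer over the one below: alpha = 1.0
        # shows only the top layer, alpha = 0.0 only the bottom one.
        out = alpha * top.astype(np.float64) + (1 - alpha) * bottom.astype(np.float64)
        return np.clip(out, 0, 255).astype(np.uint8)
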
Image size alteration
Image editors can resize images in a process often called image scaling, making
them larger or smaller. High-resolution cameras can produce large images which
are often reduced in size for Internet use. Image editor programs use a
mathematical process called resampling to calculate new pixel values whose
spacing is larger or smaller than the original pixel values. Images for
Internet use are kept small, say 640 x 480 pixels, which equals about
0.3 megapixels.
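
The crudest form of resampling is nearest-neighbour lookup, sketched below
with NumPy; production editors use smoother filters such as bilinear or
Lanczos resampling, but the indexing idea is the same:

    import numpy as np

    def resize_nearest(img, new_h, new_w):
        # For every output row and column, pick the nearest source row/column.
        h, w = img.shape[:2]
        rows = np.arange(new_h) * h // new_h
        cols = np.arange(new_w) * w // new_w
        return img[rows][:, cols]
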
Cropping an image
Digital editors are used to crop images. Cropping creates a new image by
selecting a desired rectangular portion from the image being cropped. The
unwanted part of the image is discarded. Image cropping does not reduce the
resolution of the area cropped. Best results are obtained when the original image
has a high resolution. A primary reason for cropping is to improve the image
composition in the new image.
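
Because cropping just keeps a rectangular block of pixels, it reduces to a
slice when the image is held as a NumPy array; the coordinates below are
purely illustrative:

    # img is the image as a NumPy array (height x width x channels).
    # Keep rows 100-499 and columns 200-839; the rest is discarded.
    cropped = img[100:500, 200:840].copy()
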
Cutting out a part of an image from the background
Using a selection tool, the outline of the figure or element in the picture is
traced/selected, and then the background is removed. Depending on how
intricate the "edge" is, this may be more or less difficult to do cleanly. For
example, individual hairs can require a lot of work. Hence the use of the
"green screen" technique (chroma key), which allows one to easily remove the
background.
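
A toy chroma-key sketch in Python with NumPy: it treats pixels where green
strongly dominates red and blue as background. The margin of 40 is an
arbitrary illustrative value; real keyers work in more suitable colour spaces
and soften the mask edges:

    import numpy as np

    def foreground_mask(img, green_margin=40):
        # Compare the green channel against the stronger of red and blue.
        r = img[..., 0].astype(int)
        g = img[..., 1].astype(int)
        b = img[..., 2].astype(int)
        background = (g - np.maximum(r, b)) > green_margin
        return ~background  # True where the foreground subject is

    # Composite the subject over a new background of the same size:
    # out = np.where(foreground_mask(img)[..., None], img, new_background)
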
Histogram
Image editors have provisions to create an image histogram of the image being
edited. The histogram plots the number of pixels in the image (vertical axis)
with a particular brightness value (horizontal axis). Algorithms in the digital
editor allow the user to visually adjust the brightness value of each pixel and to
dynamically display the results as adjustments are made. Improvements in
picture brightness and contrast can thus be obtained.[1]
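
For an 8-bit gray-scale image held as a NumPy array, the brightness histogram
described above is essentially a one-line computation:

    import numpy as np

    def brightness_histogram(gray):
        # counts[v] = number of pixels whose brightness value is v (0-255).
        return np.bincount(gray.ravel(), minlength=256)
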
Noise reduction
Image editors may feature a number of algorithms which can add or
remove noise in an image. Some JPEG artefacts can be removed; dust and
scratches can be removed and an image can be de-speckled. Noise reduction
merely estimates the state of the scene without the noise and is not a substitute
for obtaining a "cleaner" image. Excessive noise reduction leads to a loss of
detail, and its application is hence subject to a trade-off between the
undesirability of the noise itself and that of the reduction artefacts.
Noise tends to invade images when pictures are taken in low light settings. A
new picture can be given an 'antiqued' effect by adding uniform monochrome
noise.
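
One classic de-speckling algorithm is the median filter, sketched below with
SciPy (assumed to be installed). A larger window removes more noise but also
blurs more detail, which is exactly the trade-off described above:

    from scipy.ndimage import median_filter

    def despeckle(gray, size=3):
        # Replace each pixel with the median of its size-by-size neighbourhood;
        # isolated "salt-and-pepper" speckles vanish, edges survive fairly well.
        return median_filter(gray, size=size)
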
Removal of unwanted elements
Most image editors can be used to remove unwanted branches, etc., using a
"clone" tool. Removing these distracting elements draws focus to the subject,
improving overall composition.
Selective colour change
Some image editors have colour swapping abilities to selectively change the
colour of specific items in an image, given that the selected items are within a
specific colour range.
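
A very crude sketch of colour swapping with NumPy: it repaints every pixel
that lies within a tolerance of a target colour. The colours and tolerance are
illustrative, and real editors preserve shading by shifting hue rather than
overwriting pixels:

    import numpy as np

    def swap_colour(img, target, tolerance, new_colour):
        # Select pixels whose colour is within `tolerance` of `target`.
        dist = np.linalg.norm(img.astype(np.float64) - np.asarray(target, np.float64), axis=-1)
        out = img.copy()
        out[dist < tolerance] = new_colour
        return out

    # e.g. repaint a red item blue: swap_colour(img, (200, 30, 30), 60, (30, 30, 200))
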
Image orientation
Image editors are capable of altering an image to be rotated in any direction and
to any degree. Mirror images can be created and images can be
horizontally flipped or vertically flopped. A small rotation of several degrees is
often enough to level the horizon, correct verticality (of a building, for
example), or both. Rotated images usually require cropping afterwards, in order
to remove the resulting gaps at the image edges.
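
With Pillow (a common Python imaging library; version 9.1 or later is assumed
for the Resampling names), a small levelling rotation followed by the clean-up
crop might look like this; the angle, file name and crop margin are
illustrative:

    from PIL import Image

    img = Image.open("horizon.jpg")  # hypothetical input file
    # expand=True grows the canvas so the rotated corners are not clipped off.
    levelled = img.rotate(2.5, resample=Image.Resampling.BICUBIC, expand=True)
    # Crop away the empty gaps the rotation leaves at the image edges.
    levelled = levelled.crop((40, 40, levelled.width - 40, levelled.height - 40))
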
Perspective control and distortion
Some image editors allow the user to distort (or "transform") the shape of an
image. While this might also be useful for special effects, it is the preferred
method of correcting the typical perspective distortion which results from
photographs being taken at an oblique angle to a rectilinear subject. Care is
needed while performing this task, as the image is reprocessed
using interpolation of adjacent pixels, which may reduce overall
image definition. The effect mimics the use of a perspective control lens, which
achieves a similar correction in-camera without loss of definition.
Lens correction
Photo manipulation packages have functions to correct images for various
lens distortions including pincushion, fisheye and barrel distortions. The
corrections are in most cases subtle, but can improve the appearance of some
photographs.
Enhancing images
In computer graphics, image enhancement is the process of improving the
quality of a digitally stored image by manipulating the image with software. It
is quite easy, for example, to
make an image lighter or darker, or to increase or decrease contrast. Advanced
photo enhancement software also supports many filters for altering images in
various ways. Programs specialized for image enhancements are sometimes
called image editors.
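
Making an image lighter or darker and changing its contrast are simple
per-pixel arithmetic, as this NumPy sketch shows (the gain and offset values
are illustrative):

    import numpy as np

    def adjust(img, contrast=1.0, brightness=0.0):
        # Multiply for contrast, add for brightness, then clamp back to 8 bits.
        out = img.astype(np.float64) * contrast + brightness
        return np.clip(out, 0, 255).astype(np.uint8)

    # adjust(img, brightness=30)  -> lighter image
    # adjust(img, contrast=1.2)   -> stronger contrast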

3. Image Compression Techniques


Lossy and lossless image compression
Image compression may be lossy or lossless. Lossless compression is preferred
for archival purposes and often for medical imaging, technical drawings, clip
art, or comics. Lossy compression methods, especially when used at low bit
rates, introduce compression artifacts. Lossy methods are especially suitable for
natural images such as photographs in applications where minor (sometimes
imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in
bit rate. Lossy compression that produces negligible differences may be called
visually lossless.
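
The bit-rate/fidelity trade-off is easy to observe with Pillow, assuming it is
installed; the file names and quality settings below are illustrative:

    from PIL import Image

    img = Image.open("photo.png").convert("RGB")  # hypothetical lossless source
    img.save("high.jpg", quality=95)  # larger file, often visually lossless
    img.save("low.jpg", quality=20)   # much smaller file, visible block artefacts
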
Methods for lossless image compression are:

 Run-length encoding – used as the default method in PCX and as one of the
possible methods in BMP, TGA and TIFF (a minimal sketch follows this list)
 Area image compression
 DPCM and Predictive Coding
 Entropy encoding
 Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
 DEFLATE – used in PNG, MNG, and TIFF
 Chain codes
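
As promised above, here is a minimal run-length coder in Python. It is a
sketch of the general idea only, not the exact byte layout used by PCX or
TIFF:

    def rle_encode(pixels):
        # Collapse each run of equal values into a [value, count] pair.
        runs = []
        for value in pixels:
            if runs and runs[-1][0] == value:
                runs[-1][1] += 1
            else:
                runs.append([value, 1])
        return runs

    def rle_decode(runs):
        # Expand the pairs again; the round trip is exactly lossless.
        return [value for value, count in runs for _ in range(count)]

    assert rle_decode(rle_encode([0, 0, 0, 1, 1, 0])) == [0, 0, 0, 1, 1, 0]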

Methods for lossy compression:

 Reducing the colour space to the most common colours in the image. The
selected colours are specified in the colour palette in the header of the
compressed image. Each pixel then just references the index of a colour in
the colour palette. This method can be combined with dithering to
avoid posterization.
 Chroma subsampling. This takes advantage of the fact that the human eye
perceives spatial changes of brightness more sharply than those of colour, by
averaging or dropping some of the chrominance information in the image.
 Transform coding. This is the most commonly used method. In particular,
a Fourier-related transform such as the Discrete Cosine Transform (DCT) is
widely used (N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine
Transform," IEEE Trans. Computers, pp. 90–93, Jan. 1974). The DCT is
sometimes referred to as "DCT-II" in the context of a family of discrete
cosine transforms. The more recently developed wavelet transform is also
used extensively. The transform is followed by quantization and entropy
coding (a small DCT example follows this list).
 Fractal compression.
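
And the promised DCT example, using SciPy (assumed installed) on a single
8x8 block. The "quantization" here is a deliberately crude thresholding, just
to show where a transform coder throws information away:

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.randint(0, 256, (8, 8)).astype(np.float64)  # one 8x8 block

    coeffs = dctn(block, norm="ortho")    # forward 2-D DCT-II
    coeffs[np.abs(coeffs) < 10] = 0       # drop small coefficients (the lossy step)
    approx = idctn(coeffs, norm="ortho")  # reconstruct a close approximation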

4. Three Animation Techniques


4.1 Traditional animation
Traditional animation (also called cel animation or hand-drawn animation) was
the process used for most animated films of the 20th century. The individual
frames of a traditionally animated film are photographs of drawings, first drawn
on paper. To create the illusion of movement, each drawing differs slightly from
the one before it. The animators' drawings are traced or photocopied onto
transparent acetate sheets called cels, which are filled in with paints in assigned
colors or tones on the side opposite the line drawings. The completed character
cels are photographed one-by-one against a painted background by a rostrum
camera onto motion picture film.
The traditional cel animation process became obsolete by the beginning of the
21st century. Today, animators' drawings and the backgrounds are either
scanned into or drawn directly into a computer system. Various software
programs are used to colour the drawings and simulate camera movement and
effects. The final animated piece is output to one of several delivery media,
including traditional 35 mm film and newer media such as digital video. The
"look" of traditional cel animation is still preserved, and the character animators'
work has remained essentially the same over the past 70 years. Some animation
producers have used the term "tradigital" (a play on the words "traditional" and
"digital") to describe cel animation that uses significant computer technology.

4.2 Computer animation


Computer animation encompasses a variety of techniques, the unifying factor
being that the animation is created digitally on a computer. 2D animation
techniques tend to focus on image manipulation while 3D techniques usually
build virtual worlds in which characters and objects move and interact. 3D
animation can create images that seem real to the viewer.
2D animation
2D animation figures are created or edited on the computer using 2D bitmap
graphics and 2D vector graphics. This includes automated computerized
versions of traditional animation techniques, interpolated morphing, onion
skinning and interpolated rotoscoping.
2D animation has many applications, including analog computer
animation, Flash animation, and PowerPoint animation. Cinemagraphs are still
photographs in the form of an animated GIF file of which part is animated.
Final line advection animation is a technique used in 2D animation to give
artists and animators more influence and control over the final product as
everything is done within the same department. Speaking about using this
approach in Paperman, John Kahrs said that "Our animators can change things,
actually erase away the CG underlayer if they want, and change the profile of
the arm."
3D animation
3D animation is digitally modeled and manipulated by an animator. The
animator usually starts by creating a 3D polygon mesh to manipulate. A mesh
typically includes many vertices that are connected by edges and faces, which
give the visual appearance of form to a 3D object or 3D environment.
Sometimes, the mesh is given an internal digital skeletal structure called
an armature that can be used to control the mesh by weighting the vertices. This
process is called rigging and can be used in conjunction with keyframes to
create movement.
Other techniques can be applied, such as mathematical functions (e.g., gravity,
particle simulations), simulated fur or hair, and effects such as fire and
water simulations. These techniques fall under the category of 3D dynamics.
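
As a tiny illustration of how keyframes drive movement, here is a
linear-interpolation ("lerp") sketch in Python. The joint names and angles are
invented for the example; real rigs blend with easing curves rather than
straight lines:

    def interpolate(key_a, key_b, t):
        # Blend two keyframe poses (dicts of joint angles) at time t in [0, 1].
        return {joint: (1 - t) * key_a[joint] + t * key_b[joint] for joint in key_a}

    pose_start = {"elbow": 0.0, "knee": 10.0}
    pose_end = {"elbow": 90.0, "knee": 45.0}
    frame = interpolate(pose_start, pose_end, 0.25)  # {'elbow': 22.5, 'knee': 18.75}
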
3D terms
 Cel-shaded animation is used to mimic traditional animation using computer
software. Shading looks stark, with less blending of colours. Examples
include Skyland (2007, France), The Iron Giant (1999, United
States), Futurama (Fox, 1999), Appleseed Ex Machina (2007, Japan), The
Legend of Zelda: The Wind Waker (2002, Japan) and The Legend of Zelda:
Breath of the Wild (2017, Japan).

 Machinima – films created by screen capturing in video games and virtual
worlds. The term originated from the software introduction in the
1980s demoscene, as well as the 1990s recordings of the first-person
shooter video game Quake.
 Motion capture is used when live-action actors wear special suits that allow
computers to copy their movements into CG characters. Examples
include The Polar Express (2004, US), Beowulf (2007, US), A Christmas
Carol (2009, US), The Adventures of Tintin (2011, US) and
Kochadaiiyaan (2014, India).
 Photo-realistic animation is used primarily for animation that attempts to
resemble real life, using advanced rendering that mimics in detail skin,
plants, water, fire, clouds, etc. Examples include Up (2009, US), How to
Train Your Dragon (2010, US), Ice Age (2002, US).

4.3 Mechanical animation


 Animatronics is the use of mechatronics to create machines that seem
animate rather than robotic.
 Audio-Animatronics and Autonomatronics is a form
of robotics animation, combined with 3-D animation, created by Walt
Disney Imagineering for shows and attractions at Disney theme parks; the
figures move and make noise (generally a recorded speech or song).[89] They
are fixed to whatever supports them: they can sit and stand, but they cannot
walk. An Audio-Animatron is different from an android-type robot in
that it uses prerecorded movements and sounds, rather than responding to
external stimuli. In 2009, Disney created an interactive version of the
technology called Autonomatronics.[90]
 Linear Animation Generator is a form of animation by using static
picture frames installed in a tunnel or a shaft. The animation illusion is
created by putting the viewer in a linear motion, parallel to the installed
picture frames.[91] The concept and the technical solution were invented
in 2007 by Mihai Girlovan in Romania.
 Chuckimation is a type of animation created by the makers of the television
series Action League Now! in which characters/props are thrown, or
chucked from off camera or wiggled around to simulate talking by unseen
hands.
 Puppetry is a form of theatre or performance animation that involves the
manipulation of puppets. It is very ancient and is believed to have originated
around 3000 BC. Puppetry takes many forms, but they all share the process of
animating inanimate performing objects. Puppetry is used in almost all
human societies both as entertainment – in performance – and ceremonially
in rituals, celebrations and carnivals. Most puppetry involves storytelling.
 Zoetrope is a device that produces the illusion of motion from a rapid
succession of static pictures. The term zoetrope is from the Greek words ζωή
(zoē), meaning "alive, active", and τρόπος (tropos), meaning "turn", with
"zoetrope" taken to mean "active turn" or "wheel of life".

5. Animation Software

Autodesk – Many leading companies in the games, film, television and
advertising industries rely on Autodesk 3D animation software to create
compelling digital content. In today's climate of quickly rising consumer
expectations, tight deadlines and shrinking budgets, Autodesk can help you to
build a modern production pipeline for:

 Fully animated features
 Visual effects and post-production
 Game design
 Previsualisation and virtual film-making

6. Multimedia Software Tools


Digital Audio – Cool Edit, Sound Forge, Pro Tools
Graphic and Image Editing – Picasa, Adobe Photoshop, GIMP
Video Editing – iMovie, Adobe Premiere, Final Cut Pro
Animation – Java3D, DirectX, OpenGL

7.a JPEG Standards


"JPEG" stands for Joint Photographic Experts Group, the name of the
committee that created the JPEG standard and also other still picture coding
standards. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. In 1987
ISO TC 97 became ISO/IEC JTC1 and in 1992 CCITT became ITU-T.
Currently, on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint
Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC
1/SC 29/WG 1), titled Coding of still pictures. On the ITU-T side, ITU-T
SG16 is the respective body. The original JPEG Group was organized in
1986, issuing the first JPEG standard in 1992, which was approved in
September 1992 as ITU-T Recommendation T.81 and in 1994
as ISO/IEC 10918-1.
The JPEG standard specifies the codec, which defines how an image is
compressed into a stream of bytes and decompressed back into an image, but
not the file format used to contain that stream. The Exif and JFIF standards
define the commonly used file formats for interchange of JPEG-compressed
images.

7.b MPEG Standards


The MPEG standards consist of different Parts. Each part covers a certain
aspect of the whole specification. The standards also
specify Profiles and Levels. Profiles are intended to define a set of tools that are
available, and Levels define the range of appropriate values for the properties
associated with them. Some of the approved MPEG standards were revised by
later amendments and/or new editions. MPEG has standardized the following
compression formats and ancillary standards:

 MPEG-1 (1993): Coding of moving pictures and associated audio for digital
storage media at up to about 1.5 Mbit/s (ISO/IEC 11172). This initial
version is known as a lossy file format and is the first MPEG compression
standard for audio and video. It is commonly limited to about 1.5 Mbit/s
although the specification is capable of much higher bit rates. It was
basically designed to allow moving pictures and sound to be encoded into
the bitrate of a Compact Disc. It is used on Video CD and can be used for
low-quality video on DVD Video. It was used in digital satellite/cable TV
services before MPEG-2 became widespread. To meet the low bit-rate
requirement, MPEG-1 downsamples the images and uses picture rates
of only 24–30 Hz, resulting in moderate quality. It includes the popular
MPEG-1 Audio Layer III (MP3) audio compression format.
 MPEG-2 (1995): Generic coding of moving pictures and associated audio
information (ISO/IEC 13818). Transport, video and audio standards for
broadcast-quality television. MPEG-2 standard was considerably broader in
scope and of wider appeal – supporting interlacing and high definition.
MPEG-2 is considered important because it has been chosen as the
compression scheme for over-the-air digital
television ATSC, DVB and ISDB, digital satellite TV services like Dish
Network, digital cable television signals, SVCD and DVD Video. It is also
used on Blu-ray Discs, but these normally use MPEG-4 Part 10 or
SMPTE VC-1 for high-definition content.
 MPEG-3: MPEG-3 dealt with standardizing scalable and multi-resolution
compression and was intended for HDTV compression but was found to be
redundant and was merged with MPEG-2; as a result there is no MPEG-3
standard. MPEG-3 is not to be confused with MP3, which is MPEG-1 or
MPEG-2 Audio Layer III.
 MPEG-4 (1998): Coding of audio-visual objects. (ISO/IEC 14496) MPEG-4
uses further coding tools with additional complexity to achieve higher
compression factors than MPEG-2. In addition to more efficient coding of
video, MPEG-4 moves closer to computer graphics applications. In more
complex profiles, the MPEG-4 decoder effectively becomes a rendering
processor and the compressed bitstream describes three-dimensional shapes
and surface texture. MPEG-4 supports Intellectual Property Management
and Protection (IPMP), which provides the facility to use proprietary
technologies to manage and protect content like digital rights
management. It also supports MPEG-J, a fully programmatic solution for
creation of custom interactive multimedia applications (Java
application environment with a Java API) and many other features. Several
new higher-efficiency video standards (newer than MPEG-2 Video) are
included, notably:
 MPEG-4 Part 2 (or Simple and Advanced Simple Profile) and
 MPEG-4 AVC (or MPEG-4 Part 10 or H.264). MPEG-4 AVC may be
used on HD DVD and Blu-ray Discs, along with VC-1 and MPEG-2.
MPEG-4 has been chosen as the compression scheme for over-the-air digital
television in Brazil (ISDB-TB), based on the original digital television
standard from Japan (ISDB-T).
In addition, the following standards, while not sequential advances to the video
encoding standard as with MPEG-1 through MPEG-4, are referred to by similar
notation:

 MPEG-7 (2002): Multimedia content description interface. (ISO/IEC 15938)


 MPEG-21 (2001): Multimedia framework (MPEG-21). (ISO/IEC 21000)
MPEG describes this standard as a multimedia framework and provides for
intellectual property management and protection.
Moreover, more recently than the standards above, MPEG has started the
following international standards; each of these standards holds multiple MPEG
technologies for a particular area of application. (For example, MPEG-A
includes a number of technologies on multimedia application format.)

 MPEG-A (2007): Multimedia application format (MPEG-A). (ISO/IEC
23000) (e.g., Purpose for multimedia application formats, MPEG music
player application format, MPEG photo player application format and
others)
 MPEG-B (2006): MPEG systems technologies. (ISO/IEC 23001)
(e.g., Binary MPEG format for XML, Fragment Request Units, Bitstream
Syntax Description Language (BSDL) and others)
 MPEG-C (2006): MPEG video technologies. (ISO/IEC 23002) (e.g.,
Accuracy requirements for implementation of integer-output 8x8 inverse
discrete cosine transform and others)
 MPEG-D (2007): MPEG audio technologies. (ISO/IEC 23003) (e.g., MPEG
Surround, SAOC – Spatial Audio Object Coding and USAC – Unified Speech
and Audio Coding)
 MPEG-E (2007): Multimedia Middleware. (ISO/IEC 23004) (a.k.a. M3W)
(e.g., Architecture, Multimedia application programming interface (API),
Component model and others)
 Supplemental media technologies (2008). (ISO/IEC 29116) Part 1, Media
streaming application format protocols, will be revised in MPEG-M Part 4 –
MPEG extensible middleware (MXM) protocols.
 MPEG-V (2011): Media context and control. (ISO/IEC 23005) (a.k.a.
Information exchange with Virtual Worlds) (e.g., Avatar characteristics,
Sensor information, Architecture and others)
 MPEG-M (2010): MPEG eXtensible Middleware (MXM). (ISO/IEC 23006)
(e.g., MXM architecture and technologies, API, MPEG extensible
middleware (MXM) protocols)
 MPEG-U (2010): Rich media user interfaces. (ISO/IEC 23007) (e.g.,
Widgets)
 MPEG-H (2013): High Efficiency Coding and Media Delivery in
Heterogeneous Environments. (ISO/IEC 23008) Part 1 – MPEG media
transport; Part 2 – High Efficiency Video Coding; Part 3 – 3D Audio.
