What is a video?
A video can be technically described as a sequence of picture frames captured in continuous succession in time, the time lag between frames being so small that the entities in subsequent frames appear to be in continuous motion.
What is streaming?
Streaming is a process so called because the picture and sound data are constantly received by, and normally displayed to, the end user while they are being delivered by the provider (the server), to be viewed in real time without the content having to be downloaded first.
Step-by-step Streaming
Using streaming media files is as easy as browsing the Web, but there's a lot that goes on
behind the scenes to make the process possible:
1. Using your Web browser, you find a site that features streaming video or audio.
2. You find the file you want to access, and you click the image, link or embedded player with your mouse.
3. The Web server hosting the Web page requests the file from the streaming server.
4. The software on the streaming server breaks the file into pieces and sends them to your computer using real-time protocols.
5. The browser plug-in, standalone player or Flash application on your computer decodes and displays the data as it arrives.
All of this requires three basic components -- a player, a server and a stream of data that are all
compatible with each other.
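The server-side chunking in step 4 can be sketched in miniature. The following Python sketch is illustrative only, not a real streaming server: it splits a byte stream into fixed-size pieces the way a streaming server does before handing them to a real-time protocol (the 1400-byte packet size and the function name are assumptions for the example).

```python
def packetize(data: bytes, packet_size: int = 1400) -> list[bytes]:
    """Split a media byte stream into fixed-size pieces, as a
    streaming server does before sending them with a real-time
    protocol. 1400 bytes keeps each piece under a typical Ethernet
    MTU once the protocol headers are added."""
    return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

# The player decodes pieces as they arrive; here we only show that
# the pieces, in order, recreate the original stream.
stream = bytes(range(256)) * 20          # stand-in for encoded media
pieces = packetize(stream)
assert b"".join(pieces) == stream
assert all(len(p) <= 1400 for p in pieces)
```

In a real system the pieces are wrapped in protocol headers (sequence numbers, timestamps) so the player can reorder and time them, which is exactly what RTP, described below, provides.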
Creating and distributing a streaming video or audio file requires its own process:
1. You record a high-quality video or audio file using film or a digital recorder.
2. You digitize this data by importing it to your computer and, if necessary, converting it with editing software.
3. If you're creating a streaming video, you make the image size smaller and reduce the frame rate.
4. A codec on your computer compresses the file and encodes it to the right format.
5. You upload the file to a server.
6. The server streams the file to users' computers.
Protocols for streaming media:
Streaming video and audio use protocols that allow the transfer of data in real time. They
break files into very small pieces and send them to a specific location in a specific order.
These protocols include RTP, RTSP and RTCP, described below.
These protocols act like an added layer to the protocols that govern Web traffic. So when the
real-time protocols are streaming the data where it needs to go, the other Web protocols are
still working in the background. These protocols also work together to balance the load on the
server. If too many people try to access a file at the same time, the server can delay the start
of some streams until others have finished.
The Real-time Transport Protocol (RTP) defines a standardized packet format for
delivering audio and video over the Internet. It can also be used in conjunction with the
RTSP protocol, which extends it to a wider range of multimedia applications.
It was originally designed as a multicast protocol, but has since been applied in many unicast
applications. It is frequently used in streaming media systems (in conjunction with RTSP) as
well as in videoconferencing and push-to-talk systems, making it the technical foundation of the
Voice over IP industry. It is used together with RTCP and is built on top of the User Datagram
Protocol (UDP). Applications using RTP are less sensitive to packet loss but typically very
sensitive to delays, so UDP is a better choice than TCP for such applications.
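The packet format RTP standardizes is a fixed 12-byte header (RFC 3550) carrying a version, payload type, sequence number, timestamp and source identifier, followed by the media data. A minimal sketch of packing and parsing that header (payload type 96 is simply a value from the dynamic range; the field values below are made up):

```python
import struct

def build_rtp_header(seq: int, timestamp: int, ssrc: int,
                     payload_type: int = 96, marker: bool = False) -> bytes:
    """Pack the 12-byte fixed RTP header defined by RFC 3550:
    version=2, no padding, no extension, no CSRC entries."""
    byte0 = 2 << 6                                  # version 2
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

def parse_rtp_header(packet: bytes) -> dict:
    """Unpack the fixed header fields from a received packet."""
    byte0, byte1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {"version": byte0 >> 6,
            "marker": bool(byte1 >> 7),
            "payload_type": byte1 & 0x7F,
            "seq": seq, "timestamp": ts, "ssrc": ssrc}

hdr = build_rtp_header(seq=1, timestamp=3600, ssrc=0x1234)
fields = parse_rtp_header(hdr)
assert fields["version"] == 2 and fields["seq"] == 1
```

The sequence number lets the receiver detect loss and reordering, while the timestamp lets it play samples out at the right moments, which is why RTP tolerates loss but not delay.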
The Real Time Streaming Protocol (RTSP) is a protocol for use in streaming media
systems which allows a client to remotely control a streaming media server, issuing VCR-like
commands such as "play" and "pause", and allowing time-based access to files on the server.
The sending of the streaming data itself is not part of RTSP: most RTSP servers use the
standards-based RTP as the transport protocol for the actual audio/video data, with RTSP
acting more as a control and metadata channel.
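Those VCR-like commands are plain-text requests over the control connection, much like HTTP requests. A sketch of composing one (the URL and session identifier here are made up; the request-line and header layout follow RFC 2326):

```python
def rtsp_request(method: str, url: str, cseq: int, session: str = "") -> str:
    """Compose a minimal RTSP/1.0 request: a request line, a CSeq
    header numbering the request, an optional Session header, and
    the blank line that terminates the message."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    if session:
        lines.append(f"Session: {session}")
    return "\r\n".join(lines) + "\r\n\r\n"

# A client "pressing play" on an already-negotiated session:
msg = rtsp_request("PLAY", "rtsp://example.com/movie", cseq=3, session="12345678")
assert msg.startswith("PLAY rtsp://example.com/movie RTSP/1.0\r\n")
```

PAUSE and TEARDOWN requests have the same shape; only the method name changes, while the media itself continues to flow separately over RTP.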
RTCP gathers statistics on a media connection, such as bytes sent, packets sent, lost
packets, jitter and round-trip delay, and feeds them back to the sender. An application may
use this information to increase the quality of service, perhaps by limiting its flow or
switching to a different codec.
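The jitter figure RTCP reports is a running estimate defined in RFC 3550: for each packet, compare how far apart two packets arrived with how far apart they were sent (both in timestamp units), and move the estimate 1/16 of the way toward that difference. One step of the estimator, with made-up sample values:

```python
def update_jitter(jitter: float, arrival_delta: int, timestamp_delta: int) -> float:
    """One step of the interarrival-jitter estimator from RFC 3550.
    D is the transit-time variation between two consecutive packets;
    the estimate moves 1/16 of the way toward |D| each packet."""
    d = arrival_delta - timestamp_delta
    return jitter + (abs(d) - jitter) / 16.0

j = 0.0
# Packets sent 160 timestamp units apart, but observed arriving
# 160, 200 and 120 units apart (the middle one was delayed):
for arrival_delta in (160, 200, 120):
    j = update_jitter(j, arrival_delta, 160)
assert j > 0          # the estimator has registered the variation
```

The 1/16 factor makes the estimate a smoothed average, so a single delayed packet nudges it rather than dominating the report.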
What is H.264?
H.264 is a new video compression scheme that is becoming the worldwide digital video standard for consumer
electronics and personal computers. In particular, H.264 has already been selected as a key compression scheme
(codec) for the next generation of optical disc formats, HD-DVD and Blu-ray Disc (sometimes referred to as BD
or BD-ROM). H.264 has been adopted by the Moving Picture Experts Group (MPEG) as a key video
compression scheme in the MPEG-4 format for digital media exchange. H.264 is sometimes referred to as
“MPEG-4 Part 10” (part of the MPEG-4 specification), or as “AVC” (MPEG-4’s Advanced Video Coding). This
new compression scheme has been developed in response to technical factors and the needs of an evolving
market:
• High Definition video is becoming pervasive, and there is a strong need to store and transmit the much larger
quantity of data in HD (about 6 times more than Standard Definition video) more efficiently.
H.264 clearly has a bright future, mostly because it offers much better compression efficiency than previous
compression schemes. The improved efficiency translates into three main benefits, or a combination of them:
• Lower storage requirements: lower storage requirements will allow for large amounts of content to be
delivered on a single disc.
While a more detailed comparison of MPEG-2 to H.264 is given below, the high-level picture is this: for a given
bit-rate, the level of video quality or resolution (both of these contribute to greater bit-rate) achievable with
H.264 is higher than with MPEG-2.
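The storage side of this trade-off is simple arithmetic. The bit-rate below is an illustrative assumption (a typical SD MPEG-2 rate, not a figure from this document); the 2.5x efficiency factor sits inside the 2-3x range quoted in the MPEG-2 comparison later on:

```python
def storage_gb(bitrate_mbps: float, hours: float) -> float:
    """Size of a constant-bit-rate stream, in decimal gigabytes
    (1 GB = 8000 megabits)."""
    return bitrate_mbps * hours * 3600 / 8000

# Illustrative: a 2-hour SD movie at 6 Mbit/s MPEG-2, versus H.264
# delivering the same quality at 2.5x the compression efficiency.
mpeg2 = storage_gb(6.0, hours=2)
h264  = storage_gb(6.0 / 2.5, hours=2)
assert abs(mpeg2 - 5.4) < 1e-9    # ~5.4 GB with MPEG-2
assert h264 < mpeg2 / 2           # well under half with H.264
```

The same arithmetic read the other way gives the broadcast benefit: at a fixed channel bit-rate, the saved bits can buy more channels or higher quality instead of smaller files.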
It is likely that future delivery of Digital TV signals (both in SD and HD) will use H.264. For SD, the same
content at a given quality can be delivered with a lower bit-rate (allowing for more channels to be transmitted on
the same medium), or higher quality and/or higher resolution can be delivered at the same bit-rate. Future
Digital TV delivery vehicles include:
• Satellite
• Cable
• Over-the-Air broadcast
Some of the above are already turning to H.264 as a standard; worldwide, more are likely to follow shortly.
High-definition video is gaining in popularity, aided by the falling cost of HD television sets. A key deployment
vehicle for High Definition content is likely to be optical discs carrying this content. Two optical disc formats
are currently proposed:
Blu-ray Disc, and HD-DVD. While these formats differ in several ways, both have chosen to adopt H.264 as
one of the key means of storing the HD video content. The high bit-rates used to encode the video on
these HD discs will be particularly challenging for today’s PCs; we will examine this further after we compare
MPEG-2 and H.264.
Comparison to MPEG-2
MPEG-2 is today’s dominant video compression scheme; it is used to encode video on DVDs, to stream Internet
video, and as the basis for most worldwide digital television (over-the-air, cable and satellite). While MPEG-2 is
a video-only format, MPEG-4 is a more generic media exchange format, with H.264 as one of several video
compression schemes offered by MPEG-4.
There are numerous differences between these compression schemes, but a key point is that H.264 has been
developed to deliver much higher compression ratios than MPEG-2. However, this greater degree of
compression (up to 2-3 times more efficient than MPEG-2) comes at the expense of much higher computational
requirements. This additional computational complexity is spread across the overall decoding process, but three
key technique areas stand out in adding to the new overhead: entropy encoding, smaller block sizes and in-loop
de-blocking.
Entropy encoding:
Entropy encoding is a technique used to store large amounts of data by examining the frequency of patterns
within it and encoding this in another, smaller, form. H.264 allows for a variety of entropy encoding schemes,
compared to the fixed scheme employed by MPEG-2. In particular, the new CABAC (Context-based Adaptive
Binary Arithmetic Coding) scheme adds 5-20% of compression efficiency but is much more computationally
demanding than MPEG-2’s entropy encoding.
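CABAC itself is far too elaborate to show here, but the idea every entropy coder exploits can be: the more skewed the frequency of symbols or patterns, the fewer bits per symbol are needed. This sketch just measures that lower bound (the Shannon entropy) for two toy streams; the strings are made-up examples:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(data: str) -> float:
    """Shannon entropy of the symbol distribution: the bound, in
    bits per symbol, that an entropy coder approaches by giving
    frequent symbols shorter codes than rare ones."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

skewed  = "aaaaaaab"    # one symbol dominates: well under 1 bit/symbol
uniform = "abcdefgh"    # all symbols equally likely: 3 bits/symbol
assert entropy_bits_per_symbol(skewed) < 1.0
assert entropy_bits_per_symbol(uniform) == 3.0
```

An adaptive coder like CABAC goes further by re-estimating these probabilities from context as it codes, which is where much of its extra 5-20% efficiency, and its computational cost, comes from.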
Smaller block sizes:
MPEG-2, H.264 and most other codecs treat portions of the video image in blocks, often processed in
isolation from one another. Independently of the number of video pixels in the image, the number of blocks has
an effect on the computational requirements. While MPEG-2 has a fixed block size of 16 pixels on a side
(referred to as 16x16), H.264 permits the simultaneous mixing of different block sizes (down to 4x4 pixels). This
permits the codec to accurately define fine detail (with more, smaller blocks) while not having to ‘waste’ small
blocks on coarse detail. In this way, for example, patches of blue sky in a video image can use large
blocks, while the finer details of a forest in the frame can be encoded with smaller blocks.
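The sky-versus-forest decision can be caricatured in a few lines. This is only a toy: it splits a 4x4 block into four 2x2 sub-blocks when the pixel variance suggests fine detail, whereas H.264's real decision (16x16 down to 4x4, driven by rate-distortion cost) is far more elaborate; the variance threshold is an arbitrary assumption:

```python
from statistics import pvariance

def partition(block, threshold=50.0):
    """Toy adaptive block sizing: keep a flat 4x4 block as one unit,
    but split a detailed one into four 2x2 sub-blocks, so smooth
    regions get large blocks and detailed regions small ones."""
    flat = [p for row in block for p in row]
    if pvariance(flat) <= threshold:
        return [block]                        # smooth: one big block
    return [[[block[r][c],     block[r][c + 1]],
             [block[r + 1][c], block[r + 1][c + 1]]]
            for r in (0, 2) for c in (0, 2)]  # detailed: four small blocks

sky    = [[120] * 4 for _ in range(4)]        # flat patch of sky
forest = [[0, 255, 0, 255],
          [255, 0, 255, 0],
          [0, 255, 0, 255],
          [255, 0, 255, 0]]                   # high-contrast detail
assert len(partition(sky)) == 1
assert len(partition(forest)) == 4
```

The computational point follows directly: the detailed region now has four times as many blocks to predict, transform and code as the flat one.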
In-loop de-blocking:
When the bit-rate of an MPEG-2 stream is low, the blocks (and specifically, the boundaries between them) can
be very visible and can clearly detract from the visual quality of the video. “De-blocking” is a post-processing
step that adaptively smoothes the edges between adjacent blocks. De-blocking is computationally “expensive”.
In the past, de-blocking has been an optional step in decoding, enabled only when the playback device (such as
a PC) could perform it in real time. ATI has offered de-blocking capability for video playback for some time.
In H.264, however, in-loop de-blocking is introduced. “In-loop” refers to the fact that previously de-blocked
image data, in addition to being displayed, is actually used as part of the decoding of future frames; it is in the
decoding ‘loop’. Because of this, de-blocking is no longer optional. It adds to the quality of the decoded
video, but also adds significantly to the computational overhead of H.264 decoding.
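What "adaptively smoothing the edges" means can be shown on a single row of pixels crossing a vertical block boundary. This is a toy filter, not H.264's actual in-loop filter (which uses several taps and boundary-strength levels): if the step at the boundary is small enough to be a compression artefact rather than a real edge, both edge pixels are pulled toward their average. The threshold value is an arbitrary assumption:

```python
def deblock_edge(left: list, right: list, alpha: int = 40) -> None:
    """Toy de-blocking of one pixel row across a block boundary:
    soften a small step (likely an artefact, below alpha) between
    the two edge pixels; leave a large step (a real edge) alone."""
    p0, q0 = left[-1], right[0]
    if 0 < abs(p0 - q0) < alpha:
        avg = (p0 + q0) // 2
        left[-1] = (p0 + avg) // 2
        right[0] = (q0 + avg) // 2

row_l, row_r = [100, 102, 104, 110], [130, 128, 126, 124]
deblock_edge(row_l, row_r)
assert abs(row_l[-1] - row_r[0]) < abs(110 - 130)   # the step was softened
```

The "in-loop" cost is that this filtering cannot be skipped: the softened pixels feed back into the prediction of later frames, so every decoder must compute them.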
The main objective of the emerging H.264 standard is to provide a means to achieve
substantially higher video quality as compared to what could be achieved using any of
the existing video coding standards. Nonetheless, the underlying approach of H.264 is
similar to that adopted in previous standards such as H.263 and MPEG-4, and consists of
the following four main stages:
• Dividing each video frame into blocks of pixels, so that processing of the video
frame can be conducted at the block level.
• Exploiting the spatial redundancies that exist within the video frame by coding
some of the original blocks through spatial prediction, transform, quantization
and entropy coding (or variable-length coding).
• Exploiting the temporal dependencies that exist between blocks in successive
frames, so that only changes between successive frames need to be encoded.
This is accomplished by using motion estimation and compensation. For any given
block, a search is performed in one or more previously coded frames, or in a
future frame, to determine the motion vectors that are then used by the encoder
and the decoder to predict the subject block.
• Exploiting any remaining spatial redundancies within the video frame by
coding the residual blocks, i.e. the difference between the original
blocks and the corresponding predicted blocks, again through transform,
quantization and entropy coding.
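The motion search in the third stage can be sketched as brute-force block matching: try every displacement in a search window and keep the one whose block differs least from the current block. This toy version uses 2x2 blocks, a ±1-pixel window and the common sum-of-absolute-differences (SAD) cost; real encoders use far larger blocks and windows, sub-pixel accuracy and fast search strategies:

```python
def sad(frame, ref, bx, by, dx, dy, size=2):
    """Sum of absolute differences between the block at (bx, by) in
    the current frame and the block displaced by (dx, dy) in the
    reference frame."""
    return sum(abs(frame[by + r][bx + c] - ref[by + dy + r][bx + dx + c])
               for r in range(size) for c in range(size))

def motion_vector(frame, ref, bx, by, search=1, size=2):
    """Exhaustive block matching: the displacement with lowest SAD
    becomes the motion vector used to predict this block."""
    return min(((dx, dy) for dy in range(-search, search + 1)
                         for dx in range(-search, search + 1)),
               key=lambda v: sad(frame, ref, bx, by, v[0], v[1], size))

ref   = [[0, 0, 0, 0],          # reference: the 2x2 pattern sits
         [0, 0, 9, 8],          # one pixel to the right of where
         [0, 0, 7, 6],          # it appears in the current frame
         [0, 0, 0, 0]]
frame = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 6, 0],
         [0, 0, 0, 0]]
assert motion_vector(frame, ref, bx=1, by=1) == (1, 0)
```

Only the vector (1, 0) and the (here zero) residual need to be coded, which is exactly how temporal redundancy turns into compression.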
With H.264 a given video picture is divided into a number of small blocks referred to as
macroblocks. For example, a picture with QCIF resolution (176x144) is divided into 99
16x16 macroblocks. A similar macroblock segmentation is used for other frame sizes.
The luminance component of the picture is sampled at these frame resolutions, while the
chrominance components, Cb and Cr, are down-sampled by two in the horizontal and
vertical directions. In addition, a picture may be divided into an integer number of
"slices", which are valuable for resynchronization should some data be lost.
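The macroblock count and the chroma down-sampling described above can be checked directly with a couple of lines of Python (the function name is just for this example):

```python
def macroblock_grid(width: int, height: int, mb: int = 16) -> int:
    """Number of 16x16 macroblocks tiling a luminance frame."""
    return (width // mb) * (height // mb)

# QCIF: a 176x144 luminance picture -> 11 x 9 = 99 macroblocks.
assert macroblock_grid(176, 144) == 99

# 4:2:0 chroma: Cb and Cr are down-sampled by two in each direction.
cb_width, cb_height = 176 // 2, 144 // 2
assert (cb_width, cb_height) == (88, 72)
```

Each macroblock therefore carries one 16x16 luminance block plus one 8x8 Cb and one 8x8 Cr block at this chroma format.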
For regions with less spatial detail (flat regions), H.264 supports 16x16 intra prediction,
in which one of four prediction modes (DC, Vertical, Horizontal and Planar) is chosen for
the prediction of the entire luminance component of the macroblock. In addition, H.264
supports intra prediction for the 8x8 chrominance blocks also using four prediction
modes (DC, vertical, horizontal and planar). Finally, the prediction mode for each block
is efficiently coded by assigning shorter symbols to more likely modes, where the
probability of each mode is determined based on the modes used for coding of the
surrounding blocks.
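Three of the four prediction modes named above are simple enough to sketch, built only from already-decoded neighbouring pixels. The sketch uses a 4x4 block for brevity (the text describes the 16x16 luminance and 8x8 chrominance cases; the mechanics are the same) and omits the planar mode; the neighbour values are made up:

```python
def predict_4x4(top: list, left: list, mode: str):
    """Build a 4x4 prediction from the already-decoded row above
    (top) and column to the left (left): 'V' copies the row down,
    'H' copies the column across, 'DC' fills with their rounded
    mean. The planar mode is omitted for brevity."""
    if mode == "V":
        return [top[:] for _ in range(4)]
    if mode == "H":
        return [[left[r]] * 4 for r in range(4)]
    if mode == "DC":
        dc = (sum(top) + sum(left) + 4) // 8
        return [[dc] * 4 for _ in range(4)]
    raise ValueError(mode)

top, left = [10, 20, 30, 40], [12, 14, 16, 18]
assert predict_4x4(top, left, "V")[3] == [10, 20, 30, 40]
assert predict_4x4(top, left, "DC")[0][0] == 20
```

The encoder picks whichever mode leaves the smallest residual, then signals that mode with a short symbol if it matches what the neighbouring blocks used.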
GSM ARCHITECTURE
A connection between two people, the caller and the called party, is the basis of a communication
network. To provide this service, the network must be able to set up and maintain a call.
In a wireless mobile network, establishing a call is a complex process, as the network must allow
users to move at will provided they stay within the network service area. In order to achieve this,
the network has to perform three functions: call control, charging and mobility management, each
described below.
The GSM specification defines two interfaces within the GSM network. The first is between the mobile
station (MS) and the base station (BS); this is known as the "air interface". This interface needs to be
open, since mobiles of different brands need to communicate with GSM networks from different
suppliers.
The second interface is between the mobile services switching centre (MSC) and the base station
controller (BSC), and is known as the "A interface".
The intelligence of the network is decentralised; this is achieved by dividing the network into three subsystems:
The actual network required to establish a call is composed of the NSS and the BSS. The BSS is
responsible for radio path control, and every call is connected through the BSS. The NSS takes care of
call control functions; calls are always connected by and through the NSS.
The NMS is the operation and maintenance part of the network, and it is needed for the control of the
whole GSM network. Through the NMS the operator maintains and observes network quality and the
services offered. These three subsystems are linked through the air, A, and O&M interfaces.
MOBILE STATION:
The mobile station is a combination of terminal equipment and subscriber data. The terminal
equipment is known as the mobile equipment, and the subscriber data is stored in a module called the
SIM (subscriber identity module).
In a GSM network the SIM identifies the user; it contains the user's identification number and
the list of available networks.
SUBSYSTEMS AND NETWORK ELEMENTS OF GSM:
The GSM network is divided into three subsystems, namely the network service subsystem (NSS), the
base station subsystem (BSS) and the network management subsystem (NMS).
Call control:
This identifies the user, establishes the call and clears the call after the conversation is over.
Charging:
This collects the charging information about the call and transfers it to the billing centre.
Mobility:
This maintains permanent data storage in the HLR and temporary storage of relevant data in the VLR.
The MSC is responsible for controlling calls in the mobile network. It identifies the origin and
destination of a call as well as the type of call. An MSC that acts as a bridge between the mobile
network and the fixed network is called a gateway MSC. The tasks performed by the MSC include:
1) call control
2) initiation of paging
The VLR is a database containing information about the subscribers currently in the service area
of the MSC/VLR, such as:
Identification number of the subscriber.
Security information for authentication of the SIM card and for ciphering.
Services that the subscriber can use.
The VLR data is temporary: it holds this information only as long as the subscriber is in the service
area. It also contains the address of the subscriber's home location register (HLR).
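The split between the permanent home register and the temporary visitor copies can be sketched with a toy data model. Everything here, the subscriber number, MSC names and record fields, is made up for illustration; it only shows the bookkeeping of a location update, not real GSM signalling:

```python
# Hypothetical, much-simplified registers: the HLR is the permanent
# home record; each MSC area has a VLR holding a temporary copy
# of the subscriber data while the subscriber roams there.
hlr = {"358401234567": {"services": {"voice", "sms"}, "vlr": None}}
vlrs = {"MSC-1": {}, "MSC-2": {}}

def location_update(subscriber: str, msc: str) -> None:
    """Subscriber enters a new MSC area: copy the relevant data into
    that area's VLR, record the VLR address in the HLR, and drop the
    old copy (VLR data is temporary)."""
    old = hlr[subscriber]["vlr"]
    if old is not None:
        vlrs[old].pop(subscriber, None)
    vlrs[msc][subscriber] = {"services": hlr[subscriber]["services"],
                             "hlr": "HLR-1"}
    hlr[subscriber]["vlr"] = msc

location_update("358401234567", "MSC-1")
location_update("358401234567", "MSC-2")   # subscriber moves on
assert "358401234567" not in vlrs["MSC-1"]
```

Because the HLR always points at the current VLR, an incoming call only needs one HLR lookup to find which MSC area should page the subscriber.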
HOME LOCATION REGISTER:
The HLR maintains a permanent register of the subscriber, such as the identity number and the
services that can be used. It also tracks the current location of the subscriber.
AUTHENTICATION CENTER:
It provides security information to the network for the verification of SIM cards.
OSA/PARLAY