
Video Streaming

What is a video?
A video can be described technically as a sequence of picture frames captured in rapid succession, the interval between frames being so short that the objects in successive frames appear to be in continuous motion.

What is streaming?
Streaming is so called because the picture and sound data are constantly received by, and normally displayed to, the end user while they are being delivered by the provider (the server). The content can therefore be viewed in real time, without having to be downloaded in full first.

Step-by-step Streaming

Using streaming media files is as easy as browsing the Web, but there's a lot that goes on
behind the scenes to make the process possible:

1. Using your Web browser, you find a site that features streaming video or audio.
2. You find the file you want to access, and you click the image, link or embedded player with your mouse.
3. The Web server hosting the Web page requests the file from the streaming server.
4. The software on the streaming server breaks the file into pieces and sends them to your computer using real-time protocols.
5. The browser plug-in, standalone player or Flash application on your computer decodes and displays the data as it arrives.
All of this requires three basic components -- a player, a server and a stream of data that are all
compatible with each other.
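
As a rough illustration of step 5 above, the sketch below pulls a media file in small pieces and hands each piece to a placeholder decoder as it arrives, rather than waiting for the whole download. This is only a minimal sketch: the URL and the decode_and_display stub are hypothetical, and a real player would use a streaming protocol stack rather than plain HTTP.

import urllib.request

CHUNK_SIZE = 64 * 1024  # 64 KB pieces, loosely analogous to the "pieces" sent by the server

def decode_and_display(chunk: bytes) -> None:
    # Placeholder for the player plug-in: a real client would feed the bytes
    # into a demuxer/decoder and render the resulting frames.
    print(f"received {len(chunk)} bytes")

def play_progressively(url: str) -> None:
    # Read the response body piece by piece and process each piece immediately,
    # instead of downloading the entire file first.
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(CHUNK_SIZE)
            if not chunk:
                break
            decode_and_display(chunk)

if __name__ == "__main__":
    play_progressively("http://example.com/sample-video.mp4")  # hypothetical URL
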
Creating and distributing a streaming video or audio file requires its own process:

1. You record a high-quality video or audio file using film or a digital recorder.
2. You digitize this data by importing it to your computer and, if necessary, converting it with editing software.
3. If you're creating a streaming video, you make the image size smaller and reduce the frame rate.
4. A codec on your computer compresses the file and encodes it to the right format.
5. You upload the file to a server.
6. The server streams the file to users' computers.
Protocols involved in streaming media:

Streaming video and audio use protocols that allow the transfer of data in real time. They
break files into very small pieces and send them to a specific location in a specific order.
These protocols include:

 Real-time Transport Protocol (RTP)
 Real Time Streaming Protocol (RTSP)
 Real-time Transport Control Protocol (RTCP)

These protocols act like an added layer to the protocols that govern Web traffic. So when the
real-time protocols are streaming the data where it needs to go, the other Web protocols are
still working in the background. These protocols also work together to balance the load on the
server. If too many people try to access a file at the same time, the server can delay the start
of some streams until others have finished.

Real-time Transport Protocol

The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over the Internet. It is often used in conjunction with RTSP, which broadens its range of multimedia applications.

It was originally designed as a multicast protocol, but has since been applied in many unicast applications. It is frequently used in streaming media systems (in conjunction with RTSP) as well as in videoconferencing and push-to-talk systems, making it the technical foundation of the Voice over IP industry. It works alongside RTCP and is built on top of the User Datagram Protocol (UDP). Applications using RTP are less sensitive to packet loss but typically very sensitive to delay, so UDP is a better choice than TCP for such applications.
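
For concreteness, the sketch below unpacks the fixed 12-byte RTP header defined in RFC 3550 from a raw packet. It is a minimal parser for illustration only; real applications normally rely on a media library rather than hand-rolled packet handling, and the port number in the commented usage is just a common convention.

import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Decode the fixed 12-byte RTP header (RFC 3550) from a raw packet."""
    if len(packet) < 12:
        raise ValueError("packet too short to contain an RTP header")

    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # should be 2 for standard RTP
        "padding": (b0 >> 5) & 0x1,
        "extension": (b0 >> 4) & 0x1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,     # identifies the codec in use
        "sequence_number": seq,        # used to detect loss and reordering
        "timestamp": timestamp,        # used to reconstruct playout timing
        "ssrc": ssrc,                  # identifies the media source
    }

# Typical usage: receive RTP over UDP and inspect the header fields.
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(("0.0.0.0", 5004))        # 5004 is a commonly used RTP port
# data, _ = sock.recvfrom(2048)
# print(parse_rtp_header(data))
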

Real Time Streaming Protocol

The Real Time Streaming Protocol (RTSP) is a protocol for use in streaming media systems that allows a client to remotely control a streaming media server, issuing VCR-like commands such as "play" and "pause" and allowing time-based access to files on the server.

The transmission of the streaming data itself is not part of RTSP. Most RTSP servers use the standards-based RTP as the transport protocol for the actual audio/video data, with RTSP acting somewhat as an out-of-band control and metadata channel.
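
Because RTSP is a text-based control protocol (similar in syntax to HTTP), the VCR-like commands can be illustrated with a minimal sketch that sends a single OPTIONS request over TCP and prints the server's reply. The server address and stream path are hypothetical, and a real client would continue with DESCRIBE, SETUP and PLAY before media starts flowing over RTP.

import socket

def rtsp_options(host: str, path: str, port: int = 554) -> str:
    """Send a single RTSP OPTIONS request and return the server's raw reply."""
    request = (
        f"OPTIONS rtsp://{host}/{path} RTSP/1.0\r\n"
        "CSeq: 1\r\n"
        "User-Agent: rtsp-sketch\r\n"
        "\r\n"
    )
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request.encode("ascii"))
        return sock.recv(4096).decode("ascii", errors="replace")

if __name__ == "__main__":
    # Hypothetical server; the reply lists the methods it supports
    # (typically DESCRIBE, SETUP, PLAY, PAUSE, TEARDOWN).
    print(rtsp_options("media.example.com", "stream1"))
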

Real-time Transport Control Protocol

Real-time Transport Control Protocol (RTCP) is a sister protocol of the Real-time Transport Protocol (RTP). RTCP provides out-of-band control information for an RTP flow. It partners RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. It is used to transmit control packets periodically to participants in a streaming multimedia session. The primary function of RTCP is to provide feedback on the quality of service being provided by RTP.

RTCP gathers statistics on a media connection and information such as bytes sent, packets
sent, lost packets, jitter, feedback and round trip delay. An application may use this
information to increase the quality of service, perhaps by limiting flow or using a different
codec.
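
The jitter statistic that RTCP reports is defined in RFC 3550 as a running estimate of the variation in packet transit time. Below is a minimal sketch of one step of that estimator, assuming for simplicity that arrival times and RTP timestamps are expressed in the same clock units.

def update_jitter(jitter: float,
                  prev_arrival: int, prev_rtp_ts: int,
                  arrival: int, rtp_ts: int) -> float:
    """One step of the RFC 3550 interarrival jitter estimator.

    All timestamps are assumed to be in the same clock units
    (e.g. RTP timestamp units) for simplicity.
    """
    # Difference in transit time between this packet and the previous one
    d = (arrival - prev_arrival) - (rtp_ts - prev_rtp_ts)
    # Smoothed with gain 1/16, as specified by RFC 3550
    return jitter + (abs(d) - jitter) / 16.0

Fed with the arrival time and RTP timestamp of each incoming packet, this running estimate is the kind of value a receiver places in its RTCP receiver reports.
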

Need for Video compression standards

Video compression standards provide a number of benefits, foremost of which is ensuring interoperability, that is, communication between encoders and decoders made by different people or different companies. In this way standards lower the risk for both consumer and manufacturer, and this can lead to quicker acceptance and widespread use. In addition, these standards are designed for a large variety of applications, and the resulting economies of scale lead to reduced cost and further widespread use.

Currently there are two families of video compression standards, developed under the auspices of the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T) and the International Organization for Standardization (ISO).

The first video compression standard to gain widespread acceptance was the ITU-T H.261, which was designed for videoconferencing over the Integrated Services Digital Network (ISDN). H.261 was adopted as a standard in 1990. It was designed to operate at multiples p = 1, 2, ..., 30 of the baseline ISDN data rate, i.e. at p x 64 kb/s. In 1993, the ITU-T initiated a standardization effort with the primary goal of video telephony over the public switched telephone network (PSTN), that is, conventional analog telephone lines, where the total available data rate is only about 33.6 kb/s. The video compression portion of that standard is H.263, and its first phase was adopted in 1996. It is one member of the H.26x family of video coding standards in the domain of the ITU-T Video Coding Experts Group (VCEG). An enhanced H.263, H.263 Version 2 (V2), was finalized in 1997, and a completely new algorithm, originally referred to as H.26L, is currently being finalized as H.264/AVC.

The Moving Picture Experts Group (MPEG) was established by the ISO in 1988 to develop a standard for compressing moving pictures (video) and associated audio on digital storage media (CD-ROM). The resulting standard, commonly known as MPEG-1, was finalized in 1991 and achieves approximately VHS-quality video and audio at about 1.5 Mb/s. A second phase of their work, commonly known as MPEG-2, was an extension of MPEG-1 developed for digital television and higher bit rates. A third standard, to be called MPEG-3, was originally envisioned for higher bit-rate applications such as HDTV, but it was recognized that those applications could also be addressed within the context of MPEG-2; hence those goals were wrapped into MPEG-2 (consequently, there is no MPEG-3 standard). A further phase of work, known as MPEG-4, was designed to provide improved compression efficiency and error-resilience features, as well as increased functionality, including object-based processing, integration of both natural and synthetic (computer-generated) content, and content-based interactivity.

Types of current and emerging video standards


Introduction to H.264

The International Telecommunication Union, Telecommunication Standardization Sector (ITU-T) is now one of two formal organizations that develop video coding standards, the other being the International Organization for Standardization / International Electrotechnical Commission, Joint Technical Committee 1 (ISO/IEC JTC1). The ITU-T video coding standards are called recommendations, and they are denoted H.26x (H.261, H.262, H.263 and H.264). The ISO/IEC standards are denoted MPEG-x (MPEG-1, MPEG-2 and MPEG-4).

What is H.264?

H.264 is a new video compression scheme that is becoming the worldwide digital video standard for consumer electronics and personal computers. In particular, H.264 has already been selected as a key compression scheme (codec) for the next generation of optical disc formats, HD-DVD and Blu-ray Disc (sometimes referred to as BD or BD-ROM). H.264 has also been adopted by the Moving Picture Experts Group (MPEG) as a key video compression scheme in the MPEG-4 format for digital media exchange. H.264 is sometimes referred to as "MPEG-4 Part 10" (it is part of the MPEG-4 specification) or as "AVC" (MPEG-4's Advanced Video Coding). This new compression scheme has been developed in response to technical factors and the needs of an evolving market:

• MPEG-2 and other older video codecs are relatively inefficient.

• Much greater computational resources are available today.

• High Definition video is becoming pervasive, and there is a strong need to store and transmit the much larger quantity of HD data efficiently (about 6 times more than Standard Definition video; for example, a 1920x1080 frame holds roughly 2.07 million pixels versus roughly 0.35 million for 720x480).

Why H.264 is The Next Big Thing

Quality and Size (Bit-rate)

H.264 clearly has a bright future, mostly because it offers much better compression efficiency than previous
compression schemes. The improved efficiency translates into three main benefits, or a combination of them:

• Higher video quality at a given bit-rate: a reduction in artifacts such as blockiness, color banding, etc.

• Higher resolution: as the video world transitions to High Definition, a mechanism is needed to deliver it.

• Lower storage requirements: these allow larger amounts of content to be delivered on a single disc.

While a more detailed comparison of MPEG-2 to H.264 is given below, the high-level picture is that, for a given bit-rate, the level of video quality or resolution (both of which contribute to a greater bit-rate) can be higher for H.264 than for MPEG-2.

Next Generation Digital TV

It is likely that future delivery of Digital TV signals (both in SD and HD) will use H.264. For SD, the same
content at a given quality can be delivered with a lower bit-rate (allowing for more channels to be transmitted on
the same medium), or higher quality and/or higher resolution can be delivered at the same bit-rate. Future
Digital TV delivery vehicles include:

• Satellite

• Cable

• IPTV (over cable or DSL)

• Over-the-Air broadcast

Some of these delivery vehicles are already adopting H.264 as a standard; worldwide, more are likely to announce that they are following shortly.

High-Definition Optical Discs

High-definition video is gaining in popularity, aided by the falling cost of HD television sets. A key deployment vehicle for High Definition content is likely to be optical discs carrying this content. Two optical disc formats are currently proposed: Blu-ray Disc and HD-DVD. While these formats differ in several ways, both have chosen to adopt H.264 as one of the key means of storing HD video content. The high bit-rates used to encode the video on these HD discs will be particularly challenging for today's PCs; we will examine this further after comparing MPEG-2 and H.264.
Comparison to MPEG-2

MPEG-2 is today's dominant video compression scheme; it is used to encode video on DVDs, to stream Internet video, and it is the basis for most worldwide digital television (over-the-air, cable and satellite). While MPEG-2 is a video-only format, MPEG-4 is a more generic media exchange format, with H.264 as one of several video compression schemes offered by MPEG-4.

Differences between H.264 and MPEG-2 video decoding

There are numerous differences between these compression schemes, but a key point is that H.264 has been developed to deliver much higher compression ratios than MPEG-2. However, this greater degree of compression (up to 2-3 times more efficient than MPEG-2) comes at the expense of much higher computational requirements. This additional computational complexity is spread across the overall decoding process, but three key technique areas stand out in adding to the new overhead: entropy encoding, smaller block sizes and in-loop de-blocking.

Entropy encoding:

Entropy encoding is a technique for storing large amounts of data by examining the frequency of patterns within it and encoding it in another, smaller form. H.264 allows a variety of entropy encoding schemes, compared to the fixed scheme employed by MPEG-2. In particular, the new CABAC (Context-based Adaptive Binary Arithmetic Coding) scheme adds 5-20% in compression efficiency but is much more computationally demanding than MPEG-2's entropy encoding.
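
To make the idea concrete, the sketch below computes the Shannon entropy of a symbol stream, i.e. the theoretical lower bound (in bits per symbol) that an entropy coder tries to approach by giving frequent patterns short codes. It illustrates the principle only; it is not CABAC, whose adaptive binary arithmetic coding is far more involved.

import math
from collections import Counter

def entropy_bits_per_symbol(symbols) -> float:
    """Shannon entropy of a symbol sequence: the best average code length
    (in bits per symbol) any entropy coder could achieve for this source."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Highly skewed data compresses well: frequent symbols get short codes.
print(entropy_bits_per_symbol("aaaaaaab"))   # about 0.54 bits/symbol
print(entropy_bits_per_symbol("abcdefgh"))   # 3.0 bits/symbol (no redundancy to exploit)
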

Smaller block size:

MPEG-2, H.264 and most other codecs treat portions of the video image as blocks, often processed in isolation from one another. Independently of the number of pixels in the image, the number of blocks affects the computational requirements. While MPEG-2 has a fixed block size of 16 pixels on a side (referred to as 16x16), H.264 permits the simultaneous mixing of different block sizes (down to 4x4 pixels). This allows the codec to accurately define fine detail (with more, smaller blocks) while not having to 'waste' small blocks on coarse detail. In this way, for example, patches of blue sky in a video image can use large blocks, while the finer details of a forest in the same frame can be encoded with smaller blocks.
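
A hedged sketch of the underlying idea follows: split a 16x16 region into smaller blocks only where there is enough detail, with pixel variance used here as a crude stand-in for the rate-distortion decisions a real H.264 encoder makes when choosing among 16x16 down to 4x4 partitions.

import numpy as np

def partition_block(block: np.ndarray, min_size: int = 4, var_threshold: float = 100.0):
    """Recursively split a square block into quarters while it is 'detailed'.

    Variance is only an illustrative criterion; a real encoder performs
    rate-distortion analysis to choose its block partitioning.
    """
    size = block.shape[0]
    if size <= min_size or np.var(block) < var_threshold:
        return [size]  # keep this block whole
    half = size // 2
    sizes = []
    for r in (0, half):
        for c in (0, half):
            sizes += partition_block(block[r:r + half, c:c + half], min_size, var_threshold)
    return sizes

# A flat region (blue sky) stays as one 16x16 block; a noisy region (forest)
# breaks down into many smaller blocks.
flat = np.full((16, 16), 128.0)
busy = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
print(partition_block(flat))   # [16]
print(partition_block(busy))   # many 4x4 blocks
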

In-loop de-blocking:
When the bit-rate of an MPEG-2 stream is low, the blocks (and specifically the boundaries between them) can be very visible and can clearly detract from the visual quality of the video. "De-blocking" is a post-processing step that adaptively smooths the edges between adjacent blocks. De-blocking is computationally "expensive". In the past, de-blocking has been an optional step in decoding, enabled only when the playback device (such as a PC) was able to perform it in real time. ATI has offered de-blocking capability for video playback for some time.

In H.264, however, in-loop de-blocking is introduced. "In-loop" refers to the fact that previously de-blocked image data, in addition to being displayed, is actually used as part of the decoding of future frames; it is inside the decoding loop. Because of this, the de-blocking is no longer optional. It adds to the quality of the decoded video, but it also adds significantly to the computational overhead of H.264 decoding.
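
For intuition, the sketch below applies a very simple smoothing across one vertical block boundary. It is only a toy stand-in for the H.264 in-loop filter, which adapts its strength to the quantization level and local image content; the frame data in the example is made up.

import numpy as np

def smooth_vertical_boundary(frame: np.ndarray, boundary_col: int) -> np.ndarray:
    """Toy de-blocking: soften the visible edge between two adjacent blocks
    by pulling the pixel columns on either side toward their average."""
    out = frame.astype(float).copy()
    left = frame[:, boundary_col - 1].astype(float)
    right = frame[:, boundary_col].astype(float)
    blended = (left + right) / 2.0
    out[:, boundary_col - 1] = (left + blended) / 2.0
    out[:, boundary_col] = (right + blended) / 2.0
    return out

# Example: two flat 8-pixel-wide blocks with a hard edge at column 8.
frame = np.hstack([np.full((8, 8), 100), np.full((8, 8), 140)])
print(smooth_vertical_boundary(frame, 8)[0])  # edge values move toward each other
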

Why H.264 is emerging as a standard for broadcast video encoding

The question of how high video quality can be achieved at low bit-rates with H.264 is answered as follows.

The main objective of the emerging H.264 standard is to provide a means to achieve
substantially higher video quality as compared to what could be achieved using any of
the existing video coding standards. Nonetheless, the underlying approach of H.264 is
similar to that adopted in previous standards such as H.263 and MPEG-4, and consists of
the following four main stages:

 Dividing each video frame into blocks of pixels so that processing of the video frame can be conducted at the block level.
 Exploiting the spatial redundancies that exist within the video frame by coding some of the original blocks through spatial prediction, transform, quantization and entropy coding (or variable-length coding).
 Exploiting the temporal dependencies that exist between blocks in successive frames, so that only changes between successive frames need to be encoded. This is accomplished by using motion estimation and compensation. For any given block, a search is performed in one or more previously coded frames, or in a future frame, to determine the motion vectors that are then used by the encoder and the decoder to predict the subject block (a block-matching sketch follows this list).
 Exploiting any remaining spatial redundancies within the video frame by coding the residual blocks, that is, the difference between the original blocks and the corresponding predicted blocks, again through transform, quantization and entropy coding.
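
As a hedged illustration of the motion estimation stage above, the sketch below performs an exhaustive block-matching search in a small window of the previous frame, using the sum of absolute differences (SAD) as the matching cost. Real encoders use much faster search strategies, sub-pixel refinement and rate-distortion optimization; this is only the basic idea.

import numpy as np

def find_motion_vector(prev_frame: np.ndarray, cur_frame: np.ndarray,
                       top: int, left: int, block: int = 16, search: int = 8):
    """Exhaustive block matching: find the displacement in the previous frame
    that best predicts the current block, by minimizing SAD."""
    target = cur_frame[top:top + block, left:left + block].astype(int)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + block > prev_frame.shape[0] or c + block > prev_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            candidate = prev_frame[r:r + block, c:c + block].astype(int)
            cost = np.abs(target - candidate).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost

# The encoder would transmit the motion vector plus the (transformed, quantized)
# residual; the decoder repeats the prediction from the same reference frame.
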

With H.264 a given video picture is divided into a number of small blocks referred to as
macroblocks. For example, a picture with QCIF resolution (176x144) is divided into 99
16x16 macroblocks. A similar macroblock segmentation is used for other frame sizes.
The luminance component of the picture is sampled at these frame resolutions, while the
chrominance components, Cb and Cr, are down-sampled by two in the horizontal and
vertical directions. In addition, a picture may be divided into an integer number of
"slices", which are valuable for resynchronization should some data be lost.

A frame that is coded without reference to any other frame is referred to as an I-picture. I-pictures are typically encoded by directly applying a transform to the different macroblocks in the frame. Consequently, encoded I-pictures are large in size, since a large amount of information is usually present in the frame and no temporal information is used as part of the encoding process. In order to increase the efficiency of the intra coding process in H.264, spatial correlation between adjacent macroblocks in a given frame is exploited. The idea is based on the observation that adjacent macroblocks tend to have similar properties, so a given macroblock can first be predicted from its previously coded neighbours. The difference between the actual macroblock and its prediction is then coded, which results in fewer bits to represent the macroblock of interest compared to applying the transform directly to the macroblock itself.

For regions with less spatial detail (flat regions), H.264 supports 16x16 intra prediction,
in which one of four prediction modes (DC, Vertical, Horizontal and Planar) is chosen for
the prediction of the entire luminance component of the macroblock. In addition, H.264
supports intra prediction for the 8x8 chrominance blocks also using four prediction
modes (DC, vertical, horizontal and planar). Finally, the prediction mode for each block
is efficiently coded by assigning shorter symbols to more likely modes, where the
probability of each mode is determined based on the modes used for coding of the
surrounding blocks.
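
Below is a minimal sketch of three of the four 16x16 intra prediction modes (DC, vertical and horizontal; the planar mode is omitted for brevity), built from the reconstructed pixels above and to the left of the current macroblock. The mode selection by plain SAD is an illustrative simplification of the encoder's actual decision process.

import numpy as np

def intra_16x16_predictions(top_row: np.ndarray, left_col: np.ndarray) -> dict:
    """Build DC, vertical and horizontal 16x16 intra predictions from the
    neighbouring reconstructed pixels (planar mode omitted for brevity)."""
    dc = np.full((16, 16), (top_row.mean() + left_col.mean()) / 2.0)
    vertical = np.tile(top_row.astype(float), (16, 1))                    # copy the row downwards
    horizontal = np.tile(left_col.astype(float).reshape(16, 1), (1, 16))  # copy the column across
    return {"DC": dc, "Vertical": vertical, "Horizontal": horizontal}

def best_mode(block: np.ndarray, predictions: dict) -> str:
    """Pick the mode with the smallest sum of absolute differences."""
    return min(predictions, key=lambda m: np.abs(block - predictions[m]).sum())

# Example: a block with strong vertical structure is best predicted vertically.
top = np.arange(16, dtype=float)
left = np.full(16, 8.0)
block = np.tile(top, (16, 1))
print(best_mode(block, intra_16x16_predictions(top, left)))  # "Vertical"
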

GSM ARCHITECTURE
A connection between two people, a caller and a called party, is the basis of a communication network. To provide this service, the network must be able to set up and maintain a call.

In a wireless mobile network, establishing a call is a complex process, because users must be able to move at will as long as they stay within the network service area. In order to achieve this, the network has to perform three functions:

a) Where is the subscriber?

b) Who is the subscriber?

c) What does the subscriber want?

Open Interfaces of GSM

The GSM specifications define two open interfaces within a GSM network. The first is between the mobile station (MS) and the base station (BS); this is known as the "air interface". This interface needs to be open because mobiles of different brands must be able to communicate with GSM networks from different suppliers.

The second interface is between the mobile services switching center (MSC) and the base station controller (BSC), known as the "A interface".

GSM makes use of distributed intelligence throughout the network. This decentralised intelligence is achieved by dividing the network into three subsystems:

1) Network switching subsystem (NSS)

2) Base station subsystem (BSS)

3) Network management subsystem (NMS)

The actual network required to establish a call is composed of the NSS and the BSS. The BSS is responsible for radio path control, and every call is connected through the BSS. The NSS takes care of call control functions; calls are always connected by and through the NSS.

The NMS is the operation and maintenance part of the network and is needed for the control of the whole GSM network. The operator maintains and observes network quality and the services offered through the NMS. These three subsystems are linked through the Air, A, and O&M interfaces.

MOBILE STATION:

The mobile station is a combination of terminal equipment and subscriber data. The terminal equipment is known as the mobile equipment, and the subscriber data is stored in a module called the SIM (Subscriber Identity Module).

In a GSM network the SIM identifies the user; it contains the identification number of the user and a list of available networks.
SUBSYSTEMS AND NETWORK ELEMENTS OF GSM:

The GSM network is divided into three subsystems: the network switching subsystem (NSS), the base station subsystem (BSS) and the network management subsystem (NMS).

NETWORK SWITCHING SUBSYSTEM:

It contains the network elements MSC, VLR, HLR, AC and EIR.

The main functions of the NSS are:

Call control:

It identifies the user, establishes the call and clears the call after the conversation is over.

Charging:

It collects charging information about the call and transfers it to the billing center.

Mobility:

It maintains information about the subscriber's location.

Subscriber data handling:

This covers permanent storage of subscriber data in the HLR and temporary storage of relevant data in the VLR.

MOBILE SERVICES SWITCHING CENTER:

The MSC is responsible for controlling calls in the mobile network. It identifies the origin and destination of a call as well as the type of call. An MSC acting as a bridge between the mobile network and a fixed network is called a gateway MSC. The tasks performed by the MSC include:

1) call control

2) initiation of paging

3) charging data collection

VISITOR LOCATION REGISTER (VLR)

The VLR is a database containing information about the subscribers currently in the service area of the MSC/VLR, such as:

 The identification number of the subscriber.
 Security information for authentication of the SIM card and for ciphering.
 The services that the subscriber can use.

The VLR is temporary: it holds this information only as long as the subscriber is in the service area. It also contains the address of the subscriber's home location register (HLR).
HOME LOCATION REGISTER:

The HLR maintains a permanent register of the subscriber, such as the identity number and the services that can be used. It also tracks the current location of the subscriber.

AUTHENTICATION CENTER:

The AC provides security information to the network for the verification of SIM cards.

EQUIPMENT IDENTITY REGISTER (EIR):

The EIR is used for security reasons and is responsible for IMEI checking. The IMEI consists of a type approval code, a final assembly code and the serial number of the mobile station.

The EIR maintains three lists:

 A number on the white list is allowed to operate normally.
 The grey list is used to monitor the use of mobile equipment suspected of being faulty.
 The black list contains those numbers that are reported stolen or are not allowed to operate.
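
A hedged sketch of the EIR check as a simple data structure: the IMEI is looked up against the three lists and the network decides whether to allow, monitor or block the equipment. The list contents, the example IMEI values and the helper name are illustrative only; a real EIR is an operator database, not hard-coded sets.

WHITE_LIST = {"490154203237518"}   # illustrative example IMEIs
GREY_LIST = {"356938035643809"}
BLACK_LIST = {"013440005274990"}

def check_imei(imei: str) -> str:
    """Classify a mobile equipment identity the way an EIR would."""
    if imei in BLACK_LIST:
        return "blocked"      # reported stolen or not allowed to operate
    if imei in GREY_LIST:
        return "monitored"    # suspected faulty equipment, allowed but tracked
    if imei in WHITE_LIST:
        return "allowed"      # operates normally
    return "unknown"          # policy for unlisted equipment is operator-specific

print(check_imei("490154203237518"))  # allowed
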

OSA/PARLAY
