discusses the general UMTS network and protocol architecture. Section-4 classifies transmission errors into different classes and introduces the UEP and NEWPRED protection schemes. Section-5 details the UEP scheme, while Section-6 details NEWPRED. In Section-7 the simulation scenarios and results are presented. All the achieved results are then discussed from the network operators' point of view in Section-8. Finally, the conclusion is drawn in Section-9.

2. Motion-compensated predictive coding

2.1 General principles

Image and video data compression refers to a process in which the amount of data used to represent images and video is reduced to meet a bit-rate requirement, while the quality of the reconstructed image or video satisfies a requirement for a certain application. This needs to be undertaken while ensuring that the computational complexity involved is affordable for the application and the end devices. Statistical analysis of video signals indicates that there is a strong correlation both between successive picture frames and within the picture elements themselves. Theoretically, decorrelation of these signals can lead to bandwidth compression without significantly affecting image or video resolution. Moreover, the insensitivity of the human visual system to the loss of certain spatio-temporal visual information can be exploited for further bit-rate reduction. Hence, subjectively lossy compression techniques can be used to reduce video bit-rates while maintaining an acceptable video quality.

Figure 1 shows the general block diagram of a generic interframe video codec.

Figure 1. Generic interframe video codec

In interframe predictive coding the difference between pixels in the current frame and their prediction values from the previous frame is coded and transmitted. At the receiving end, after decoding the error signal of each pixel, it is added to a similar prediction value to reconstruct the picture. The better the predictor, the smaller the error signal, and hence the transmission bit rate. If the video scene is static, a good prediction for the current pixel is the same pixel in the previous frame. However, when there is motion, assuming the movement in the picture is only a shift of object position, a pixel in the previous frame, displaced by a motion vector, is used.

Assigning a motion vector to each pixel is very costly. Instead, a group of pixels is motion compensated together, so that the motion-vector overhead per pixel can be very small. In a standard codec a block of 16 × 16 pixels, known as a macroblock (MB), is motion estimated and compensated. It should be noted that motion estimation is only carried out on the luminance parts of the pictures; a scaled version of the same motion vector is used for compensation of the chrominance blocks, depending on the picture format. Every MB is either interframe or intraframe coded, and the decision on the type of MB depends on the coding technique. Every MB is divided into 8 × 8 luminance and chrominance pixel blocks, and each block is then transformed via the Discrete Cosine Transform (DCT). There are four luminance blocks in each MB, but the number of chrominance blocks depends on the color resolution. After quantization, variable-length coding (entropy coding) is applied before the actual channel transmission.

2.2 Frame types

An entire video frame sequence is divided into Groups of Pictures (GOPs) to assist random access into the frame sequence and to add better error resilience. The first coded frame in the group is an I-frame, followed by an arrangement of P- and B-frames. The GOP length is normally defined as the distance between two consecutive I-frames, as shown in Figure-2. An Intra-coded (I) frame is coded using information only from itself. A Predictive-coded (P) frame is coded using motion-compensated prediction from a past reference frame. A Bidirectionally predictive-coded (B) frame is coded using motion- and texture-compensated prediction from both past and future reference frames.

Figure 2. Frame structures in a generic interframe video codec.

It is important to note here that the frame interval between two consecutive I-frames, and between two consecutive P-frames, has a significant effect on the received video quality as well as on the transmission bit-rate. It is, in effect, a trade-off between error-resilience capability and the required operational bit-rate.

A major disadvantage of this coding scheme is that transmission errors occurring in a frame which is used as a reference frame for other P- or B-frames cause errors to
130 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 2, February 2010
propagate into the following video sequence. This propagation continues until an intra-refresh is applied.

In the presented work this hierarchy of frame types is exploited. It should be obvious from the above discussion that the I-frame, which acts as a reference frame for the entire GOP, needs to be protected the most from transmission errors, with perhaps weaker protection for P-frames. Since no other frame depends on B-frames, errors occurring in B-frames affect just a single frame and do not propagate into the video sequence.

As an illustrative example, the video compression standard MPEG-4 is used throughout. The rest of the paper is organized into the following sections. Section-2 gives an introduction to the general principles of the predictive interframe MPEG-4 codec and the frame hierarchies and dependencies involved. Section-3 discusses the general UMTS network and protocol architecture.

3. UMTS network and protocol architecture

UMTS is a very complex communication system with a wide range of protocols working at different layers of its network architecture. UMTS has been designed to support a wide range of applications with different Quality of Service (QoS) profiles. It is essential to understand the overall architecture of the system before we consider details of transmission quality for a given application.

UMTS can be briefly divided into two major functional groupings: the Access Stratum (AS) and the Non-Access Stratum (NAS). The AS is the functional grouping of protocols specific to the access techniques, while the NAS addresses different aspects of different types of network connections. The Radio Access Bearer (RAB) is a service provided by the AS to the NAS in order to transfer data between the user equipment (UE) and the core network (CN). It uses different radio interface protocols at the Uu interface, which are layered into three major parts as shown in Figure 3.

The radio interface protocols are needed to set up, reconfigure and release the RAB. The radio interface is divided into three protocol layers:

Physical layer (L1)
Data Link layer (L2)
Network layer (L3)

The data link layer is split further into the Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP) and Broadcast/Multicast Control (BMC) sub-layers. Layer 3 and RLC are divided into Control (C) and User (U) planes as shown in Figure-3.

Figure 3. Radio interface protocol architecture, showing the C-plane (signalling) and U-plane (information), the L2/MAC sub-layer, and the physical layer (L1) connected by transport channels.

The service access points (SAPs) between the MAC and physical layers are provided by the transport channels (TrCHs), while the SAPs between the RLC and MAC sub-layers are provided by the logical channels (LcCHs). We will explain the RLC in further detail, as the services provided by the different RLC modes are exploited in order to provide different levels of UEP for video transmission.

3.2 RLC modes

The services provided by the RLC sub-layer include Transparent Mode (TM) data transfer, Unacknowledged Mode (UM) data transfer, Acknowledged Mode (AM) data transfer, maintenance of QoS as defined by the upper layers, and notification of unrecoverable errors. TM data transfer transmits upper-layer PDUs without adding any protocol information, possibly including segmentation/reassembly functionality. It ignores any errors in received PDUs and just passes them on to the upper layer for further processing. UM data transfer transmits upper-layer PDUs without guaranteeing delivery to the peer entity. PDUs which are received with errors are discarded without
any notice to the upper layer or to the peer entity. AM data transfer transmits upper-layer PDUs with guaranteed delivery to the peer entity; error-free delivery is ensured by means of retransmission. For this service, both in-sequence and out-of-sequence delivery are supported.

As mentioned before, in the presented scheme unequal error protection is applied to the different types of video frames transmitted. The I-frame, which carries the most crucial data, is always transmitted using the AM data transfer mode of service to ensure error-free delivery.

4. Error classification

From the discussion of the interframe predictive coding hierarchy of MPEG-4 it is clear that the various parts of the bit stream have different levels of importance. We have classified errors into three main classes based on their impact on the coded hierarchy:

… robust video communication system. Errors introduced during transmission can lead to a frame mismatch between the encoder and the decoder, which can persist until the next intra refresh occurs. Where an upstream data channel exists from the decoder to the encoder, NEWPRED or demand intra refresh can be used. NEWPRED is a technique in which the reference frame for interframe coding is replaced adaptively according to the upstream messaging from the decoder. NEWPRED uses upstream messages to indicate which segments are erroneously decoded. On receipt of this upstream message the encoder subsequently uses only the correctly decoded part of the prediction in its interframe coding scheme. This prevents temporal error propagation without the insertion of intra-coded MBs (macroblocks) and improves the video quality in noisy multipath environments.

The following section explains the concept in more detail.
5. Error detection

To implement NEWPRED it is important to identify errors at the frame level at the decoding end. Mapping errors identified by the lower layers to the application layer with the precision of a single video frame or video packet often results in a complicated process that consumes a considerable amount of processing resources and introduces severe processing delays. Insertion of CRC bits into the standard MPEG-4 bit-stream at frame level provides a simpler solution to this problem. The insertion of extra bits which are not defined as part of the standard video encoded sequence would normally result in the production of an incompatible bit-stream which standard decoders would not be able to decode. However, this is not the case if these bits are inserted at a particular place in the standard MPEG-4 bit-stream.

While decoding, the decoder is aware of the total number of macroblocks (MBs) in each frame. It starts searching for a new video frame header after decoding these macroblocks, and it ignores everything between the last macroblock of the frame and the next frame header as padding. If the generated CRC bits are inserted at this place, as shown in Figure-5, after the last macroblock and before the next header, the compatibility of the bit-stream with standard MPEG-4 is preserved. Such insertion of CRC bits does not affect the normal operation of any standard MPEG-4 decoder. Also, because the inserted CRC consists of only 16 bits, generated using the polynomial G16 defined for the MAC layer of the UMTS architecture, it cannot emulate any start code sequences.

6. Video clips and video quality measurement techniques used

For evaluation purposes three standard video test sequences were used: Mother-Daughter, Highway and Foreman. Each of these clips is 650 frames in length, is of QCIF (176 × 144) resolution, and is encoded with the standard MPEG-4 codec at 10 fps.

The objective video quality is measured by the PSNR (Peak Signal-to-Noise Ratio) as defined by ANSI T1.801.03-1996. A more sophisticated model, developed and tested by the ITS (Institute for Telecommunication Sciences), is the Video Quality Metric (VQM). VQM has been extensively tested on subjective data sets and has a significant, proven correlation with subjective assessment. One of the many models developed as part of this utility is the Peak-Signal-to-Noise-Ratio VQM (VQMP). This model is optimized for use on low bit-rate channels and is also used in this paper for the near-subjective analysis of the received video sequence.

The typical VQMP is given by:

VQMP = 1 / (1 + e^(0.1701 × (PSNR − 25.6675)))

The higher the PSNR value, the better the objective quality, whilst the lower the VQMP value, the better the subjective quality of the video sequence.
7. Upper layer protocol overheads and PDCP header compression

Each lower layer defined in the UMTS protocol stack provides services to the upper layer at defined Service Access Points (SAPs). These protocols add a header part to the video frame payload to exchange information with their peer entities. Depending upon the protocol configuration and the size of the video frames, these headers can be attached to each video frame, or multiple video frames can be carried as a single payload as defined by RFC-3016. As many of the successive headers contain a large amount of redundant data, header compression is applied in the form of the Packet Data Convergence Protocol (PDCP). With PDCP compression, higher-layer protocol headers such as the RTP, UDP and IP headers are compressed into one single PDCP header. In the presented simulation, header attachment, compression and header removal were implemented in the C programming language. Figure-6 shows a typical structure of the video frame payload and header before it is submitted to the RLC layer for further processing.

Figure 6. PDCP header compression

Once the PDCP compression is achieved, the packet is submitted to the RLC layer for further processing. The RLC layer can be configured in any one of the transparent, unacknowledged or acknowledged modes of transmission. The RLC then submits the PDU to the lower layers, where the MAC layer and physical layer procedures are applied as appropriate.

Once these PDUs are mapped onto the different transport channels, they are transmitted using the WCDMA air interface. The physical layer of WCDMA is simulated using the SPW tool by CoWare, which models an environment to generate error patterns for the various types of channel propagation conditions defined by the 3GPP standards. A 64 kbps downlink data channel and a 2.5 kbps control channel were used for this UMTS simulation. These two channels were multiplexed and transmitted over the WCDMA air interface.

The transmission time interval, transmission block size, transmission block set size, CRC attachment, channel coding, rate matching and interleaving parameters were configured for both channels in compliance with the 3GPP TS 34.108 specification. The typical parameter sets for reference RABs (Radio Access Bearers) and SABs (Signalling Access Bearers), and the relevant combinations of them, are presented in this standard.

The different channel propagation conditions used in the simulation were static, multi-path fading, moving and birth-death propagation. These channel conditions are described in some detail in the following section.

8. Propagation conditions

Four different standardized propagation conditions (static, multi-path fading, moving and birth-death) were used to generate the different error patterns. The typical parameter sets for conformance testing, as specified in 3GPP TS 25.101, are used for the radio interface configuration. A common set of parameters for all kinds of environment is listed in Table 1, while any parameters specific to an environment are mentioned in the respective sections.

Table 3. Common set of parameters
Interference: -60 dB

Generated error patterns are applied to the data transmitted from the RLC layer. The different RLC modes are simulated using the C programming language.

Table 2 lists the received values of BLER and FER for this propagation condition.

Table 4. Parameters for static propagation conditions
BLER (Block Error Rate): 0.034
FER (Frame Error Rate): 0.0923

8.2 Multi-path fading propagation conditions

9. Simulation Results

As mentioned before, VQMP is used as the quality measurement metric in this simulation. Table 6 presents the VQMP values obtained during the simulations. The following conventions are used in Table 6.
Dr. Bhumin Pathak received his M.Sc. and Ph.D. degrees from Oxford Brookes University, Oxford, UK. He has been working as a Systems Engineer at Airvana Inc. since March 2007.

Dr. Geoff Childs is a Principal Lecturer at the School of Technology at Oxford Brookes University, Oxford, UK.

Dr. Maaruf Ali is a Senior Lecturer at the School of Technology at Oxford Brookes University, Oxford, UK.