
128 (IJCNS) International Journal of Computer and Network Security,

Vol. 2, No. 2, February 2010

Effect of error concealment methods on UMTS radio network capacity and coverage planning

Dr. Bhumin H. Pathak 1, Dr. Geoff Childs 2 and Dr. Maaruf Ali 3

1 Airvana Inc., Chelmsford, USA, bhumin.pathak@gmail.com
2 School of Technology, Oxford Brookes University, Oxford, UK, gnchilds@brookes.ac.uk
3 School of Technology, Oxford Brookes University, Oxford, UK, mali@brookes.ac.uk

Abstract: The radio spectrum is a precious resource, and careful utilization of it requires optimization of all the processes involved in data delivery. Achieving maximum coverage and capacity with the required QoS is of the utmost importance for network operators to maximize revenue generation. Whilst current methods of video compression accelerate transmission by reducing the number of bits to be transmitted over the network, they have the unfortunate trade-off of increasing signal sensitivity to transmission errors. In this paper we present an approach to transmitting MPEG-4 coded video in a way that optimizes the radio resources while maintaining the required received video quality, thereby providing increased capacity for network operators. The two methods used for this purpose are selective retransmission of the erroneous parts of frames and dynamic changes to the reference frames used for predictive coding. With these two methods implemented, network operators can still provide the required video quality despite low signal-to-interference ratios and can therefore increase the number of users in the network. With the help of simulation results we show the performance enhancements these methods can provide for various channel propagation environments. Comparison is performed using the standard PSNR metric and also using the VQMp metric for subjective evaluation.

Keywords: MPEG-4, UMTS, NEWPRED, UEP, network coverage and optimization.

1. Introduction

Future wireless mobile communication is expected to provide a wide range of services including real-time video with acceptable quality. Unlike wired networks, wireless networks struggle to provide high bandwidth for such services, which therefore require highly compressed video transmission. With high compression comes high sensitivity to transmission errors. Highly compressed video bitstreams like MPEG-4 can lose a considerable amount of information with the introduction of just a few errors. In this context, it is important to have a video codec with a repertoire of efficient error-resilience tools. Transmission errors in a mobile environment can vary from single bit errors to large burst errors or even intermittent loss of the connection. The widely varying nature of the wireless channel limits the use of classical forward error correction (FEC) methods, which may require a large amount of redundant data to overcome bursty errors. In this case, as an alternative to FEC, an optimized automatic repeat request (ARQ) scheme with selective repeat and dynamic frame referencing provides better performance.

Correcting or compensating for errors in a compressed video stream is complicated by the fact that the bit-energy concentration in the compressed video is not uniformly distributed, especially with motion-compensated interframe predictive codecs like MPEG-4. The encoded bit-stream has a high degree of hierarchical structure and dependencies, which impact on error correction techniques. The problem is compounded by the fact that it is also not feasible to apply a generalized error-resilience scheme to the video stream, as this may impact on standardized parameters in the different layers of the UMTS protocol stack. For example, when overhead or redundancy is added to the existing video standard for FEC implementation, the modified video stream can become incompatible with the standard, and the resulting data format may not be decodable by all standard video decoders. It is therefore important that any overhead or redundancy be added in a way which does not make the modified video bit-stream incompatible with the standard.

With reference to the above-mentioned constraints, it is vital to design an error-protection mechanism that exploits the hierarchical structure of the interframe video compression algorithm, is compatible with the standardized wireless communication system, and preserves the integrity of the standard bitstream.

In this paper we present one such scheme, which uses unequal error protection (UEP) in conjunction with ARQ to exploit the different modes at the radio link control (RLC) layer of the UMTS architecture. This scheme is coupled with a dynamic frame referencing (NEWPRED) scheme which exploits frame dependencies and interframe intervals. The techniques discussed in this paper are relevant and applicable to a wide variety of interframe video coding schemes. As an illustrative example, the video compression standard MPEG-4 is used throughout.

The rest of the paper is organized into the following sections. Section-2 gives an introduction to the general principles of the predictive interframe MPEG-4 codec and the frame hierarchies and dependencies involved. Section-3

discusses the general UMTS network and protocol architecture. Section-4 classifies transmission errors into different classes and introduces the UEP and NEWPRED protection schemes. Section-5 describes the error detection method used, while Section-6 presents the video clips and video quality measurement techniques. Section-7 covers the upper layer protocol overheads and the header compression protocol used in the UMTS architecture. Section-8 introduces the various radio propagation environments used in the simulations. In Section-9 the simulation scenarios and results are presented. Finally, the achieved results are discussed from the network operators' point of view in Section-10.

2. Motion-compensated predictive coding

2.1 General Principles

Image and video data compression refers to a process in which the amount of data used to represent images and video is reduced to meet a bit rate requirement, while the quality of the reconstructed image or video satisfies a requirement for a certain application. This needs to be undertaken while ensuring that the complexity of the computation involved remains affordable for the application and end-devices. Statistical analysis of video signals indicates that there is a strong correlation both between successive picture frames and within the picture elements themselves. Theoretically, decorrelation of these signals can lead to bandwidth compression without significantly affecting image or video resolution. Moreover, the insensitivity of the human visual system to the loss of certain spatio-temporal visual information can be exploited for further bitrate reduction. Hence, subjectively lossy compression techniques can be used to reduce video bitrates while maintaining an acceptable video quality.

Figure 1 shows the general block diagram of a generic interframe video codec.

Figure 1. Generic interframe video codec

In interframe predictive coding the difference between the pixels in the current frame and their prediction values from the previous frame is coded and transmitted. At the receiving end, after decoding the error signal of each pixel, it is added to a similar prediction value to reconstruct the picture. The better the predictor, the smaller the error signal and hence the transmission bit rate. If the video scene is static, a good prediction for the current pixel is the same pixel in the previous frame. However, when there is motion, assuming the movement in the picture is only a shift of object position, then a pixel in the previous frame, displaced by a motion vector, is used.

Assigning a motion vector to each pixel is very costly. Instead, a group of pixels is motion compensated, such that the motion vector overhead per pixel can be very small. In a standard codec a block of 16 × 16 pixels, known as a Macroblock (MB), is motion estimated and compensated. It should be noted that motion estimation is only carried out on the luminance parts of the pictures. A scaled version of the same motion vector is used for the compensation of the chrominance blocks, depending on the picture format. Every MB is either interframe or intraframe coded; the decision on the type of MB depends on the coding technique. Every MB is divided into 8 × 8 luminance and chrominance pixel blocks. Each block is then transformed via the DCT (Discrete Cosine Transform). There are four luminance blocks in each MB, but the number of chrominance blocks depends on the color resolution. Then, after quantization, variable length coding (entropy coding) is applied before the actual channel transmission.

2.2 Frame types

An entire video frame sequence is divided into Groups of Pictures (GOPs) to assist random access into the frame sequence and to add better error-resilience. The first coded frame in the group is an I-frame, followed by an arrangement of P- and B-frames. The GOP length is normally defined as the distance between two consecutive I-frames, as shown in Figure-2. An Intra-coded (I) frame is coded using information only from itself. A Predictive-coded (P) frame is coded using motion compensated prediction from a past reference frame. A Bidirectionally predictive-coded (B) frame is coded using motion and texture compensated prediction from past and future reference frames.

Figure 2. Frame structures in a generic interframe video codec

It is important to note here that the frame interval between two consecutive I-frames and between two consecutive P-frames has a significant effect on the received video quality as well as on the transmission bitrate. It is in fact a trade-off between error-resilience capabilities and the required operational bitrate.

A major disadvantage of this coding scheme is that transmission errors occurring in a frame which is used as a reference frame for other P- or B-frames cause errors to propagate into the following video sequence. This propagation continues until an intra-refresh is applied.
In the presented work this hierarchy of frame types is exploited. It should be obvious from the above discussion that the I-frame, which acts as a reference frame for the entire GOP, needs to be protected the most from transmission errors, with perhaps weaker protection for P-frames. Since no other frame depends on B-frames, errors occurring in B-frames affect just a single frame and do not propagate into the video sequence.

3. UMTS network and protocol architecture

3.1 Brief introduction

UMTS is a very complex communication system with a wide range of protocols working at the different layers of its network architecture. UMTS has been designed to support a wide range of applications with different Quality of Service (QoS) profiles. It is essential to understand the overall architecture of the system before we consider details of transmission quality for a given application.

UMTS can be briefly divided into two major functional groupings: the Access Stratum (AS) and the Non-Access Stratum (NAS). The AS is the functional grouping of protocols specific to the access techniques, while the NAS addresses the different aspects of the different types of network connections. The Radio Access Bearer (RAB) is a service provided by the AS to the NAS in order to transfer data between the user equipment (UE) and the core network (CN). It uses different radio interface protocols at the Uu interface, which are layered into three major parts as shown in Figure 3.

The radio interface protocols are needed to set up, reconfigure and release the RAB. The radio interface is divided into three protocol layers:

Physical layer (L1)
Data Link layer (L2)
Network layer (L3)

The data link layer is split further into the Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP) and Broadcast/Multicast Control (BMC) sub-layers. Layer 3 and the RLC are divided into Control (C) and User (U) planes as shown in Figure-3.

Figure 3. Radio interface protocol structure for UMTS

The service access points (SAPs) between the MAC and physical layers are provided by the transport channels (TrCHs), while the SAPs between the RLC and MAC sub-layers are provided by the logical channels (LcCHs). We will explain the RLC in further detail, as the services provided by the different RLC modes are exploited in order to provide different levels of UEP for video transmission.

3.2 RLC modes

The services provided by the RLC sub-layer include Transparent Mode (TM) data transfer, Unacknowledged Mode (UM) data transfer, Acknowledged Mode (AM) data transfer, maintenance of the QoS defined by the upper layers, and notification of unrecoverable errors. TM data transfer transmits upper layer PDUs without adding any protocol information, possibly including segmentation/reassembly functionality. It ignores any errors in received PDUs and simply passes them on to the upper layer for further processing. UM data transfer transmits upper layer PDUs without guaranteeing delivery to the peer entity. PDUs which are received with errors are discarded without any notice to the upper layer or to the peer entity. AM data transfer transmits upper layer PDUs with guaranteed delivery to the peer entity. Error-free delivery is ensured by means of retransmission. For this service, both in-sequence and out-of-sequence delivery are supported.
As mentioned before, in the presented scheme unequal error protection is applied to the different types of video frames transmitted. The I-frame, which carries the most crucial data, is always transmitted using the AM data transfer mode of service to ensure error-free delivery.

4. Error classification

From the discussion of the hierarchical structure of interframe predictive coding in MPEG-4 it is clear that the various parts of the bit stream have different levels of importance. We have classified errors into three main classes based on their impact on the coded hierarchy:

• Most critical errors – Class-A errors
• Less critical errors – Class-B errors
• Least critical errors – Class-C errors

Class-A errors are errors which can significantly degrade the received video quality. This class includes errors introduced in the video sequence header, I-frame headers and I-frame data. Class-B includes errors in P-frame headers and P-frame data, while Class-C includes errors in B-frame headers and B-frame data.

In the following sections we discuss two different methods to deal with Class-A and Class-B errors separately.

4.1 Unequal Error Protection (UEP) for Class-A errors

As mentioned before, the I-frame has relatively more importance than the P- and B-frames. Hence the I-frame data needs to be transmitted with higher protection using the proposed UEP scheme. This is achieved by using a different mode of transmission at the RLC layer of UMTS. The I-frames are transmitted using AM while the other frames are transmitted using TM. If any error is introduced during radio transmission, the specific PDU of the I-frame is retransmitted using AM. This guarantees error-free delivery of the I-frames.

4.2 NEWPRED for Class-B errors

The MPEG-4 ISO/IEC 14496 (Part-2) standard provides error robustness and resilience capabilities to allow access to image or video information over a wide range of storage and transmission media. The error resilience tools developed for this part of ISO/IEC 14496 can be divided into three major categories: synchronization, data recovery and error concealment. The NEWPRED feature falls into the category of error concealment procedures. Recovery from temporal error propagation is an indispensable component of any error-robust video communication system. Errors introduced during transmission can lead to frame mismatch between the encoder and the decoder, which can persist until the next intra refresh occurs. Where an upstream data channel exists from the decoder to the encoder, NEWPRED or demand intra refresh can be used. NEWPRED is a technique in which the reference frame for interframe coding is replaced adaptively according to the upstream messaging from the decoder. NEWPRED uses upstream messages to indicate which segments are erroneously decoded. On receipt of such an upstream message the encoder will subsequently use only the correctly decoded part of the prediction in its interframe coding. This prevents temporal error propagation without the insertion of intra-coded MBs (Macroblocks) and improves the video quality in noisy multipath environments.

The following paragraphs explain the concept in more detail.

Figure 4. Error propagation due to interframe decoding dependencies

When a raw video sequence is encoded using MPEG-4, each of the raw video frames is categorized according to the way in which predictive encoding references are used. An Intra-coded (I) frame is coded using information only from itself. A Predictive-coded (P) frame is coded using motion compensated prediction from a past reference frame. A Bidirectionally predictive-coded (B) frame is coded using motion and texture compensated prediction from past and future reference frames. A disadvantage of this coding scheme is that transmission errors occurring in a frame which is used as a reference frame for other P- or B-frames cause errors to propagate into the video sequence. This propagation continues until an intra-refresh is applied. In the example shown in Figure-4, an error occurs in frame P3, which acts as a reference frame for P4, subsequent P-frames and B-frames (B5, B6 etc.), until the next intra-refresh frame (I2) occurs.

Where a transmission error has damaged crucial parts of the bit-stream, such as a frame header, the decoder may be unable to decode the frame, which it then drops.

If this dropped frame is a P-frame, none of the frames that are subsequently coded with reference to this dropped P-frame can be decoded. So, in effect, all subsequent frames until the next intra-refresh are dropped. This situation can seriously degrade the received video quality, leading to the occurrence of often long sequences of static or frozen frames.

If, through the use of an upstream message, the encoder is made aware of errors in a particular P-frame (P3), the encoder can change the reference frame for the next P-frame (P4) to the previous one which was received correctly (P2). P-frames and B-frames after P4 then refer to the correctly decoded P4, rather than the faulty P3 frame. The technique therefore reduces the error propagation and frame loss arising from dropped P-frames, and can significantly improve the received video quality.

To implement the NEWPRED feature, both the encoder and decoder need buffer memories for the reference frames. The required buffer memory depends on the strategy of reference frame selection by the encoder and on the transmission delay between the encoder and decoder. In this paper results for two different schemes are reported.

5. Error detection

To implement NEWPRED it is important to identify errors at the frame level at the decoding end. Mapping errors identified by the lower layers to the application layer with the precision of a single video frame or video packet often results in a complicated process, consuming a considerable amount of processing resource and introducing severe processing delays. Insertion of CRC bits into the standard MPEG-4 bit-stream at frame level provides a simpler solution to this problem. Insertion of extra bits which are not defined as part of the standard video encoded sequence would normally result in the production of an incompatible bit-stream which standard decoders would not be able to decode. However, this is not the case if these bits are inserted at a particular place in the standard MPEG-4 bit-stream.

While decoding, the decoder is aware of the total number of macroblocks (MBs) in each frame. It starts searching for a new video frame header after decoding these macroblocks, and it ignores everything between the last macroblock of the frame and the next frame header as padding. If the generated CRC bits are inserted at this place, as shown in Figure-5, after the last macroblock and before the next header, this preserves the compatibility of the bit-stream with standard MPEG-4. Such insertion of CRC bits does not affect the normal operation of any standard MPEG-4 decoder. Also, because the inserted CRC consists of only 16 bits, generated using the polynomial G16 defined for the MAC layer of the UMTS architecture, it is not possible for it to emulate any start code sequences.

Figure 5. CRC insertion

This method adds an extra 16 bits of overhead to each frame, but the performance improvement in video quality with the NEWPRED implementation coupled with CRC error detection justifies this overhead.

The discussion in the previous sections described the proposed methods of error protection for the different parts of the video stream. The following sections describe the simulation scenarios used to test the implementation of these methods. The entire end-to-end transmission and reception system scenario is developed using different simulation tools, ranging from simple C programming to SPW 4.2 by CoWare and OPNET 10.5A.

6. Video clips and video quality measurement techniques used

For evaluation purposes three standard video test sequences were used, these being: Mother-Daughter, Highway and Foreman. Each of these clips is 650 frames in length, of QCIF (176 × 144) resolution, and is encoded with the standard MPEG-4 codec at 10 fps.

The objective video quality is measured by the PSNR (Peak Signal to Noise Ratio) as defined by ANSI T1.801.03-1996. A more sophisticated model, developed and tested by the ITS (Institute for Telecommunication Sciences), is the Video Quality Metric (VQM). VQM has been extensively tested on subjective data sets and has a significantly proven correlation with subjective assessment. One of the many models developed as a part of this utility is the Peak-Signal-to-Noise-Ratio VQM (VQMP). This model is optimized for use on low-bit-rate channels and is used in this paper for the near-subjective analysis of the received video sequences.

The typical VQMP is given by:

VQMp = 1 / (1 + e^(0.1701 × (PSNR − 25.6675)))

The higher the PSNR value the better the objective quality, whilst the lower the VQMP value the better the subjective quality of the video sequence.

7. Upper layer protocol overheads and PDCP header compression

Each lower layer defined in the UMTS protocol stack provides services to the upper layer at defined Service Access Points (SAPs). These protocols add a header part to the video frame payload to exchange information with their peer entities. Depending upon the protocol configuration and the size of the video frames, these headers can be attached to each video frame, or multiple video frames can be used as a single payload, as defined by RFC-3016. As many of the successive headers contain a large amount of redundant data, header compression is applied in the form of the Packet Data Convergence Protocol (PDCP). With PDCP compression, higher layer protocol headers like the RTP, UDP and IP headers are compressed into one single PDCP header. In the presented simulation, header attachment, compression and header removal were implemented in the C programming language. Figure-6 shows a typical structure of the video frame payload and headers before submission to the RLC layer for further processing.

Figure 6. PDCP header compression

Once the PDCP compression is achieved, the packet is submitted to the RLC layer for further processing. The RLC layer can be configured in any one of the transparent, unacknowledged or acknowledged modes of transmission. The RLC then submits the PDUs to the lower layers, where the MAC layer and physical layer procedures are applied as appropriate.

In the presented simulation, each video frame coming from the application layer is mapped to the RTP layer. Each frame is mapped into a separate RTP packet after the addition of the above-mentioned RTP, UDP and IP headers with PDCP compression and, if required, RLC headers.

For the TM RLC service, the RLC SDU cannot be larger than the RLC PDU size. For this reason, if the video frame size is larger than the RLC payload size for TM, the frame is fragmented into different RTP packets. Other protocol headers are then added to these packets separately. Each RLC SDU is then either the same size as the RLC PDU or smaller.

For the AM RLC service, an entire frame can be considered as one RLC SDU regardless of its size. For this reason protocol headers are added to each video frame. This RLC SDU is then fragmented into different RLC PDUs at the RLC layer.

Once these PDUs are mapped onto the different transport channels, they are transmitted over the WCDMA air interface. The physical layer of WCDMA is simulated using the SPW tool by CoWare, which models an environment to generate error patterns for the various types of channel propagation conditions defined by the 3GPP standards. A 64 kbps downlink data channel and a 2.5 kbps control channel were used for this UMTS simulation. These two channels were multiplexed and transmitted over the WCDMA air interface.

The transmission time interval, transport block size, transport block set size, CRC attachment, channel coding, rate matching and interleaving parameters were configured for both channels in compliance with the 3GPP TS 34.108 specification. The typical parameter sets for reference RABs (Radio Access Bearers) and SABs (Signalling Access Bearers), and the relevant combinations of them, are presented in that standard.

The different channel propagation conditions used in the simulation were static, multi-path fading, moving and birth-death propagation. These channel conditions are described in some detail in the following section.

8. Propagation conditions

Four different standardized propagation conditions – static, multi-path fading, moving and birth-death – were used to generate different error patterns. The typical parameter sets for conformance testing as specified in 3GPP TS 25.101 are used for the radio interface configuration. A common set of parameters for all environments is listed in Table 1, while any parameters specific to an environment are mentioned in the respective subsections.

Table 1. Common set of parameters
Interference: -60 dB
Received signal / noise (SNR): -3.0 dB
AWGN noise: 4 × 10^-9 W
Eb/No (overall): 6.01 dB
BER (Bit Error Rate): 0.001
Data rate (downlink): 64 kbps

8.1 Static propagation condition

As defined in 3GPP TS 25.101 V6.3.0, the propagation condition for a static performance measurement is an Additive White Gaussian Noise (AWGN) environment. No fading or multi-path exists for this propagation model.

Table 2 lists the received values of BLER and FER for this propagation condition.

Table 2. Parameters for static propagation conditions
BLER (Block Error Rate): 0.034
FER (Frame Error Rate): 0.0923

8.2 Multi-path fading propagation conditions

Multi-path fading normally follows a Rayleigh fading pattern. In this simulation Case-2 as specified in TS 25.101 is used, with frequency band-1 (2112.5 MHz) and the number of paths set to 3, with relative delays between the paths of 0, 976 and 20000 ns and a mean power of 0 dB for all three paths. The delay model used in this case is fixed. The vehicle speed is configured to be 3 km/h. The received values of BLER and FER are given in Table 3.

Table 3. Parameters for multi-path fading propagation conditions
BLER (Block Error Rate): 0.007
FER (Frame Error Rate): 0.0225

8.3 Moving propagation conditions

The dynamic propagation condition for this environment, for the test of the baseband performance, is a non-fading channel model with two taps as described by 3GPP TS 25.101. One of the taps, Path-0, is static, and the other, Path-1, is moving. Both taps have equal strengths and phases, but an unequal time difference exists between them. The received values of BLER and FER are given in Table 4.

Table 4. Parameters for moving propagation conditions
BLER (Block Error Rate): 0.031
FER (Frame Error Rate): 0.088

8.4 Birth-Death propagation conditions

These conditions are similar to the moving propagation conditions except that in this case both taps are moving. The positions of the paths appear randomly and are selected with equal probability. Table 5 lists the received values of BLER and FER for this propagation condition.

Table 5. Parameters for birth-death propagation conditions
BLER (Block Error Rate): 0.037
FER (Frame Error Rate): 0.0851

Generated error patterns are applied to the data transmitted from the RLC layer. The different RLC modes are simulated using the C programming language.

9. Simulation Results

As mentioned before, VQMP is used as the quality measurement metric in this simulation. Table 6 presents the VQMP values obtained during the simulations. The following conventions are used in Table 6.

Video clip names:
- Mother and Daughter – MD
- Highway – HW
- Foreman – FM

VQMP without UEP and without NEWPRED – Results A
VQMP with UEP and without NEWPRED – Results B
VQMP without UEP and with NEWPRED – Results C
VQMP with UEP and with NEWPRED – Results D

Note that the lower the VQMP value, the better the subjective image quality.

Table 6. Simulation results

Video Clip | Results A | Results B | Results C | Results D
Static Environment
MD | 0.70 | 0.46 | 0.45 | 0.25
HW | 0.64 | 0.36 | 0.32 | 0.19
FM | 0.85 | 0.83 | 0.74 | 0.68
Multi-path Environment
MD | 0.27 | 0.18 | 0.18 | 0.16
HW | 0.37 | 0.21 | 0.18 | 0.13
FM | 0.62 | 0.53 | 0.47 | 0.34
Moving Environment
MD | 0.63 | 0.45 | 0.44 | 0.23
HW | 0.55 | 0.35 | 0.31 | 0.26
FM | 0.83 | 0.70 | 0.67 | 0.52
Birth-Death Environment
MD | 0.58 | 0.32 | 0.35 | 0.27
HW | 0.49 | 0.40 | 0.31 | 0.22
FM | 0.85 | 0.72 | 0.68 | 0.45

The following observations can be made on the above results. The received VQMP values for all video sequences are improved by the implementation of the UEP and NEWPRED methods. It can be observed that in most cases the VQMP values show greater improvement with the implementation of the NEWPRED method than with the UEP method. Errors in the wireless environment occur in bursts and are random in nature. Due to the time-varying nature of the wireless environment it is not possible to predict the exact location of an error. The NEWPRED method provides protection to the P-frames, while UEP is aimed at protecting the I-frames. As P-frames are much more frequent than I-frames in the encoded video sequence, P-frames are more susceptible to these bursty, randomly distributed errors.
the NEWPRED implementation gives better performance
than the UEP implementation. Combined implementation of
the UEP and NEWPRED methods always outperforms a
single-method implementation and gives the best quality of
received video. The same observation holds for performance
assessment using the VQMP score in most cases. Objective
assessment via VQMP scores depends heavily on bit-by-bit
comparison, which does not take into consideration the
number of frozen frames. In most cases frozen frames give
better correlation with the original frames than frames
containing propagation errors.
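Since VQMP here is a full-reference, PSNR-style measure [10], the frozen-frame effect can be illustrated with a small sketch. The psnr() helper and the pixel values below are illustrative assumptions, not the actual VQMP tool:

```python
import math

def psnr(ref, test, peak=255.0):
    """Full-reference PSNR between two equal-length pixel sequences."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")          # identical frames
    return 10 * math.log10(peak ** 2 / mse)

# Two consecutive frames of a slow-moving scene differ only slightly.
frame_prev     = [100, 102, 101,  99, 100, 103]   # last correctly decoded frame
frame_expected = [101, 103, 102, 100, 101, 104]   # what should have arrived

frozen    = frame_prev                       # decoder freezes: repeats frame_prev
corrupted = [101, 103, 240,  12, 101, 104]   # burst error corrupts two pixels

print(psnr(frame_expected, frozen))      # high score: frames barely differ
print(psnr(frame_expected, corrupted))   # low score: error pixels dominate MSE
```

The frozen frame scores far better than the corrupted one, which is why a bit-by-bit metric can understate the subjective cost of freezing.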

10. Network capacity and coverage improvements

As shown by the simulation results above, implementation
of UEP and NEWPRED enhances the received video
quality. This implies that the same video quality, or QoS,
can be attained at a lower Eb/No if the two proposed
methods are utilized. A reduced Eb/No value has a direct
impact on radio network coverage and capacity planning for
UMTS networks [15], [16]. A number of variable
parameters determine the resultant received video quality.
The nature of the video clip, the encoding parameters, the
channel propagation environment and many more variables
can have varying effects on the overall video quality. For
these reasons it is not easy to quantify exactly how much
reduction in the Eb/No value would be achieved using the
techniques described above. For the simulation, a reduction
of 1 dB is assumed. Network capacity and coverage
enhancements are quantified assuming a 1 dB reduction in
the required Eb/No value for the video service.

The basic radio network used for this simulation is shown
in the following figure. 10 Node-B sites, each with three
sectors, are simulated. 2000 UEs are randomly distributed
over an area of 6 × 6 km². The UMTS pedestrian
environment for the micro cell is selected as the propagation
model. The mobiles are assumed to be moving at a speed of
3 km/h. A downlink bit-rate of 64 kbps is selected, and the
other link-layer parameters and results are imported from
the simulations discussed above. In the first run, an Eb/No
value of 4.6 dB is used and the total throughput per cell is
represented in graphical form as shown in Figure 7.

In the second run, the Eb/No value is decreased by 1 dB to
3.6 dB and the total throughput per cell is presented in
graphical form in Figure 8.

Figure 8. Throughput in downlink per cell at Eb/N0 = 3.6 dB

As can be clearly seen by comparing Figure 7 with Figure
8, the decrease in the Eb/No value by 1 dB results in a
significant increase in the total throughput per cell in the
downlink direction.

11. Conclusions

As can be seen from the simulation results, implementation
of UEP and NEWPRED results in significant improvements
in the received video quality. The improvements achieved
using these methods provide extra margin for network
operators to increase capacity. With implementation of
these error concealment methods the same video quality can
be achieved using a lower Eb/No, which in turn provides
flexibility for the network operator to increase the number
of users. This implementation requires some processing
overhead on both the encoder and decoder sides, but
considering the increasing processing power of mobile
stations, this should not present a major obstacle, and the
error concealment methods described should provide
considerable enhancements.
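The capacity margin from the assumed 1 dB relaxation can be sketched with the simplified single-service downlink load-factor relation found in the planning literature [15], [16]. The orthogonality, other-to-own-cell interference and maximum-load values below are illustrative assumptions, not outputs of the simulation:

```python
def users_per_cell(ebno_db, chip_rate=3.84e6, bit_rate=64e3,
                   orthogonality=0.6, other_to_own=0.65, max_load=0.8):
    """Approximate downlink users per cell from the load-factor relation
    eta_DL = N * (Eb/No) / (W/R) * ((1 - alpha) + i)."""
    ebno_lin = 10 ** (ebno_db / 10)             # dB -> linear
    processing_gain = chip_rate / bit_rate      # W/R = 3.84e6 / 64e3 = 60
    per_user_load = (ebno_lin / processing_gain
                     * ((1 - orthogonality) + other_to_own))
    return max_load / per_user_load             # users supportable at max load

n_baseline = users_per_cell(4.6)   # required Eb/No without UEP/NEWPRED
n_improved = users_per_cell(3.6)   # 1 dB lower with the concealment methods
print(n_improved / n_baseline)     # = 10**0.1, about a 26% capacity gain
```

Under these assumptions the 1 dB reduction maps directly to roughly 26% more supportable users per cell, consistent with the direction of the throughput gain seen between the two simulation runs.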
References:
[1] International Standard ISO/IEC 14496-2: Information
Technology - Coding of audio-visual objects-Part 2,
Visual, International Organization for Standardization.
2001.
[2] 3GPP, Technical Specification Group Radio Access
Network; Radio Link Control (RLC) protocol
specification; 3GPP TS 25.322, V4.12.0.
[3] Pereira, F., Ebrahimi, T., The MPEG-4 Book; Prentice
Hall PTR (July 20, 2002), ISBN-10: 0130616214.
[4] 3GPP, Technical Specification Group Radio Access
Network;Radio interface protocol architecture; 3GPP
TS 25.301 (2002-09), Ver 5.2.0.
Figure 7. Throughput in downlink per cell at Eb/N0 = 4.6 dB
[5] 3GPP, Technical Specification Group Services and
    System Aspects; Quality of Service (QoS) concept and
    architecture; 3GPP TS 23.107, v4.0.0.
[6] 3GPP, Technical Specification Group Radio Access
Network; MAC protocol Specification; 3GPP TS
25.321, V4.0.0.
[7] 3GPP, Technical Specification Group Radio Access
    Network; PDCP protocol specification; 3GPP TS
    25.323 (2003-12), Ver 6.0.0.
[8] 3GPP, Technical Specification Group, Radio Access
Network; Broadcast/Multicast Control BMC; 3GPP TS
25.324, v4.0.0.
[9] Worrall, S., Sadka, A., Sweeney, P., Kondoz, A.,
    Backward compatible user defined data insertion into
    MPEG-4 bitstream, IEE Electronics Letters.
[10] ATIS Technical Report T1.TR.74-2201: Objective
Video Quality Measurement using a Peak-Signal-to-
Noise Ratio (PSNR) Full Reference Technique. October
2001, Alliance for Telecommunications Industry
Solutions.
[11] Wolf, S., Pinson, M., Video Quality Measurement
Techniques. June 2002, ITS, NTIA Report 02-392.
[12] RFC-3016, RTP Payload Format for MPEG-4
Audio/Visual Streams, November 2000.
[13] 3GPP, Technical Specification Group Terminals;
Common test environment for UE conformance testing;
3GPP TS 34.108 (2003-12), Ver 4.9.0.
[14] 3GPP, Technical Specification Group Radio Access
Network; UE radio transmission and reception (FDD);
3GPP TS 25.101 (2003-12), Ver 6.3.0.
[15] Laiho, J., Wacker, A., Novosad, T., Radio Network
Planning and Optimization for UMTS, Wiley, 2nd
edition (December 13, 2001).
[16] Holma, H., Toskala, A., WCDMA for UMTS: Radio
Access for Third Generation Mobile Communications,
Wiley, 3rd edition (June 21, 2000).

Dr. Bhumin Pathak received his M.Sc. and Ph.D. degree from
Oxford Brookes University, Oxford, UK. He has been working as
Systems Engineer at Airvana Inc., since March 2007.
Dr. Geoff Childs is Principal Lecturer at School of Technology at
Oxford Brookes University, Oxford, UK.
Dr. Maaruf Ali is Senior Lecturer at School of Technology at
Oxford Brookes University, Oxford, UK.
