Network Technology Seminar 2008


"Media Networks in Action"

EBU, Geneva, 23 and 24 June 2008

organized with EBU Network Technology Management Committee (NMC)

Report
Jean-Noël Gouyet, EBU International Training

Revised and proof-read by the speakers

© EBU Networks 2008 Seminar / 23 - 24 June 2008


Reproduction prohibited without written permission of the EBU Technical Department & EBU International Training

Opening round table…………………………………………………………………………………4

1 IP and MPLS ................................................................................................................ 6


1.1 Concepts and protocols ................................................................................................ 6
1.2 Traffic engineering ........................................................................................................ 7
1.3 Video over IP - Maintaining QoS ................................................................................... 8
1.4 Practical application ...................................................................................................... 9
2 Audio Contribution over IP ...................................................................................... 10
2.1 The ACIP standard - A tutorial .................................................................................... 10
2.2 Implementing and testing ACIP using open source software ...................................... 11
2.3 Interoperability in action .............................................................................................. 13
2.4 Demos ........................................................................................................................ 14
3 Video contribution over IP ....................................................................................... 15
3.1 Video contribution over IP – The EBU project group N/VCIP ...................................... 15
3.2 Real-time transport of television using JPEG2000 ...................................................... 16
3.3 France 3's new contribution network over IP ............................................................... 17
4 HD over Networks ..................................................................................................... 18
4.1 HD contribution codecs ............................................................................................... 18
4.2 Practical thoughts and wishes… or wishful thinking! ................................................. 19
4.3 Eurovision experience in HD contribution ................................................................... 20
5 Real world applications: the News .......................................................................... 21
5.1 The use of COFDM for ENG applications ................................................................... 21
5.2 Getting News to base, or 'Only connect'? ................................................................... 22
5.3 How to connect your Video Journalists (VJ) ................................................................ 22
6 Real world applications - Production and Broadcast............................................. 23
6.1 Architecting a fully-networked production environment ............................................... 23
6.2 Bringing the BBC's networks into the 21st century ....................................................... 25
6.3 DVB-H small gap fillers: home repeaters improving indoor coverage .......................... 26

List of abbreviations and acronyms………………………………………………………..……28

Table 1: Towards lossless video transport - Deployment scenarios……………………………………….35


Table 2: Practical measurement and practical network performance………………………………...……37
Table 3: HD contribution codecs……………………………………………………………………………….39
Table 4: The overwhelming crowd of networks worldwide!......................................................................41


Notice

This report is intended to serve as a reminder of the presentations for those who came to the seminar, or as an
introduction for those unable to be there. So, please feel free to forward this report to your colleagues!

It is not a transcription of the lectures, but a summary of the main elements of the sessions. The tutorial-like
presentations and test results are more detailed. The speakers' presentations are available on the following EBU
FTP site via browser: ftp://uptraining:ft4train@ftp.ebu.ch

The slide numbers [in brackets] refer to the slides of the corresponding presentation.
To help "decode" the abbreviations and acronyms used in the presentations' slides or in this report, a list of them
(more than 325!) is provided at the end of this report.

Web links are provided in the report for further reading.

Many thanks to all the speakers and session chairmen who revised the report draft. Special thanks to Peter Calvert-
Smith (Siemens), Andreas Metz (IRT) and Martin Turner (BBC), who kindly allowed us to use their personal
presentation notes (§ 1.2 - § 2.2 - § 5.2). Nathalie Cordonnier, project manager, and Corinne Sancosme (EBU
International Training) made the final revision and editing.

The Networks 2005 / 2006 / 2007 seminar reports are still available on the EBU Web site:
http://www.ebu.ch/CMSimages/en/NMC2005report_FINAL_tcm6-40551.pdf
http://www.ebu.ch/CMSimages/en/EBU_2006_Networks_Report_tcm6-45920.pdf
http://www.ebu.ch/CMSimages/en/EBU-Networks2007-Report_tcm6-53489.pdf



Opening round table - strategic opportunities and issues


Rhys LEWIS, Chief Enterprise Architect, Technology Direction Group, BBC and Chairman of the EBU Network
Technology Management Committee
"The strongest Networks seminar programme I've seen!"…
with NMC members and session chairmen: Chris CHAMBERS (BBC), Mathias COINCHON (EBU), Lars JONSSON
(SR), Marc LAMBEREGHS (EBU) and Anders NYBERG (SVT).

The opening 'round table' answered the question: 'What are the strategic opportunities and issues posed for
broadcasters by developments in network technology?' It also highlighted some of the work going on in the EBU to
address these challenges.

Strategic opportunities tend to break into:


 either do the same old things but better and/or cheaper
 or, do new things.

The use of IP (Internet Protocol) supports:


 Production based on file transfer. This nowadays represents the bulk of broadcast work and requires massive
storage and network resources.
 On-demand delivery
 New and cheaper ways of covering sports and other events and of connecting to foreign correspondents. For
example, Swedish Radio (SR), by using Audio over IP (AoIP) for Ice Hockey championships (cf. § 2.3), saved in the
order of 10 000 euros over 1-2 weeks of transmission compared to the cost of ISDN, while keeping good or even
better stereo audio quality.
 Offering more universality, with a lot of applications around IP, more flexibility and possibly lower costs in the long
term.
 New solutions for the Eurovision and Euroradio satellite path, without forgetting that the EBU fiber network based
on DTM offers a good QoS (quality of service) for audio and video transmissions.

The issues facing EBU members:

Making it work… well!


 Standards and Inter-operability
o Standards for managing the devices attached to the network (cf. the EBU Networks 2006 seminar report § 2.1 and
§ 2.2, and the Networks 2005 report § 2.2 and § 2.3), and for easing the addition of equipment to the network. It
becomes a nightmare to manage hundreds of devices with various proprietary APIs, but a lot of manufacturers are
not very interested in those particular areas of standards work.
o Some of the standards which are released, e.g. Audio over IP (AoIP), are not 'certified'. The challenge is to
define the minimum requirements and to keep the standard as simple as possible, so that it is adopted by all
manufacturers and implemented in the right way. It is then a constant effort to verify, to organise plug tests and
workshops, and to improve inter-operability.
 QoS
Many EBU members have trouble when they order IP networked solutions. They expect a given QoS specified in the
contract and later observe different performance in real operational conditions. There is an issue of broadcasters and
network operators speaking the same language, and also of having the right way to measure the QoS and to define
it in a Service Level Agreement (SLA) in the contract. A new EBU project group, N/IPM, is in charge of defining
suitable measurement methods.

Keeping it working!
 Security & Business continuity
o We still don't have the same reliability with IP as we had in the old system, but it is constantly improving. This
is an ongoing process - we are testing new equipment and end units, and doing research and measurements on
various types of IP networks.
o There are some concerns about 'having too many eggs in a single technology basket'. There is a lot of work to do
to make sure we don't have a sudden collapse of the entire network. Network engineering can ensure business
continuity, and this has to be looked into.


 Evolution of the Internet
o The Internet may change. Traffic shaping, for example, can affect us. There may also be tariffs on the
Internet - if one day I can't download as many Gigabytes as I want, it will change our way of watching
programmes.
o And what will come after IP? Which technology will it be and what are we missing today?

"Technology is our word for something that doesn't quite work yet" (Danny Hillis, Douglas Adams). New network
technology is something that we have which offers us a lot of opportunities but that doesn't quite work yet, with many
challenges to ensure the expected performance in a number of environments and for a number of different reasons.
One of the roles of the EBU Network Technology Management Committee is to help to make it work, and this
seminar should contribute.


1 IP and MPLS

IP and MPLS have for some years been hailed as the network technologies for delivering high-QoS media services
across both metro and wide-area networks. This first session explores both the theory and the practicalities of
actually meeting the business needs of the broadcast and production markets when using these protocols, along
with the lessons learned so far.

(Reminder from the Networks 2007 seminar report § 3.3)

MPLS is short for Multiprotocol Label Switching, an IETF initiative that integrates Layer 2 information about network
links (bandwidth, latency, utilization) into Layer 3 (IP) in order to improve IP-packet exchange. MPLS gives network
operators a great deal of flexibility to divert and route traffic around link failures, congestion, and bottlenecks. It has
been developed since 1999. It is complex for the service provider to operate and configure, but to the user it is as
simple as a "data socket" on the wall.

The main features of MPLS are:


 The MPLS Integrated Services (IntServ) "explicit routing" allows individual tunnels, taking different paths through
the network, to be created manually for different types of application data.
 If fast rerouting (FRR) is necessary, either end-to-end back-up tunnels (created manually before the failure
occurs) are automatically switched over globally ("global repair"), or backup tunnels are automatically created for
each segment of the primary tunnel and switched over locally ("local repair").
 Differentiated Services (DiffServ) and DiffServ-Aware Traffic Engineering offer dynamic path selection using
OSPF (Open Shortest Path First), the network having global knowledge of the available resources.

1.1 Concepts and protocols


Thomas KERNEN, Cisco (http://www.cisco.com/), Switzerland
This presentation focuses on techniques to minimise network-based outages.

First, what are the video Service Level Agreement (SLA) requirements?
 Throughput: requirement addressed through capacity planning and QoS (i.e. Diffserv).
 Jitter: important for studio quality content. The delay variation is absorbed by the receiver buffer, which has to be
minimised in order to improve responsiveness, especially in a contribution environment. It is controlled with Diffserv,
a mature technology known to offer less than ~1 ms of network-induced jitter end-to-end.
 Latency: important mainly for live or conversational 2-way content.
 Service Availability: the proportion of time for which the specified throughput is available within the bounds of the
defined delay and loss - a compound of the other networks and core network availability.
 Packet Loss: extremely important. One lost packet may contain up to 7 MPEG Packetized Elementary Stream
packets. For example [9], with a 12-frame GOP (and an I:P:B data size ratio of 7:3:1), the I-frame loss probability
varies from 30 % to 100 % for an outage duration from 50 ms to around 400 ms (a rough model of this arithmetic is
sketched after this list). Controlling loss is the main challenge. The 4 primary causes of packet loss are:
o Excess delay: renders media packets essentially lost beyond an acceptable bound. Can be prevented with
appropriate QoS (i.e., Diffserv).
o Congestion: considered a catastrophic case, i.e. a fundamental failure of service. Must be prevented with
appropriate QoS and admission control.
o Physical-layer errors: apply to the core (less occurrence) and the access. Assumed insignificant compared to
losses due to network failures.
o Network reconvergence: occurs at different scales based on topology, components and traffic. Can be
eliminated with high availability (HA) techniques.
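The following Python sketch is not from the presentation; the GOP structure (1 I, 3 P, 8 B frames) and the simple
overlap model are assumptions, but it broadly reproduces the quoted I-frame loss figures: the I-frame's share of the
GOP data determines how long it is "on the wire", and any outage overlapping that window loses it.

```python
def iframe_loss_probability(outage_ms, gop_frames=12, fps=25.0, ipb_ratio=(7, 3, 1), n_p=3):
    """Rough probability that an outage of `outage_ms` hits the I-frame of a GOP,
    assuming frames are sent back-to-back in proportion to their data size."""
    n_b = gop_frames - 1 - n_p                                   # remaining frames are B-frames
    total_units = ipb_ratio[0] + n_p * ipb_ratio[1] + n_b * ipb_ratio[2]
    gop_ms = gop_frames / fps * 1000.0                           # 480 ms for a 12-frame GOP at 25 fps
    iframe_ms = gop_ms * ipb_ratio[0] / total_units              # ~140 ms of "air time" for the I-frame
    # The outage overlaps the I-frame window if it starts anywhere in a span of
    # (outage_ms + iframe_ms) out of the gop_ms cycle.
    return min(1.0, (outage_ms + iframe_ms) / gop_ms)

for ms in (50, 100, 200, 400):
    print(ms, "ms outage ->", round(iframe_loss_probability(ms), 2))
```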

The core impairment contributors are: trunk failures (0.0010 / 2 h) + hardware failures (0.0003 / 2 h) + software
failures (0.0012 / 2 h) + software upgrades and maintenance (0.0037 / 2 h) = a total of 0.0062 impairments per
2 hours, i.e. roughly 1 impairment every two weeks. Note that the average mean time between errors on a DSL line
is in the order of minutes if no protection is applied, and that calculations across several Service Providers show a
mean time between core failures affecting video of greater than 100 hours.

The impact of an outage can be reduced, but this requires smart engineering. Aiming at lossless video transport, the
following deployment scenarios can be implemented, classified in Figure 1 in terms of whether they are lossless, and
of the cost and complexity of network design, deployment and application infrastructure. The corresponding
techniques are detailed in Table 1 (Annex).

Figure 1: Deployment scenarios for lossless video transport
[Diagram: scenarios ranked by increasing cost and complexity versus increasing loss (from lossless to one GOP
impacted), combining Live/Live operation, network re-engineering, Multi-Topology Routing (MTR), Multi-Protocol
Label Switching (MPLS) Traffic Engineering (TE) Fast Reroute (FRR), Multicast-only Fast Reroute (MoFRR), fast
convergence, and FEC or temporal redundancy.]

1.2 Traffic engineering


Peter Calvert-Smith, Siemens IT Solutions and Services (www.siemens.co.uk/it-solutions /
http://www.siemens.com/index.jsp?sdc_p=c0u2m0p0), UK

The aim is to deliver a glitch-free, resilient and reliable media stream with minimal latency running over networks
shared with unpredictable business data traffic.
Of the many requirements for an IP media circuit, the following have been found critical to successful implementations:
 Fixed IP latency
 Low IP jitter
 Practically zero IP packet loss and reordering.
 Minimum overall latency.
The fundamental means of meeting these requirements in IP is Quality of Service (QoS). Traffic is segregated by
setting QoS levels, either at the router port or in the coder. Our practice is to set the QoS at the router port, to protect
the integrity of the network. The broadcast media traffic and VoIP traffic are both given a QoS setting which is
labelled Expedited Forwarding (EF).
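As a hedged illustration - not part of the Siemens deployment described here, where marking and policing happen at
the router port - the sketch below shows how a sender on a Linux host could itself mark its RTP/UDP traffic with the
EF DiffServ code point; the address, port and payload are placeholders.

```python
import socket

EF_DSCP = 46                      # Expedited Forwarding, RFC 3246
TOS_BYTE = EF_DSCP << 2           # DSCP occupies the upper 6 bits of the TOS byte -> 0xB8

# Mark all datagrams sent on this socket with DSCP 46 so the routers' EF
# (strict-priority) queue can match them, subject to policing.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

sock.sendto(b"\x00" * 188, ("192.0.2.10", 5004))   # dummy payload to a placeholder address
```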

Table 2 (Annex) presents the experience of broadcasting engineers regarding the practical measurement [5]-[12] of
key parameters and the practical network performance [14]-[20].

The lessons learnt


 It is cutting-edge technology. Broadcasters' use of IP networks for streaming with low latency (and also for
large media file transfer) is very different from "normal IT", e.g. MS Office or Web browsing.
 Interoperability is important – Hence EBU N/ACIP (§ 2) and N/VCIP (§ 3)

 IP router and broadcast experts need to work together in the design and testing of equipment and systems. IP
measurement is not refined for media use, hence the new EBU N/IPM project group started 1 April 2008.

How to install a media IP network


 Plan your network: Physical layer, Capacity, VLANs / ACLs / Address ranges
 Know your IP network equipment: Codecs, Switches, Routers, etc.
 Know your Telco and its equipment
 Know and inform your users (operators…) that Audio and Video over IP has limitations
 Test off-line – you will find out the limitations without prejudicing service
 Ensure the network is very well managed and “clean”
 Ensure a QoS structure is set and maintained
 Use good planning and change control6 to add or change codecs and network structure.
 Monitor the network for unauthorised additions.

1.3 Video over IP - Maintaining QoS


Jeremy DUJARDIN, VP Engineering, Genesis, U.S.A.
Genesis Networks7 provides end-to-end video transmission solutions to world Broadcasters, combining Video over
IP, fiber optic [8] and satellite-based technologies, with over 170 Points of Presence in more than 20 countries and
connections to 17 teleports worldwide [7]. It has a network architecture built on a highly reliable Layer 2 Ethernet
infrastructure overlaid onto underlying capacity, large multi-Gigabit network support from the edges of the network,
multiple layers of service protection including route and nodal diversity using MPLS, and multiple carrier networks for
added resiliency.
IRIS8 is the software enabling customer control over remote bandwidth provisioning, point-to-multipoint routing and
transmission scheduling. It provides the user with real-time monitoring, alarming and fault reporting [6]-[7].

The challenge brought by video broadcasting is to maintain the QoS under the stress of a relatively small number of
flows with large bandwidth requirements (from 2 Mbit/s up to 1.5 Gbit/s). While all IP networks are not created equal,
most IP networks are designed for many simultaneous connections with low bandwidth requirements per stream,
i.e. 100 000 and more IP flows, each under 1 Mbit/s.

There are two major components to managing QoS on any network:


 Classification of traffic and ensuring it is carried through the entire network
 Bandwidth management, ensuring no oversubscription of high priority flows.

In order to classify and prioritize traffic, different fields are available (Figure 2 - top):
 Layer 2 (Data Link): VLAN tags, MAC addresses
 Layer 3 (Network): Source IP, Destination IP, Source Port, Destination Port, Type of Service (TOS) bits. The TOS
flags (4 bits) in the IP header allow prioritization and classification of traffic on a per-service basis, give the most
flexibility, and are supported by most video codec manufacturers for each IP flow. The classes of network service
available are: Normal/Low delay, Normal/High throughput, Normal/High reliability, Normal/Minimize monetary
cost.

But what is 'inside the cloud', through the rest of the network (Figure 2 - bottom)?
The IP flow is connection-oriented: series of packets are transmitted based on source and destination addresses.
After a router looks up the first packet, the route is pushed to the line cards, into the hardware tables along with the
appropriate address. This can be enhanced by using basic policy-based routing and/or MPLS (Multi-Protocol Label
Switching). MPLS Label Switched Paths (LSPs) can be used for traffic engineering, allowing complete control of
bandwidth and routing for each classification of service.
(Figure 2 - bottom/1) The switch CPU looks up the first packet and determines the classification and priority; all
subsequent packets matching the same criteria will then be forwarded in hardware.
(Figure 2 - bottom/2) Using MPLS, the traffic can be pushed onto different paths based on classification.

6  A formal method of tracking and recording proposed and actual changes to a system
7  http://www.gen-networks.com/
8  http://www.gen-networks.com/content/iris/index.aspx

Guaranteeing availability is part of QoS. It is therefore critical to implement diverse protected circuits and switches
and to build redundant Label Switched Paths (LSP) using MPLS and Classification. This allows Fast Reroute
(shorter than 50 ms) and simultaneous IP Services (hardware manufacturers, such as Harris and Medialinks, will
support receiving multiple IP Flows and do protection in the end video devices).

Bandwidth management - An MPLS-enabled network allows the creation of Label Switched Paths (virtual circuits)
that can tunnel traffic through the network. Traffic engineering implements deterministic routing and enables proper
bandwidth management. The Network Manager needs to ensure there is no oversubscription of high-priority flows.
An external system, such as IRIS, is required to properly manage broadcasts. This system must know and
understand the network topology, accept and manage all high-priority data on the network, and understand all future
bandwidth that will be placed on the network.
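A minimal sketch of the admission-control idea behind such an external system: reject any new high-priority flow that
would oversubscribe a link. The class, names and figures below are illustrative assumptions, not taken from the IRIS
product.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    capacity_mbps: float
    reservations: dict = field(default_factory=dict)   # flow_id -> reserved Mbit/s

    def reserved(self) -> float:
        return sum(self.reservations.values())

    def admit(self, flow_id: str, rate_mbps: float) -> bool:
        """Admit the flow only if it does not oversubscribe the engineered capacity."""
        if self.reserved() + rate_mbps > self.capacity_mbps:
            return False
        self.reservations[flow_id] = rate_mbps
        return True

link = Link("POP-A to POP-B", capacity_mbps=1000)
print(link.admit("HD-feed-1", 270))   # True
print(link.admit("HD-feed-2", 270))   # True
print(link.admit("HD-feed-3", 600))   # False - would exceed 1 Gbit/s; reroute or refuse
```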

Figure 2: Classifying and prioritizing traffic & inside the 'cloud'
[Diagram: classification at the edges, with the 'cloud' in between built on dedicated private line circuits - SONET,
SDH, dark fiber, wavelengths or Ethernet.]

1.4 Practical application


Gérard LOTTE, TDF, France
The TDF company9 is a group of French and European subsidiaries [3], present all along the digital production and
contribution/emission chain [4].

In 2006 it started to design a Digital Multi-Service Distribution Network, the 'TMS' (Transport Multi-Service),
completely implemented in July 2008. It is a new MPLS Ethernet (Layer 2) network, composed of 150 routers spread
in 42 POPs all over France. No layer 3 routing is currently used in this network. The inter-POP links are based on 5
Gigabit (1 to 4) optical fiber loops [5] and the access mainly on Ethernet microwave links (155 Mbit/s), with some
fiber as well. The QoS management enables different levels of service on the same link, without any risk of
interaction between services and customers. 350 services are running on this network, including:
 Digital distribution of regional Digital Terrestrial Television (DTT) programs [8] from the 24 regional studios of the
French regional network France 3 to 92 regional transmitters, using MPLS, RSVP and FRR.
 Collecting the 24 France 3 regional programs back to the national France 3 central studio in Paris [9], using the
same protocols.
 Digital distribution of France 3 national programs to the regional TV production centers for play-out [10],
using MPLS, VPLS, RSTP and PIM (at the head ends).

9  http://www.tdf.fr/groupe-tdf/filiales/

 Exchanges of programs between regional studios [11], using MPLS, VPLS and RSVP.
 Backbone network for voice and data services (e.g. for a WiMAX service provider) [12], with MPLS, VPLS and LDP.
 Distribution of HD programmes to DTT transmitters [13], with MPLS, VPLS, RSVP and FRR.
 Transport of 18 DVB satellite multiplexes to TDF's teleport uplinks in Nantes (Brittany) [14], with MPLS and
RSVP.
 Digital distribution of radio programs from 24 regional studios to 92 regional FM or T-DMB transmitters [15], with
MPLS, VPLS (point-to-point) and RSVP (point-to-multipoint).
 Pro WiFi contribution access network for journalists [16]: up to 100 access points, 2 Mbit/s guaranteed and up to
10 Mbit/s best effort for the access; MPLS, VPLS and LDP used.
 Connecting remote cameras, placed on transmitter sites, to the studios for live weather illustration or for
regional use [17], with MPLS and VPLS.
 Transport of COFDM ENG services from the receiving HF point to the play-out studio [18], with MPLS and VPLS.

The next step is to extend this multi-service network to the European level [19].

2 Audio Contribution over IP

Audio over IP equipment from one manufacturer has until now not been compatible with another manufacturer's unit.
This session covers applications and real-life use of audio over IP, with demonstrations and hands-on sessions.

2.1 The ACIP standard - A tutorial10


Mathias COINCHON, EBU, Switzerland
The EBU has worked in a project group, N/ACIP (Audio Contribution over IP), to create a standard for
interoperability. This standard, EBU-TECH 3326 (revision 3), published in April 2008, was jointly developed by
members of the EBU group and by manufacturers.

The audio coding algorithms which 'must', 'should' or 'may' be used are:

Mandatory audio codecs:
- ITU-T G.711 A-Law & µ-Law (U.S.A. + Japan), 64 kbit/s
- ITU-T G.722, 64 kbit/s
- MPEG-1 Layer II, 64 / 128 / 192** / 256** / 384** kbit/s, 32 / 48 kHz
- MPEG-2 Layer II, 64 / 128 / 192** / 256** / 384** kbit/s, 16 / 24 / 32 / 48 kHz
- Linear PCM (raw), 16 / 20 / 24 bits

Recommended audio codecs:
- MPEG-4 AAC Low Complexity Profile
- MPEG-4 AAC-LD
- MPEG-1 Layer III, 64 / 128 / 192** / 256** kbit/s, 32 / 48 kHz
- MPEG-2 Layer III, 64 / 128 / 192** / 256** kbit/s, 16 / 24 / 32 / 48 kHz
- 3GPP AMR-WB+ (Adaptive Multi-Rate Wideband)

Optional audio codecs:
- Enhanced APT-X
- MPEG-4 HE-AACv2
- Dolby AC-3

** Optional for portable equipment

The transport protocol used is RTP on top of UDP [8], as in many real-time IPTV systems.
10  See also EBU - TECH 3329 'A Tutorial on Audio Contribution over IP', May 2008:
http://www.ebu.ch/CMSimages/en/tec_doc_t3329-2008_tcm6-59851.pdf
Lars JONSSON and Mathias COINCHON, 'Streaming audio contributions over IP', EBU Technical Review - 2008 Q1:
http://www.ebu.ch/en/technical/trev/trev_2008-Q1_jonsson%20(aoip).pdf
11  http://www.ebu-acip.org
12  EBU - TECH 3326 (revision 3) 'Recommendation for interoperability between Audio over IP equipment', April 2008:
http://www.ebu.ch/CMSimages/en/tec_doc_t3326-2008_tcm6-54427.pdf
13  http://www.aptx.com

For session management, 3 protocols are used: SDP, the Session Description Protocol (RFC 4566); SIP, the
Session Initiation Protocol (RFC 3261); and SAP, the Session Announcement Protocol (RFC 2974). For the
signalling, SIP, commonly used for Voice over IP, is used to establish, maintain and end the session [13]. The hosts
listen to messages and respond. SIP also negotiates the audio coding parameters. To facilitate interoperability, SIP
servers14 associate names with IP addresses and allow a reporter to be found, whatever the location, whatever the
device!
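For illustration only - the addresses, port and session name are invented, and real ACIP units exchange richer
session descriptions - this is roughly what a minimal SDP body offering a G.722 mono stream inside a SIP INVITE
could look like (payload type 9 is the static RTP payload type for G.722):

```python
# Build and print a minimal, assumed SDP offer of the kind SIP carries for an AoIP call.
sdp_offer = "\r\n".join([
    "v=0",
    "o=reporter 2890844526 2890844526 IN IP4 192.0.2.20",  # origin: user, session id/version, address
    "s=ACIP call",
    "c=IN IP4 192.0.2.20",        # where the caller wants to receive RTP
    "t=0 0",
    "m=audio 5004 RTP/AVP 9",     # audio media line, RTP port 5004, payload type 9 = G.722
    "a=rtpmap:9 G722/8000",       # G.722 is conventionally signalled with an 8000 Hz clock
    "a=ptime:20",                 # 20 ms of audio per RTP packet
    "",
])
print(sdp_offer)
```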

Audio Contribution over IP should rely on managed (private) IP networks allowing QoS. QoS can be expressed in
Service Level Agreement (SLA) contracts with providers, but the requirements must be clearly expressed concerning:
transmission performance (latency, jitter...), network availability (99.9 %? 99.99 %?) and provisioning delay (1 week?
1 month?). The definition of the measurement method is also important (the EBU N/IPM group is working on
profiling and measurement).

A variety of 'last mile' access methods are available, with different performance:
- Fiber optic: high quality but expensive.
- Copper with xDSL: SDSL offers a symmetrical uplink / downlink; bit errors lead to packet losses.
- Mobile (3G/UMTS, WiMAX, LTE): increasing bit rates, but unreliable; no solutions with guaranteed QoS nowadays;
HSDPA / HSUPA use a shared channel.
- Satellite: long delays, often shared bandwidth; Inmarsat BGAN, DVB-RCS providers (Eurovision in the future?).
- Wireless: WiFi gives no guarantee, due to frequency sharing.

The 'Audio Contribution over IP' Recommendation has already been implemented by many audio codec
manufacturers [16]. A recent test (February 2008) between 9 manufacturers proved that previously incompatible
units can connect with professional audio formats using the standard. Some units are still premature prototypes, and
not yet EBU compliant. Marketing is very aggressive. Units are still under development. An open source reference
implementation by IRT / BBC R&D is in development (§ 2.2).

2.2 Implementing and testing ACIP using open source software


Andreas METZ, IRT, Germany and Peter STEVENS, BBC, UK

The goals of this reference software are:


 to implement all the mandatory features of the EBU standard – protocols, bit rates, commands, etc. at the same
time as manufacturers were working to the same standard (will not cover recommended or optional features).
 to provide independent interoperability testing to check compliance [3].
At this point, this reference software will be the BASE point and will be released onto SourceForge15. Therefore it
doesn't prevent anyone from taking the source code and developing a (commercial) software version that may
include other recommended and/or optional features.
It is jointly developed by the BBC (interfaces, network, packetization, logging, etc.) and IRT (codecs). A suitable,
already well-documented open source code base that supported G.711 had been found, but it needed to be
extended to support the additional mandatory codecs G.722 and MPEG-2 Layer II (codec extension of PJSIP16).
Suitable audio interfaces (analogue, digital, consumer, professional) to support the many possible requirements are
also available.
A 'plug test' was arranged by the EBU and held at IRT, Munich, in February 2008. Nine manufacturers17 took part.
All brought one unit for testing. All units were tested against each other and also against the EBU reference over a
three-day period [17]-[21]. Participants actively discussed details in the EBU Tech 3326 document, and this resulted
in some clarifications and updates (re-published April 2008).
The test network structure required the same subnet and only a central switch, not a router [10]. Each of the five test
tables was arranged as shown and manned by an EBU member, plus a central analysis point.

14  e.g. sip:reportername@sr.se or a telephone number
15  http://sourceforge.net/index.php
16  http://www.pjsip.org/
17  AEQ (Spain) http://www.aeq.es/eng/index.htm , AETA (France) http://www.aeta-audio.com , AVT (Germany)
http://www.avt-nbg.de/homepage_engl/start.htm , Digigram (France) http://www.digigram.com , Mayah (Germany)
http://www.mayah.com/ , ORBAN (Germany) http://orban-europe.com , Prodys (Spain) http://www.prodys.net ,
Telos (U.S.A.) http://telos-systems.com , Tieline (Australia) http://www.tieline.com/ip/index.html

Advantages:
 Easily see full traffic between both codecs under test and SIP communications with accurate timing of data
 Only one protocol analyser required at each of the five test station positions
 RTP travels directly between codecs, via hub
 No involvement in testing or ensuring that router configuration is correct

Disadvantages:
 Codecs and config PCs have to move around the testing station positions - a minimal-movement regime was
determined
 Potentially, other test stations could see communication traffic to/from the SIP server from codecs not involved in
the test - in practice, not a problem

The testing consisted of 11 rounds of 45 pairings of devices under test, for each of 4 mandatory codecs. More than
180 tests were undertaken, including reruns of some tests. G.711 was tested using an Asterisk18 SIP server;
G.722, MPEG Layer II and Linear PCM were tested without the SIP proxy. All rounds were recorded (2 GB of data
stored) using the open source Wireshark19 protocol analyser [15] [16].
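A small sanity check of where those figures come from, assuming the nine manufacturers' units plus the EBU
reference implementation make ten devices under test (the device and codec names below are placeholders):

```python
from itertools import combinations

devices = [f"unit-{i}" for i in range(1, 10)] + ["EBU-reference"]
codecs = ["G.711", "G.722", "MPEG-1 Layer II", "Linear PCM"]

pairings = list(combinations(devices, 2))     # every device tested against every other
print(len(pairings))                          # 45 device pairings
print(len(pairings) * len(codecs))            # 180 scheduled tests (plus reruns)
```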

The test results in short:


Of the four mandatory audio coding formats, most units were capable of connecting with G.711 audio, and some
units were also capable of connecting with G.722, MPEG Layer II and Linear PCM. 137 connections were attempted
in total (G.711: 45 total = 39 with audio + 6 failed / G.722: 36 total = 30 with audio + 6 failed / MPEG Layer II:
36 total = 29 with audio + 7 failed / Linear PCM: 20 total = 10 with audio + 10 failed). Minor improvements in software
versions were undertaken and reruns operated in such cases - but time was too limited for many of them.

Some problems were encountered, concerning:


 Protocols' usage: SIP - some connection closures due to poor command handling. SDP (re-invite, options
request) - mandatory attributes not supplied, and ignoring media stream port in SDP message.
 Codecs: G.711 - only µ-Law handled by some. G.722 - audio not received by some units, some related to protocol
usage. MPEG Layer II - bit rate and channel limitations; poor channel handling. Linear PCM - no support; packet
size handling now mandated to 4ms.

The lessons learnt with the SIP gateways: although not strictly part of the interoperability document, they are a
necessary part of the infrastructure requirements for broadcasters.
 Asterisk was used, although others are also available, but with a number of unexpected behaviours (for example:
INVITE queries not relayed correctly; media stream sometimes altered, promotion of G.711 to G.722; unexpected
disconnections; odd behaviour of BYE request; response by codecs handling Gateway codec presence checking
– is codec still there?).
 The SIP infrastructures need to be able to handle Audio over IP, cell phones and Voice over IP units… and Video
over IP as well? Some ongoing work for broadcasters required in this area.

At the end of January 2008 the BBC Festival of Technology20 took place at BBC Television Centre, London, with the
first public demonstration of the reference software with 'real' devices [23].

So, what could we do with this in the future? Some thoughts:


 Allow video codecs. Decide on good broadcast (hopefully Open Source) codecs for audio, even codecs not in the
EBU standard. Include surround sound codecs.
 Could produce a stand-alone SIP client, with the following needs:
o Use broadcast quality codecs.
o Able to communicate with an EBU SIP server (if implemented).
o Able to communicate with most hardware codecs.
o Both for on-the-road usage and in-studio usage.
o Maybe a Java client? (What about latency?)
o Ease of use. In the ultimate situation, a single one-button control performs the following:
o Confirm connection to SIP server.
o Set up temporary communication to other end (studio?).

18  http://www.asterisk.org/
19  http://www.wireshark.org/
20  http://www.bbc.co.uk/rd/fot/handouts.shtml

o Test line speed to other end.
o Choose appropriate codec.
o Reconnect using new codec.
o Automatic Fallback to lower bandwidth codec if bad line during transmission.
o Built in simple Audio mixer, for microphone and playback of files.
o Open text chat window with studio for easy communication.
o And so on…

2.3 Interoperability in action


Björn WESTERBERG, SR, Sweden

What kinds of networks have we been testing and using?

Wired networks - unmanaged:
- Technology: xDSL.
- Bit rate: 256 kbit/s to < 10 Mbit/s.
- Latency (very important for good Audio over IP performance): 20 ms to ~150 ms.
- Jitter: 5 ms to 30 ms.
- Don't forget one of the Internet's basic principles: best effort!

Wired networks - managed with QoS:
- Technology: MPLS (Multi-Protocol Label Switching).
- Key parameters: bandwidth, latency, jitter; the Service Level Agreement is important when you contract with an ISP.
- The Eurovision Fiber Network FiNE (EBU) is a good example, used a couple of times for 2 Mbit/s transmission from
Helsinki - it worked perfectly - and also for the Beijing SOG with 3 different capacities.

Wireless networks - 'unmanaged':
- Technologies: '3G' (UMTS/WCDMA), 100-200 kbit/s average bit rate; '3.5G' (HSPA, 'Turbo 3G', > 500 kbit/s) is
coming now. The '3G' networks and operators in Sweden (Tele2, Telia, Telenor, Tre) offer very good performance
(but at very low bit rate); 600 000+ subscribers out of 9 million people.
- CDMA2000 (common in the U.S.A.) and UTRA TDD; operators: IceNet (offering 1.8 Mbit/s upstream and 4 Mbit/s
downstream all over the country) and Generic Mobile.
- Long Term Evolution ('4G') in the 2.6 GHz band in major cities; mobile WiMAX.
- Lots of WiFi hot-spots in Sweden.
- The 'bottleneck' for wireless is the radio layer access - you have to get a very good signal to your device. Also, too
many users on the same cell reduces the available individual bit rate. And it is not yet stable for Audio over IP,
because of the high delay.

Wireless networks - managed:
- 3G operators offer a 'Virtual Private Network' service for an own 'Accelerated Private Network' (???), good for
security but with no bandwidth guarantee.
- The ICE-net (CDMA2000) offers access priority to subscribers, i.e. 2-10 times higher priority to the available
subcarriers than other users in the same sector (but 50 times more expensive!).

Devices in action
Hardware-housed software versus all-software codecs:
 Hardware codecs are very easy to interface within the broadcasting house and offer a lot of functionality, but are
harder to handle for non-technicians and require attention to upgrades.
 Software codecs have a friendly graphical user interface, are easy to use for reporters and non-technicians, offer
less functionality, but can easily be integrated in a portable laptop.

Our activities in the last two years:


 Ice Hockey World Championship final in Moscow (2007): the reporters had a very nice but quite expensive ISDN
connection; they decided to try using the available IP connection instead and found they were able to transmit stereo
at 256 kbit/s MPEG Layer II for 25 hours over 10 matches. In Quebec (2008) a 384 ??? kbit/s connection was used
with no errors at all.
 Swedish ice hockey national series (October to April, with 3-4 matches twice a week in the peak period): audio
contribution over IP, mainly for Web and 3G distribution (10 000 'listeners').
 Opera transmission - 3 hours, with a 48 kHz, 20-bit PCM signal; it was risky, but we tested a couple of days
before - so we got a good idea of the traffic shape - and there was a satellite backup.
 University of Gothenburg: professors participate live over IP in current affairs programmes. The AoIP equipment
was financed by the University.
 All Swedish Radio's foreign correspondents are IP-enabled, so they transport less expensive equipment - where
IP is available.
 Altogether, 30 external companies contribute to SR radio programmes. For the past 6 months some have been
delivering live music streams over IP at a fairly good rate (384 kbit/s).

Final questions and answers

 Can the Internet be used and recommended in general for audio contribution? Of course not yet! It is not secure
yet! Only as a last option - if you don't have anything else, try it! Recommendations: keep the packet size very
large (800 bytes or more; a rough illustration of the overhead trade-off follows this list) and see that you have a
50 % bit rate overhead.
 What can wireless networks be used for? Mainly for file transfer, because the delay and the jitter are too much
out of control to be used for live transmissions.
 Which codecs are the best? Software codecs have a future, but inside the broadcasting house hardware codecs
will remain, because they are easily interfaced with the rest of the infrastructure.
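A rough illustration (the bit rate and payload sizes are assumed examples, not from the talk) of why larger RTP
payloads help on the open Internet: the fixed IP/UDP/RTP header cost shrinks relative to the audio payload as
packets grow, at the price of more audio lost per dropped packet and slightly more packetisation delay.

```python
IP_UDP_RTP_HEADERS = 20 + 8 + 12          # bytes, without any tunnelling overhead
AUDIO_KBPS = 256                          # e.g. MPEG-1 Layer II stereo

for payload in (160, 480, 800, 1200):     # bytes of coded audio per packet
    audio_ms = payload * 8 / AUDIO_KBPS   # milliseconds of audio carried by one packet
    overhead = IP_UDP_RTP_HEADERS / (IP_UDP_RTP_HEADERS + payload)
    print(f"{payload:5d} B payload -> {audio_ms:5.1f} ms per packet, "
          f"{overhead:5.1%} header overhead")
```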

2.4 Demos
Figure 3: Audio over IP network over satellite with DVB-RCS (future Euroradio!)
[Diagram: two APT units linked over a DVB-RCS satellite connection.]

Figure 4: Audio over IP codecs and SIP
[Diagram: a SIP proxy and registrar server (unit in the UK) interconnecting the 'IRT/BBC Reference' (unit in Geneva),
a Tieline 'Commander G3' (Australia; unit in Geneva), an AETA 'Scoopy' (France; unit in Geneva), two AETA
'Scoop4+' units (France; one in the UK, one in Germany) and a Prodys 'PortaNet' (Spain; unit in Geneva).]

3 Video contribution over IP

This session gives an overview of the work and first results of the EBU 'Video Contribution over IP Networks'
(N/VCIP) project group and two practical use cases for video contribution over IP networks.

3.1 Video contribution over IP – The EBU project group N/VCIP


Markus BERG, IRT, Germany
The European Broadcasting Union (EBU) has established a Video Contribution over IP Networks (N/VCIP) project
group to:
 to identify users' requirements for real-time video over IP transmission relating to broadcast contribution
applications. A questionnaire was sent to EBU members. (For file transfer, N/FT-AVC created a report: EBU Tech
3318.)
 to analyse all standards and protocols available to support the transmission of video over IP, including multicast
and resource reservation.
 to propose a recommended set of protocols and of common features to ensure equipment compatibility for video
contribution over IP in the EBU (for example between different EBU members).

Different manufacturers already provide solutions for real-time video transport over IP. Interoperability of equipment
is of paramount importance. A first meeting took place with manufacturers in Geneva in March 2008. A presentation
of the ongoing N/VCIP work and discussion with manufacturers is planned at IBC (September 2008). An agreement
on profiles (see below) should follow with interoperability tests. The publication of a Technical specification is
planned for mid/end 2009.

A draft set of VCIP profiles (the minimum set of required parameters for interoperability) is currently being specified,
including the following:

21  EBU - TECH 3318 'File Transfer Guidelines', June 2008:
http://www.ebu.ch/CMSimages/en/tec_doc_t3318-2008_tcm6-60595.pdf

Profile 1 - Uncompressed video (SD, HD) over IP:
- Estimated bandwidth requirements, incl. possible overhead (draft): ~300 Mbit/s for SD (576i25); ~1.6+ Gbit/s for
HD (1080i25, 720p50); ~3.2+ Gbit/s (3.5 with FEC) for HD (1080p50).
- FEC: still to be cross-checked with existing specifications; a Video Services Forum (VSF) proposal is under
evaluation; SMPTE???
- Pending questions: no complete standard available at the moment; are transmission networks too expensive? Is
the capacity available?

Profile 2 - Compressed high bit rate profile: Transport Stream over IP according to Recommendation SMPTE 2022:
- This profile will be chosen when Profile 1 is too expensive or the bandwidth is not available.
- Estimated bandwidth requirements: ~7-30 Mbit/s for SD (576i25); ~30-300 Mbit/s for HD (1080i25, 720p50);
~30-500 Mbit/s for HD (1080p50), depending on the compression technology.
- FEC: FEC level B (row and column).
- Positive points: supports TS with constant bit rates; in the future, other compression formats besides MPEG-2 and
MPEG-4 AVC / H.264 may be transported in an MPEG-2 TS.

Profile 3 - MXF over IP:
- Estimated bandwidth requirements: between Profile 1 and Profile 2 (uncompressed or compressed).
- FEC: to be defined.
- Positive points: suitable for compressed and uncompressed video.
- Pending questions: no standard available yet - a standard has to be created!

Profile 4 - Low bit rate: ISMA 2.0 (Internet Streaming Media Alliance):
- Estimated bandwidth requirements: ~3-15 Mbit/s for SD (576i25), for News…
- FEC: no FEC specified yet - the ISMA standard has to be enhanced with FEC.
- Positive points: very efficient lower bit rate profile.
- Pending questions: usability for HD to be determined; this profile has raised a lot of questions from the
manufacturers.

Some topics are still to be covered:


 Signalling: is the usage of SIP recommendable?
 Network profiles for video contribution, in relation with the newly created (April 2008) N/IPM project group in
charge of:
o Reviewing any relevant work defining network QoS classification for broadcast contribution feeds
o Specifying and conducting tests of IP network components
o Gathering information on IP measurement methods and tools
o Specifying classification for IP networks suitable for use by European public service broadcasters
o Developing network measurement methods suitable for verifying the various classes of services levels
defined for IP broadcast networks
 Scrambling (encryption)

3.2 Real-time transport of television using JPEG2000


Helge STEPHANSEN, Chief Technology Officer, T-VIPS23, Norway

A JPEG2000 'refresher' [2]-[8] underlined the following benefits:


 Low latency thanks to a picture by picture compression
 Compression based on wavelet filtering provides high efficiency over a wide quality range
 Compression is uniform for all pixels (no pattern from DCT blocks) - well suited for image post-processing
 Lossless and near lossless modes
 Marginal distortion on multiple generations

22  SMPTE 2022-1-2007 Forward Error Correction for Real-Time Video/Audio Transport Over IP Networks;
SMPTE 2022-2-2007 Unidirectional Transport of Constant Bit Rate MPEG-2 Transport Streams on IP Networks
23  http://www.t-vips.com/

 JPEG2000 MXF storage format defined (SMPTE 422M24)
 JPEG2000 is already in use in production (for example, the Thomson Grass Valley HD/SD Infinity camcorder25 and
a number of video servers)

JPEG2000 offers 3 operational modes for TV broadcasting:


 Normal mode (irreversible) with floating point filter, optimised quantisation and codestream with bit rate limitation.
The typical compression ratio is 10-20 %.
 Lossless mode with bit rate limitation. This mode applies integer filter, no quantisation and codestream limitation
with bit rate limitation. The typical compression ratio is 40-60%.
 Lossless mode. This mode applies integer filter, no quantisation and all coefficients included in the codestream.
Bit rate will be unrestrained.

The JPEG2000 bit rate characteristics after compression are: 20-25 Mbit/s from SDI at 270 Mbit/s, 60-120 Mbit/s
from HD-SDI at 1.485 Gbit/s, and 200-250 Mbit/s from 3 Gbit/s HD; Digital Cinema uses 250 Mbit/s. The lossless mode allows
a bit rate reduction of 30 to 60 %. JPEG2000 is by nature VBR - the number of bits produced by the compression
may be less than the bit rate configured. This may be used to save bit rate on transmission.

JPEG2000 combined with MXF / FEC / RTP / UDP / IP provides a good solution for transport over IP [18].
MXF provides a frame-based (progressive or interlaced scanned) wrapping of the JPEG2000 picture, plus sound,
VANC and HANC data, and can handle synchronisation of video and audio [19]-[21]. In the T-VIPS implementation
the Preamble and Post-amble are skipped in order to reduce overhead (typical overhead: 6-8 % incl. Ethernet and IP
headers).
The Forward Error Correction (FEC) [22] corresponds to the Pro-MPEG Forum Code of Practice #4 or to the
SMPTE 2022 matrix. For Variable Bit Rate (VBR) operation, the FEC matrix being of fixed size, stuffing packets
are inserted [23].
The IP encapsulation supports a bit-rate range from 1 to 1000 Mbit/s, VBR and FEC, and preserves low latency.
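As a toy illustration of the column (1D) XOR FEC principle behind the Pro-MPEG / SMPTE 2022-1 schemes - not the
T-VIPS implementation itself, and real FEC packets carry their own RTP headers and signalling - one parity packet
protects a column of media packets, so a single loss in the column can be rebuilt:

```python
def xor_bytes(packets):
    """XOR equal-sized byte strings together (the parity operation)."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

column = [bytes([i]) * 8 for i in range(1, 6)]   # 5 equal-sized media packets
parity = xor_bytes(column)                       # the FEC packet for this column

lost_index = 2
received = [p for i, p in enumerate(column) if i != lost_index]
recovered = xor_bytes(received + [parity])       # XOR of survivors + parity = missing packet
assert recovered == column[lost_index]
print("recovered packet", lost_index, recovered.hex())
```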

This contribution service has already been used for:


 Euro 2008: 2 HD feeds at 250 Mbit/s and 3 SD feeds between Vienna (Austria) and Scandinavian broadcasters in
Stockholm, Copenhagen and Bergen. The high bit rates were used as an STM-4 connection was available.
 Transmission of ice hockey competitions from 12 arenas: 1 HD and 2 SD at 100 Mbit/s, with integration into
Content Management Systems
 Primary distribution of the Norwegian DVB-T network since September 2007. The network was installed and is
operated by the telecom operator Telenor. It comprises 3 multiplexes with MPEG-4 coded SD video. JPEG2000
was preferred as an alternative to MPEG-2 in order to use a higher compression on MPEG-4 and provide the
same resulting quality to the viewers.

3.3 France 3's new contribution network over IP


Jean-Christophe AUCLERC, France3, France

The challenges to meet for this project were to: connect 110 sites over the French territory, make possible any-to-
any communications, keep and adapt our Network Management System (NMS), use a network operated by a data
oriented Telco (NeufCegetel), design with the provider a 'broadcast real-time' service on the network, select and
deploy more than 130 encoders and decoders.

The network [7] comprises two types of network for two types of contribution:
 IP/VPN/MPLS (with QoS) for corporate data and file exchange (and managing of our NMS).
 IP/VPN for real-time contribution (with MPEG-2 compression): an NGN28-range network, IP-based (Layer 3), with
native multicast (no tunneling). Used for live and rushes transmission (from/to spots where file exchange does not
exist).

24  SMPTE 422M-2006 Material Exchange Format - Mapping JPEG2000 Codestreams into the MXF Generic Container
25  http://www.thomsongrassvalley.com/products/infinity/camcorder/
26  Peter Elmer and Henry Sariowan, 'Interoperability for Professional Video Streaming over IP Networks':
www.broadcastpapers.com/whitepapers/paper_loader.cfm?pid=221
27  SMPTE 2022-1-2007 Forward Error Correction for Real-Time Video/Audio Transport Over IP Networks
28  http://www.itu.int/ITU-T/ngn/

Both services (called Data and Real-time) are delivered on a shared ('mutualised') infrastructure (optical fiber) for
cost efficiency. The separation of services is logical (VLAN) and in the shaping of the bandwidth.

The Real-time Network has following characteristics:


 By choice, there is no QoS! There are 'only' real-time streams on this network!
 The bandwidth is 100 % guaranteed by the provider (+ 10 % of over-provisioning).
 Our NMS manages equipment generating CBR, so we can indirectly manage the bandwidth and prevent congestion.
 All streams are multicast, which makes them easier to manage for the transmitters and for multiple receivers;
only decoders have to subscribe to the multicast streams (very similar to a Set-Top Box in IPTV).

Encoding equipment
The MPEG-2 'ViBE' encoders / decoders are products from Thomson Grass Valley29, with Ethernet interfaces.
Typically, the MPEG-2 MP@ML encoding profile requires about 8 Mbit/s (IP level). The normal delay (about 900 ms,
end-to-end) is used for rushes transmission. The low delay (about 450 ms) is used for live. There is no FEC
implementation at the moment.

Example of network management at the level of a Regional Center: the network bandwidth is 24 Mbit/s and is
managed in 8 Mbit/s one-way links, making it possible to have 3 input/output links. The NMS will authorise 3 live
receptions but will at first refuse a fourth. All links being managed by a 'Scheduler', the fourth link will be offered
later, or the operator must stop one of the 3 links.

Among the evolutions foreseen:

 For the monitoring of the network, Media Delivery Index (RFC 4445)30 probes are being evaluated (a simplified
sketch of the MDI calculation follows this list).
 Implementation of FEC (probably 1D), depending on how the network load increases.
 Implementation of video servers, in order to be able to record MPEG/IP streams directly through the decoders'
Ethernet interfaces.
 Development of an interface between our NMS and our production Media Asset Management system, for a better
performing workflow.
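A simplified, assumption-laden sketch of what such MDI probes report per RFC 4445: the Delay Factor (DF) sizes
the de-jitter buffer a nominally constant-rate stream needs, and the Media Loss Rate (MLR) counts lost media
packets per second. The function name, timestamps and packet sizes below are invented for illustration.

```python
def media_delivery_index(arrivals, media_rate_bps, packets_lost, interval_s=1.0):
    """arrivals: list of (arrival_time_s, payload_bytes) within one measurement interval."""
    t0 = arrivals[0][0]
    vb_min = vb_max = 0.0
    received = 0.0
    for t, size in arrivals:
        drained = (t - t0) * media_rate_bps / 8       # bytes the decoder should have consumed so far
        vb = received - drained                       # virtual buffer level at this arrival
        vb_min, vb_max = min(vb_min, vb), max(vb_max, vb)
        received += size
    df_ms = (vb_max - vb_min) * 8 / media_rate_bps * 1000   # Delay Factor, in milliseconds
    mlr = packets_lost / interval_s                          # Media Loss Rate, packets per second
    return df_ms, mlr

# e.g. an 8 Mbit/s MPEG-2 stream: four packets with a little jitter, one loss in the interval
arrivals = [(0.0000, 1316), (0.0015, 1316), (0.0026, 1316), (0.0045, 1316)]
print(media_delivery_index(arrivals, 8_000_000, packets_lost=1))
```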

4 HD over Networks

This session is about current status and new key technologies for High Definition Television Production in a
networked environment. The high bandwidth demand of uncompressed HD signals often necessitates some kind of
signal compression in order to enable storage on hard disks and transport over networks. This session brings some
more insight into what lies ahead of us, as well as some experience that can be gained from current real-world
HD-TV production and emission.

4.1 HD contribution codecs


Adi KOUADIO, Engineer Dipl. EPFL; presented by Marc LAMBEREGHS, EBU, Switzerland

'Contribution' is the exchange of video content from one broadcaster to another or between the regional facilities of
one broadcaster. It covers 3 main applications:
 News Gathering (SNG), implying priority to the event, with best-effort quality.
 Off-line content exchange, with high quality expected.
 Live event contribution, with high quality expected and low delay.
It relies on 3 types of networks:
 Via satellite, with limited bandwidth (depending on the transponder and the modulation technique).
 Via copper links, usually for internal facility contribution, with the bit rate limited by cable technology.

29  http://www.thomsongrassvalley.com/products_disttrans/
30  http://tools.ietf.org/html/rfc4445

 Via fiber, offering unlimited bandwidth, but not efficient for multi-point contribution.

HD content promises a higher-quality viewing experience for end users compared to SD, and HD contribution
should therefore meet the following challenges:
 Higher data rate to handle efficiently in bandwidth limited networks
 Lower latencies or at least the same as in the SD world
 Higher bit depth (at the moment up to 10-bit)
 Multiple image formats (SD, 720p/50, 1080i/25 and potentially 1080p)
 Frame rate conversion for overseas transmissions
 Robustness to cascades
 Etc.

The Table 3 (Annex) details the characteristics, advantages and disadvantages of the choice of codecs.

The selection of the codec should be made based on...


 Product availability
 Experience / knowledge of the technology
 Comparisons with other systems
 Recommendations on adequate settings...
 Suitability for specific applications and/or networks: SNG, live (sports, documentary...), off-line exchange, versus
via satellite / fiber / copper
 Costs concerning IPR issues, scalability for future-proofing, backward compatibility with older compression
systems...
And how can the EBU Technical Department help to make this choice clearer?

The ongoing work concerns:


 EBU Technical department: study of HDTV format conversion over contribution link (with BBC and IRT) -
Evaluation of contribution codecs (H.264/AVC, Dirac ...) in the I/HDCC group - Evaluation of the SVC potential for
HDTV broadcast applications in the D/SVC group.
 WBU ISOG with EBU: interoperability test for MPEG-4 AVC / H.264 compression system
 BBC: evolution of the Dirac Pro codec (SMPTE VC-2)

4.2 Practical thoughts and wishes… or wishful thinking!


Lars HAGLUND, Senior R&D engineer and Video Production, SVT, Sweden

SVT produced its own demanding, but not unduly so, multi-genre TV programme "Fairytale" in 3840x2160p50 (hence
also 1080p, 1080i, 720p and 576i). Three reference sequences31, "CrowdRun", "ParkJoy" and "InToTree", can be
downloaded from the EBU Web site32 or from the ITU-VQEG FTP site33. All sequences from this set were filmed at
50 fps with professional, high-end 65 mm film equipment by SVT in October 2004. ITU-R BT.112234 indirectly says
that, for distribution, 75 % of such a programme should stay in the "excellent" range, but 25 % (demanding scenes)
may go down to the "middle of the good range". Considering that Blu-ray has no artefacts at all, SVT believes that
any scene in Fairytale should stay, for contribution, in the "excellent" range, to reduce cascading artefacts.
In SVT's national WAN35 [5], we would like to use... an I-frame based codec, to reduce latency in interviews,
and a codec using 4:2:2, 10-bit, for further processing robustness. For the time being SVT is using the JPEG2000-
based T-VIPS products (§ 3.2) that need about 150 Mbit/s (for 720p50, 4:2:2, 10-bit), with only 100 ms of latency
(versus 2 seconds for MPEG-2 GOP at 200 Mbit/s!).

31
ftp://vqeg.its.bldrdoc.gov/HDTV/SVT_MultiFormat/SVT_MultiFormat_v10.pdf
32
http://www.ebu.ch/en/technical/hdtv/test_sequences.php
ftp://vqeg.its.bldrdoc.gov/HDTV/SVT_MultiFormat/
33
ftp://vqeg.its.bldrdoc.gov/HDTV/SVT_MultiFormat/
34
ITU-R BT.1122-1 (10/95) User requirements for emission and secondary distribution systems for SDTV, HDTV and
hierarchical coding schemes http://www.itu.int/rec/R-REC-BT.1122/en
35
See also the EBU Networks2005 seminar report § 1.1.2

© EBU Networks 2008 Seminar / 23 - 24 June 2008


Reproduction prohibited without written permission of the EBU Technical Department & EBU International Training
20
contribution need to be developed for 4:2:2, 10-bit with both I-frame based mode and GOP-mode!

Concerning international (HD) contribution


SVT has received the 2006 summer football contribution from Germany, skiing and athletics from Sapporo/Osaka,
Song Contests etc. All bit rates have been too low! You can see degradation on a 1st-generation Omneon server
recording (MPEG-2 GOP at 100 Mbit/s is too low). A contribution at 45 Mbit/s (hopefully rising above 60 Mbit/s)
before that server isn't an "excellent" cascading situation.
Use MPEG-4 AVC/H.264 for contribution (4:2:2, 10-bit), not to reduce the already "too low" MPEG-2 bit rates of today by
a further 20-30%, but to gain robustness! Push the AVC/H.264 GOP-based bit rates up to more than 70-100 Mbit/s!

Format conversions... may happen before or after contribution.


SVT prefers to perform any format conversion before contribution, as we did in Sapporo (1080i29.97 to 720p50):
we have the bandwidth and can do better de-interlacing than at the consumer side. Big bulky frame-rate converters36
(59.94 to 50) with good performance should be cheaper and card-based, please! Card-based de-interlacers
(50Hz-1080i to 50Hz-720p) should offer better performance, please! Down-converters to SD (re-interlace) should
have separate H and V "sharpness" settings, please! – to reduce interline twitter when performing the always-needed
sharpening. Re-insertion of VANC (e.g. timecode) and Dolby E should work without problems, please!

1080p50 (and 59.94) contribution?


The S/N ratio in 1080p-camera CCD sensors is problematic (but not in native 720p CCD sensors). Although
1080p50 production equipment may come to market during 2009, CMOS sensors may give us a good (that means better
than CCD) S/N ratio only in 2011. But although the real take-off may not start until 2012, there is a realistic growing need
from 2009.

So, please... offer us MPEG-4 AVC/H.264 Level 4.2 codecs (SVC is not needed for contribution), with 4:2:2,
10-bit, and both I-frame based high-bit-rate and GOP modes.

4.3 Eurovision experience in HD contribution


Didier Debellemniere, Eurovision, Switzerland

Eurovision already has quite substantial experience with HD contribution [10], using MPEG-2 4:2:2P at bit rates
from 34 to 60 Mbit/s37:
 For the coverage of big events, occasional use of satellite and fiber: the 2006 Winter Olympic Games from Torino
using 20 HD encoders and 40 receivers, the 2006 World Cup from Germany, Euro 2008 from Switzerland and Austria,
and the 2008 Summer Olympic Games from Beijing using 90 HD encoders and 150 receivers.
 For permanent connections (with NHK for European customers, and with US networks for sports events): fiber circuits,
where bandwidth is cheaper (though still expensive over long distances) and it is easier to improve the transmission
parameters.

Bandwidth is expensive and sometimes scarce. For example, for the Summer Olympic Games in Beijing, fifteen 36
MHz transponders and 7 dedicated STM-4 fiber circuits have been booked!

The current parameters in use for big events:

On a 36 MHz satellite transponder using MPEG-2 4:2:2P@HL and DVB-S2: video 1080i/50 at 56.217 Mbit/s,
Audio 1 at 384 kbit/s, Audio 2 (Dolby E multiplex) at 2.304 Mbit/s - a total data rate of 60.4 Mbit/s.

Some issues have to be solved daily: frame-rate conversion (50 Hz - 60 Hz), progressive versus interlaced scanning
(720p/1080i), and compression cascading between contribution and distribution.

Some trends remain clear: the market is not ready to pay more for contribution links, there is an increased demand
for exclusive circuits due to the bandwidth limitations on satellite, and the cost of long-distance circuits is still high.
This will lead MPEG-4 AVC/H.264 to replace MPEG-2 for live HD in the medium term (with a 50% bit-rate benefit
expected). JPEG2000 will be limited to fiber or file transfer.

Tests have to be conducted (EBU, ISOG…) on compression cascading, compression/standard-conversion sequences,
and interoperability.

36 For example, from Teranex
37 Very low for HD contribution, only on request from the client

Finally, the market will decide. Since the MPEG-4 AVC/H.264 equipment range for contribution is not yet fully available,
MPEG-2 will survive for some time… And anyway, how much are broadcasters ready to pay? File transfer is an
alternative with lower costs (and increased picture quality?).

5 Real world applications: the News

The final two sessions outline some recent developments in using new network technologies to support the
broadcast environment. The first session looks at the impact of networks on news-gathering, kicking off with a use
case on the application of COFDM for ENG, with particular attention to the monitoring of signal quality. This is
followed by some real examples of the innovative techniques being used by journalists to deliver material
supporting their story-telling.

5.1 The use of COFDM for ENG applications


Michael MOELANTS, Project manager, VRT, Belgium

VRT has implemented 'TNG' - Terrestrial News Gathering - which is ENG (Electronic News Gathering) via COFDM
transmitters and receivers (DVB-T) for real-time audio/video. This application uses camera-link equipment from Link
Research Ltd38.

Coded Orthogonal Frequency Division Multiplexing (COFDM) is a proven technique, used for DAB, DVB-T,
WiMAX… (and OFDM for WiFi/WLAN 802.11a/g…). It has the following characteristics:
 Double-stage FEC (Reed-Solomon and convolutional code): detects and corrects faults, but increases the end-to-
end delay by adding extra bits and data-rate processing.
 Multiple orthogonal carriers (2048) [6], to avoid dead spots; choosing the correct inter-carrier distance limits/
suppresses the interference between them.
 A guard interval on the time axis, waiting before starting a new symbol to avoid interference from echoes due to multi-path.

Multiple-antenna diversity reception is used, with packet diversity; each receiving antenna signal is
demodulated to ASI. The new HD/SD equipment (not used by VRT yet) uses the better-performing maximal-ratio
combining, where all signals are multiplied by a weighting factor and then added - the weighting factor is a
function of a quality measure (amplitude, SNR…) [8].
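As a side note, the following minimal numerical sketch illustrates the maximal-ratio combining principle described above: each branch is weighted by the conjugate of its channel gain divided by its noise power, then the branches are summed. The channel gains, noise levels and QPSK signalling are invented example values and do not describe the actual camera-link receivers.

import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1000) / np.sqrt(2)  # QPSK symbols

# Two receive branches with different (assumed) gains and noise powers
h = np.array([0.9 * np.exp(1j * 0.3), 0.4 * np.exp(-1j * 1.1)])
noise_power = np.array([0.02, 0.05])

received = np.array([
    h[k] * symbols
    + np.sqrt(noise_power[k] / 2) * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
    for k in range(2)
])

weights = np.conj(h) / noise_power          # MRC weights: conj(channel gain) / noise power
combined = (weights[:, None] * received).sum(axis=0) / (weights @ h)  # normalised combined signal

for name, sig in [("branch 0", received[0] / h[0]),
                  ("branch 1", received[1] / h[1]),
                  ("MRC", combined)]:
    err = sig - symbols
    snr_db = 10 * np.log10(np.mean(np.abs(symbols) ** 2) / np.mean(np.abs(err) ** 2))
    print(f"{name:9s} SNR = {snr_db:5.1f} dB")

# The combined SNR approaches the sum of the branch SNRs, which is why MRC outperforms
# simple antenna switching or equal-gain addition of the branches.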

The equipment includes, on the transmitter side, an MPEG-2 codec (100 or 200 mW) and modulator plus a 1 W amplifier
on the wireless camera, and a 5 W amplifier in the TNG van. The transmit antennas [11]+[14] are omnidirectional on
the wireless camera and on the van; a directional antenna can be mounted outside at the rear of the van.

Network. At VRT, 2 frequencies in the 2.2 GHz band are used: 1 dedicated to VRT and 1 shared between VRT and
RTBF Sports. The A/V bit-rate is 6 Mbit/s (QPSK modulation) or 12 Mbit/s (16-QAM modulation), with a 1/2 FEC
(highest possible) and a 1/32 guard interval (smallest possible). There are 5 receiving points [15]: 2 in Brussels and
1 each in Antwerp, Ghent and on the Belgian Coast39. The transport of the signal to VRT Headquarters [16] occurs over VRT
optical fiber (+ CWDM) in Brussels (ASI) and Antwerp (SDI), and via a microwave radio link in Ghent followed by an
IP-MPLS service to Brussels [16].
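As a sanity check of the quoted 6 and 12 Mbit/s payloads, the sketch below applies the standard DVB-T capacity formula. The 8 MHz channel bandwidth and 2k carrier mode are assumptions (typical for COFDM camera links, but not stated in the presentation).

# Useful (MPEG-TS payload) bit rate of an 8 MHz, 2k-mode DVB-T signal.
def dvbt_payload_mbps(bits_per_carrier, code_rate, guard_interval,
                      data_carriers=1512, useful_symbol_us=224.0, ts_efficiency=188 / 204):
    symbol_duration_us = useful_symbol_us * (1 + guard_interval)        # useful part + guard
    bits_per_symbol = data_carriers * bits_per_carrier * code_rate * ts_efficiency
    return bits_per_symbol / symbol_duration_us                         # bits per microsecond = Mbit/s

print("QPSK,   FEC 1/2, GI 1/32:", round(dvbt_payload_mbps(2, 1 / 2, 1 / 32), 2), "Mbit/s")
print("16-QAM, FEC 1/2, GI 1/32:", round(dvbt_payload_mbps(4, 1 / 2, 1 / 32), 2), "Mbit/s")
# -> about 6.0 and 12.1 Mbit/s, consistent with the figures quoted for the TNG links.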

Performances [17]. From the TNG van, transmission is possible in cities within a 6-8 km radius (non-line-of-sight)
and up to 60 km (line-of-sight), the van becoming in the latter case a "virtual feeding point". From the wireless
camera to a receiving site: 2-3 km (non-LOS) or tens of km (LOS) - or a few hundred metres to a TNG (or SNG) van used
as a repeater. The quality of the received signal is monitored using an application [18] [19]. The system is easy to
operate and offers a high uptime.

TNG is used for feeding items and for live “talking head” broadcasts (end-to-end delay 40 ms) for news. The TNG
teams draw maps indicating virtual feeding points and green/orange/red working areas in the cities.

38 http://www.linkres.co.uk/
39 Belgian coast network is not yet operational but will be soon


5.2 Getting News to base, or 'Only connect'?


Martin TURNER, Head of Operations for Newsgathering, BBC, United Kingdom
This presentation is intended to explain the challenges facing broadcasters in a networked world and to demonstrate
some of the ways the BBC is changing its workflows and technology to try to meet those challenges.

In former times it used to be simple… television journalists and crews gathered the news, processed it and delivered
it to base – either by hand, circuit or satellite. Of course, in reality it wasn't simple - up until the early 90s the BBC
issued its reporters with what it charmingly called “mutterboxes” connected with crocodile clips to a phone handset
or to its wall socket. Satellite equipment was bulky, it went wrong, the delivery cost a small fortune and
unaccountably, suspicious airplane passengers would sometimes refuse to accept a big bag of tapes…
Communications were difficult so for much of the time you were out of touch with head office. The return path was
rudimentary and often amounted to little more than a telex or a fax telling you how your piece had gone down.

And then everything changed. Satphones40 became more portable, speeds increased and the first store and
forward devices emerged on the scene. They were affordable, reasonably easy to use and they worked, more or
less. Of course, these had an extraordinary effect on our ability to report from the scene. And then came the M4
satphone, the videophone, the undreamt-of speed of 128 kbit/s. And now the BGAN41, the new Thuraya system42,
Streambox43, Quicklink44, vPoint45 and so on. And in many places HSDPA/HSUPA, WiMAX and, even in remote
areas, within months, 384 kbit/s of bandwidth. But again it's not that simple, because it's no longer a one-way street,
and what we have been building is a network. We have been extending the newsroom into the field so that many of
the activities that might once have taken place at base are now happening on the ground.

A demo video demonstrates how a reporter in the field can perform a federated search in the BBC's Jupiter
Media Asset Management system, import on-demand Flash Video archive files into the field and export his own files
after editing, with a tool based on the Rhozet transcoding solution46.

The ability to access material in this way has revolutionised what gets on air. All this is an extraordinary opportunity
for better story-telling, but it also represents a gigantic challenge, both editorially and technically:
 We risk overloading staff in the field with an ever-increasing amount of information which leaves them no time to
go and find things out – no time to actually be journalists. Increasingly, the point of publication will get closer to the
newsgathering process and we will need to ensure that editorial standards are upheld.
 The technical challenges are just as significant. Staff are expected to have an extraordinary range of skills. Both
journalists and technicians need a comprehensive knowledge and understanding of networking principles. We run
the risk of allowing technical quality to degrade as the public become increasingly capable of creating and
delivering high-quality video themselves. We face the challenge of mixed standards, of concatenating codecs, of
equipment that simply doesn't perform as we expect it to. And we will often be using equipment that offers nothing
approaching the robustness of the broadcast equipment to which we have been accustomed.

The future is the network and the networking of knowledge. Audiences are beginning to experience a truly interactive
relationship with the news – and this requires the extension of the network into the field. So getting news to base is
the easy part. The challenge for the future is to ensure that we exploit the network effect to create a new form of
journalism.

5.3 How to connect your Video Journalists (VJ)


Matthias HAMMER, IRT, Germany

How can I connect my Video Journalists best? …as flexibly as possible! VJs should always be able to use the best connection - whenever they need to.

40 For example, the 'Toko', manufactured by Toko Japan (http://www.toko.co.jp/top/en/index.html), a device that digitally compressed video, stored it and forwarded it through a telephone or a satellite phone.
41 See EBU Networks 2007 seminar report § 5.3
42 http://www.thuraya.com/
43 http://www.streambox.com/
44 http://www.quicklink.tv/
45 http://www.vcon.com/
46 Carbon Coder http://www.rhozet.com

Why does a VJ need a network connection? Because the basic rule for news
is: faster = hotter. The main advantage of VJs is spontaneous acquisition in difficult areas and locations.

What is the payload? It's more “ready files” than streaming.


For audio, the typical bit rate is less than 200 kbit/s (max about 1.5 Mbit/s) - bidirectional Internet streaming can work
for it.
For video (incl. audio) the bit rates are multiples of the audio rate: < 4 Mbit/s… 12 Mbit/s… 25 Mbit/s - video streaming
over the open Internet is difficult (no way!). IRT is testing Multiple Description Coding (MDC) and Scalable Video Coding
(SVC) solutions.

There are numerous ways for a VJ to get connected. The common network interfaces of the VJ equipment are:
Ethernet (≤ 1 Gbit/s), WiFi (nominal up to 56 Mbit/s), mobile (phone) connections (UMTS, HSPA…), WiMAX. Table
4 (Annex) lists the overwhelming crowd of networks worldwide and actual bit rates measured on some relevant
networks.

VPN security is important for VJs. It is useful for the protection of the reporter's identity, for very critical material -
especially in 'monitored' networks (China & Co.), for full LAN integration (with all the administrative overhead), and
for securing applications with low security levels (like FTP). But:
 Transfers can be secure enough if native mechanisms are used: HTTPs, sFTP, file encryption and signing, a
dedicated input server (in a DMZ).
 The security overhead can have a massive negative impact on the transfer throughput; therefore individual
performance tests are essential (a minimal test sketch follows below). For example [14], a 3.4 Mbit/s bit rate on a
circuit without VPN will become 3.3 Mbit/s with the CryptoGuard VPN47 software (with AES encryption) and will
decrease to 1.9 Mbit/s with OpenVPN48, an open source SSL VPN solution.
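The sketch below is one example of such an individual performance test: it simply pushes a fixed amount of data through a TCP connection and reports the achieved rate. In a real test the 'server' end would sit at the other side of the VPN tunnel (or on the dedicated input server) and the 'client' end in the field, so that the with-VPN and without-VPN figures can be compared; here both ends run on localhost purely for illustration.

import socket, threading, time

PAYLOAD_MB = 50
CHUNK = 64 * 1024

def sink_server(port):
    # Accept one connection and discard whatever arrives.
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

port = 50007
threading.Thread(target=sink_server, args=(port,), daemon=True).start()
time.sleep(0.2)                                  # give the server time to start listening

data = b"\x00" * CHUNK
start = time.time()
with socket.create_connection(("127.0.0.1", port)) as sock:
    for _ in range(PAYLOAD_MB * 1024 * 1024 // CHUNK):
        sock.sendall(data)
elapsed = time.time() - start
print(f"{PAYLOAD_MB} MB in {elapsed:.2f} s -> {PAYLOAD_MB * 8 / elapsed:.1f} Mbit/s")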

Throughput is important for the daily VJ business; even fairly low bit rates challenge the available
connections. Here is an example calculation:
5 minutes of video material coded at 4 Mbit/s amounts to 1.2 Gbit (30 MByte/minute);
coded at 25 Mbit/s, it amounts to 7.5 Gbit (187 MByte/minute).
Table 4 (Annex) indicates the corresponding transfer duration of this 5-minute material on relevant networks.
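The same arithmetic can be wrapped in a small helper that also estimates the transfer time over an uplink of a given measured throughput. The example uplink rates below are placeholders, not figures taken from Table 4.

def clip_size_gbit(duration_min, coding_mbps):
    return duration_min * 60 * coding_mbps / 1000

def transfer_minutes(size_gbit, uplink_mbps):
    return size_gbit * 1000 / uplink_mbps / 60

for coding_mbps in (4, 25):
    size = clip_size_gbit(5, coding_mbps)
    print(f"5 min at {coding_mbps} Mbit/s -> {size:.1f} Gbit ({coding_mbps * 60 / 8:.1f} MByte/min of material)")
    for uplink_mbps in (0.384, 2.0, 7.2):   # e.g. satphone-, UMTS- and HSPA-class uplinks (assumed values)
        print(f"   over a {uplink_mbps} Mbit/s uplink: about {transfer_minutes(size, uplink_mbps):.0f} min")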

Finally: how to connect?


Every region and continent has its own focus: e.g. Europe develops more UMTS, Asia more WiMAX. But up to now
there is no "always fitting" connection type.
Fixed line is still the best choice - and available worldwide - with a DSL (plus local WiFi), cable or Ethernet connection.
Most promising for now and especially in the future:
 WiMAX, LTE (4G)
 UMTS-HSUPA, especially in Europe
 Fixed line (xDSL including local WiFi)

6 Real world applications - Production and Broadcast

In earlier sessions of this seminar, you will have learned about the latest technologies which permit the construction
of networks designed to allow contribution transmissions for television production. This final session will provide
examples of the impact of new network technology on television production and broadcast.

6.1 Architecting a fully-networked production environment


David SHEPHARD, Avid, UK

In a production environment, with file-based acquisition – production – delivery, the network is the link between the
'clients' (desktop applications for journalists, researchers, producers, editors, librarians…) and storage (real-time
online / high-performance neartime / library archive) [2]-[7].

47 http://www.compumatica.de/cms/data/index.php?id=21&L=0
48 http://openvpn.net/

Conventional network design does not apply here because of the specific constraints. For example, real-time
collaborative video editing is very demanding and requires networks such as the Avid Unity MediaNetwork with Fibre
Channel (initially 1 Gbit/s, later 2 Gbit/s and currently 4 Gbit/s) or Gigabit Ethernet (for 'lower' resolutions
such as DV25 or IMX30), with 10GE coming soon for uncompressed HD [9]-[11]. The associated Unity ISIS storage
system allows up to 160 GbE and 20 10GE ports [12] and enables the use of higher resolutions such as IMX50/DV50 and
even DNxHD over Gigabit Ethernet, again with 10GE coming soon for uncompressed HD [12]-[13].

Gigabit Ethernet end-to-end uses UDP instead of TCP. Just like VoIP, this type of video protocol should let the
application decide whether it needs to re-request the data. A Fast Ethernet connection (100 Mbit/s) results in dissatisfied
users, especially if they move between machines with Gigabit Ethernet interfaces. Editing applications can co-exist
with corporate Video-over-IP solutions if these are designed correctly.

Should we transfer jumbo frames or fragmented packets? Video data sits in large blocks on the disks, so it should
be ideal! It is more efficient to send larger segments of data, especially with UDP, but this is only sustainable in a
controlled environment (e.g. the server room). Fragmented datagrams can pass over networks that are not jumbo-frame
enabled, but need on-board memory in the network interface cards (NIC).
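The trade-off can be illustrated with the short UDP sketch below: one large datagram leaves fragmentation to the IP layer (so loss of any single fragment loses the whole block), while application-level chunks sized under the MTU stand alone and can be re-requested individually by the receiving application. This is an illustration only, not Avid's actual wire protocol.

import socket

DEST = ("127.0.0.1", 50010)
block = b"\x00" * 60_000          # one ~60 KB block of media data (below the ~64 KB UDP limit)
MTU_PAYLOAD = 1400                # conservative chunk size that avoids IP fragmentation

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Option 1: one large datagram; the sending host's IP layer fragments it
# (about 41 fragments at a 1500-byte MTU).
sock.sendto(block, DEST)

# Option 2: application-level segmentation; each chunk is an independent datagram.
for offset in range(0, len(block), MTU_PAYLOAD):
    sock.sendto(block[offset:offset + MTU_PAYLOAD], DEST)

sock.close()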

The aggregation of multiple micro-servers requires flexible, expandable buffering [17].

Big chassis-based Gig-E switches have different cards with different abilities - every interface card matters!
Different chassis-based switch models from the same manufacturer have different abilities - the biggest is not always the
best!

NIC (Network Interface Card) - The buffers (or descriptors) required for this application would have been considered
server-class just a couple of years ago, but now this class of adapter can be found on high- and medium-grade
platforms. On-board memory is needed for fragmented packets - the 32 KB found on low-cost NIC implementations is
insufficient. If a TCP-based data flow is used there will be a heavy CPU load unless a ToE49-enabled NIC is
used.

A separate production network is best for the core systems and high-bandwidth clients. Corporate-network-based
clients are possible for less demanding applications and resolutions, if the corporate infrastructure can cope and has
the right products in the correct places.

Separate the Network into zones from the core highest bandwidth zone 1 (with direct storage connection) to the
lowest bandwidth (customer network) zone [21]-[23].
In a video production environment, QoS50 generally relates to available bandwidth, latency & jitter. Vendors provide
different QoS tools that you should use on edge and backbone routers to support QoS. Some QoS mechanisms are
aimed at VoIP networks and low-bandwidth circuits, but others apply equally to the LAN. Examples of QoS mechanisms
for Video over IP are: LLQ (low latency queuing), PQ (priority queuing), WFQ (weighted fair queuing), CQ (custom
queuing), PQ-WFQ, CBWFQ (class-based weighted fair queuing). Good old Fibre Channel is very deterministic, and
at 5 ms latency the effect begins to show. Design the Gigabit Ethernet system with sufficient capacity to ensure the
necessary QoS. In a corporate network ensure sufficient QoS exists - the corporate network may need to be re-
configured to correctly recognise and prioritize video traffic.
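As an illustration of what 'correctly recognise and prioritize video traffic' can mean in practice, the sketch below marks a video flow's packets with a DSCP class via a standard socket option, so that suitably configured edge and backbone routers can map them into a priority queue (LLQ, CBWFQ, etc.). The choice of AF41 is an assumed example; the actual class plan depends on the corporate QoS design, and the routers must still be configured to honour the marking.

import socket

AF41 = 34                 # DSCP value for Assured Forwarding class 4, low drop precedence
TOS_BYTE = AF41 << 2      # DSCP occupies the upper 6 bits of the (former) ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Every datagram sent on this socket now carries DSCP AF41 in its IP header.
sock.sendto(b"video payload chunk", ("192.0.2.10", 5004))   # 192.0.2.10 is a documentation address
sock.close()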

Firewalls present many challenges. They are not good at bursty, high-bandwidth, fragmented real-time flows. In a
production environment we deal with REAL-TIME VIDEO, not a streaming VC-1 file! Again, contain the network
design into zones and locate externally-facing clients in the higher-numbered zones, or even in the corporate network.

Buffers, buffers, buffers! Switches with dynamically shared buffers are a better choice - some manufacturers
provide 1U or 2U condensed versions of popular mid-range chassis-based switches. Switches with statically
assigned buffers limit the design scope - this limit affects some large chassis-based switches from some MAJOR
manufacturers, while smaller chassis-based models have shared buffers and are an excellent choice. Buffers in the
network cards are also critical.

In conclusion, understand every link in the chain (some design examples are presented [29]-[31]) and where that link
exists! Every switch, every speed change, every aggregation point MATTERS, and that includes the NIC in the client!

49 TCP offload Engine - a technology used in network interface cards to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as Gigabit Ethernet and 10-Gigabit Ethernet, where the processing overhead of the network stack becomes significant.
50 http://www.techweb.com/encyclopedia/defineterm.jhtml?term=QoS&x=&y=


6.2 Bringing the BBC's networks into the 21st century


Paul LEWIS, SIS (IT Solutions and Services) Project Director, Siemens and Nick JUPP, Head of SI Solutions &
Raman Technical Design Authority, Cable&Wireless, UK

The BBC network objective was to provide a shared infrastructure for all user requirements that delivers the
cheapest overall solution, for:
 BBC Distribution Services ensuring the real time carriage of Vision & Audio (with multiple platforms to be
connected to the interchange points with the Transmission provider).
 BBC Contribution Services: real time studio-to-studio carriage of Vision and audio; packet network for media and
business applications; Storage Area Network; telephony
 English Region Cluster Sites (ERCS) - 52 sites across the UK - to enhance services and capacity in line with core
sites.
The scale of the implemented network [4] is huge:
Large national & regional sites: 18 core sites + 11 associated sites and 1000+ circuits
London BBC Sites: 10 sites and 1000+ circuits
District offices & local Radio sites: 60 small sites plus approximately 60 further minor sites and 600+ circuits
International sites – International Bureau: 30 sites and 100+ circuits.

The Raman name comes from the optical amplification technology which provides, on each fiber, 240 DWDM wavelengths
(= channels) of 10 Gbit/s each [5], on each 'Raman Arc' between two RIPs (Raman Interconnect Points). Raman
technology provides both high-density wavelength multiplexing and the ability to drive over much longer distances
without re-amplification. The double-star topology [9] offers high geo-resilience, allowing each site to be connected
through two routes.

Implementation and testing. From 2005 to October 2007 Cable&Wireless (C&W) and Siemens SIS designed, built
and tested the network [6].
 C&W managed Raman, IP WAN, Broadcast & IP Network Services, Audio
 Siemens SIS managed the integration to Core Raman sites, the integration to BBC systems, LAN, and IP
telephony

C&W uses SDH transport over the Raman transport layer with the following equipment [7]: Marconi SDH multiplexers
with 10G, 2.5G, STM-1, 2 Mbit/s & GbE interfaces; Scientific Atlanta iLynx for SDI, PAL & ASI interfaces; existing
ATM switches are utilised for ATM services - audio, both analogue and AES3 (delivered via ATM AES47); the packet-
network terminal equipment is Cisco "Catalyst" in the C&W core, Foundry at the edges.

SIS & C&W undertook 12 weeks of circuit testing with 20 resources working with the BBC: 1300 circuits tested; bit-
error tests performed; port tests performed; break tests of fibre; impacts on all layers to highlight issues ???; IP
network early gateway connection to confirm monitoring tools ??? Through this methodology the Raman network
was clean of technical issues, ready for migration, and it was a key work package to ensure the success of the
project. 1200 individual tests were performed over 4 months. In order to test the monitoring tools and service
processes, a gateway connection was made in advance of migration. The tests focused on operational interfaces, and a gap
analysis was conducted on what needed to be done differently. The joint SIS-C&W-BBC approach in the design and the
implementation of the tests saved time and ensured a comprehensive testing strategy. The operational model
followed the ITIL industry-standard framework (Information Technology Infrastructure Library), providing a common
framework and language to all organisations.

Migration. The Broadcast Migration took 9 weeks, and the IP Network 6 weeks. Both activities were run in parallel,
with the IP network starting two weeks before the broadcast migrations. All migrations were conducted between 20:00 &
05:30. These migrations were all completed without any service outage or impact to the BBC, thanks to
working in partnership, to the SIS and C&W knowledge of the BBC's Transmitter, Broadcast & IT networks (old and
new), and to planning… planning and more planning! The project has been delivered on time and on budget.
The network is fully operational and performing to design requirements.

English Regions Cluster Sites - This was the first transformation project after Raman. It addressed the connectivity
needs of BBC local Radio sites, consolidating Telephony, Data and Broadcast Services and enabling initiatives such as
programme sharing, and multicast data and audio in the future.
It consisted of the migration from multiple discrete services to a fully managed, converged solution with a resilient IP-
over-Ethernet WAN service, offering QoS, multicast, support of multiple applications, and integration to Raman Core
sites.
 The IP Data Service ensured the transition:
o From a point-to-point E1-based service, with ISDN2 backup, unicast, and no QoS;
o To a managed IP solution on a 10M/100M main link, with 2M/10M backup link, multicast, QoS, scalable,
integrated with Raman Core sites and between Siemens and C&W Management systems.
 The Broadcast Audio Service, with a transition:
o From a Musicline2000 product, unmanaged point-to-point audio, 15 kHz mono, J.41, over E1 - limited scale;
o To a C&W Audio-over-IP solution using the APT Oslo platform, a managed solution, multicast, scalable, with
enhanced APT-x low latency coding, 22 kHz stereo audio, supporting 5.1 configuration (if required by BBC), and
potential for future integration with BBC-wide scheduling systems.
 The Telephony Service with a transition:
o From fractional E1 voice links with legacy TDM PBXs
o To a Siemens Managed IP Telephony solution, with the central HiPath 8000 system, IP handsets, the VoIP
traffic using the new IP WAN infrastructure.

6.3 DVB-H small gap fillers: home repeaters improving indoor coverage
Davide MILANESIO, Centro Ricerche e Innovazione Tecnologica, RAI, Italy

DVB-H is a system specified for bringing broadcast services to battery-powered handheld receivers. It is an
extension of the DVB-T system [3], transmitted in the UHF band, multi-channel, robust against disturbances,
suitable for indoor, pedestrian and high-speed (train) reception, and designed to reduce battery consumption.

The video is encoded in MPEG-4 AVC/H.264:


 either in CIF format (352x288), carrying 10 - 11 channels (with QPSK modulation and ½ FEC), at a bit rate
between 350 - 400 kbit/s.
 or in QCIF format (176x144), carrying 15 - 30 channels (with QPSK modulation and ½ FEC), at a bit rate between
128 - 256 kbit/s.
The H.264 video data is transported over RTP/UDP/IP with an additional Multi-Protocol Encapsulation FEC
protection (MPE-FEC) and interleaving. The IP streams are then encapsulated in a DVB MPEG-2 Transport Stream.
DVB-H supports the datacast of files using the FLUTE protocol51.
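A rough sanity check of these channel counts is sketched below. The usable multiplex capacity and the MPE-FEC / encapsulation overheads are assumed typical values, not figures from the presentation; the point is only to show how the per-channel bit rate drives the channel count.

def channels_per_mux(mux_capacity_kbps, video_kbps, mpe_fec_overhead=0.25, audio_signalling_kbps=64):
    # Each service = video + audio/signalling, inflated by the MPE-FEC and encapsulation overhead.
    per_service_kbps = (video_kbps + audio_signalling_kbps) * (1 + mpe_fec_overhead)
    return int(mux_capacity_kbps / per_service_kbps)

MUX_KBPS = 6000   # assumed usable capacity of a QPSK, 1/2-FEC DVB-H multiplex in an 8 MHz channel

for video_kbps in (128, 256, 350, 400):
    print(f"{video_kbps:3d} kbit/s video -> about {channels_per_mux(MUX_KBPS, video_kbps)} channels")

# -> roughly 25, 15, 11 and 10 channels respectively, of the same order as the
#    15-30 QCIF and 10-11 CIF figures quoted above.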

DVB-H is already operational worldwide: in Italy (since 2006), Finland (since 2007), Austria-Switzerland (June 2008),
Albania, Asia (India, Malaysia, Philippines, Vietnam) and Africa (Kenya, Namibia, Nigeria). But these networks are
mainly planned for outdoor coverage not for indoor! Therefore traditional DVB-T network planning is not sufficient
for DVB-H, because indoor DVB-H reception requires higher electromagnetic field strength, since the receiving
antenna is integrated in the terminal and not on the roof! [6]

To improve indoor coverage, the main transmitters could be completed by a number of low power urban
transmitters. But electromagnetic radiation limits have to be respected and there is a risk of interference on
traditional TV services in the existing MATV distribution systems [7].

DVB-H 'small gap fillers' are another way to improve indoor coverage, using low-power on-channel home repeaters.
These consumer-grade devices can be autonomously installed by final users in their private homes, without the help
of a professional installer. The device is connected to the existing in-building cable distribution system [8]. Its coverage
area is that of a standard apartment, i.e. about 100 m2, enabling the interested users to be immediately reached by DVB-H
services.

Standardisation – Since these devices radiate in the UHF band, a licence would be needed in the absence of a specific
regulation. Therefore a new standard is necessary, also to avoid low-quality (illegal) devices appearing on the
market, potentially causing dangerous interference to existing services (e.g. analogue or digital TV).
The DVB-H Small Gap Fillers Task Force, with 21 companies involved (broadcasters, network operators, regulation
authorities, manufacturers), has prepared a Technical Specification. It was approved by the DVB Steering
Board in June 2008 for publication as a DVB Blue Book. It will then be submitted to ETSI/CEPT for publication as a
European Norm (requiring voting by National Standards Organisations).

51 http://www.ietf.org/rfc/rfc3926.txt

The DVB-H Small Gap Filler includes two sections [10]: a signal processing section (for on-channel filtering and
amplification) and a built-in DVB-H receiver for output-signal quality monitoring.
This last stage integrates an automatic power control mechanism [12], based on the measured quality, in order to
avoid interference with existing services, even in case of a failure of the device electronics or a mistake by the user.
The frequency response [11] shows a high attenuation of the adjacent channels, and out-of-band emissions comply
with existing regulations.

Validation. The Technical Specifications have been validated in laboratory trials on real hardware prototypes [13]
and in the framework of the European Project 'CELTIC B21C'52.
Coverage tests were conducted in the Rai-CRIT laboratories and in a real flat, and proved adequate coverage in
standard apartments (e.g. 100 m2) [14].
The disturbance to adjacent TV channels was tested using 2 reference scenarios [15], with positive results [16].

Scenario 1 - TV / STB connected to another plug of the in-building cable distribution network:
video SNR degradation within 1 dB, not noticeable on the picture.
Scenario 2 - TV / STB in another room, connected to an indoor amplified antenna (1 wall of separation, 3 m distance):
video SNR degradation within 2 dB if 2 SAW filters are used, not noticeable on the picture; degradation within 5 dB with
more relaxed masks (out of standard). The result is OK also in a pessimistic condition (adjacent analogue TV channel
received by the repeater at a level 30 dB higher than the DVB-H signal): no visible impairments using repeaters with
2 SAW filters.

A CATV network could allow a possible future extension of the DVB-H Small Gap Filler concept. The DVB-H
multiplex could be transported on the CATV network on behalf of the broadcaster [17]. Indoor DVB-H coverage
would be improved in areas where TV aerials are not very popular but where a CATV network is available. CATV would
be used only as a carrier, with no DVB-C signals involved. This would allow the DVB-H network deployment costs to be
reduced.
But this involves an additional requirement, the possibility of frequency conversion, since the CATV network might
not cover the full UHF band. The coherence of the output frequency then has to be guaranteed, i.e. the output
frequency has to be cross-checked against the Network Information Table of the incoming stream. Moreover, the
network operator should guarantee synchronisation with the traditional DVB-H transmitters, as in a standard SFN
[18]. Currently, frequency conversion is not included in the Specification.

52 Celtic ('Cooperation for a sustained European Leadership in Telecommunications') - Broadcast for the 21st Century - http://www.celtic-initiative.org/Projects/B21C/abstract.asp


Abbreviations and acronyms

Note: Some terms may be specific to a speaker or/and his/her organization

* points to another definition

10GE 10-Gigabit Ethernet


16-QAM 16 Quadrature amplitude modulation
1D 1-dimensional
2.5G Enhanced 2nd generation of wireless communication systems (GPRS*, EDGE*)
2D 2-dimensional
3.5G Enhanced 3rd generation of wireless communication systems (HSDPA*)
3G 3rd generation of wireless communication systems (UMTS*, WCDMA*)
3GPP 3rd Generation partnership project
4G 4th generation of wireless communication systems (LTE*)
a.k.a. also known as
A/V Audio/Video, Audiovisual
AAC Advanced Audio Coding
AAC-ELD AAC* Enhanced Low Delay
AAC-LD AAC* Low Delay (MPEG-4 Audio)
AC-3 Audio Coding technology (Dolby)
ACK Acknowledgement message
ACL Access Control List
ADSL Asymmetric Digital Subscriber Line
AES Audio Engineering Society
AL Application Layer
AMR-WB Adaptive Multi-Rate WideBand (G.722.2)
AoIP Audio over IP (broadband audio)
APN Accelerated Private Network
ASI Asynchronous Serial Interface (DVB*)
ATM Asynchronous Transfer Mode
AVC Advanced Video Coding (MPEG-4)
AVC-I AVC* - Intra
B Bidirectional predicted picture (MPEG)
BER Bit Error Ratio
BGAN Broadband Global Area Network (Inmarsat) - cf. EBU Networks2007 seminar report § 5.3
BGP Border Gateway Protocol (Internet)
BW Bandwidth
C&W Cable & Wireless http://www.cw.com/new/
CABAC Context-based Adaptive Binary Arithmetic Coding (MPEG-4 AVC)
CATV Cable TV
CAVLC Context-based Adaptive Variable Length Coding (MPEG-4 AVC)
CBR Constant Bit Rate
CBWFQ Class-Based Weighted Fair Queuing
CCD Charge Coupled Device
CDMA Code-Division Multiple Access
CEPT Conférence Européenne des administrations des Postes et Télécommunications - European Conference of
Postal and Telecommunications Administrations
cf 'Confer', consider, compare
CIF Common Intermediate Format (352*288 pixels)

CJI Cognacq-Jay Image (TDF* subsidiary)
CMOS Complementary Metal-Oxide Semiconductor
CMS Content Management System
COFDM Coded OFDM*
CoP# Code of Practice number… (Pro-MPEG Forum)
CPU Central Processing Unit
CQ Custom Queuing
CRIT Centro Ricerche e Innovazione Tecnologica (RAI)
CSRC Contribution Source (in RTP*)
CWDM Coarse Wavelength Division Multiplex(ing)
D-Cinema Digital Cinema
D/SVC Scalable Video Coding (EBU Project Group)
DAB Digital Audio Broadcasting
DCI Digital Cinema Initiative
DCM Dynamic Channel Path Management
DCT Discrete Cosine Transform
DiffServ Differentiated Services
DMB Digital Multimedia Broadcasting
DMZ DeMilitarised Zone
DNxHD High Definition encoding (Avid)
http://www.avid.com/resources/whitepapers/DNxHDWP3.pdf?featureID=882&marketID=
DoS Denial-of-service attack
DRM Digital Radio Mondiale
DSCP Differentiated Services Code Point
DSL Digital Subscriber Line
DTT(B) Digital Terrestrial Television (Broadcasting)
DV Digital Video cassette recording and compression format
DVB Digital Video Broadcasting
DVB-C DVB - Cable
DVB-H DVB - Handheld
DVB-RCS DVB with Return Channel via Satellite
DVB-T Digital Video Broadcasting - Terrestrial
DWDM Dense Wavelength Division Multiplex(ing)
DWT Discrete Wavelet Transform
e.g., eg exempli gratia, for example
e.m., EM Electro-magnetic
E/S Earth Station
E1 European PDH system level 1 (2.048 Mbit/s)
EDGE Enhanced Data rates for GSM Evolution
EF Expedited Forwarding
EN European Norm/Standard (ETSI*)
END 6.3.6 ???
ENG Electronic News Gathering
ERCS English Region Cluster Sites (BBC)
ERP Effective Radiated Power
ETSI European Telecommunications Standards Institute
FC Fibre Channel
FDM Frequency Division Multiplexing
FEC Forward Error Correction

FiNE Fiber Network Eurovision (EBU)
http://www.netinsight.net/pdf/040823_Casestudy_EBU_2.pdf
FLUTE File delivery over Unidirectional Transport (RFC 3926)
FM Frequency Modulation
FRR Fast Reroute
FTP File Transfer Protocol
G.711 Pulse code modulation (PCM*) of voice frequencies (ITU-T)
G.722 7 kHz audio-coding within 64 kbit/s
Gb/s, Gbps Gigabit per second, Gbit/s
GbE, GE 1-Gigabit Ethernet
GOP Group Of Pictures (MPEG)
GPRS General Packet Radio Service
GPS Global Positioning System
GSM Global System for Mobile Communication
GUI Graphical User Interface
H Horizontal
H/W, HW Hardware
HA High Availability
HANC ANCillary data in the Horizontal video blanking interval
HBA Host Bus Adapter
HD(TV) High-Definition (Television)
HDRR Haut Débit Réseau Régional (TDF* subsidiary)
HD-SDI High Definition SDI (1.5 Gbit/s)
HE Head-End (Cable TV)
HE-AAC High Efficiency AAC* (MPEG-4 Audio)
HF High Frequency
HFC Hybrid Fiber/Coaxial
HL High Level (MPEG-2)
HQ Headquarters
HSDPA High-Speed Downlink Packet Access
HSPA High-Speed Packet Access
HSUPA High-Speed Uplink Packet Access
HTTP HyperText Transfer Protocol
HTTPs HTTP* using a version of the SSL* or TLS* protocols
I Intra coded picture (MPEG)
i.e., ie id est, that is to say
I/HDCC High Definition Contribution Codec (EBU Project Group)
IBC International Broadcasting Convention
ICE-net Nordisk Mobiltelefon
ICT Irreversible Color Transform
IDCT Inverse DCT*
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
IGMP Internet Group Management Protocol
IGP Interior Gateway Protocol
IMS IP Multimedia Subsystem
IMX (MPEG-) Digital Video Tape Recorder recording and compression (MPEG-2 422P@ML) format (Sony)
iNews Newsroom Computer System (Avid)
IOS-XR Self-healing and self-defending operating system (Cisco)

IP Internet Protocol
IPoDWDM IP over DWDM*
IPR Intellectual Property Rights
IPTV Internet Protocol Television, Television over IP
IRD Integrated Receiver-Decoder (-> STB*)
IRT Institut für Rundfunktechnik (Germany)
ISDN Integrated Services Digital Network
ISIS Infinitely Scalable Intelligent Storage (Avid)
ISMA Internet Streaming Media Alliance
ISO International Organization for Standardization
ISOG (WBU*-) International Satellite Operations Group
http://www.nabanet.com/wbuArea/members/ISOG.html
ISP Internet Service Provider
IT Information Technology ('informatique')
ITIL Information Technology Infrastructure Library
ITU International Telecommunication Union
JP2K, J2K JPEG2000
KLV Key-Length-Value coding (MXF*)
Lambda Wavelength of light ( WDM*)
LAN Local Area Network
LDP Label Distribution Protocol
LLQ Low Latency Queuing for VoIP*
LOF Loss of Frame
LOS Line-of-sight
LPCM Linear PCM*
LSP Label Switched Path
LSR Label Switch Router
LTE Long Term Evolution (4G mobile system)
MAC Medium Access Control layer
MAN Metropolitan Area Network
MATV Master Antenna Television
Mb/s, Mbps Megabit per second
MDC Multiple Description Coding
MF-TDMA Multi-Frequency Time Division Multiplex Access
MDI Media Delivery Index (RFC*4445)
MIB Management Information Base (SNMP*)
ML Main Level (MPEG-2)
MoFRR Multicast-only Fast Reroute
MOSPF Multicast extension to OSPF
MP Main Profile (MPEG-2)
MP2 MPEG-1 Audio Layer II
MPE Multi-Protocol Encapsulation (DVB-H*)
MPEG Motion Picture Experts Group
MPLS Multi-Protocol Label Switching
MS Microsoft
MTR Multi-Topology Routing
MXF Material eXchange Format
n/a Not applicable / not available
N/ACIP Audio Contribution over IP (EBU Project Group)
N/IPM IP Measurements (EBU Project Group)

N/VCIP Video Contribution over IP (EBU Project Group)
NAT Network Address Translation
NC Network Connection
NGN Next Generation Networks
NHK Nippon Hoso Kyokai (Japan)
NIC Network Interface Card
NIT Network Information Table (DVB - SI)
NLOS Non line-of-sight
NMS Network Management System
MRC Maximal-Ratio Combining
NOC Network Operations Center
OB Outside Broadcasting
OFDM Orthogonal Frequency Division Multiplex(ing)
ORT Operational Reliability Testing
OSI Open Systems Interconnection
OSPF Open Shortest Path First
P Predicted picture (MPEG)
PBX Private Branch eXchange
PCM Pulse Code Modulation (audio)
PDH Plesiochronous Digital Hierarchy
PE Provider Edge
PHY Physical Layer (OSI* model)
PIM Protocol Independent Multicast
PJSIP VoIP* GPL free software
PoE Power over Ethernet
POP Point Of Presence
PPP Point-to-Point Protocol
PQ Priority Queuing
PSTN Public Switched Telephone Network
QC Quality Control
QCIF Quarter Common Intermediate Format (176*144 pixels)
QoS Quality of Service
QPSK Quadrature Phase Shift Keying
RCS Return Channel per Satellite (DVB)
RF Radio Frequency
RFC Request For Comments (IETF standard)
RGB Red-Green-Blue (colour model)
RSTP Rapid Spanning Tree Protocol
RSVP Resource Reservation Protocol
RT Real-Time
RTBF Radio-Télévision Belge Francophone
RTCP Real-Time Control Protocol (Internet)
RTP Real-time Transport Protocol (RFC*3550)
RTSP Real-Time Streaming Protocol (Internet)
Rx Receiver
S/N, SNR Signal-to-Noise ratio
S/W, SW Software
SAN Storage Area Network
SAP Session Announcement Protocol (RFC*2974)

SAW (filter) Surface Acoustic Wave
SCSI Small Computer System Interface
SD(TV) Standard Definition (Television)
SDH Synchronous Digital Hierarchy
SDI Serial Digital Interface (270 Mbit/s)
SDLC Symmetric Digital Subscription Line
SDP Session Description Protocol (RFC4566)
SDSL Symmetric Digital Subscriber Line
SFN Single Frequency Network (DVB-T)
SFTP SSH (Secure Shell) FTP
SGF Small Gap Filler
SI System Information (DVB)
SIP Session Initiation Protocol (RFC*3261)
SIS Siemens IT Solutions and Services
SLA Service Level Agreement
SMPTE Society of Motion Picture and Television Engineers
SMTP Simple Mail Transfer Protocol
SNG Satellite News Gathering
SOG Summer Olympic Games
SONET Synchronous Optical Network (SDH* in U.S.A.)
SP Service/System Provider
SPH Service Points Hauts (TDF*)
SR Service Router
SR Swedish Radio
SRT Service Readiness Test
SSL Secure Socket Layer
SSM Source Specific Multicast http://www.ietf.org/ids.by.wg/ssm.html
SSO Stateful Switchover
STB Steering Board
STB Set-top box (-> IRD*)
STM-1 Synchronous Transport Module Level 1 (155 Mbit/s)
STM-4 Synchronous Transport Module Level 4 (622 Mbit/s)
SVC Scalable Video Coding
T-VIPS Norwegian company
tbd To be determined
TCP Transmission Control Protocol (Internet)
TDD Time Division Duplex(ing) (UMTS*)
TDF Télé-Diffusion de France
TDM Time Division Multiplex(ing)
TE Traffic Engineering
TLS Transport Layer Security
TMS Transport Multi-Services (TDF*)
TNG Terrestrial News Gathering (VRT)
ToE (card) TCP/IP offload Engine. An iSCSI TOE card offloads the Gigabit Ethernet and SCSI packet processing from
the CPU.
TOS Type-Of-Service
TR Temporal Redundancy
TS Transport Stream (MPEG-2)
Tx Transmitter
UA Unit Address

UAT User Acceptance Test
UDP User Datagram Protocol (RFC*768)
UHF Ultra High Frequency
UID Universal IDentifier
UTRA TDD UMTS Terrestrial Radio Access Time Division Duplex
UMTS Universal Mobile Telecommunications System
UPA(HS) Uplink Packet Access (High-Speed)
V Vertical
VALID Video and Audio Line-up and Identification (http://www.pro-bel.com/products/C132/ )
VANC ANCillary data in the Vertical video blanking interval
VBR Variable Bit Rate
VC-2 SMPTE code for the BBC's Dirac Video Codec
VCEG Video Coding Experts Group (ITU-T)
VJ Video Journalist
VLAN Virtual LAN*
VLC Variable Length Coding
VoIP Video over IP
VoIP Voice over IP (narrowband audio)
VP Virtual Path
VPLS Virtual Private LAN Services
VPN Virtual Private Network
VQEG Video Quality Experts Group (ITU)
VRT Vlaamse Radio- en Televisieomroep, Flemish Radio- and Television Network (Belgium)
vs. versus; against, compared to
VSAT Very Small Aperture Terminal
VSF Video Services Forum http://videoservicesforum.net/index.shtml
W Watt
w. With
WAN Wide Area Network
W-CDMA, WCDMA Wideband CDMA*
WBU World Broadcasting Unions
http://www.nabanet.com/wbuArea/members/about.asp
WC World Championship
WDM Wavelength Division Multiplexing ( Lambda*)
WFQ Weighted Fair Queuing
WiFi Wireless Fidelity
WiMAX Worldwide Interoperability for Microwave Access (IEEE 802.16)
WLAN Wireless LAN IEEE 802.11(a,b & g)
WOG Winter Olympic Games
xDSL x DSL* (x = Asymmetric or Symmetric uplink/downlink)
YCbCr Digital luminance and colour difference information

Table 1: Towards lossless video transport - Deployment scenarios (§ 1.1) [© Cisco]

1) Fast Convergence or Fast Reroute (FRR) [8]
Principle: the network reconverges** / reroutes* on a core network failure (link or node).
Advantages: lowest bandwidth requirements in the working and failure cases; lowest solution cost and complexity; no requirement for network path diversity - works for all topologies. The simplest and cheapest design / operational approach for a Service Provider is to have such behaviours optimised by default in the software and hardware implementations; it is applicable to all its services.
Requirements: requires a fast-converging network to minimize the visible impact of loss.
Drawbacks: is not hitless - will result in a visible artifact to the end users.

2) Application Layer Forward Error Correction (AL-FEC) [15] [16]
Principle: adds redundancy to the transmitted data to allow the receiver to detect and correct errors (within some bound), without the need to resend any data.
Advantages: supports hitless recovery from loss due to core network failures if the loss can be constrained; no requirement for network path diversity - works for all topologies.
Requirements: requires a fast-converging network to minimize the AL-FEC overhead.
Drawbacks: higher overall bandwidth consumed in the failure case compared to "live / live" (4); incurs delay - longer outages require a larger overhead or larger block sizes (more delay).

3) Temporal Redundancy (TR) [18]
Principle: the transmitted stream is broken into blocks; each block is then sent twice, separated in time. If the block separation period is greater than the loss of connectivity, at least one packet should be received and video stream play-out will be uninterrupted.
Advantages: supports hitless recovery from loss due to core network failures if the loss can be constrained.
Requirements: requires a fast-converging network to minimize the block separation period.
Drawbacks: incurs 100% overhead; incurs delay - longer outages require a larger block separation period.

4) Spatial (Path) Diversity ("live / live") [20]
Principle: two streams are sent over diverse paths between the sender and the receiver.
Advantages: supports hitless recovery from loss due to core network failures if the network has stream split and merge functions (e.g. Dynamic Channel path Management); lower overall bandwidth consumed in the failure case compared to FEC; introduces no delay if the paths have equal propagation delays.
Requirements: may require network-level techniques to ensure spatial diversity: MoFRR (5), Multi-Topology Routing, Traffic Engineering; the required techniques depend upon the topology.

5) Multicast-only Fast Reroute (MoFRR)
Principle: deliver two disjoint branches of the same PIM (Protocol Independent Multicast) SSM (Source Specific Multicast) tree to the same Provider Edge. The Provider Edge locally switches to the backup branch upon detecting a failure on the primary branch. http://www.nanog.org/mtg-0802/farinacci.html
Advantages: hitless - the Provider Edge uses the two branches to repair losses and present lossless data to its IGMP (Internet Group Management Protocol) neighbors; a simple approach from a design, deployment and operations perspective.
Requirements: MPLS and Multi-Topology Routing are options in topologies that do not support MoFRR.

(*) According to the number of multicast groups (400 - 800 - 4000), the median / max reconvergence time for all channels following a network failure may amount to 200/290 ms - 260/380 ms (no more than 1 frame lost) - 510/880 ms [11]. To improve it, prefix prioritisation allows important groups (e.g. premium channels) to converge first, and developments with IP-optical integration can further reduce the outage to sub-20 ms in many cases (lossless in some cases) by identifying a degraded link using optical data and signalling before the traffic starts failing [12].
(**) Fast rerouting: the routing protocol detects the failure and computes an alternate path around the failure [10]. But loss of connectivity is experienced before the video stream connectivity is restored.

Tableau 2: Practical measurement and practical network performance (§ 1.2) [© Siemens]

Practical measurement Practical Network performance


Audio latency With a commercially available Lip-sync measurement system; VALID from It is still routine to find codecs that make only a modest effort to get lip-sync correct. ln all
and Pro-Bel. It uses a test signal with time markers in the picture and sound by cases dealt with, the IP packets carry Transport Stream and there really is no good
lip-sync which a VALID reader can measure the audio to video timing and display reason for getting it wrong. Some have had large errors that are clearly unacceptable.
the result directly in millisecond [7] [8]. For latency measurement [6], the Some have fixed errors that on their own are acceptable but add to the perennial lip-sync
generator's video signal is connected direct to the reader while the audio is problems. A few just get it exactly right.
passed through the system under test. The measurement is therefore the
latency of the audio path. This is applicable to both sound only and sound
with vision circuits.
Network latency The simple approach is to measure the round trip latency [10]. Ping is a Siemens is currently delivering an audio network specified as better than 50ms one way
simple and effective tool but has to be used with care. If QoS is used - but it is accepted that the measurement is round trip and therefore better than 100ms.
essential when the route is shared with data traffic - the Ping traffic through The route distances range up to 400km but it is really the number of routers in the circuit
another port is unlikely to receive the same QoS marking as the media that determine the IP latency. The overall latency is dependent on compression system
traffic. However, it can still be used when the connection is being set up and settings, other things being equal. ln the case of audio, the lower bit rate, the more
and before business traffic is carried. The media stream can be run over time is required to make up an IP packet. Live vision circuits have to be timed so a
the connection before any other traffic is routed. synchroniser is required. This can be built into the codec or separate but either way it will
It is possible to measure latency over a single route using test equipment add to the overall latency.
with GPS references at each end [9]. So far these tests have confirmed the
round trip latency is simply double the one way latency. This goes wrong if
there are unexpected constraints in the network configuration but these are
very likely to show up in other ways, usually as severe packet loss.
Network jitter Again, Ping is a useful tool if used appropriately [15]-[17]. Networks have a There is no clear cut definition of jitter in this context. The issue is the variation in IP
tendency to suffer 'data storms' - a useful catchall phrase for sudden and packet transit time so as to exceed the decoder buffer capacity. Some coding systems
unpredictable changes in the latency and jitter. If the ping time is stable can deliver a very steady flow of IP data. Others have a steady long term average but
while the media traffic is running, it is a very good sign. As ping time jitter show substantial variation in the short term.
starts to rise, the likelihood is media traffic is affected as well. I have
captured the effects of data storms on ping time and correlated them with
damage to the media stream. Sometimes however, network operators are
reluctant to accept this simple test as evidence of problems.
More sophisticated testers can measure the performance with full bit rate
data in both the Expedited Forwarding and the Best Effort QoS levels.
Ideally, this type of testing is done as part of the acceptance testing of a
network connection by the service provider.
Video jitter Some codecs maintain their IP buffers at the mid point by pulling the output [19] [20]
video frequency. If there is a significant jump in the IP latency, as may
happen when switching to a backup route, the output video can become so
far off frequency that downstream equipment cannot follow it.
Unfortunately, there is no clear cut specification for SDI clock frequency
tolerances in the broadcast standards for manufacturers to follow. A
synchroniser is normally needed anyway and they accept the widest
frequency range.
© EBU Networks 2008 Seminar / 23 - 24 June 2008
Reproduction prohibited without written permission of the EBU Technical Department & EBU International Training
38
Practical measurement Practical Network performance
This type of codec is deriving the SDI clock frequency from the IP data rate
so SOI clock jitter can occur if the design is not sufficiently careful.
Decoders with built in synchronising have a reference input and deal with
these problems internally but add cost and latency.
Glitches
  Practical measurement: A fixed tone signal is passed through the circuit and the output is filtered, typically using a distortion analyser, to remove the fundamental [11]. Any pulses over and above the noise and distortion mark a glitch. The pulses from the distortion analysis can trigger a digital storage scope, providing a record of the glitch in the tone and a count of the events [12]. The test is run for an agreed period, 12 to 48 hours. In practice, though, if glitches are really a problem, they will usually start within the first few minutes. The exception is where the test is run in conjunction with less predictable traffic over the same route. (A software sketch of this tone test is given after this table.)
  Practical network performance: The users naturally want zero audible and visible glitches, and we have tested audio circuits on this basis. While this is a good goal to achieve, it is not practical in a managed service: the circuit has to be taken out of service for a long period just to make the measurement, and this is after the event that gave the customer cause to complain. It is more realistic, therefore, to translate this requirement into the performance of the network and codecs. In practice, this information can only be measured by the decoder, so codecs with good traffic analysis and logging are an advantage.
  When comparing with pre-existing circuits, e.g. E1/T1, it is interesting to note that the maximum permitted bit error rate for the connection is often quoted as 1 in 10^9. This would imply about 2 glitches an hour in a high quality audio circuit, but in reality E1 normally performs several orders of magnitude better than this, so user expectations are much higher.
  The audibility of glitches is a highly contentious subject. The relationship between network events, the technical effect on the audio path and the audibility of the effect in real program material of different types could be the subject of a huge survey.
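The latency point in the first row - the lower the audio bit rate, the longer it takes to make up an IP packet - is simple packetisation arithmetic. A minimal sketch, with an assumed payload size and example coding rates that are not taken from the report:

```python
# Packetisation delay: time needed to accumulate one IP packet's worth of coded
# audio before it can be sent. Payload size and bit rates are assumed examples.
def packetisation_delay_ms(payload_bytes: int, bitrate_kbps: float) -> float:
    """Milliseconds to fill one payload at the given audio coding rate."""
    return payload_bytes * 8 / bitrate_kbps   # bits divided by kbit/s gives ms

for rate in (64, 128, 256, 384):              # kbit/s, example coding rates
    print(f"{rate:3d} kbit/s -> {packetisation_delay_ms(960, rate):6.1f} ms per 960-byte payload")
```

At 64 kbit/s the example payload alone already accounts for 120 ms, before any codec or network delay is added.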
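The ping-based checks described in the Network jitter row can be approximated with a short script. This is an illustration only, not the GPS-referenced test equipment or commercial testers mentioned above; the host name, polling interval and 'data storm' threshold are assumed values.

```python
# Log round-trip times at regular intervals and flag sudden rises in latency or
# jitter, so they can later be correlated with damage to the media stream.
import re
import statistics
import subprocess
import time

HOST = "codec-far-end.example.net"   # hypothetical far-end address
INTERVAL_S = 1.0                     # one sample per second
WINDOW = 60                          # samples used for the baseline

def ping_once(host):
    """One RTT in ms using the system ping (Linux iputils syntax), or None on loss."""
    out = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                         capture_output=True, text=True)
    m = re.search(r"time=([\d.]+)", out.stdout)
    return float(m.group(1)) if m else None

samples = []
while True:
    rtt = ping_once(HOST)
    if rtt is None:
        print(time.strftime("%H:%M:%S"), "packet lost")
    else:
        samples = (samples + [rtt])[-WINDOW:]
        base = statistics.median(samples)
        jitter = statistics.pstdev(samples) if len(samples) > 1 else 0.0
        flag = "  <-- possible data storm" if rtt > base + 5 * max(jitter, 1.0) else ""
        print(time.strftime("%H:%M:%S"),
              f"rtt={rtt:.1f} ms  median={base:.1f} ms  jitter={jitter:.1f} ms{flag}")
    time.sleep(INTERVAL_S)
```

If the logged RTT stays flat while the media is running, that is the good sign described above; a rising trend is the cue to check the media stream's own error counters.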
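The Video jitter row describes codecs that pull their output frequency from the fill level of the IP buffer. A toy model of that mechanism, with assumed clock and pull-range figures, illustrates how a large latency step that moves the buffer fill can drag the derived clock well away from nominal:

```python
# Toy model (assumed values) of a decoder that trims its output clock in
# proportion to how far its IP buffer has drifted from the mid-point.
NOMINAL_HZ = 27_000_000      # assumed nominal video-related clock
MAX_TRIM_PPM = 500.0         # assumed, deliberately wide, pull range in ppm

def trimmed_clock_hz(buffer_fill: float) -> float:
    """buffer_fill runs 0..1; 0.5 is the mid-point the codec tries to hold."""
    error = buffer_fill - 0.5                                    # -0.5 .. +0.5
    ppm = max(-MAX_TRIM_PPM, min(MAX_TRIM_PPM, 2 * error * MAX_TRIM_PPM))
    return NOMINAL_HZ * (1 + ppm * 1e-6)

for fill in (0.5, 0.6, 0.9):   # e.g. before and after a step change in IP latency
    print(f"buffer fill {fill:.1f} -> output clock {trimmed_clock_hz(fill):,.0f} Hz")
```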
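A rough software equivalent of the fixed-tone test in the Glitches row, for cases where the circuit output has been recorded to a file: notch out the tone and count residual pulses that stand well above the noise and distortion floor. The file name, tone frequency, thresholds and the 576 kbit/s circuit rate used in the closing comment are assumptions, not values from the report.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

TONE_HZ = 1000.0          # assumed test-tone frequency
HOLD_OFF_S = 0.1          # merge pulses closer than this into one glitch event

fs, audio = wavfile.read("circuit_output.wav")      # hypothetical capture file
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio[:, 0]                             # analyse one channel
audio /= np.max(np.abs(audio)) or 1.0

# Remove the fundamental, leaving noise, distortion and any glitches.
b, a = iirnotch(TONE_HZ, Q=30.0, fs=fs)
residual = filtfilt(b, a, audio)

# Threshold: several times the RMS of the residual, i.e. above noise + distortion.
rms = np.sqrt(np.mean(residual ** 2))
hits = np.abs(residual) > 8 * rms

# Count separate events rather than individual samples.
glitches = 0
last_hit = -int(HOLD_OFF_S * fs)
for i in np.flatnonzero(hits):
    if i - last_hit > HOLD_OFF_S * fs:
        glitches += 1
    last_hit = i
print(f"{glitches} glitch event(s) in {len(audio) / fs / 3600:.2f} h of recording")

# For comparison: a quoted BER of 1e-9 on an assumed 576 kbit/s coded audio
# circuit gives roughly 576e3 * 3600 * 1e-9, i.e. about 2 error events per hour,
# in line with the figure quoted above.
```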

Table 3: HD contribution codecs (§ 4.1) [© EBU]
(Codecs compared: MPEG-2, MPEG-4 AVC / H.264, JPEG2000, Dirac Pro, MPEG-4 SVC / H.264.)
Standard
  MPEG-2: ISO/IEC 13818
  MPEG-4 AVC / H.264: ISO/IEC 14496
  JPEG2000: ISO/IEC 15444-1
  Dirac Pro: SMPTE VC-2 (developed by BBC Research)
  MPEG-4 SVC / H.264: Amendment to the H.264/AVC standard
Principle
  MPEG-2: DCT-based system. Spatial and temporal prediction with variable length statistical coding (VLC).
  MPEG-4 AVC / H.264: Spatial and temporal compression with a choice between context-adaptive arithmetic coding (CABAC) or variable length coding (CAVLC).
  JPEG2000: Wavelet-based (DWT) compression system coupled to an arithmetic coder. Lossless and lossy compression.
  Dirac Pro: Mixes the advantages of the wavelet transform and motion compensation. Similar wavelet filters to JPEG2000.
Structure
  MPEG-2: 6 profiles (prediction tools supported) and 4 levels (image formats and max bit rate supported). For contribution: Main profile (4:2:2), I, P and B frame prediction, @ High level (1920x1080), up to 80 Mbit/s.
  MPEG-4 AVC / H.264: 7 profiles and 5 levels. For contribution (at the moment): High profile 4:2:2, 10 bit, Level 4.1 (1080i and 720p).
  JPEG2000: Only DCI profiles exist at the moment. Broadcast application profiles under investigation in JPEG.
  MPEG-4 SVC / H.264: Follows H.264 profiles.
Scalability
  MPEG-4 AVC / H.264: Provided by the SVC amendment.
  JPEG2000: Highly scalable: spatially (wavelet transform), temporally (I frames) and in quality (statistical coder).
  Dirac Pro: Spatially scalable due to the wavelet.
  MPEG-4 SVC / H.264: Scalability on top of H.264. High coding efficiency. Scalability is provided layer-wise: base layer H.264/AVC compatible; enhancement layers SVC decodable.
Pros
  MPEG-2: Several products available. Well-known technology. Cheap. High compression capability with good quality at low bit rate (suitable for SNGs). Enhanced error resilience with a synchronization mechanism in the codestream.
  MPEG-4 AVC / H.264: Proven 50% benefit compared to MPEG-2 in the primary distribution bit-rate range. State-of-the-art codec (using context-based arithmetic coding).
  JPEG2000: Already in use by some broadcasters for internal contribution. Used by DCI for 2k and 4k image format archival. Scalability might be useful to embed several spatially different streams in the same feed. Strong error resilience of the codestream, enabled by synchronisation, useful for transmission in error-prone channels. Support for 10-bit depth, 4:2:2. Strong robustness to cascading. Products available. High bit-depth range. I-frame only, with no interdependence between frames; valuable for editing. Visually friendly artifacts. Low latency (approx. 45 ms). Part 1 of the standard is IPR-free, with no license fees. Proof of better performance than AVC-I for high quality large images [] (reference software).
  Dirac Pro: Open source and license-free, so no commercial risk. Very low latency (of the order of 6 ms) since it uses variable length coding for entropy coding. (Hardware & software.) Handles formats from 720p/50, 1080i25 and 1080p/50 up to 4K (future-proof). Lossless and visually lossless compression.
  MPEG-4 SVC / H.264: All pros of H.264/AVC. Backward compatible with H.264/AVC. Primarily considered for distribution but might prove useful in contribution; needs 10 to 30% less bit rate than simulcast.
Cons
  MPEG-2: No longer a state-of-the-art compression system. Better technologies exist (H.264/AVC, Dirac, etc.), but the gain is still unknown. Bandwidth-greedy for high quality sequence exchange [see the BBC presentation - EBU Production Technology 2007 seminar report § 1.2]: 80 Mbit/s needed for high quality content exchange, which is too expensive on satellite. GoP-based system, with potential for an emphasised GoP-pumping visual effect (pulsing loss of resolution) in cascades, and additional decoding delay due to inter-frame coding. DCT-based, so blocky artifacts that are not visually friendly.
  MPEG-4 AVC / H.264: Not that many products available with the full contribution profile tools (Hi422, Level 4.1). High coding latency (over 800 ms). GoP-based: additional decoding delay for post-production and a GoP-pumping effect. Gain over MPEG-2 at high (contribution) bit rates not yet defined.
  JPEG2000: Needs very high bit rates to provide very good visual quality (over 100 Mbit/s, depending on the image format) under cascaded and shifted generations. Not that many products available.
  Dirac Pro: No comparison with other systems made so far.
  MPEG-4 SVC / H.264: All cons of H.264/AVC. No product available yet.

Table 4: The overwhelming crowd of networks worldwide! [© IRT]

Access technique | Bit rate (uplink) | Measured performance of relevant networks | Transfer duration of a 5-minute item (4 Mbit/s material)
(The duration column follows directly from the item size and the measured throughput; a small calculation sketch is given after the table.)
Fixed line
Tel. modem 56 kbit/s
ISDN 128 kbit/s
ADSL 640 kbit/s
SDSL 4.6 Mbit/s Arcor Business ~ 1.95 Mbit/s ~ 10 min.
VDSL 50 Mbit/s T-Home/IRT ~ 2.94 Mbit/s ~ 6.7 min
Cable network 40 Mbit/s
Powerline (PLC) 200 Mbit/s
Wireless
WLAN (802.11b/g) 11 / 54 Mbit/s InternetCafe Munich ~ 870 kbit/s ~ 30 min.
EBU WiFi ~ 3.55 Mbit/s ~ 5.6 min.
Satellite link 1 Mbit/s
GSM 12 kbit/s
GPRS 171.2 kbit/s
HSCSD 57.6 kbit/s
EDGE 473 kbit/s
CDMA One 14.4 kbit/s
UMTS 128 kbit/s Vodafone ~ 120 kbit/s ~ 180 min.
HSUPA 5.8 Mbit/s
CDMA2000 144 kbit/s
EV-DO 5.4 Mbit/s
Flash OFDM 900 kbit/s
LTE / HSUPA 50 Mbit/s Vodafone ~ 1.35 Mbit/s ~ 14.5 min
WiMAX (mobile) 7 Mbit/s IRT-TESTnet ~ 1.15 Mbit/s ~ 17 min.
iBURST 346 kbit/s
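
The transfer durations in the last column come from dividing the size of the 5-minute, 4 Mbit/s item by the measured uplink throughput; small differences against the table are down to rounding and protocol overhead. A minimal sketch reusing a few of the measured figures from the table:

```python
# Transfer time = (item duration * item bit rate) / measured uplink throughput.
MATERIAL_S = 5 * 60           # 5-minute item
MATERIAL_BIT_RATE = 4e6       # 4 Mbit/s material

def transfer_minutes(throughput_bit_s: float) -> float:
    return MATERIAL_S * MATERIAL_BIT_RATE / throughput_bit_s / 60

for name, throughput in [("SDSL (Arcor Business)", 1.95e6),
                         ("VDSL (T-Home/IRT)",     2.94e6),
                         ("UMTS (Vodafone)",       120e3),
                         ("WiMAX (IRT-TESTnet)",   1.15e6)]:
    print(f"{name:24s} ~ {transfer_minutes(throughput):6.1f} min")
```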
