06 COVER STORY
Designing Wi-Fi Networks for High
Ethernet. By Josh Taylor
SUBMISSION POLICY
ICT TODAY is published bimonthly in January/February, March/April, May/June, July/August, September/October, and November/December by BICSI, Inc., and is mailed Standard A to BICSI members,
RCDDs, RITPs, RTPMs, DCDCs, BICSI Installers and Technicians and ESS, NTS, OSP and WD credential holders. ICT TODAY subscription is included in BICSI members’ annual dues and is available
to others through a purchased yearly subscription.
ICT TODAY welcomes and encourages submissions and suggestions from its readers. Articles of a technical, vendor-neutral nature are gladly accepted for publication with approval from the Editorial
Review Board. However, BICSI, Inc., reserves the right to edit and alter such material for space or other considerations and to publish or otherwise use such material. The articles, opinions and ideas expressed
herein are the sole responsibility of the contributing authors and do not necessarily reflect the opinion of BICSI, its members or its staff. BICSI is not liable in any way, manner or form for the articles, opinions
and ideas, and readers are urged to exercise professional caution in undertaking any of the recommendations or suggestions made by authors. No part of this publication may be reproduced in any form or by any
means, electronic or mechanical, without permission from BICSI, Inc.
ADVERTISING: Advertising rates and information are provided upon request. Contact the BICSI Sales Department for information at +1 813.979.1991 or 800.242.7405 (U.S. and Canada toll-free) or
sales@bicsi.org. Publication of advertising should not be deemed as endorsement by BICSI, Inc. BICSI reserves the right in its sole and absolute discretion to reject any advertisement at any time by any party.
POSTMASTER: Send change of address notices to BICSI, Customer Care, 8610 Hidden River Pkwy, Tampa, FL 33637-1000; Phone: +1 813.979.1991 or 800.242.7405 (U.S. and Canada toll-free)
© Copyright BICSI, 2016. All rights reserved. BICSI and RCDD are registered trademarks of BICSI, Inc.
May/June 2016
ICT TODAY
THE OFFICIAL TRADE JOURNAL OF BICSI
BOARD OF DIRECTORS
U.S. North-Central Region Director: Chris Scharrer, RCDD, NTS, OSP
U.S. Northeast Region Director: Matthew Odell, RCDD
U.S. South-Central Region Director: Todd W. Taylor, RCDD, NTS, OSP
U.S. Southeast Region Director: Charles “Chuck” Wilson, RCDD, NTS, OSP
U.S. Western Region Director: Larry Gillen, RCDD, ESS, OSP, CTS
Executive Director & Chief Executive Officer: John D. Clark Jr., CAE

ADVERTISER INDEX
Dura-Line (duraline.com).......................13
Hitachi (hca.hitachi-cable.com)..............17
Hyperline (hyperline.com).....Inside Back Cover

PUBLISHER
BICSI, Inc., 8610 Hidden River Pkwy., Tampa, FL 33637-1000
Phone: +1 813.979.1991 Web: www.bicsi.org

EDITOR
Steve Cardone, icttoday@bicsi.org

PUBLICATION STAFF
Wendy Hummel, Creative
Amy Morrison, Content Editor
Clarke Hammersley, Technical Editor

ADVERTISING SALES
+1 813.979.1991 or sales@bicsi.org

CONTRIBUTE TO ICT TODAY
ICT Today is BICSI’s premier publication that aims to provide authoritative, vendor-neutral coverage and insight on next generation and emerging technologies, standards, trends and applications in the global ICT community. Consider sharing your industry knowledge and expertise by becoming a contributing writer to this informative publication.
FROM THE PRESIDENT, BRIAN ENSIGN, RCDD, NTS, OSP, RTPM, CSI
Growing BICSI Around the Globe

“The primary application for 40GBASE-T is very specific and the market is clearly defined—data center interconnection from servers to edge switches.”
By Jussi Kiviniemi
FIGURE 1: Example of simple Wi-Fi coverage planning.
It is also imperative to note that signal strength is a two-way street. It is not enough that the mobile device can hear the AP; the AP needs to hear the mobile device, as well. Therefore, even if the AP radio power levels are increased to the maximum and the mobile device can hear the AP very well, the connection still might not work due to a failure from the mobile device back to the AP.

In Wi-Fi networks, distributed antenna system (DAS) solutions are not commonly used. Instead, the antenna is located close to or, more typically, built into the AP (Figure 1). That, however, does not mean the coverage pattern is circular, as it is affected by walls, elevator shafts, and other objects. Because of this, specific Wi-Fi planning tools are used to design coverage that meets the user requirements.

In areas that are challenging for radio network design, such as warehouses, the APs may be mounted 15 meters (m [50 feet (ft)]) up in the ceiling, while the client devices are on the floor level. In cases like these, directional antennas are required to direct the signal toward the client devices. Antennas must be tilted horizontally, as well as vertically. 3D planning tools help in figuring out how to align the antennas.

In the end, what is sufficient coverage? For a data-only network, it may suffice to provide a signal strength of -75 decibel milliwatts (dBm) or better. However, -75 dBm signal strength may not be enough to achieve crystal-clear voice call quality. Values ranging between -65 to -67 dBm are often referred to as industry standards for deploying high-quality voice networks.

Coverage refers to the signal strength received from the strongest of the audible Wi-Fi APs. However, to ensure Wi-Fi connectivity when roaming from one AP to another, sufficient signal strength from the second strongest AP is also required. A typical design guideline would be -67 dBm signal strength everywhere for the strongest AP, and -75 dBm signal strength everywhere for the second strongest AP.
FIGURE 4: Wi-Fi signals appear as curvy shapes in a spectrum analyzer.
unsupported by the device manufacturer, especially with older 5 GHz client devices. DFS support on the APs, as well as the client device channel limitations, calls for extra caution with Wi-Fi channel assignment, especially with outdoor Wi-Fi deployments.

Since one AP typically includes both 2.4 GHz and 5 GHz radios, and the 5 GHz frequency space has multiple times more interference-free channels, it often becomes impossible to find a satisfactory channel plan for 2.4 GHz without turning off some of the 2.4 GHz radios. It may sound counterintuitive, but disabling some of the 2.4 GHz radios to minimize overlap actually increases capacity.

Penalty for Interference
A question often heard is “Why does our wireless network become unusable at lunchtime?” Wi-Fi radios operate on license-free frequencies. This allows anyone to set up a Wi-Fi network, but it also means Wi-Fi needs to compete with other devices using these frequencies. Such devices include microwave ovens, wireless video cameras, Bluetooth®, baby monitors, home automation systems, radar and more.

Microwave ovens, for example, are wide-band interferers from the Wi-Fi point of view. A microwave oven leaks radiation on all 2.4 GHz Wi-Fi channels. One oven does not kill the frequency entirely, but it has a significant impact on capacity for the nearby devices. Many wireless video cameras, on the other hand, utilize only a few of the 2.4 GHz channels, leaving room for a smart Wi-Fi engineer (or a smart Wi-Fi infrastructure) to cope with the situation by adjusting the channel separation of the Wi-Fi network.

Interfering devices can easily be detected by a spectrum analyzer (Figure 4). There are two types of spectrum analyzers, and a wireless engineer would ideally have both tools available:
• A portable spectrum analyzer that can be connected to a laptop via USB. It is easily portable for performing site surveys and troubleshooting in desired locations. The downside is the spectrum analysis is periodic in nature at best, not constant.
• A spectrum analyzer built into APs allows for constant monitoring of the spectrum once the network is up and running. However, the measurements are taken at the ceiling level, not where the users are.
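The 2.4 GHz channel-plan problem described above can be sketched in a few lines. This greedy pass is a toy illustration, not a vendor's radio-resource-management algorithm; it assumes the three standard non-overlapping 2.4 GHz channels (1, 6, 11) and a hypothetical neighbor map of which APs hear each other.

```python
NON_OVERLAPPING_2_4_GHZ = [1, 6, 11]

def assign_channels(neighbors):
    """neighbors: dict mapping each AP name to the set of AP names it can
    hear. Each AP greedily takes the channel least used among its
    already-assigned neighbors, spreading co-channel overlap."""
    plan = {}
    for ap in sorted(neighbors):
        taken = [plan[n] for n in neighbors[ap] if n in plan]
        plan[ap] = min(NON_OVERLAPPING_2_4_GHZ, key=taken.count)
    return plan
```

When more radios contend than three channels can separate, a pass like this still produces co-channel neighbors, which is what motivates disabling some 2.4 GHz radios entirely.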
Standard    Band        Max rate    Notes
802.11n     2.4/5 GHz   450 Mb/s    Wider channels, more spatial streams = more speed.
802.11ac    5 GHz       1.3 Gb/s    Even wider channels, even better modulation = even more speed.
Data Center Infrastructure Management Solutions:
HOW TO HIT A MOVING TARGET
By Bob Potter

For some 20 years, data center design evolved more or less linearly—that is to say, as expected and without significant deviation. Data centers grew larger, added racks and increased density, and the power and cooling systems they required evolved accordingly. This is the story of the traditional enterprise data center.
As the industry matured, energy use and costs skyrocketed, end-user demands intensified, and new technologies emerged. Organizations started to question some long-standing assumptions about what a data center is and could be:
• How much computing will be needed five years from now?
• Is there justification for building a facility to house 500 racks when only 100 are needed today?
• Is it affordable to build for 500 when, someday, 1,000 might be required?
• Whatever is decided, how can energy costs be managed?

Those questions, along with the increasing complexity of the traditional data center model, drove the development of data center infrastructure management (DCIM). Early DCIM solutions were designed with enterprise deployments in mind, focused on reining in the mostly unchecked waste associated with the overprovisioned facilities of the early 2000s. DCIM would provide a tool to track and manage assets across those increasingly virtualized environments.

But an interesting thing happened on the way to DCIM delivery—the data center evolved again. Server capacity utilization increased, while energy efficiency and visibility across systems degenerated. The industry started to explore and eventually embrace new architectures and approaches. Some organizations abandoned owned facilities altogether and moved to cost-efficient colocations or the cloud.

DCIM Evolution and the Modern Data Center
The earliest DCIM products launched in a new world in which the benefits of unprecedented visibility and control across the physical layers of the network were clear and unquestioned, but the early adopters largely were large traditional data centers. DCIM immediately became and remains a valuable solution in cloud, colocation and hyperscale environments. However, traditional data centers were far from extinct.

It would be too strong a statement to say the first DCIM solutions were outdated on arrival—after all, those early adopters enjoyed better asset management, increased efficiency and improved availability. But almost as soon as it arrived, DCIM had to change to better meet the needs of the evolving information ecosystem. As DCIM platforms have become more sophisticated and the benefits more pronounced, more data center and IT managers are investigating DCIM for their networks—even if those networks do not fit the big enterprise data center model. What they have found, especially over the last two years, is a changing DCIM landscape, with scalable options that allow organizations to build a DCIM solution that fits their data center and can grow as the organization grows.

Today’s data center is no single thing. There are many traditional enterprise facilities, but they are more complex than ever before. Although conventional AC power architectures remain the primary choice in these environments, alternative architectures are emerging—everything from high-voltage DC to telco-tested modular DC power to hybrid models. Acceptable temperatures are rising, but intelligent thermal management is critical to maintaining availability and optimizing energy use. Demands for speed and security are driving computing closer to the edge. This is forcing providers to rethink network architectures and vendors to reduce their equipment footprint and increase visibility and remote management capabilities.

Applying DCIM to Distributed Networks
Consider distributed data centers—an increasingly common model. They often have multiple small computing modules spread across different locations, providing local computing and storage while still networking with each other and with the small data center at the network’s center. Individually, these are simple IT resources—often just a single rack of equipment. But collectively, these are complex networks with significant management challenges that the earliest DCIM solutions were never designed to address. So, just as the data center has evolved, so has DCIM. Today’s best solutions are flexible, scalable and modular, suitable for immense data centers, cloud facilities, colocation pods or those distributed data centers with hundreds of scattered computing nodes. In fact, because of the disparate nature of today’s data centers, effective DCIM is more important and more valuable than ever before.
As data center infrastructure management (DCIM) platforms have become more sophisticated and the benefits more pronounced, more data center and IT managers are investigating DCIM for their networks—even if those networks do not fit the big enterprise data center model.
Take the case of Cambridge University in England, a school that measures its history not in decades but in centuries. As the campus and various departments have grown, evolved and embraced the IT revolution over the past 30 years, separate micro networks have popped up across the campus. Integration with the school’s main data center was scattered at best. As a result, the campus ended up with more than 200 server rooms serving 120 departments, and most of them operated independently.

This is not especially unusual, but it is exceptionally inefficient. These separate, independent IT nodes included equipment from multiple vendors, eliminating the benefits of consolidated management, standardized service delivery and improved security and availability. The result: some 200 loosely connected server rooms with little or no consistency across equipment or operational practices.

Eventually, the university prioritized improved visibility and management of these scattered computing clusters with some clear objectives: to optimize the performance and efficiency of its IT systems and reduce operational costs. The school consolidated several of the IT rooms in a single modern data center and unified the new facility and remaining distributed computing sites under a single DCIM system. Although originally conceived with a traditional data center model in mind, applying DCIM to improve visibility and control of a web of hard-to-see IT assets is exactly what the technology was developed to do. The various locations and disconnected nature of the Cambridge facilities just added another layer of complexity. Ultimately, the desire was the same as it is for any DCIM customer: to see and control multiple assets—not just servers, but every component across the network—from a single location.

Cambridge deployed a DCIM solution that enabled efficient management of multi-vendor IT, power and cooling resources. The system configures and organizes data from all of Cambridge’s computing facilities and translates it into a unified, actionable language, helping the university achieve higher efficiency—with a projected power usage effectiveness (PUE) of 1.2 and improved performance. Simply put, it has made everything the university does around its IT systems smarter.

The Stages of DCIM
There are four stages to effective DCIM deployment, each addressing critical questions which, when answered, improve network performance and efficiency. One of the great benefits of today’s modular DCIM platforms is the ability to deploy at any of these four stages, making each of them a potential entry point for tailored DCIM implementation instead of hurdles to be cleared on a long road to data center optimization.

STAGE 1: Data Capture and Planning
• What and where are assets in the data center?
• How are they interconnected?
• Is there space, cooling and power to meet future needs?
• How can assets be efficiently commissioned and decommissioned?

Benefits: Improves planning and provides data needed to improve efficiency.

STAGE 2: Monitor and Access
• How are assets operating?
• Are there real-time notifications of alarms and alerts?
• How does a server get back up and running?
• Can planning tools be populated with actual performance data?

Benefits: Provides early warnings that minimize service requirements and ensure availability.
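The projected power usage effectiveness (PUE) of 1.2 cited for the Cambridge deployment is simply the ratio of total facility power to IT equipment power. A minimal sketch follows; the kilowatt figures in the comment are made-up examples, not Cambridge data.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power (IT plus cooling,
    power distribution losses, lighting, etc.) divided by IT power.
    A PUE of 1.0 would mean every watt reaches the IT equipment."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 600 kW in total for a 500 kW IT load runs at PUE 1.2.
```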
STAGE 3: Analyze and Diagnose
• How can the life of the data center be extended?
• How can mean time to repair (MTTR) be reduced?
• How can infrastructure be synched with virtualization automation?

Benefits: Reduces MTTR and service requirements, manages infrastructure capacity, and tracks performance.

STAGE 4: Recommend and Automate
• How can potential failures be anticipated so that compute and physical loads can be automatically shifted to eliminate downtime?
• How can efficiency be optimized across my data center?

Benefits: Ensures availability and optimal performance.

Finding the Right Fit
The key to answering all these questions is finding a DCIM solution that helps the data center become a business asset instead of a business expense. DCIM technologies are not one-size-fits-all solutions to whatever is ailing the data center. Right-sizing is important, and finding a vendor capable of matching the right solution to your data center is critical to optimizing your IT investment. Choosing a vendor that has the right fit today but no option for growth is as inefficient as choosing one who only offers large, enterprise solutions you hope to grow into—someday. The single biggest development in DCIM in the last two years—and the most significant differentiator between today’s solutions—is scalability.

That capability was one of the critical data points in recent DCIM evaluations from Gartner and IDC. Gartner’s 2015 DCIM Magic Quadrant and the IDC MarketScape: Worldwide Data Center Infrastructure Management 2015 Vendor Analysis evaluated current DCIM offerings on many different elements of capabilities and segment strategy, including scalability and modular capabilities. Those capabilities allow businesses to choose DCIM bundles that meet immediate needs while allowing for future growth, thus ensuring IT remains a critical business asset.

AUTHOR BIOGRAPHY: Bob Potter is a Senior Product Marketing Manager for Emerson Network Power’s Software business unit. Bob joined Emerson in 2011 as Product Marketing Manager for the Trellis Platform software solution and has managed the marketing efforts from initial launch in 2012 to date. Prior to joining Emerson, Bob held software product management and marketing roles with Stanley Black & Decker and the McGraw-Hill Companies. He led several successful software product launches for these organizations. Bob holds a B.S. and an M.B.A. in Marketing from Grand Valley State University. He can be reached at bob.potter@emerson.com.
SPEAKER PLACEMENT AND WIRING GUIDELINES
FOR PUBLIC ADDRESS/PAGING/AV SYSTEMS
Public address (PA)/paging/AV systems have been a part of the information and communications technology (ICT) industry for more than 90 years. Based largely on technical developments and research conducted within the telephone and radio industries by organizations such as Bell Labs and RCA early in the 20th century, the invention of the vacuum tube and the loudspeaker allowed for the commercial deployment of these systems following World War I. Early applications included movie theaters, concurrent with the development of motion pictures with soundtracks, and outdoor sports stadiums.

Further developments in later years expanded the reach of these systems into an increasing number and variety of applications, both public and private. These include locations such as schools, offices, retail establishments and transportation facilities.
By Robert B. Hertling Jr., RCDD, OSP

SPEAKER CHARACTERISTICS
Since speakers have an electrical input which produces a mechanical output (namely, sound), there are both electrical and mechanical (acoustical) characteristics that are functions of speaker performance and which need to be taken into consideration in design and installation activities.

The acoustical parameters that are important considerations in speaker selection and spacing/location include:
• Audio Frequency Response: Measured in Hertz (Hz), frequency response is the range of audio frequencies that the speaker can faithfully reproduce. For successful reproduction of speech in a PA/paging system, the accepted minimum frequency response of all components within the system is typically 350 Hz–5 kilohertz (kHz). For AV systems, the accepted minimum frequency response of all components within the system can be as broad as 20 Hz–20 kHz.
• Sensitivity: Measured in decibels (dB) and sound pressure level (SPL). This is the on-axis (i.e., directly in front of or below the speaker) loudness produced by the speaker in dB SPL measured at a specific distance—usually ≈1 meter (m [3.3 feet (ft)]) with a specified electrical power input (usually 1 watt [W]). SPL can range from 0 dB (threshold of hearing for a typical person) to 120 dB (threshold of pain for a typical person). As an example, a subway train entering a station typically can generate 90 dB SPL measured at ≈6.1 m (20 ft). An average person’s voice at a conversational level generates 70 dB SPL measured at ≈0.3 m (1 ft). SPL represents sound energy intensity—what is commonly referred to as loudness. An increase of 10 dB SPL is perceived by a typical listener as doubling the volume of the sound. It is important to note that SPL is an acoustical/mechanical measurement, not an electrical measurement.
• Dispersion angle: Measured in degrees, this is the angular value within which the SPL is not more than 6 dB below the on-axis level (the sensitivity level) for the speaker’s overall frequency response or a specific frequency specified by the speaker manufacturer.

For the purposes of this discussion, the following parameters for a typical ceiling-mounted 8-inch (in) coaxial speaker will be used:
• Frequency response—60 Hz–16 kHz
• Sensitivity—97 dB/1 W/1 m SPL
• Dispersion angle—50 degrees off axis

ACOUSTIC CONSIDERATIONS
Acoustic considerations for speaker spacing/location include:
• Space dimensions and configuration (length, width, height, circular, rectangular, etc.) for the area to be covered.
• Ambient noise (ranges from low, as in an office environment, to high, as in an industrial environment) within the area to be covered.
• Surface characteristics (reflective—concrete, ceramic tile and similar surfaces or absorptive—carpet, fiber ceiling tile and similar surfaces) within the area to be covered.

For the purposes of this discussion, the following parameters will be used:
• Space dimensions (includes length, width and ceiling height) and configuration (rectangular)
• Ambient noise (medium to high)
• Surface characteristics (reflective)

DETERMINING COVERAGE AREAS
To determine coverage areas, the designer should:
• Obtain space measurements and prepare plan and elevation views.
• Note existing surfaces and finishes and identify areas not to be covered.
• Identify special conditions, such as open archways to stairwells or other spaces, or abrupt changes in dimensions, such as ceiling heights.
• Calculate the base coverage area for an individual speaker based on the manufacturer’s specifications.

Figure 1 on the next page provides the methodology for calculating the base coverage area, using basic trigonometry and the dispersion angle and sensitivity values noted earlier. While the calculations shown here are being done by hand in order to demonstrate the individual steps involved, there are commercial software applications available to perform these calculations and the subsequent steps involved in determining speaker placement.
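The base-coverage trigonometry can be checked with a few lines of code. This is a sketch of the Figure 1 geometry using the article's example speaker (50 degree off-axis dispersion angle): the radius of the coverage circle is the tangent of the dispersion angle times the distance below the speaker.

```python
import math

def coverage_radius_m(dispersion_angle_deg: float, distance_m: float) -> float:
    """Radius of the circle covered at a given distance below a ceiling
    speaker, from its off-axis dispersion angle (Figure 1 geometry)."""
    return math.tan(math.radians(dispersion_angle_deg)) * distance_m

def coverage_area_m2(radius_m: float) -> float:
    """Base coverage area of one speaker (a circle of the given radius)."""
    return math.pi * radius_m ** 2
```

At 1 m this reproduces the 1.1918 m radius shown in Figure 1; at 2 m the radius roughly doubles.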
FIGURE 1: Methodology for calculating the base coverage area. (Elevation and plan views: 1 watt input, 50 degree dispersion angle, speaker measured at 1 meter; radius r = tan 50° × 1 = 1.1918 meters; 97 dB SPL.)

Once the speaker’s base coverage area has been determined, the next step is to determine the location-specific coverage area by including two additional measurements—distance from speaker to listener ear height and desired SPL at listener ear height—into the calculations. Assume a 12-ft ceiling height from the floor, a 5-ft listener ear height, and a minimum of 75 dB SPL and maximum of 95 dB SPL at listener ear height. Note that, in many cases, the level of 95 dB SPL at listener ear height is the maximum allowable in order to avoid the potential for hearing damage to people present within the covered area.

There are two important concepts to remember about SPL values:
• If the reference distance for the initial SPL calculation is doubled, the SPL will decrease by 6 dB (e.g., base coverage area SPL at ≈1 m [3.28 ft] is 97 dB at 1 W; at ≈2 m [6.6 ft] the SPL will be 91 dB at 1 W). This principle is also known as the Inverse Square Law.
• If the reference input electrical power for the initial SPL calculation is doubled, the SPL will increase by 3 dB (e.g., base coverage area SPL at ≈1 m [3.28 ft] is 97 dB at 1 W; at ≈1 m [3.28 ft] the SPL will be 100 dB at 2 W).

FIGURE 2: Calculating the location-specific coverage area for an individual speaker using data for space dimensions, ambient noise and surface characteristics. (Elevation and plan views: 1 watt input, 50 degree dispersion angle, speaker measured at 2 meters [6.6 ft]; radius r = tan 50° × 2 = 2.3836 meters [7.8 ft]; 91 dB SPL.)
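Both rules combine into one standard formula, SPL = sensitivity + 10·log10(P) − 20·log10(d). The helper below is a sketch: the 10·log10 power term gives the +3 dB per doubling of watts, and the −20·log10 distance term gives the inverse-square −6 dB per doubling of distance.

```python
import math

def spl_db(sensitivity_db: float, power_w: float, distance_m: float) -> float:
    """SPL at a listener, from a speaker rated sensitivity_db (dB SPL at
    1 W input, measured at 1 m): add 10*log10 of the electrical power in
    watts, subtract 20*log10 of the distance in meters (inverse square law)."""
    return sensitivity_db + 10 * math.log10(power_w) - 20 * math.log10(distance_m)
```

For the article's 97 dB/1 W/1 m speaker this returns about 91 dB at 2 m on 1 W and about 100 dB at 1 m on 2 W, matching the two examples above.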
• Increase or decrease the height of the speaker above the floor.
• Increase or decrease the input electrical power to the speaker.
• Select a different speaker—usually a speaker with a smaller dispersion angle can provide a greater SPL output at ≈1 m (3.28 ft) than one with a wide dispersion angle. If this is the case, the calculations must be redone before proceeding with the speaker layout.

FIGURE 4: Edge-to-edge spacing method.

SPEAKER LAYOUT
Once the final coverage area for an individual speaker has been determined, the next step is to evaluate and select a speaker layout methodology. Two basic patterns exist as shown in Figure 3 on page 21: square and hexagonal. The choice of pattern depends on the best fit between the space dimensions and the speaker coverage areas. Also, the pattern orientation can be rotated as needed to fit the shape of the space.
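For a quick count on a square grid, edge-to-edge spacing means adjacent coverage circles just touch, so the center-to-center spacing is twice the coverage radius. The estimator below is a rough planning sketch under that assumption; a real layout adjusts rows and columns to the room geometry and the chosen spacing method.

```python
import math

def square_grid_speaker_count(length_m: float, width_m: float,
                              coverage_radius_m: float) -> int:
    """Rough speaker count for a rectangular space on a square-pattern,
    edge-to-edge layout (center-to-center spacing = 2 * coverage radius)."""
    spacing = 2 * coverage_radius_m
    return math.ceil(length_m / spacing) * math.ceil(width_m / spacing)
```

A 12 m by 8 m room with a 2 m coverage radius would need about six speakers under this estimate.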
3. Edge-to-center: This method utilizes the highest speaker density commonly used for PA/paging systems. It is the best methodology for areas with poor acoustics or significant background noise. The spacing distance is equal to r. It requires an additional quantity of speakers over that required for the minimum overlap and edge-to-edge spacing methodologies. This methodology is shown in Figure 6.

ELECTRICAL CONSIDERATIONS
Once the speaker spacing and location issues have been addressed in the design, the next step is determining the electrical requirements and constraints. For systems using a voice over Internet protocol (VoIP)/Ethernet-based distribution methodology, these factors can include horizontal cabling length limits (100 m [328 ft]) per the governing TIA standards and powering availability for the individual speaker locations. Powering considerations can include Power over Ethernet (PoE) limits and/or availability of local 120 VAC power.
For systems using a constant voltage distribution methodology, factors to be considered can include the following:

• Audio power requirement determination: These systems utilize nominal voltage level audio output circuits from audio power amplifiers. Typically this voltage is 25 or 70.7 volts (V), but in some instances it could be 100 V or higher. The speakers are wired in parallel to the audio output circuits via multi-tap matching transformers at each speaker location. The matching transformers are used to match the impedance of the speaker voice coil (typically 8 ohms) to the high impedance of the constant voltage audio output circuits of the amplifiers and allow, through the multi-taps on the primary of the transformer, selection of the power in watts to be provided to the speaker. Matching transformers can be purchased with taps as low as ¼ watt (W) up to values as high as 15 W, along with various levels in between these values. The selected transformer must always match the speaker voice coil impedance and must not allow the power to exceed the speaker manufacturer's maximum; otherwise, damage to the speakers and other system components, such as amplifiers, may result. Calculating the audio power required involves obtaining all of the individual speaker power requirements based on the matching transformer tap settings and adding them together (e.g., 25 speakers each tapped at 1 W = 25 W).

• Audio circuit configuration: Determine the number of circuits required to connect the speakers to the amplifiers. Zoning requirements, separation of spaces within the same zone, circuit redundancy and pathway/raceway configuration are just some of the factors to be considered in determining the number of circuits required.
Next, assign speakers to each circuit. This is best done by utilizing one-line or riser diagrams with each speaker uniquely identified to its location on the plan and elevation drawings. Finally, ensure that circuit connections are polarized (+/-) correctly. Even though audio circuit connections and components such as matching transformers and speakers are part of an alternating current (AC) circuit in a constant voltage distribution methodology, the polarity directly determines the phase relationship of both the electrical and acoustical signals within the system. Incorrect polarity can create an "out of phase" condition where speaker outputs can interfere with or, in extreme cases, cancel each other, resulting in reduced or no sound levels and/or distortion.

• Audio circuit sizing: Usually, a minimum of 16 AWG wire is specified for audio output circuits between the amplifier outputs and the speakers. In a 70.7 V PA system, 16 AWG wire is limited to a maximum safe current of 6 amperes (A), resulting in a maximum power capacity of 420 W at a maximum distance of 90 ft, assuming a 0.5 dB (12.5 percent) line loss. In some cases, wire size may have to be increased to meet the power and/or distance limitations within a particular circuit (Table 1).

• Amplifier loading: According to manufacturer recommendations/best practices, the connected load should not exceed 80 percent of the amplifier power rating—for a 100 W rated amplifier, the total connected load should not exceed 80 W, including circuit losses.
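The tap-summing and 80 percent amplifier loading checks above are simple arithmetic; a minimal sketch:

```python
def amplifier_check(tap_settings_w: list[float], amplifier_rating_w: float) -> tuple[float, bool]:
    """Sum the matching-transformer tap settings and apply the 80 percent rule."""
    total_load = sum(tap_settings_w)
    return total_load, total_load <= 0.8 * amplifier_rating_w

# 25 speakers each tapped at 1 W, on a 100 W rated amplifier:
load, ok = amplifier_check([1.0] * 25, 100)
print(load, ok)   # 25.0 W connected load, within the 80 W (80 percent) limit
```

A mixed circuit (e.g., a few speakers tapped at 2 W in noisy areas) is handled the same way; only the list of tap values changes.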
WIRE SIZE | OHMS PER 1000' LOOP (R) | MAX SAFE AMPS (I) | MAX SAFE POWER (W) | MAX LENGTH (FT) AT 10 W / 15 W / 20 W / 30 W / 40 W / 60 W / 100 W
#16 | 8.0 | 6 | 420 | 3600 / 2400 / 1800 / 1200 / 900 / 600 / 370
#14 | 5.2 | 15 | 1000 | 5300 / 3800 / 2800 / 1900 / 1400 / 950 / 560
#12 | 3.2 | 20 | 1400 | 9100 / 6200 / 4600 / 3100 / 2300 / 1500 / 910
TABLE 1: Length of two-wire 70.7 V line delivering various values of power at 0.5 dB (12.5 percent) loss.
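The lengths in Table 1 can be approximated from first principles: the load impedance presented to a constant voltage line is V²/P, and a 0.5 dB drop permits a series line resistance of roughly Z × (10^(0.5/20) − 1). A sketch under those assumptions — it reproduces the table only approximately:

```python
def max_line_length_ft(power_w: float, ohms_per_1000ft_loop: float,
                       line_v: float = 70.7, loss_db: float = 0.5) -> float:
    """Approximate maximum two-wire run for a constant voltage speaker line.

    Load impedance Z = V^2 / P; the series line resistance allowed for a
    0.5 dB voltage drop is Z * (10**(loss_db / 20) - 1).
    """
    z_load = line_v ** 2 / power_w
    r_allowed = z_load * (10 ** (loss_db / 20) - 1)
    return r_allowed / (ohms_per_1000ft_loop / 1000)

# 16 AWG (about 8.0 ohms per 1000 ft loop) delivering 10 W on a 70.7 V line:
print(round(max_line_length_ft(10, 8.0)))    # ~3700 ft; Table 1 lists 3600 ft
print(round(max_line_length_ft(100, 8.0)))   # ~370 ft; Table 1 lists 370 ft
```

Note the inverse relationship: ten times the power cuts the allowable run length by a factor of ten, which matches the pattern across each table row.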
CODE AND AUTHORITY HAVING JURISDICTION (AHJ) CONSIDERATIONS
In all cases, system wiring must comply with all applicable codes and standards. In the United States, NFPA 70, the National Electrical Code® (NEC®), Article 640 (Audio Signal Processing, Amplification, and Reproduction Equipment) contains the primary governing requirements for PA/paging/AV systems to be enforced by the Authority Having Jurisdiction (AHJ). Article 640 also contains references to Article 725 (Class 1, Class 2 and Class 3 Remote-Control, Signaling and Power-Limited Circuits).
At this point, a question often arises: Are these NEC articles relevant to system implementation, regardless of whether it utilizes a VoIP/Ethernet architecture or a constant voltage distribution methodology? The answer is: it depends on the AHJ's interpretation. The NEC currently does not differentiate between the two types of systems. Article 640 does allow the use of Class 2 or Class 3 power-limited wiring as defined in Article 725, provided the amplifier assemblies are listed and marked for use with Class 2 or Class 3 power-limited wiring—this is typical for amplifier assemblies having output power no greater than 100 W, in order to meet the supplied power limits defined in Article 725.
Article 725 specifically prohibits audio circuits using Class 2 or Class 3 power-limited wiring to occupy the same cable or raceway as other Class 2 or Class 3 power-limited circuits.
Many AHJs also prohibit audio circuits using Class 2 or Class 3 power-limited wiring from occupying the same cable or raceway with communications circuits as defined in Article 800. As a result, these systems may be required to utilize cabling and pathways that are partially or totally independent of other ICT cabling and infrastructure within a premises.

CONCLUSION
This article has attempted to identify basic concepts and considerations involved in speaker placement and wiring guidelines for PA/paging/AV systems. As stated, it may be necessary to obtain specialized professional acoustic and communications engineering support for large systems and/or those with unique coverage requirements. However, the end result should be the same: a system which provides the required degree of coverage and presents useful information to the listeners.

AUTHOR BIOGRAPHY: Bob Hertling, RCDD, OSP, is a Supervising Engineer, Communications with the Parsons Corporation. For the past 16 years, he has supported communications and electrical design and construction phase services on numerous intelligent transportation system and rail/transit projects for the Massachusetts Turnpike Authority, the Massachusetts Bay Transportation Authority (MBTA), the Port Authority of New York and New Jersey, the Long Island Railroad, Amtrak®, New Jersey Transit, the Southeastern Pennsylvania Transportation Authority (SEPTA) and New York City Transit. Prior to his employment by Parsons, Bob was also a Telephone Technician and an Electronics Engineer while on active duty with the U.S. Coast Guard for 23 years. Bob is a 15-year member of BICSI® and also is a member of the NFPA and IEEE®. He holds an Associate's degree in Telecommunications Engineering Technology and a Bachelor's degree in Management of Telecommunications Systems from Capitol College in Laurel, Maryland. Additionally, he is a licensed Telecommunications Systems Contractor in his home state of Rhode Island. He can be reached at robert.hertling@parsons.com.
By Josh Taylor

…component of next-generation data centers. This article discusses the impact that this network speed transition has on data center cabling infrastructure, and the decisions that organizations will need to make to accommodate these changes.
Why Are Data Centers Migrating to 40/100 Gb Ethernet?

DATA GROWTH: The world revolves around digital data. We now rely on data to conduct business, engage in social activities and manage our lives. There is no sign of slowed growth in the production of, and demand for, more data, or for faster access to it. According to the 2014 IDC Digital Universe Study sponsored by EMC: "Like the physical universe, the digital universe is large—by 2020 containing nearly as many digital bits as there are stars in the universe. It is doubling in size every two years, and by 2020 the digital universe—the data we create and copy annually—will reach 44 zettabytes, or 44 trillion gigabytes."1

THE CLOUD: Among several other factors, the increase in cloud storage will drive the need for data throughput. "In 2013, less than 20 percent of the data in the digital universe [was] 'touched' by the cloud, either stored, perhaps temporarily, or processed in some way. By 2020, that percentage will double to 40 percent."1

THE INTERNET OF THINGS: Another factor contributing to the exponential growth of information is the advent of the Internet of Things (IoT). "Fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the Internet of Things will drive new consumer and business behavior that will demand increasingly intelligent industry solutions..."1
This exponential growth in information means processing speeds must also increase so as not to slow access to data. High-performance cabling that can transfer data over 40/100 Gb Ethernet will be a necessary addition to data centers looking to keep up with this digital data growth.

VIRTUALIZATION: A double-edged sword, virtualization can help data centers save on capital expenses, improve operational efficiency and create more agile infrastructures. There are many types of virtualization, from desktop to storage to server. Server virtualization, in particular, calls for fewer, more efficient servers, which translates to fewer server connections. Because there are fewer connections, however, it is important that these connections work properly. Unfortunately, many data centers do not contain cabling infrastructure designed to meet the high-performance capabilities that virtualization demands. This is particularly true for data centers built in the 1980s, before high-performance cabling even existed.

DECREASING TOLERANCE FOR DOWNTIME: When data transactions are interrupted due to network downtime, it translates to a very real loss of revenue. When Amazon.com® went down in August 2013, the company lost $66,240 per minute.2 Considering how quickly lost revenue can add up, it makes sense that there is an extremely low tolerance for network downtime.
The effect of downtime on revenue is even greater when considering end-user experience. According to one source, network downtime measured for user experience and business needs costs an average of $5,600 per minute.3
Network administrators should have a contingency plan in place in the event of network failure. However, one of the most effective ways to mitigate this issue is to make sure the existing network is able to meet the demands of increasing data throughput, including upgrading to networks capable of handling 40/100 Gb speeds.

MANAGING CAPITAL EXPENSES: While migrating to 40/100 Gb Ethernet creates an up-front capital expense, it saves data centers in the long run by future-proofing infrastructure. Not only will data centers be prepared for the increasing demands on data throughput,
YEAR | APPLICATION | DATA RATE | STANDARD | LOSS BUDGET (dB)
1982 | Ethernet | 10 Mb/s* | IEEE® 802.3 | 12.5
1991 | Fast Ethernet | 100 Mb/s | IEEE 802.3u | 11
1998 | Short-Wavelength Fast Ethernet | 10/100 Mb/s | TIA/EIA-785 | 4
2000 | 1G Ethernet | 1000 Mb/s | IEEE 802.3z | 3.56
2004 | 10G Ethernet | 10,000 Mb/s | IEEE 802.3ae | 2.6
2010 | 40G SR4 Ethernet | 40,000 Mb/s | IEEE 802.3ba | 1.9
2010 | 100G SR10 Ethernet | 100,000 Mb/s | IEEE 802.3ba | 1.9
2015 | 100G SR4 Ethernet | 100,000 Mb/s | IEEE 802.3bm | 1.9
TABLE 1: Ethernet transmission speeds and loss amounts. *Megabits per second.
but the high-performance cabling infrastructure required of 40/100 Gb Ethernet can grow with future hardware upgrades. This will reduce the need to tear out and replace cabling with each upgrade.

Preparing for 40/100 Gb Ethernet Migration

LINK DISTANCES AND LOSS AMOUNTS: As data center speeds increase, optical loss budgets decrease. Optical loss occurs over cabling distance and at mating points where connections are made. Since most data center cabling runs are shorter distances (compared to long-haul campus runs), the inherent losses from distance in a data center are somewhat negligible compared to the losses incurred from mating points. As connections in the data center increase to improve manageability, performance suffers. This is because added connections contribute to increased decibel (dB) loss. Therefore, a balance must be maintained between manageability and performance.
Choosing the right cabling product can combat the issue of balancing manageability versus performance. Cabling products with low optical loss rates will ensure that a structured cabling environment is running at its peak. When comparing dB loss rates of cabling products, look for "maximum" instead of "typical" loss rates. While typical loss rates can allude to the performance capabilities of a product, they are not helpful when determining loss budgets (Table 1).

Cabling Infrastructure Design

Due to the exponential port growth experienced by data centers during the last two decades, cabling infrastructure is often reduced to a cluttered tangle commonly referred to as "spaghetti cabling." This leads to decreased efficiency, increased dB loss and more cable management challenges.
The Telecommunications Infrastructure Standard for Data Centers, or TIA-942, was developed to address various data center infrastructure design topics, including the problem of spaghetti cabling. Among other aspects of data center planning and design, TIA-942 focuses on the physical layout of cabling infrastructure (Figure 1).
TIA-942 offers a roadmap for data center cabling infrastructure based on the concept of a structured cabling environment. By creating logical segments of connectivity, a structured cabling system can grow and move as data center needs change and throughput demands increase. Therefore, implementing a structured cabling system in accordance with the standards is the ideal way to prepare for migration to 40/100 gigabit per second (Gb/s) speeds.
The heart of a structured cabling system is the main distribution area (MDA). All equipment links back to the MDA. Other terms used to define this area include main cross-connect, main distribution frame (MDF) and central patching location (CPL). The principle of a structured cabling system is to avoid running cables from active port to active port (often referred to as "point-to-point"). Instead, all active ports are connected to one area—the MDA—where the patching is done. This is also where moves, adds and changes (MACs) take place.
TIA-942 calls for the use of interconnect points, which are typically in the form of patch panels (also referred to as fiber enclosures). Patch panels allow for patch cables (or jumpers) to be used in the front of the cabinets or racks where the equipment is housed. The patch cable would then connect to an optical fiber trunk and then to another patch panel in the MDA.
There are several advantages to implementing a structured cabling system. First, using optical fiber trunks significantly reduces the amount of cabling bulk both underfloor and in overhead conveyance. Implementing a structured cabling system also reduces airflow congestion, which reduces power usage. Another distinct advantage to a structured cabling system is that it allows for modularity, meaning connector changes can be made without having to remove horizontal or distribution cabling. For example, a chassis-based switch with 100Base-FX ports is connected to a patch panel using SC optical fiber jumpers. Upgrading the chassis and installing new blades with LC ports does not require replacing the entire channel as would a point-to-point system. Instead, the module within the patch panel is replaced. Underfloor and overhead cabling remains undisturbed.
However, it should be noted that this method adds insertion loss to the channel because it adds more mating points. To offset insertion loss created by additional mating points, high-performance optical fiber cables should be used for implementation.

FIGURE 1: Basic TIA-942 recommended layout (entrance room with carrier equipment and demarcation; offices, operations center, support rooms and telecom room; main distribution area; horizontal distribution areas; zone distribution area; equipment distribution areas within the computer room; backbone and horizontal cabling).

Connectivity Options

When migrating to 40/100 Gb speeds, there are several connectivity options to consider when planning the cabling infrastructure. The first uses long-haul (LX) transceivers with singlemode (SM) cabling. Data is transmitted via serial transmission. In serial transmission, one optical fiber is dedicated to carry transmitting data and another carries receiving data. These two fibers make what is referred to as a "channel." A channel is defined as the optical fiber, or group of fibers, used to complete a data circuit. Until recently, serial transmission has been used for Ethernet speeds up to 10 Gb/s.
This setup is typically not used in data centers because it is built for long distances. It is also very expensive despite the abundance (and therefore low cost) of SM cabling. In order to work effectively over long distances, the lasers used in LX transceivers are extremely precise—and expensive. This drastically increases the overall cost of an LX/SM connectivity solution.
The next option uses short-haul (SX) transceivers with multimode optical fiber cabling (Figure 2 on page 32). Data is transmitted via parallel optic transmission. Parallel optic transmission aggregates multiple optical fibers for transmission and reception. For 40 Gb SR4 transmission, four fibers transmit at 10 Gb/s each, while four fibers receive at 10 Gb/s each. This means a total of eight strands of fiber will be utilized for a 40 Gb Ethernet channel.
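The balance between mating points and loss budget described above can be sketched as a worst-case channel calculation. The 3 dB/km fiber attenuation and 0.5 dB per mating point below are assumed illustrative figures, not values from the article; the budget compared against is the 1.9 dB 40G SR4 figure from Table 1:

```python
def channel_loss_db(length_m: float, connections: int,
                    fiber_db_per_km: float = 3.0,
                    max_loss_per_connection_db: float = 0.5) -> float:
    """Total channel loss: distance attenuation plus loss at every mating point.

    Uses *maximum* (worst-case) per-connection loss, as the article recommends,
    rather than "typical" loss rates.
    """
    return (length_m / 1000) * fiber_db_per_km + connections * max_loss_per_connection_db

# Hypothetical 60 m multimode channel with three mating points:
loss = channel_loss_db(60, 3)
print(round(loss, 2), loss <= 1.9)   # 1.68 dB, within the 1.9 dB 40G SR4 budget
```

The example also illustrates the article's point that connections dominate: here the three mating points contribute 1.5 dB while 60 m of fiber contributes only 0.18 dB.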
…a significant advantage to end-users with LC connector footprints in their existing infrastructures.
The QSFP-40 Gb universal transceiver utilizes the LC duplex footprint but is also universal for both MM and SM optical fiber. This standards-based transceiver is compliant with IEEE 802.3bm, so it can interoperate with QSFP-40G-LR4 and QSFP-40G-LR4L.

FIGURE 2: Parallel optic transmission over 40 and 100 Gb Ethernet (40G SR4, 100G SR10, 100G SR4).
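The parallel optic fiber counts follow directly from the lane counts (each lane needs one transmit and one receive fiber); the article gives 40G SR4 as four 10 Gb/s lanes in each direction, and SR10 similarly uses ten lanes:

```python
# Lanes per direction for the parallel optic options shown in Figure 2.
LANES = {"40G SR4": 4, "100G SR10": 10, "100G SR4": 4}

for option, lanes in LANES.items():
    print(f"{option}: {lanes} Tx + {lanes} Rx = {2 * lanes} fibers per channel")
# 40G SR4 -> 8 fibers, 100G SR10 -> 20 fibers, 100G SR4 -> 8 fibers
```

This is why trunk fiber counts, not just port counts, must be planned when sizing a 40/100 Gb structured cabling system.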
FIGURE 3: Typical LC duplex connector and transceiver for 10 Gb Ethernet.
FIGURE 4: Typical MPO-style connector and transceiver for 40/100 Gb Ethernet.

Possibly the most drastic change data centers will undergo in migrating to 40/100 Gb Ethernet is a change from the LC connector to the MPO-style connector (Figure 4), developed by Nippon Telegraph and Telephone Corporation (NTT). MPO is the generic term for multi-fiber push-on connectors.
What about copper? There have been significant technology improvements over the past few decades that create the potential for 40 Gb copper links. Choosing copper over fiber usually comes down to cost. Active copper cables with transceivers on each side that utilize coaxial cables are surging in the market, driven by top-of-rack architecture that utilizes switches at the top of a rack versus a patch panel. This can be costly, especially when considering hardware refresh rates and support windows.
Another option will be category 8 balanced twisted-pair cabling, which will support 40 Gb links for channels up to 30 meters (m [100 feet (ft)]) in length. The standards for the media and protocols for category 8 cable are expected to be published in 2016, along with a 25GBase-T alternative. This is an important development that will allow smaller data centers, and any users committed to maintaining their copper infrastructures, to migrate to 40 Gb with fewer cost implications. This migration will be standards-based and will use the RJ45 connector for a seamless transition within the physical infrastructure.
For the long term, it is clear that optical fiber will likely play the dominant role in data center structured cabling. It has better transmission properties and is not susceptible to external interference the way the copper medium is. However, copper cabling will continue to have a role toward the edge of a data center structured cabling system, as well as the edges of a campus.

Next Steps for Data Centers
Data centers are experiencing the most significant change in cabling infrastructure since the introduction of optical fiber cabling. No longer is it a question of if data centers will migrate to 40/100 Gb Ethernet, but when. Installing a high-performance optical fiber structured cabling infrastructure is essential to a successful migration.
This discussion has covered why migration to 40/100 Gb Ethernet is imminent, as well as the decisions data center managers will need to make to prepare for implementation. There are several steps that can be taken to prepare for this change:
1. Determine current and future data center needs, including throughput demand, data production rates and business-driven objectives. In what ways does the current data center infrastructure support or fail those needs?
2. Use this information to determine when the data center should migrate to 40/100 Gb Ethernet.
3. Map out the current data center infrastructure.
4. Use this map to create a plan for the hardware and cabling infrastructure upgrades necessary for migration.
5. Create a plan for migration, including internal communication strategy, budget, timeline, and roles and responsibilities of those involved.
The timeline for migration is different for every data center, depending on technology needs, budget, size and organizational priority. However, educating the organization on 40/100 Gb Ethernet, evaluating current cabling infrastructure and beginning plans for implementation will help ensure a smooth, trouble-free migration.

REFERENCES
1. http://www.emc.com/leadership/digital-universe/2014iview/index.htm
2. http://www.forbes.com/sites/kellyclay/2013/08/19/amazon-com-goes-down-loses-66240-per-minute/
3. http://www.datacenterknowledge.com/archives/2011/08/10/true-costs-of-data-center-downtime/

AUTHOR BIOGRAPHY: Josh Taylor is a senior product manager at CABLExpress, where he is responsible for managing the development, support, and marketing for all product lines. Taylor has been with CABLExpress for more than 15 years, previously serving as an infrastructure specialist team leader. An expert in data center cabling trends and technologies, Taylor produces Respect Layer One®, an educational video series that addresses industry standards and best practices for data center professionals. He can be reached at jtaylor@cablexpress.com.
OPTICAL FIBER to the Classroom
By Aaron Hesse, RCDD, PE
Technology in the classroom has become more than teaching basic computer skills. It is essential to deepening and enhancing the classroom experience. Multimedia services in the classroom are driving significant demand for data in higher education and K-12 enterprise networks. Between bring-your-own-device (BYOD) policies, high-speed Wi-Fi, distance learning classrooms, and the Internet of Things (IoT) looming on the horizon, universities and school district network administrators and facility managers are working to provide the speeds required by today's classrooms while future-proofing their network from the coming changes. And they are increasingly bringing fiber optics directly to the classroom.
There are two popular topologies to choose from when deploying optical fiber directly to the classroom: a distributed switch network with edge switches pushed into the classroom or a gigabit passive optical network (GPON). These technologies may appear similar to the end users but there are significant differences, and it is important to look at factors such as initial cost of deployment, total cost of ownership, and achievable bandwidth prior to choosing one or the other.

THE DEMAND FOR BANDWIDTH
The modern classroom environment is rich with technology. Professors, teachers and school administrators are constantly looking for new and innovative ways to deliver educational materials to their students in ways that are effective and engaging. Additionally, many classroom and building support systems such as voice over internet protocol (VoIP), classroom AV systems, and IP-based intercom and paging systems are converging on the structured cabling plant.
Many schools are implementing a BYOD policy for use on their in-building wireless Internet. It is not uncommon for students to bring two or three wirelessly connected devices to the classroom in middle school and high school settings, each device being capable of streaming video, music and a torrent of social media traffic.
Streaming video into the classroom from a central location also is becoming more popular. School districts find that housing multimedia content in a central location and accessing it from the classroom is becoming cost effective. This reduces the need for each teacher to manage his or her classroom multimedia content locally. Further, new software suites are giving teachers the flexibility to display their screen to the class from either the tablet or desktop computer using the classroom wireless network. The challenge with these technologies is that this traffic will be a burden to the structured cabling plant or building wireless network.
Live streaming video is one of the most demanding forms of network traffic. Prerecorded streaming video, such as an educational movie, requires an initial burst of bandwidth to fill the video buffer. From there, maintaining the video buffer does not require a significant amount of bandwidth. However, live video does not have the luxury of a large buffer. The larger the buffer, the more lag between what is transmitted and what is seen on the other end. For this reason, videoconferencing or distance learning classrooms require a high degree of quality of service, as well as significant bandwidth for the duration of the transmission. If the network is overburdened, the user may experience delays in communication or poor video quality, making the technology difficult to use for its intended purpose.

FIGURE 1: Comparison of Theoretical Bandwidths of IEEE 802.11 Standards (b, 15 Mb/s; g, 22 Mb/s; n, 600 Mb/s; ac1, 1.3 Gb/s; ac2, 3.47 Gb/s; ax, 10 Gb/s).

To those who use wireless networking, IEEE® 802.11a/b/g and n may be familiar terms. In 2013 the IEEE published 802.11ac, which built on the 802.11n standard and expanded on what is called multiple input, multiple output (MIMO) technology; the first devices to use this standard could achieve one gigabit (Gb) speeds. Starting in 2014 and 2015, manufacturers began producing what are called 802.11ac Wave 2 devices. Speeds from these access points can reach 6.8 gigabits per second (Gb/s). In 2015, manufacturers began offering what are being called multigigabit Ethernet ports with some of their Ethernet switches. These ports offer 2.5 and 5 Gb/s speeds
over a single legacy category 5e or 6 cable. IEEE 802.11ax, the current wireless standard under development, is predicted to have top speeds of around 10 Gb/s (Figure 1 on page 37).

DISTRIBUTED SWITCH
The first option for delivering fiber to the classroom is with a distributed switch topology that places edge switches in the classroom. This approach uses a hierarchical star configuration and pushes a telecommunications enclosure out to the classroom. The layout of the system closely matches a more traditional network configuration. Each classroom has an 8- to 16-port switch and one or two 10 Gb multimode optical fiber uplinks to the floor distributor or the building distributor. Some switches can also provide two multigigabit Ethernet ports, supporting future high-speed wireless networking capabilities.
In this topology, the building's main distribution frame (MDF) requires the use of a Layer 3 optical fiber switch stack for switch aggregation. A fixed aggregation switch can provide numerous 10 Gb small form-factor pluggable (SFP+) ports for use with the multimode optical fiber to the distribution switches in the classrooms. If the building is a single story, this may be the only dedicated telecommunications space (Figure 2). If the building has multiple stories, ANSI/TIA-4966 Standard for Educational Facilities requires the use of one floor distributor telecommunications space per floor. Note that a network enclosure does not satisfy the requirements of a floor distributor.

FIGURE 2: Distributed switch topology (Internet; Layer 3 Core Switch in the MDF/Building Distributor; Category 6 cabling to the Wireless Access Point serving Students in the Classroom).

The optical fiber connection from the telecommunications room (TR) to the classroom should be rated for use with 10 Gb systems at this distance. Although BICSI's current standard on the subject, ANSI/BICSI 001-2009 Information Transport Systems Design Standard for K-12 Educational Facilities, states that OM1 multimode fiber be the minimum requirement, the upcoming rewrite of that standard states OM3 at a minimum with a recommendation for OM4. The rewrite for ANSI/BICSI 001-2009 is currently titled BICSI D003 Information and Communication Technology Systems Design and Implementation Best Practices for Educational Institutions and Facilities and is in the approval process.
According to BICSI's 13th edition of the Telecommunications Distribution Methods Manual (TDMM), OM3 multimode fiber is rated for 10 Gb speeds up to 300 meters (m) and OM4 is rated for up to 550 m. Using the distributed switch topology, it is important to verify optical fiber run distance, as these runs can get fairly long in large, multistory buildings. While designing these systems, keep in mind that there may be a 3 m
GPON is a particularly attractive option in messy remodels
Technology
Infrastructure
32% or retrofit projects. In many cases, the building structure
will not support the additional required size of a category 6
Active Equipment
+29 cabling plant. One question that will need to be answered
is: will 1 Gb to the classroom be enough? Using the GPON
Building IT Spaces
-39 solution, that single optical fiber to the classroom can be
split to feed multiple ONTs.
TOTAL SAVINGS
3%
TABLE 1: Cost of distributed switch network
versus traditional central switched network.
[10 feet (ft)]) to 4.6 m (15 ft) service The cost of active equipment SYSTEM PERFORMANCE
loop that will need to be brought would increase when using a AND FUTURE-PROOFING
into consideration. distributed switch topology. The The savings to the initial
required number of fiber switches construction project are marginal.
MIDDLE SCHOOL EXAMPLE and distribution switches would However, while this option may not
The cost difference between a increase. Additionally, a distributed be the best choice for those looking
distributed switch network and a switch network requires either a to save project costs, the additional
traditional central switched network central uninterruptible power supply benefits come in performance and
was analyzed using a two-story, future-proofing. With an unprece-
(UPS) with distributed power or
70,000-square-foot middle school dented 10 Gb to the classroom, this
numerous smaller UPSs for each
as an example (Table 1). Costs were topology will support the needs
intermediate distribution frame
calculated using quoted pricing of the instructor well beyond the
(IDF) in the classroom.
for equipment and RSMeans 2015 expected life cycle of the network.
There is some additional cost
adjusted for the state of Washington. Another advantage to this
associated with the telecommunica-
system is the availability of
tions enclosure and conditioning
Analysis of capital expenditures multigigabit ports in smaller
required to enclose the distributed
(CapEx): Technology infrastructure switches. This allows for six
is reduced due to the lower cost IDFs. In the model above, it was
10/100/1000 Gb Ethernet ports,
of multimode optical fiber to each assumed that these were going sufficient for most classrooms,
distribution switch compared to to be stand-alone enclosures, plus two 100/2500/5000/10,000
a multiple category 6 structured although some districts choose to multigigabit ports to be used with
cabling path. Pathway costs are also provide space in the cabinetry if the wireless access points (WAPs).
reduced due to the elimination of the network overhaul is part of an This will support IEEE 802.11ac
large cable trays down hallways and overall building remodel. This could Wave 2, as well as 802.11ax, without
a significant reduction in required increase the savings to the project at any need for switch or cabling
vertical pathways. the cost of usable cabinet space. upgrades.
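The Table 1 roll-up is simple weighted arithmetic. The sketch below shows how per-category deltas combine into a small net saving; the dollar bases are hypothetical, chosen only to illustrate how a roughly 3 percent total can emerge from much larger category swings, and the delta signs are inferred from the surrounding analysis.

```python
# Hypothetical base costs (USD) for the traditional central switched design.
# Only the percentage deltas reflect Table 1; the bases are illustrative.
base = {
    "Technology Infrastructure": 360_000,
    "Active Equipment": 490_000,
    "Building IT Spaces": 150_000,
}
delta = {
    "Technology Infrastructure": -0.32,  # cheaper fiber, fewer pathways
    "Active Equipment": +0.29,           # more switches, UPSs, enclosures
    "Building IT Spaces": -0.39,         # less telecommunications space
}

traditional = sum(base.values())
distributed = sum(cost * (1 + delta[k]) for k, cost in base.items())
savings = 1 - distributed / traditional
print(f"Total savings: {savings:.1%}")
```

Because the categories pull in opposite directions, the net result is highly sensitive to how a given project's cost is distributed among them.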
May/June 2016 t 39
[FIGURE 3: GPON topology. The Internet feeds a layer 3 core switch at the MDF/building distributor; singlemode fiber runs through a 1:16 splitter at the IDF/floor distributor; an MPO cable feeds the optical network terminal (ONT) in the classroom, which serves a wireless access point, teachers and students.]
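One implication of the 1:16 splitter shown in Figure 3 is that GPON bandwidth is shared, not dedicated. A quick sketch of the arithmetic, using the nominal ITU-T G.984 GPON line rates (the split ratio is from the figure; the per-ONT shares assume every ONT is saturated simultaneously, which is a worst case):

```python
# Nominal GPON line rates per ITU-T G.984: 2.488 Gb/s downstream,
# 1.244 Gb/s upstream, shared by all ONTs behind the splitter.
downstream_gbps = 2.488
upstream_gbps = 1.244
split_ratio = 16  # from Figure 3

per_ont_down_mbps = downstream_gbps / split_ratio * 1000
per_ont_up_mbps = upstream_gbps / split_ratio * 1000
print(f"Worst-case per-ONT share: {per_ont_down_mbps:.1f} Mb/s down, "
      f"{per_ont_up_mbps:.1f} Mb/s up")
```

In practice classroom traffic is bursty, so each ONT can still burst to the full gigabit of its Ethernet port; the shared ceiling only matters under simultaneous load.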
GIGABIT PASSIVE OPTICAL NETWORK (GPON)
Passive optical networks are not new to the landscape. Numerous government bodies have begun standardizing around GPON technology; this includes all four branches of the U.S. military and the Departments of Defense, Homeland Security and Energy. GPON also has been deployed in large commercial buildings.

Similar to the distributed switch model, the network requires a layer 3 switch and an incoming WAN or Internet connection. From there the design deviates considerably. The installation requires the use of an optical line terminal (OLT) in the TR. This replaces the layer 3 aggregation switch stack found in a typical building distribution room. This unit comes in many different sizes and is generally modular in construction and easily scaled for future upgrades. From this unit, singlemode optical fiber is routed through passive splitters to either multi-fiber push on (MPO) cables or multiple single-strand singlemode fibers. These fiber cables are then distributed throughout the building before reaching the classroom (Figure 3).

An optical network terminal (ONT) is required to convert the singlemode optical fiber to category cabling, but this can come in either a multiport rack-mounted switch housed in the millwork, a desktop unit that serves the workstation, or a recessed wall unit that emulates the look of a typical two-port data outlet and fits in a single gang box. Figure 3 uses a rack-mounted switch in order to remain as similar as possible to the distributed switch topology.

GPON is a particularly attractive option in messy remodels or retrofit projects. In many cases, the building structure will not support the additional required size of a category 6 cabling plant. Perhaps the existing data plant is category 3, and the concrete construction of the building would require coring out every existing pathway in order to upgrade to a gigabit network. In cases like this, the entire copper cabling plant can be replaced with one or two singlemode optical fiber cables. This greatly reduces the cost of pathway and can present significant cost savings.

UNIVERSITY BUILDING EXAMPLE
Universities are pioneering the use of passive optical networks. All eyes seem to be on Washington State University in the Pacific Northwest. Dozens of universities call them to learn more about the technology, and many have traveled to visit them personally.

To illustrate the cost savings of their installations, the university has studied two of their recent building remodels that occurred a couple of years from each other, in two buildings that are nearly identical. One has been remodeled with a traditional switched network and the other with GPON. The figures in Table 2 reflect the actual CapEx incurred by the university, not a budgetary estimate prior to construction.

TABLE 2: Cost of GPON versus traditional central switched network (actual CapEx).
Technology Infrastructure: -50%
Active Equipment: -28%
Building IT Spaces: -70%
TOTAL SAVINGS: 41%

With the total construction cost for GPON at nearly half the cost of a traditional switched network, the total savings to the construction project was found to be over half a million dollars.

Analysis of CapEx: This data reveals a number of key differences. First, these buildings are five stories high. In a smaller building, or a K-12 building with a large footprint but only one story, the savings associated with a reduction in telecommunications space would
be less significant; i.e., a building that may have only had two TRs to begin with would only reduce their telecommunications spaces by one room instead of four.

This cost breakdown does not reflect the additional monetary value of the space that is saved by reducing telecommunications space. That benefit will have to be calculated based on the building's application. If the building is a dorm room or apartment, and the space is revenue generating, the calculation becomes fairly straightforward. The monetary value of the space saved can be seen as a direct result of their network topology decision. For universities or school districts, the removal of a TR may result in additional storage or support space, a much more difficult value to quantify.

Another aspect of these numbers is that this represents a single building. If there are multiple buildings on the property, this cost breakdown would apply to the first building only. The cost benefit to the subsequent remodels to buildings on the property would be substantial. While the first building may save 30-40 percent over a traditional installation, the cost savings to the next building could be as much as 60 percent. As long as the original OLT can support the additional ONTs of the nearby building, the additional buildings would not require an OLT or a dedicated space for an MDF.

SYSTEM PERFORMANCE AND FUTURE-PROOFING
One question that will need to be answered is: will 1 Gb to the classroom be enough? Using the GPON solution, that single optical fiber to the classroom can be split to feed multiple ONTs. This would provide 1 Gb to the desktop if necessary. However, if WAPs are expected to utilize the IEEE 802.11ac Wave 2 or 802.11ax performance capabilities, the technology infrastructure will fall short of delivering these speeds.

Furthermore, the upcoming XPON or 10GPON standard, needed to provide 10 Gb/s to the ONT, is not currently defined. Predictions suggest that the OLT line cards within the chassis and all ONTs will need to be replaced in order to take advantage of the 10GPON standards. If this is expected to be required during the lifespan of the installation, this could be a costly upgrade.

CONCLUSION
Universities, school districts and their consultants need to analyze the goals of the network installation and select a topology and technology that makes sense for the end user. There are numerous options available and it is important to explore each option. If optical fiber to the classroom is a priority, each approach discussed above has its own strengths. Whether the decision is driven by economics or by the need for significant bandwidth in the future, bringing optical fiber directly to the classroom might be a good fit for numerous school districts. In the face of a changing data consumption landscape and constrained budgets, network administrators are often required to choose between network performance and more economic options. For those looking for ways to provide a 21st century network while saving and conserving funds, GPON is an attractive technology. For those with a focus on future-proofing and providing significant network speeds, consider looking at a distributed switch technology in order to provide optical fiber to the classroom.

AUTHOR BIOGRAPHY: Aaron Hesse, RCDD, PE, is an electrical engineer with Coffman Engineers in Spokane, WA. He is a licensed professional engineer and a BICSI Registered Communication Distribution Designer. He can be reached at hesse@coffman.com.
DisplayPort and the Evolution of Desktop Video Connectivity Solutions
By Joseph D. Cornwall, CTS-D, CTS-I
There was a time when it was easy to understand AV connectivity. Consumer products used an RCA connector (one for composite, three for component, and we won't mention S-Video) and commercial products used BNC connectors (one for composite, five for RGBHV [for red, green, blue, horizontal sync, vertical sync] and again, we won't mention S-Video). That pretty much summed it up for decades, with one glaring exception: the computer. Both laptop and desktop computers needed something more compact and with some advanced capabilities, such as the ability to exchange extended display identification data (EDID) information with the display. A different connector was needed.

Connectivity Technologies Develop Over the Years
The video graphics array (VGA) was first introduced with the IBM® PS/2 line of computers in 1987 (VGA also refers to a specific resolution, but this discussion focuses on the cables and connections). The VGA connector uses a three-row, 15-pin DE-15 that carries analog component RGBHV video signals, and Video Electronics Standards Association (VESA) display data channel (DDC, also called EDID) data. For nearly three decades, this blocky analog connection ruled the office. While not particularly elegant,
it worked and was well understood.

Low-voltage differential signaling (LVDS) is a physical layer specification that describes the way signals move between inputs and outputs on integrated circuit (IC) chips. In use since 1994, LVDS was used for computer video and graphics data transfers and drove the most advanced versions of the VGA interface. LVDS became an increasingly important technology as computer resolutions began to escalate beyond 800 x 600 super VGA (SVGA), but the proliferation of high-definition digital video at the consumer level changed that. This is because widespread proliferation of HDTV came with an unanticipated challenge. Intellectual rights management for content is a particularly thorny issue in the world of digital media. Unlike analog, there is no generational loss of quality when copies are made. In the digital world, there are no copies; there are only clones.

Often erroneously referred to as "high definition copyright protection," high bandwidth digital content protection (HDCP) is a system developed by Intel® Corporation. HDCP leverages a key exchange protocol known as Blom's Scheme. It is intended to prevent encrypted content from being played on unauthorized devices or devices which have been modified to copy HDCP content. Before sending data, a transmitting device checks that the receiver is authorized to receive it. If so, the transmitter encrypts the data to prevent eavesdropping as it flows to the receiver. As computers and personal digital portables became a bigger part of the industry, the ability of these devices to interface with just about everything became an important market factor.

On December 8, 2010, a joint press release from AMD®, Dell®, Intel Corporation®, Lenovo®, Samsung® Electronics LCD Business, and LG® Display announced "intentions to accelerate adoption of scalable and lower power digital interfaces such as DisplayPort (DP) and high-definition multimedia interface (HDMI) into the PC" as an alternative to the aging, analog VGA connector. The press release went on to say that "Intel plans to end support of LVDS in 2013 and VGA in 2015 in its PC client processors and chipsets." The death notice for the VGA connector had been posted.

The Rise of DisplayPort
DP is a digital display interface standard developed by VESA. It is specifically designed for the transfer of video, audio and data between a source and sink. DP has replaced LVDS on essentially all computers because of its rich feature set and its compatibility with transition-minimized differential signaling (TMDS)-encrypted HDMI and HDCP technologies. DP had been included in about one-in-20 commercial desktops and one-in-50 laptops as of 2009. Today, DP is found on the majority of new convertible, laptop, desktop and workstation computers. Where we used to see a VGA connector, today we see DP or mini DP.

Although DP has a lot of the same functionality as HDMI, it is a complementary connection and not necessarily a competitive one. This is a good thing, because HDMI and DP deliver very different strengths, which translate to unique features and benefits best aligned with their respective applications. HDMI and DP, despite superficial similarities, serve different markets, and there is plenty of room for them to coexist in the AV ecosystem.

The fundamental difference between DP and HDMI is the use of TMDS in the HDMI format, and the use of serial data
transmission in DP. HDMI uses TMDS based on IBM's 8b/10b line code, which maps 8-bit symbols to 10-bit symbols to achieve DC-balance, bounded disparity and clock recovery (Figure 1).

FIGURE 1: HDMI uses TMDS, based on IBM's 8b/10b line code.

The HDMI connector consists of 19 pins, 12 of which carry red, green, blue (RGB) and clock data across four shielded twisted pairs. The remaining pins negotiate hardware handshake and voltage assertion, provide DDC EDID connectivity and support optional connectivity unique to the HDMI (think multimedia) application.

The DP system uses packetized data, a form of digital communication familiar from its use in Ethernet, peripheral component interconnect (PCI) express, and USB technologies. The DP protocol is based on micropackets, which embed the video sync (clock signal) into the data stream, allowing for a complete video link with embedded audio (eight channels) over a single lane (Figure 2).

The DP system features four such lanes in its 20-pin connector topology. Each lane occupies three pins and connects to a shielded twisted pair of the cable. Each lane can deliver all the information necessary to support a full 1080p video stream; this is the real power of DP. One physical connector can deliver up to four discrete channels of communication, supporting the multi-monitor desktop that is now customary. This is also known as multi-stream transport (MST).

There are several variations of DP, including Thunderbolt, in use today. Of critical concern are the differences between DisplayPort 1.1 (DP) and DisplayPort 1.2 (DP++), also known as "dual mode" DP. There are no differences in the cable and connector topology per se, but there is one important difference in performance. DP++ has the ability to leverage the power of the host computer to output a signal in a true TMDS HDMI-compliant format. That means that devices that support DP++ (and are so marked) can use a passive cable with a DP connector on one end and an HDMI connector on the other to make the conversion passively.

When a DP++ source is connected to the HDMI input on a display, it can "see" that there is a demand for a clock signal and, therefore, recognize that the signal is not packetized. Using all four lanes of connectivity, the DP++ video card takes a single content stream and repackages it into RGB, sending the signals over the four lanes of the connection. In this way, it moves from operating in a world of four independent serial streams to one in which a unified signal is sent over four lanes simultaneously.

Standard (non-DP++) DP devices require an active device to make this conversion. DP can only output a monolithic AV signal that occupies a single lane, so its serial content must be actively reformatted into a TMDS RGB state. This demands an active device, external to the source. Since the newly released USB Type-C technology that will make such a huge impact in the mobile market is design limited to DP, keeping this critical difference in mind will play an important role in designing AV connectivity systems.

The newest version is DP 1.3. Released in 2014, it will become more important as devices move from high definition to ultra-high definition (UHD) 4K performance levels. At this point in time, the distinction is not critical when selecting and accommodating desktop connectivity solutions as the differences are not in the physical layer.

Like HDMI, DP supports fully embedded digital audio for surround sound applications (typically eight channels) and is fully compliant with HDCP for content protection and system integration. An even more important feature is the ability of DP to support multi-monitor MST where a single output on the computer can deliver as many as four discrete video feeds.

For a number of technical reasons, DP connectivity is limited to a maximum run length of ≈10 meters (m [33 feet (ft)]) across native DP cables. Beyond this, the signal
must be converted if longer runs are needed. Conversion can be accomplished by using modems with fiber links (maintaining the DP packetized format) for runs of tens of meters to a kilometer or more. DP can also be adapted to a category UTP environment for runs up to 100 m (328 ft) by using HDBaseT technology. The following is a closer look at some of the DP solutions available for individual installations.

FIGURE 2: The DP protocol is based on micropackets, which embed the video sync into the data stream, allowing for a complete video link with embedded audio (eight channels) over a single lane.

DisplayPort Cables
Stock DP cables come in lengths from ≈1–10.7 m (3–35 ft). There is no difference between DP 1.1 and 1.2 cables, and any certified cable can be used for either application.

DP multimode cables in lengths up to 3 m (10 ft) may support 4K UltraHD content capability. This is a useful length for desktop connectivity or a jump into a format converter such as an HDBaseT transmitter or fiber modem. DP multimode interconnects of 5 m (16.5 ft) to 10 m (33 ft) will support 32-bit payloads up to 1920 x 1200.

Although it looks quite different, mini DP is just a different form factor and has identical properties to full sized multimode DP links. Mini DP is often used on laptop and tablet computers because of its smaller size and profile. Adapters are available for mini DP to full sized DP. Cables with mini DP on one end and full sized DP on the other are also available in lengths less than ≈3 m (10 ft), as are mini DP to mini DP patch cords.

DisplayPort Extenders
When the project requires links greater than ≈10 m (33 ft), active solutions must be employed. HDBaseT is an IEEE® standard technology that can extend digital video signals up to 100 m (328 ft) by using transmit and receive electronics with a category cable as the link in-between. This also gives the option of running DP in a plenum environment by using communications plenum (CMP) or FT6 rated category 6 or 5e. For best results, use solid-core screened unshielded twisted-pair category 6 cable for all digital AV applications.

Depending on the project, HDBaseT solutions can be selected to provide a DP-to-DP configuration or a DP-to-HDMI configuration. Performance of the two is identical and this technology is independent of DP++ capability, so it works with any DP device.

DisplayPort Dongles and Converters
Dongles that convert DP to analog VGA, digital DVI-D and HDMI are all readily available, often in a choice of black or white to match the desktop environment. Particular attention must be paid to systems where conversion to VGA or other analog interfaces is necessary. Keep in mind that such a conversion does not eliminate the need to comply with HDCP requirements. HDCP-encrypted content may, under no circumstances, be converted to any unprotected format; HDCP continuity must be maintained for an image to be displayed. Also, conversion to the analog domain (VGA or composite video) requires digital-to-analog conversion, an active process that must be powered by the signal bus or an external power source.

AUTHOR BIOGRAPHY: Joseph D. Cornwall, CTS-D, CTS-I, is a Technology Evangelist with Legrand North America. Cornwall has held both management and technical positions in broadcast, residential, and commercial market sectors. Honored as the 2014 InfoComm Educator of the Year, he is widely recognized as an energetic and compelling presenter, trusted subject matter expert and seasoned industry professional. Cornwall regularly addresses groups both large and small on topics as diverse as AV technology and system design, technology trends, sales skills development and market navigation. He can be reached at joseph.cornwall@legrand.com.
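The distance guidance in the sections above reduces to a simple decision ladder. A sketch of that logic, using the thresholds quoted in the article (the function name and return strings are illustrative, not product categories from any standard):

```python
# Pick a DP connectivity approach from run length, per the article's figures.
def dp_link(run_m: float) -> str:
    if run_m <= 3:
        return "passive DP cable (may support 4K UHD at this length)"
    if run_m <= 10:
        return "passive DP multimode cable (payloads up to 1920 x 1200)"
    if run_m <= 100:
        return "HDBaseT extender over category cable"
    return "fiber-link modem (preserves the DP packetized format)"

print(dp_link(25))  # a 25 m run calls for an HDBaseT extender
```

The same ladder applies whether the far end is DP or HDMI, since HDBaseT extenders come in both configurations.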
THE UNEXPECTED IMPACT OF RAISING DATA CENTER TEMPERATURES
By Wendy Torell, Kevin Brown and Victor Avelar
WHEN OPERATORS THINK ABOUT RAISING TEMPERATURES IN THE DATA CENTER, IT IS COMMONLY UNDERSTOOD TO MEAN RAISING THE TEMPERATURE TO A NEW FIXED SET POINT.

[…] the data center can operate in economizer mode(s) for a larger portion of the year, and the chiller efficiency increases. But this is not the entire picture. Although the chiller energy decreases, the following also occurs:
• The dry cooler (which operates in economizer mode instead of the chiller) energy increases because the number of economizer hours increases.
• Server energy increases because requirements for airflow, measured in cubic feet per minute (CFM), increase as temperature rises (see sidebar, "Why do server fans ramp up?").
• Computer room air handler (CRAH) fans speed up to support the higher server airflow, which increases CRAH energy consumption.
• If not already oversized to accommodate the additional airflow, more CRAHs are needed to match the higher server fan CFM requirements. This means additional capital expense.

Figure 1 on page 50 illustrates these countering effects. This article walks through an analysis of a data center with a packaged chiller cooling architecture, to demonstrate how location and server fan behavior have a significant influence on the potential savings (or cost penalties) when IT inlet air temperature set points are increased. The research also considered the implications of fixing the temperature (at a higher point) versus allowing the data center temperatures to float within a defined range, as the outdoor temperature fluctuates. The final consideration was a scenario where an existing data center is oversized (50 percent) to illustrate the impact that percent load has on these results.

WHY DO SERVER FANS RAMP UP?
The purpose of server fans is to cool the components inside the server chassis. The most important of these components are the CPU chips, which can reach temperatures upwards of 90 °C (194 °F). As the IT inlet air temperature increases, so will the CPU temperature. This typically triggers server fans to increase airflow in an effort to reduce the CPU temperature. This increase in airflow consequentially increases server energy consumption.

OPERATING TEMPERATURES OF CHILLERS
Every chiller has a maximum chilled water temperature it is capable of supplying. This is limited by the type and design of the chiller. For example, in centrifugal chillers the compressor must be capable of reducing its speed to produce lower refrigerant pressures without damaging the motor or leaking its lubricating oil into the refrigeration circuit. Depending on the chiller type, other chiller components may require special features which allow for higher chilled water temperatures. Consult with the chiller vendor before increasing the chilled water set point.
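The sidebar's fan behavior is why small set point changes matter: by the well-known fan affinity laws, airflow scales linearly with fan speed while fan power scales with the cube of speed. A minimal sketch (the 20 percent airflow increase is an illustrative figure, not a value from the article):

```python
# Fan affinity laws: airflow ~ speed, power ~ speed**3, so a modest
# airflow increase carries a disproportionate energy penalty.
def fan_power_ratio(cfm_new: float, cfm_old: float) -> float:
    return (cfm_new / cfm_old) ** 3

print(f"20% more airflow -> {fan_power_ratio(1.2, 1.0):.2f}x fan power")
```

This cubic relationship is why server fan energy can erode, or even reverse, the chiller savings described in the analysis that follows.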
FIGURE 1: System dynamics are complex, so there is a need to evaluate the data center holistically.
ARCHITECTURE ANALYZED
This analysis looks at what is believed to be a very common cooling architecture deployed in data centers today: a packaged air-cooled chiller with economizer (Figure 2). The dry cooler, utilized during economizer mode, is a heat exchanger that directly cools the data center chilled water when the outside air conditions are within specified set points. Pumps move the chilled water through the dry cooler where the cold outside air cools the chilled water that supplies the CRAHs.

[FIGURE 2: Packaged air-cooled chiller architecture with indoor CRAHs and a dry cooler for economizer mode.]

The main assumptions used in the analysis are:
• 1 megawatt (MW) data center, fully loaded
• Three air-cooled chillers in an N+1 configuration, sized for 20-year extreme temperature
• All chillers (including the redundant chiller) operate at part load under normal operation
• Chillers are capable of operating at higher chilled water temperatures (see sidebar, "Operating temperatures of chillers")
• Use of a variable frequency drive (VFD) dry cooler for economizer mode (no evaporative cooling)
• Fixed-speed pumps
• CRAHs with hot aisle containment in an N configuration
• Airflow demand of servers matched with CRAH airflow supply (i.e., CFM of servers = CFM of CRAH fans)
• Power density of 4 kilowatts (kW)/rack
• 3 percent cost of capital used for total cost of ownership (TCO) calculations
• $0.10 per kilowatt hour cost of electricity
• Weather bin data from ASHRAE Weather Data Viewer 5.0

Three different operating temperature scenarios were created in order to compare the energy consumption and TCO of each:
1. The baseline case assumed a fixed IT inlet temperature of 20 °C (68 °F), which is a typical operating point for data centers today.
2. The second case allowed temperatures to float from 15.6 °C (60 °F) to 26.7 °C (80 °F).
3. The third case fixed the temperature at 26.7 °C (80 °F).

The data center scenario was analyzed in three U.S. cities—Chicago, Seattle, and Miami—to illustrate the impact of varying climates on the results.

ANALYSIS METHODOLOGY
Energy cost and capital expense of the entire cooling system were analyzed utilizing the following methodology:
• Bin data from ASHRAE Weather Data Viewer 5.0 […]
FIGURE 3: Summary of results from baseline of 20 °C (68 °F) fixed to floating from 15.6 °C (60 °F) to 26.7 °C (80 °F) at full load.
Chicago, Fixed: $1,186,000 | $162,000 | $0 | $571,000 | $258,000 | 1% Energy Reduction
Chicago, Floating: $855,000 | $241,000 | $173,000 | $583,000 | $320,000 | No change in TCO
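Figure 3's TCO comparison rests on the stated assumptions of a 3 percent cost of capital and $0.10 per kWh. A minimal sketch of such a roll-up, combining capital expense with the net present value of the annual energy cost (the 10-year horizon and the energy figure passed in are illustrative assumptions, not values from the article):

```python
# TCO = capital expense + net present value of annual energy cost.
def tco(capex: float, annual_energy_kwh: float, years: int = 10,
        rate: float = 0.03, price_per_kwh: float = 0.10) -> float:
    annual_cost = annual_energy_kwh * price_per_kwh
    return capex + sum(annual_cost / (1 + rate) ** y
                       for y in range(1, years + 1))

print(f"${tco(571_000, 1_620_000):,.0f}")
```

Discounting is what lets a scenario with higher capital expense but lower energy use (or vice versa) come out ahead over the study period.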
• The 20-year extreme temperature was used as the worst case outdoor temperature for sizing the packaged chiller. This design point is the generally accepted practice for sizing chillers and is recommended by the Uptime Institute.¹
• The cooling system energy is dependent on the different operating modes: full mechanical cooling, partial economizer mode, and full economizer mode. The number of hours spent in each operating mode was calculated.²
• The IT inlet air set point was used to calculate the chilled water temperature. The chilled water temperature was allowed to range from 7.3 °C (45 °F) to 32 °C (90 °F).
• For IT inlet temperatures above 20 °C (68 °F), the increase in server energy consumption was added to the total cooling system energy consumption.
• The floating temperature scenario represented an ideal case where the chiller and economizer controls allow chilled water temperatures to reset dynamically. In most data centers, the chilled water temperature is set at a fixed temperature year-round and would yield lower energy savings than this model projects.
• The capital expense values were estimated using component, labor and design prices typically seen in a 1 MW data center project. The change in CRAH capital expense as the IT CFM changes with IT inlet temperature was also accounted for.

FINDINGS
First, the findings of the baseline [where IT temperatures are fixed at 20 °C (68 °F)] were compared to the second case (where IT temperatures float up and down). Following these findings, the comparison of the baseline to the third case (where IT temperatures are fixed at a higher temperature of 26.7 °C [80 °F]) is presented.

BASELINE VERSUS FLOATING TEMPERATURES
The TCO differences of the baseline versus the floating temperature case are presented in Figure 3. The TCO shown excludes the capital cost of systems that do not change between the two scenarios. This analysis leads to the following conclusions:
• While chiller energy always improves (decreases), the net energy consumed does not always improve.

¹ Uptime Institute's "Data Center Site Infrastructure Tier Standard: Topology," http://www.gpxglobal.net/wp-content/uploads/2012/10/TIERSTANDARD_Topology_120801.pdf
² In full economizer mode operation, the outdoor conditions allow for all mechanical cooling (i.e., those components used in the refrigeration cycle) to be turned off to conserve energy while still effectively cooling the defined load. When the outdoor temperature limits full economizer mode operation, the cooling plant enters a partial economizer mode of operation, where a proportion of the cooling is handled by the economizer mode and the remaining is handled by the mechanical system. The proportion of each changes (increasing the mechanical cooling proportion as the outdoor temperature increases) until full mechanical system operation is required.
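The operating-mode accounting described in footnote 2 can be sketched against bin data: classify each outdoor temperature bin by mode and total the hours. The thresholds, approach temperature and bins below are illustrative assumptions, not ASHRAE data:

```python
# Classify the plant's operating mode for an outdoor temperature bin, given
# the chilled water set point and an assumed economizer approach temperature.
def mode(outdoor_c: float, chw_set_c: float, approach_c: float = 5.0) -> str:
    if outdoor_c <= chw_set_c - approach_c:
        return "full economizer"
    if outdoor_c <= chw_set_c:
        return "partial economizer"
    return "full mechanical"

bins_c_to_hours = {-5: 900, 5: 2200, 15: 2800, 25: 2100, 35: 760}  # 8,760 h
hours: dict[str, int] = {}
for temp_c, hrs in bins_c_to_hours.items():
    m = mode(temp_c, chw_set_c=18.0)
    hours[m] = hours.get(m, 0) + hrs
print(hours)
```

Raising the chilled water set point shifts bins from "full mechanical" toward the economizer modes, which is the chiller-side saving the analysis quantifies; the countering server and CRAH fan costs are what the rest of the findings weigh against it.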
TABLE 1: Summary of results from baseline of 20 °C (68 °F) fixed to floating from 15.6 °C (60 °F) to 26.7 °C (80 °F) at full load.
pPUE (cooling only): Chicago improves from 1.203 to 1.178; Seattle improves from 1.222 to 1.166; Miami improves from 1.327 to 1.312.
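The pPUE figures in Table 1 count only cooling losses, so pPUE = (IT energy + cooling energy) / IT energy. A sketch of that ratio; the energy values are illustrative, chosen to reproduce the Chicago baseline of 1.203 under the analysis's 1 MW fully loaded assumption:

```python
# Cooling-only partial PUE: (IT energy + cooling energy) / IT energy.
def cooling_ppue(it_kwh: float, cooling_kwh: float) -> float:
    return (it_kwh + cooling_kwh) / it_kwh

it_kwh = 1_000 * 8_760            # 1 MW IT load running for a full year
cooling_kwh = 0.203 * it_kwh      # illustrative: 20.3% cooling overhead
print(round(cooling_ppue(it_kwh, cooling_kwh), 3))
```

Note that the ratio can improve even when absolute energy rises (because IT fan energy grows too), which is exactly the PUE limitation the surrounding text points out.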
• Higher IT inlet temperatures cause an increase in IT equipment airflow, which decreases the difference in temperature (deltaT) across the CRAHs. More CRAH airflow is needed to remove the same amount of heat at these lower deltaT values.
• The required CRAH capacity increases at higher chilled water supply temperatures because the heat removal capacity of the coil decreases as deltaT decreases.
• The degree to which the increase in energy occurs for the servers and CRAHs depends on the IT equipment characteristics. This is explained in the following section.
• Bin weather data is a significant driver in determining whether floating temperatures from 15.6 °C (60 °F) to 26.7 °C (80 °F) result in a cost savings.
Table 1 provides additional results, including differences in the total energy kilowatt hours (kWh) and partial power usage effectiveness (pPUE).3 Although power usage effectiveness (PUE) improved in all cases, energy did not always improve. This points out the limitation of using only PUE as a basis for operational decisions.
Also highlighted are the maximum float temperatures that would result in the lowest TCO for each of the three cities. As the data demonstrates, this optimal temperature varies quite a bit from one city to the next.
Figure 4 is a graph illustrating the TCO as the maximum float temperature was varied. In all cases, the minimum float temperature was assumed to be 15.6 ˚C (60 ˚F).
This graph demonstrates how bin data can have a significant impact on results. In Seattle, the optimal temperature for this cooling architecture occurs at
3 In this analysis, pPUE represents only the cooling system losses.
FIGURE 5: Summary of results from baseline of 20 °C (68 °F) fixed to a higher fixed temperature of 27 °C (80 °F) at full load. (Data rows: Chicago at 20 °C: $1,186,000, $162,000, $0, $571,000, $258,000; 13% energy increase. Chicago at 27 °C: $855,000, $241,000, $345,000, $583,000, $388,000; 5% worse TCO.)
TABLE 2: Summary of results from a baseline of 20 °C (68 °F) fixed to a fixed 27 °C (80 °F) at full load.
pPUE (cooling only): Chicago improves from 1.221 to 1.182; Seattle improves from 1.241 to 1.170; Miami improves from 1.363 to 1.312.
27 ˚C (80 ˚F), whereas in Chicago, this same architecture has an optimal maximum temperature of 23 ˚C (74 ˚F), and in Miami, that temperature is only 21 ˚C (70 ˚F). These findings may come as a surprise to many, but they are driven by the increase in server and CRAH energy that more than offsets the chiller savings above these temperatures. In Miami, the economizer hours are limited by the weather, and so the chiller savings could not offset the increase, even at 22 ˚C (72 ˚F).

BASELINE VERSUS HIGHER FIXED TEMPERATURE
When operators think about raising temperatures in the data center, it is commonly understood to mean raising the temperature to a new fixed set point. Control systems are rarely set up to handle the floating condition that the analysis in this article suggests. So, the question is: what is the impact on energy, TCO and reliability (X-factor) if the data center temperature is to be raised and fixed at 27 ˚C (80 ˚F)?
The server fans will always draw greater power than in the baseline scenario because the higher fixed IT inlet temperature forces the IT fans to spin at the same faster speed all year round. Figure 5 illustrates how the higher fixed temperature compares to the baseline fixed temperature. The findings are:
• Server energy is even higher than in the floating temperature scenario because the server fans are running at the higher temperature year-round.
• Bin weather data is a significant driver in determining whether to go to a higher operating temperature.
• Fixing at a higher temperature is always worse than allowing the space to float to that same higher temperature, because when the temperature is fixed there are never days when the servers and CRAHs can consume less energy.
• There is no impact on the number of economizer hours (and therefore the chiller and dry cooler power consumption) relative to the floating temperature scenario.
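Economizer hours fall directly out of the bin weather data: free cooling is available in any bin where the dry cooler can make the required chilled water temperature. A minimal sketch, with an illustrative bin table and a hypothetical 5 °C dry-cooler approach (neither is from the article's model):

```python
# Sketch: counting full-economizer hours from binned outdoor dry-bulb data.
# Free cooling is assumed available whenever the outdoor temperature plus the
# dry-cooler approach stays at or below the chilled water set point.

# Hypothetical annual bin data: {outdoor dry-bulb temp (C): hours per year}
bins = {-5: 400, 0: 900, 5: 1400, 10: 1700, 15: 1800, 20: 1500, 25: 800, 30: 200, 35: 60}

def economizer_hours(bins, chw_setpoint_c, approach_c=5.0):
    """Hours per year in which the dry cooler alone meets the CHW set point."""
    return sum(h for t, h in bins.items() if t + approach_c <= chw_setpoint_c)

for setpoint in (10.0, 15.0, 20.0):
    print(f"CHW set point {setpoint} C -> {economizer_hours(bins, setpoint)} h")
# A warmer chilled water set point (enabled by a warmer IT inlet temperature)
# unlocks more free-cooling hours -- the source of the chiller savings.
```

The climate dependence is visible immediately: a bin table for Miami would have far fewer cold-bin hours, so the same set point increase unlocks fewer economizer hours.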
FIGURE 6: Composite server power versus inlet temperature.
TABLE 3: Impact on total energy (kWh) of varying CFM curves from baseline of 20 °C (68 °F) fixed to floating from 15.6 °C (60 °F)–26.7 °C (80 °F) at full load.
Table 2 on page 53 summarizes additional findings, including total energy (kWh) and pPUE. Again, this illustrates that (1) energy is not always improved when you raise IT temperatures, and (2) PUE as a metric alone is insufficient.
As the inlet temperature of servers rises, the airflow requirement and fan power increase. Figure 6 is the composite curve from those measurements. The analysis used this curve as the assumed ramp-up of power draw as temperature increased.
If the server CFM requirement did not ramp up as temperature increased (i.e., if the curve was flat), the results of this analysis would be very different. The IT equipment's behavior at elevated temperatures is what offsets the chiller energy savings, making it a complex analysis. A flat curve would mean higher temperatures are always better because you gain savings through economization with no energy penalty on the CRAH and server side.
To illustrate the impact that the CFM curve has on the overall results, a sensitivity analysis was performed (Table 3), holding chilled water flow constant and varying the CFM rise as a function of temperature from flat (i.e., no rise) to the highest rise. The following occurs as it moves to a steeper curve:
• Server fan power becomes a greater penalty because power is proportional to the cube of the shaft speed.
• The number of CRAHs needed increases because more airflow is required.
• CRAH energy increases because more airflow is required.
• Economizer hours go down because colder chilled water is required to make up for the decrease in CRAH deltaT and the associated decrease in CRAH coil effectiveness.
In all three cities, the IT equipment behavior is a key driver of the overall energy impact of going to higher (floating) temperatures. This illustrates the importance of understanding the behavior of your IT gear and analyzing the data center holistically before making operational changes.
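The cube-law penalty behind the first bullet can be sketched in a few lines. The CFM-versus-inlet-temperature values below are illustrative, not the article's measured composite curve (Figure 6):

```python
# Sketch of why a steep server CFM curve erodes chiller savings. Per the fan
# affinity laws, airflow scales with shaft speed and fan power with its cube.

def fan_power_ratio(cfm_ratio: float) -> float:
    """Relative fan power for a given relative airflow (cube law)."""
    return cfm_ratio ** 3

# Illustrative curves: relative server airflow at 20 C vs 27 C inlet
flat_curve  = {20: 1.00, 27: 1.00}   # CFM does not ramp with temperature
steep_curve = {20: 1.00, 27: 1.30}   # 30% more airflow at 27 C

for name, curve in (("flat", flat_curve), ("steep", steep_curve)):
    penalty = fan_power_ratio(curve[27]) / fan_power_ratio(curve[20])
    print(f"{name}: fan power at 27 C is {penalty:.2f}x the 20 C value")
# flat:  1.00x -- raising temperature costs nothing on the fan side
# steep: 2.20x -- a 30% airflow rise more than doubles fan power (1.3^3)
```

This is why a flat curve would make higher temperatures an unambiguous win, while a steep curve can wipe out the economizer savings entirely.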
IMPACT ON RELIABILITY
The analysis thus far has focused on the optimal temperature in terms of energy and TCO savings, but reliability is another factor that must be considered when selecting the operating temperature(s). X-factor, a metric published by the ASHRAE TC9.9 committee, is the ratio of the failure rate at a given dry bulb temperature to the failure rate at 20 ˚C (68 ˚F); see Figure 7.
FIGURE 7: ASHRAE's X-factor as a function of IT inlet temperature.
The data illustrate that, relative to the failure rate of servers at 20 ˚C (68 ˚F), there will be an increase in failures as the operating temperature rises. Therefore, simply raising a fixed set point temperature will always decrease reliability if the servers follow the curve of Figure 7.
Floating temperatures up and down is the only way to maintain reliability. For example, if a data center was at 16.1 °C (61 ˚F) for half of the year (X-factor = 0.8) and 24 °C (75 ˚F) for the other half of the year (X-factor = 1.2), the average X-factor would equal 1. In other words, there would be no impact on failures overall.
Figure 8 demonstrates the impact that the maximum float temperature has on X-factor for each of the cities analyzed.
FIGURE 8: X-factor as a function of floating temperature for Chicago, Seattle, and Miami.
This data shows that in Chicago, floating the IT environment up to 23.3 ˚C (74 ˚F) enables cost savings without any reliability penalty; beyond this temperature, there will be an increase in failures relative to the baseline. For Seattle, this temperature is 21.1 ˚C (70 ˚F), and for Miami, it is 20 ˚C (68 ˚F). This, again, is driven largely by the bin weather data. If there are a lot of cooler temperature-hours (like in Chicago), they can offset the warmer temperature-hours, but in a more tropical environment (like Miami), there are not as many cool temperature-hours to counter those above 20 ˚C (68 ˚F).
When comparing the baseline scenario to the higher fixed temperature of 27 ˚C (80 ˚F), there is a 31 percent increase in failures. This is regardless of location because, now, the IT equipment is exposed to the same higher temperature year-round.
Another common discussion point regarding the reliability implications of raising IT temperatures is what happens in the event of a power outage. If a data center is at a higher initial temperature, there is less ride-through time before things overheat and crash the IT equipment if the cooling system is down.4 Unfortunately, there currently seems to be a lack of quantified data on the subject of reliability implications.

ALTERNATIVE SCENARIOS
The analysis presented above was based on a particular architecture with particular assumptions. Two key variations are addressed below because they are common occurrences in today's data centers: oversized CRAHs and lightly loaded data centers.
4 See Schneider Electric White Paper 179, Data Center Temperature Rise During a Cooling System Outage.
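The time-weighted averaging behind the floating-temperature example is simple to sketch. The helper function and hour weights below are illustrative (the article does not publish its model code), but the two-point case reproduces the article's numbers:

```python
# Sketch of the time-weighted X-factor calculation for a floating data center.
# profile: list of (hours, x_factor) pairs covering the whole year.

def average_x_factor(profile):
    """Hours-weighted mean X-factor (relative failure rate vs 20 C baseline)."""
    total_hours = sum(h for h, _ in profile)
    return sum(h * x for h, x in profile) / total_hours

# Article's example: half the year at 16.1 C (x = 0.8), half at 24 C (x = 1.2)
print(average_x_factor([(4380, 0.8), (4380, 1.2)]))   # 1.0 -> no net change

# Fixed higher temperature year-round (x = 1.31 per the X-factor curve):
print(average_x_factor([(8760, 1.31)]))               # always worse than 1.0
```

In a real analysis the profile would come from the bin weather data, with each bin's IT inlet temperature mapped to an X-factor via the Figure 7 curve.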
FIGURE 9: Effect of CRAH oversizing on 10-year energy cost floating from 15.6 °C (60 °F) to 27 °C (80 °F).
What if the CRAHs were oversized?
In the analysis that was described, it was assumed that the CRAH airflow was perfectly matched to the IT server airflow requirement, which is the best case from a capital expense perspective. However, this almost never happens in practice because there is always some portion of the cool air that bypasses the IT equipment inlets. In an actual data center, the installed CRAH airflow capacity is always greater than that required by the IT equipment to ensure that all IT equipment receives the proper amount of cool air. Some of this oversizing may be intentional, as a safety margin or for redundancy, and some is accidental because of difficulty in forecasting loads or shrinking loads due to virtualization. Uptime Institute assessments5 have found this CRAH oversizing to be, on average, 2.6 times that required by the IT equipment. This oversizing is obviously a capital expense penalty but can actually reduce energy consumption compared to the ideal "perfectly matched" case.
This is due to the fan laws (sometimes referred to as the cube law), where fan power is proportional to the cube of the fan shaft speed. When CRAH airflow is oversized, the variable speed fans operate at a lower CFM (i.e., lower speed), therefore consuming less energy. An analysis was conducted of the 10-year cooling energy implication of oversizing the CRAH airflow by 25, 50, 75, and 100 percent while floating the IT inlet temperature up to 27 °C (80 °F). Figure 9 shows that, while all three cities experienced an energy reduction as the oversizing increased, Miami exhibited the largest energy reduction (steeper slope). This is because Miami experienced a limited number of hours at colder temperatures where the fans could reduce their speed; therefore, a reduction in fan energy by oversizing the CRAH units was realized for nearly all bin hours. Note that this CRAH oversizing comes with an increase in capital expense that typically exceeds the 10-year cooling energy savings. While some oversizing helps prevent hot spots in front of IT equipment, this practice must be balanced with proper air management practices.

What if the data center was only at 50 percent load?
This is certainly a valid question, as most data center capacities are based on uncertain future loads, resulting in systems that are under-utilized in practice. The majority of data centers operate between 30 and 60 percent load.
The same analysis described before was run, but with the 1 MW data center loaded to 50 percent (500 kW) and an additional 25 percent CRAH capacity. The results are illustrated in Table 4 on page 57 (baseline versus floating temperatures) and Table 5 on page 57 (baseline versus fixed higher temperature).
When temperatures are floated in a 50 percent loaded data center, the savings (as a percent of the energy at the fixed baseline temperature) improves. The majority of the additional savings comes from having less chiller energy, which is the result of more free cooling hours. This happens because a dry cooler that is half loaded is capable of attaining the
5 https://uptimeinstitute.com/uptime_assets/c7f39bad00527fa4e2207a5f1d5dfc1f8295a0a27287bb670ad03fafbdaa0016-00000web4.pdf
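The cube law gives a quick first-order estimate of the oversizing benefit. The model below is a simplification of my own, not the article's: it assumes the oversized fan capacity all stays online at reduced speed, and it ignores minimum fan speeds, motor/VFD efficiency curves, and non-fan CRAH losses. Under those assumptions, with installed capacity k times the required airflow, each fan runs at 1/k speed and draws (1/k)^3 power, and with k times the fan capacity online, total fan power scales as 1/k^2:

```python
# First-order sketch (simplified model, not the article's) of why oversized
# CRAHs can save fan energy: per-fan power scales as (speed)^3, so k-times
# oversized capacity running at 1/k speed draws k * (1/k)^3 = 1/k^2 in total.

def relative_fan_energy(oversize_factor: float) -> float:
    """Total CRAH fan power relative to a perfectly matched (k = 1) system."""
    k = oversize_factor
    return k * (1.0 / k) ** 3          # = 1 / k**2

for pct in (0, 25, 50, 75, 100):
    k = 1.0 + pct / 100.0
    print(f"{pct:3d}% oversized -> {relative_fan_energy(k):.2f}x fan energy")
# 25% oversizing -> 0.64x fan energy; 100% -> 0.25x
```

As the article notes, this operating saving competes with the capital expense of the extra units, which typically exceeds the 10-year cooling energy savings.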
TABLE 4: Summary of results from baseline of 20 °C (68 °F) fixed to floating from 15.6 °C (60 °F)–27 °C (80 °F) for a 50 percent loaded data center.
pPUE (cooling only): Chicago improves from 1.092 to 1.075; Seattle improves from 1.091 to 1.079; Miami improves from 1.157 to 1.138.
Temperature range with lowest TCO: Chicago 27 °C (80 °F); Seattle 27 °C (80 °F); Miami 21 °C (70 °F).
X-factor: Chicago improves from 1 to 0.94; Seattle improves from 1 to 0.87; Miami worsens from 1 to 1.27.

TABLE 5: Summary of results from a baseline of 20 °C (68 °F) fixed to a fixed 27 °C (80 °F) for a 50 percent loaded data center.
pPUE (cooling only): Chicago improves from 1.092 to 1.075; Seattle improves from 1.091 to 1.080; Miami improves from 1.157 to 1.138.
X-factor: worsens from 1 to 1.309 in all three cities.
chilled water temperature earlier in the year (smaller approach temperature).
These savings are attainable if the data center temperature can float. In practice, this is almost never done because control systems are not set up to adjust temperatures dynamically/automatically. Table 5 shows the results for the 50 percent loaded data center when the IT space is raised to a fixed temperature of 27 ˚C (80 ˚F). Compared to the baseline fixed temperature, this represents an energy penalty in all three cities analyzed. In addition, this represents an increase in the X-factor, since the IT equipment is exposed to a constant higher temperature. As mentioned earlier, this can be a reliability concern. In all scenarios, PUE improved, which points to the limitation of using this as a sole metric in making decisions.
There are several factors that go into the percent improvements/penalties shown in these tables. A 50 percent loaded data center has an oversized dry cooler, which enables it to get more economizer hours, which means less time on the chiller. This impacts not only the dry cooler and chiller energy, but also (in the floating case) the IT kWh penalty. These changes are location-specific (bin data-specific), and as these dynamics change, the big drivers of the total energy also change.
For this reason, the results in this article might seem counter-intuitive. Remember that all results shown are relative savings/penalties compared to the baseline for the particular location and load.

RECOMMENDATIONS
The analyses in this article demonstrate that there are many variables that influence cost savings (or penalty), and that raising temperatures is not always a good thing. Before making temperature changes to a data center, it is important to have a solid understanding of the design conditions, system attributes, load and so on. Follow these recommendations before raising data center temperatures:
• Air management practices such as containment and blanking panels must be in place before attempting to increase IT inlet temperatures. This will avoid creating hot spots.
• Make sure the behavior of the IT equipment is understood (power consumption and CFM requirement) as temperatures are raised. Ask the IT vendors for this information.
• Consider whether the BIOS settings of the IT equipment can be adjusted to optimize performance at higher temperatures. This requires a higher level of collaboration between facilities and IT departments.
• X-factor predicts a relative increase in failure rates, but work with IT vendor(s) to determine if the actual rate is significant enough to be a concern.
• Since data centers are not solely made up of servers, make sure the reliability impact on other equipment like storage and networking is understood.
• Ensure that the cooling architecture can operate at elevated temperatures (e.g., some chillers cannot run at higher chilled water temperatures).
• Make sure that growth plans account for the potential negative energy impact of increasing IT inlet temperatures. In other words, a savings at 50 percent load might actually be a cost penalty at 80 percent load.
• Model out how much total energy may be saved by raising temperatures versus other optimization strategies. Software is available to help analyze the system dynamics of your specific data center. This is critical because every data center will behave differently.
• When evaluating changes, be sure to look at total energy consumption as a metric, as PUE alone can be misleading.

CONCLUSION
Data center operators are struggling with the decision of raising temperatures in their IT space. Is it safe to do this? What is the right temperature? Is it worth the increased risk? These are some of the questions they are faced with. This article explains the implications of making the choice to raise IT temperatures. It is important that the architecture be fully understood and that a complete analysis is done before choosing the operating points. This analysis demonstrated that:
• The cooling architecture and geographic location (specifically the temperature profile of the climate) have a significant impact on the optimal IT temperature set point.
• The shapes of the server fan power and CFM curves are key drivers.
• While raising temperatures improves chiller efficiency (by increasing economizer hours), those savings can be offset by increases in energy consumption by the IT equipment and the air handlers.
• Operating conditions like percent load and CRAH oversizing/redundancy influence whether you see a savings or a cost penalty.
• Do not assume that raising the temperature is always a good thing. Understand the specific system dynamics completely before making changes.
• Cooling architectures that use direct and indirect air economizer modes will likely perform better than the packaged chiller architecture that was analyzed in this article.

AUTHOR BIOGRAPHIES:
Wendy Torell is a Senior Research Analyst at Schneider Electric's Data Center Science Center. In this role, she researches best practices in data center design and operation, publishes white papers and articles, and develops TradeOff Tools to help clients optimize the availability, efficiency, and cost of their data center environments. She also consults with clients on availability science approaches and design practices to help them meet their data center performance objectives. She received her B.S. in Mechanical Engineering from Union College in Schenectady, New York and her M.B.A. from the University of Rhode Island. Wendy is an ASQ Certified Reliability Engineer. She can be reached at wendy.torell@schneiderelectric.com.

Kevin Brown is the Vice President of Data Center Global Solution Offer & Strategy at Schneider Electric. Kevin holds a B.S. in Mechanical Engineering from Cornell University. Prior to this position at Schneider Electric, Kevin served as Director of Market Development at Airxchange, a manufacturer of energy recovery ventilation products and components in the HVAC industry. Before joining Airxchange, Kevin held numerous senior management roles at Schneider Electric, including Director, Software Development Group. He can be reached at kevin.brown@schneiderelectric.com.

Victor Avelar is the Director and Senior Research Analyst at Schneider Electric's Data Center Science Center. He is responsible for data center design and operations research, and consults with clients on risk assessment and design practices to optimize the availability and efficiency of their data center environments. Victor holds a B.S. in Mechanical Engineering from Rensselaer Polytechnic Institute and an M.B.A. from Babson College. He is a member of AFCOM. He can be reached at victor.avelar@schneiderelectric.com.