Efficiency and
Design
Volume 13 | November 2009
Green Power Protection spinning
into a Data Center near you
Isolated-Parallel UPS Systems
Efficiency and Reliability?
Powering Tomorrow's Data Center:
400V AC versus 600V AC Power Systems
2 | THE DATA CENTER JOURNAL www.datacenterjournal.com
FACILITY
CORNER
ELECTRICAL
4 ISOLATED-PARALLEL UPS
SYSTEMS EFFICIENCY AND
RELIABILITY?
By Frank Herbener & Andrew Dyke, Piller Group
GmbH, Germany
In today's data center world of ever
increasing power demand, the scale
of mission-critical business dependent
upon uninterruptible power grows ever
more. More power means more energy,
and the battle to reduce running costs is
increasingly fierce.
MECHANICAL
8 OPTIMIZING AIR COOLING
USING DYNAMIC TRACKING
By John Peterson, Mission Critical Facility
Expert, HP
Dynamic tracking should be considered
as a viable method to optimize the
effectiveness of cooling resources in a
data center. Companies using a
dynamic tracking control system benefit
from reduced energy consumption and
lower data center
costs.
CABLING
11 PREPARE NOW FOR THE
NEXT-GENERATION DATA
CENTER
By Jaxon Lang, Vice President, Global
Connectivity Solutions Americas, ADC
Fueled by applications such as IPTV,
Internet gaming, file sharing and mobile
broadband, the flood of data surging
across the world's networks is rapidly
morphing into a massive tidal wave,
one that threatens to overwhelm any
data center not equipped in advance to
handle the onslaught.
SPOTLIGHTS
ENGINEERING AND
DESIGN
14 GREEN POWER PROTECTION
SPINNING INTO A DATA
CENTER NEAR YOU
By Frank DeLattre, President, VYCON
Flywheel energy storage systems
are gaining strong traction in data
centers, hospitals, industrial and other
mission-critical operations where
energy efficiency, costs, space and
environmental impact are concerns.
This green energy storage technology
is solving sophisticated power problems
that challenge computing operations
every day.
18 POWERING TOMORROW'S
DATA CENTER: 400V AC
VERSUS 600V AC POWER
SYSTEMS
By Jim Davis, business unit manager, Eaton
Power Quality and Control Operations
While major advancements in electrical
design and uninterruptible power
system (UPS) technology have provided
incremental efficiency improvements,
the key to improving system-wide
power efficiency within the data center
is power distribution.
22 DATA CENTER EFFICIENCY
IT'S IN THE DESIGN
By Lex Coors, Vice President Data Center
Technology and Engineering Group, Interxion
Data centers have always been power
hogs, but the problem has accelerated
in recent years. Ultimately, it boils down
to design, equipment selection and
operation, of which measurement is an
important part.
IT CORNER
25 ONLINE BACKUP OR CLOUD
RECOVERY?
By Ian Masters, UK Sales & Marketing Director,
Double-Take Software
There is an old saying in the data
protection business that the whole point
of backing up is preparing to restore.
Having a backup copy of your data is
important, but it takes more than a
pile of tapes (or an on-line account) to
restore.
26 FIVE BEST PRACTICES
FOR MITIGATING INSIDER
BREACHES
By Adam Bosnian, VP Marketing, Cyber-Ark
Software
Mismanagement of processes involving
privileged access, privileged data, or
privileged users poses serious risks to
organizations. Such mismanagement is
also increasing enterprises' vulnerability
to internal threats that can be caused
by simple human error or malicious
deeds.
All rights reserved. No portion of DATA CENTER Journal may be reproduced without written
permission from the Executive Editor. The management of DATA CENTER Journal is not
responsible for opinions expressed by its writers or editors. We assume that all rights in
communications sent to our editorial staff are unconditionally assigned for publication.
All submissions are subject to unrestricted right to edit and/or to comment editorially.
AN EDM2R ENTERPRISES, INC. PUBLICATION ALPHARETTA, GA 30022
PHONE: 678-762-9366 | FAX: 866-708-3068 | WWW.DATACENTERJOURNAL.COM
DESIGN : NEATWORKS, INC | TEL: 678-392-2992 | WWW.NEATWORKSINC.COM
IT OPS
28 ENERGY MEASUREMENT
METHODS FOR THE DATA
CENTER
By Info-Tech Research Group
Ultimately, energy data needs to be
collected from two cost buckets:
data-serving equipment (servers,
storage, networking, UPS) and support
equipment (air conditioning, ventilation,
lighting, and the like). Changes in one
bucket may affect the other bucket, and
by tracking both, IT can understand this
relationship.
EDUCATION
CORNER
30 COMMON MISTAKES IN
EXISTING DATA CENTERS &
HOW TO CORRECT THEM
By Christopher M. Johnston, PE and Vali Sorell,
PE, Syska Hennessy Group, Inc.
After you've visited hundreds of data
centers over the last 20+ years
(like your authors), you begin to
see problems that are common
to many of them. We're taking
this opportunity to list some
of them and to recommend
how to correct them.
YOUR TURN
32 TECHNOLOGY AND THE
ECONOMY
By Ken Baudry
An article from our Experts' Blog
Holis-Tech................................. Inside Front
www.holistechconsulting.com
MovinCool ..................................... pg 1
www.movincool.com
PDU Cables .................................. pg 3
www.pducables.com
Piller ............................................... pg 7
www.piller.com
Server Tech ................................... pg 9
www.servertech.com
Snake Tray .................................... pg 10
www.snaketray.com
Binswanger ................................. pg 13
www.binswanger.com/arlington
Upsite ............................ pgs 19, 21, 23
www.upsite.com
Universal Electric ....................... pg 20
www.uecorp.com
Sealeze .......................................... pg 22
www.coolbalance.biz
AFCOM ........................................... pg 24
www.afcom.com
7x24 Exchange ............................ pg 27
www.7x24exchange.org
Info-Tech Research Group ....... pg 29
www.infotech.com/measureit
Data Aire ....................................... Back
www.dataaire.com
CALENDAR
NOVEMBER
November 15 - November 18, 2009
7x24 Exchange International 2009 Fall Conference
www.7x24exchange.org/fall09/index.htm
DECEMBER
December 2 - December 3, 2009
KyotoCooling Seminar: The Cooling Problem Solved
www.kyotocooling.com/KyotoCooling%20Seminars.html
December 1 - December 10, 2009
Gartner 28th Annual Data Center Conference 2009
www.datacenterdynamics.com
VENDOR
INDEX
                               Parallel    System      Isolated    Distributed  Isolated-Parallel
                               Redundant   Redundant   Redundant   Redundant    Redundant
Fault tolerant                 No          Yes         Yes         Yes          Yes
Concurrently maintainable      No          Yes         Yes         Yes          Yes
Load management required       No          No          Yes         Yes          No
Typical UPS module loading     85%         50%         100%*       85%          94%
(max)
Reliability order (1 = best)   5           1           4           3            2

* One module is always completely unloaded.
Table 1 Comparison of UPS scheme topologies.
A parallel redundant scheme usually
provides N+1 redundancy to boost
reliability but suffers from single points
of failure, including the output paralleling
bus, and the scheme is limited to around
5 or 6 MVA at low voltages. The whole
system is not fault tolerant and is difficult
to concurrently maintain.
A System + System approach can
overcome the maintenance and fault tolerance
issues but suffers from a very low operating
point on the efficiency curve. Like the parallel
redundant scheme, it too is limited in scale
at low voltages.
An isolated or distributed redundant
scheme can be employed to tackle all these
problems, but such schemes introduce
additional requirements such as essential
load-sharing management and static transfer
switches for single-corded loads.
The Isolated-Parallel (IP) rotary UPS
system eliminates the fundamental drawbacks
of conventional approaches to provide
a highly reliable, fault tolerant, concurrently
maintainable and, yes, highly efficient
solution.
IP SYSTEM CONFIGURATION
The idea [1] of an IP system is to use
a ring bus structure with individual UPS
modules interconnected via 3-phase isolation
chokes (IP chokes). Each IP choke is designed
to limit fault currents to an acceptable
level while allowing sufficient
load sharing in the case of a module output
failure. Load sharing communications are
not required, and the scale of low voltage
systems can be greatly increased.
LOAD SHARING
In normal operation each critical load
is directly supplied from the mains via its as-
sociated UPS. In the case that the UPSs are all
equally loaded there is no power transferred
through the IP chokes. Each unit indepen-
dently regulates the voltage on its output bus.
In an unbalanced load condition each
UPS still feeds its dedicated load, but the units
with resistive loads greater than the average
load of the system receive additional active
Isolated-Parallel UPS Systems
Efficiency and Reliability?
FRANK HERBENER & ANDREW DYKE, PILLER GROUP GMBH, GERMANY
In today's data centre world of ever increasing power demand, the scale of mission-critical business
dependent upon uninterruptible power grows ever more. More power means more energy, and
the battle to reduce running costs is increasingly fierce. Optimizing system efficiency without
compromise in reliability seems like an impossible task... or is it?
Figure 1 Isolated-Parallel System
THE DATA CENTER JOURNAL | 5 www.datacenterjournal.com
power from the lower loaded UPS via the IP bus (see Figure 2). It is
the combination of the relative phase angles of the UPS output busses
and the impedance of the IP choke that controls the power flow. The
relative phase angles of the UPS must be naturally generated in
correlation to the load level in order to provide natural load
sharing among the UPS modules without the need for active load
sharing controls.
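The phase-angle mechanism can be sketched numerically. The following is an illustrative sketch using the classic two-source power transfer relation, not Piller's actual control law; the voltage, choke reactance and phase angle are invented values chosen only to show the effect of a small load-dependent phase shift.

```python
import math

def choke_power_transfer(v1, v2, delta_deg, x_ohms):
    """Approximate active power (W) flowing through a coupling
    reactance between two AC voltage sources, per the classic
    P = V1 * V2 * sin(delta) / X relation (losses ignored)."""
    return v1 * v2 * math.sin(math.radians(delta_deg)) / x_ohms

# Illustrative values only: 400 V sources, a 0.05-ohm IP choke,
# and a 2-degree phase lag on the more heavily loaded module.
p = choke_power_transfer(400.0, 400.0, 2.0, 0.05)
print(f"{p / 1000:.0f} kW flows toward the lagging (heavier-loaded) bus")
```

Even a couple of degrees of phase difference moves on the order of 100 kW through the choke, which is why no explicit load-sharing communication is needed.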
Figure 2 Example of load sharing in an IP system consisting of 16
UPS Modules
The influence of the IP choke should also be considered: with
all UPS modules having the same output voltage, the impedance of
the IP choke inhibits the exchange of reactive current, so that reactive
power control is also not necessary.
Looking at the mechanisms of natural load sharing in an IP
system, it is obvious that a normal UPS bypass operation would
significantly disturb the system. So, if the traditional bypass operation
is not allowed in an IP system, what will happen in the case of a sudden
shutdown of a UPS module? To say "absolutely nothing" would be
slightly exaggerated, but "almost nothing" is the reality.
Figure 3 Example of redundant load supply in the case where one UPS fails.
The associated load is still connected to the IP bus via the IP
choke, which now works as a redundant power source. The load will
automatically be supplied from the IP bus without interruption. In
this mode, each of the remaining UPS modules equally feeds power
into the IP bus (Figure 3). There is no switching activity necessary to
maintain supply to the load.
An additional breaker between the load and the IP bus allows
connection of the load directly to the IP bus, enabling the isolation of
the faulty UPS under controlled conditions.
UPS TOPOLOGY
The most suitable UPS topology to achieve the aforementioned
load-dependent phase angle in a natural way is a rotary or diesel
rotary UPS with an internal coupling choke, as shown in Figure 4.
1 Utility bus
2 IP bus
3 IP bus (return)
4 Rotary UPS with flywheel energy store
5 Load bus
6 IP choke
7 Transfer breaker pair (bypass)
8 IP bus isolation breakers
Note that a UPS module without a bi-directional energy store
(e.g. battery or induction coupling) can be used, but the system is
likely to exhibit lower stability under transient conditions.
FAULT ISOLATION
There are two fault locations that must be evaluated: a) the IP
bus itself and b) the load side of each UPS.
a) A fault on the IP bus is the most critical because it results in
the highest local fault currents. The fault is fed in parallel by each UPS
connected to the IP bus but limited by the sub-transient reactance of
the UPS combined with the impedance of its IP choke. This means
that the effect on the individual UPS outputs is minimized and the
remaining focal point is the fault withstand of the IP ring itself.
Figure 4 IP system using Piller UNIBLOCK T Rotary UPS with
bi-directional energy store.
6 | THE DATA CENTER JOURNAL www.datacenterjournal.com
b) A fault on the load side of a UPS is mostly fed by the
associated UPS, limited by its sub-transient reactance only. A current from
each of the non-affected UPS modules is fed into the fault too, but
because there are two IP chokes in series between the fault and
each non-affected UPS, this current contribution is very much
smaller. As a result, the disturbance at the non-affected loads is
very low. This, in combination with the high fault current capability
of rotary UPS, ensures fast clearing of the fault while effectively
isolating the fault from the other loads.
Figure 5 Example of a fault current distribution in case of a short
circuit on the load side of UPS #2
CONTROL
The regulation of voltage, power and frequency, plus any
synchronization, is done by the controls inside each UPS module. The
UPS also controls the UPS-related breakers and is able to synchronize
itself to different sources. Each system is controlled by a separate
system control PLC, which operates the system-related breakers and
initializes synchronization processes if necessary. The system control
PLC also remotely controls the UPS for all operations that are
necessary for proper system integration. Redundant master
control PLCs are used to control the IP system as a whole. Additional
pilot wires interconnecting the system controls allow safe system
operation in the improbable case that both master control PLCs fail.
MODES OF OPERATION
In the case of a mains failure, each UPS automatically disconnects
from the mains and the load is initially supplied from the energy
storage device of the UPS. From this moment on, the load sharing
between the units is done by a droop function based on a power-frequency
characteristic implemented in each UPS. No load
sharing communication between the units is required. After the diesel
engines are started and engaged, the loads are automatically transferred
from the UPS energy storage device to the diesel engine so the
energy storage can be recharged and is then available for further use.
To achieve proper load sharing in diesel operation as well, each
diesel engine is independently controlled by its UPS, whether the
engine is mechanically coupled to the generator of the UPS (DRUPS)
or an external diesel generator (standby) is used. A special regulator
structure inside the UPS, in combination with the bi-directional
energy storage device, allows active frequency and phase stabilization
while keeping the load supplied from the diesel engine.
The retransfer of the system to utility is controlled by the master
control. The UPS units are re-transferred one by one, thereby avoiding
severe load steps on the utility. After the whole system is
synchronized and the first UPS system is reconnected to utility, the load
sharing of those UPS systems which are still in diesel operation cannot
be done by the regular droop function. To overcome this, Piller
Group GmbH invented and patented Delta-Droop Control
(DD-Control). This allows proper load sharing under this condition
without relying on load sharing communications. With the
implementation of DD-Control in the UPS modules, all UPS systems can
be reconnected to utility step by step until the whole IP system is in
mains operation once more. This removes another problem in large
scale systems: that of step-load re-transfer to utility after mains failure.
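The droop function described above can be illustrated with a toy calculation (hypothetical numbers; real droop settings are equipment-specific). Each unit derives its own power set point purely from the common bus frequency, which is why no load-sharing communication link is required.

```python
def droop_power(f_nominal_hz, f_measured_hz, droop_pct, p_rated_kw):
    """Power output (kW) commanded by a proportional frequency droop:
    each unit slides down its own frequency/power line, so identical
    paralleled units naturally pick up equal shares of the load."""
    slope_hz_per_kw = (droop_pct / 100.0) * f_nominal_hz / p_rated_kw
    return (f_nominal_hz - f_measured_hz) / slope_hz_per_kw

# Illustrative numbers: a 50 Hz system, 3 % droop, 2000 kW units.
# A common bus frequency of 49.97 Hz loads every unit identically.
p = droop_power(50.0, 49.97, 3.0, 2000.0)
print(f"each unit picks up {p:.0f} kW")
```

Because every unit reads the same shared frequency and applies the same characteristic, the shares match without any exchange of load data between units.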
MAINTAINABILITY
The IP bus system is probably the simplest high-reliability
system to concurrently maintain, because the loads are independently
fed by UPS sources and these sources can readily be removed from
and returned to the system without load interruption. Not only that,
but the ring bus and the IP chokes can also be maintained without
load interruption. All the other solutions with similar maintainability
(System, Isolated and Distributed redundant) have far greater
complexity of infrastructure, leading to more maintenance and increased
risk during such operations.
PROJECTS
The first IP system was realized in 2007 for a data center in
Ashburn, VA. It consists of two IP systems, each equipped with 16 Piller
UNIBLOCK UBT 1670 kVA UPS with flywheel energy storage (total
installed capacity > 2 x 20 MW at low voltage). Each of the UPS is
backed up by a separate 2810 kVA diesel generator, which can be
connected directly to the UPS load bus and which is able to supply both
the critical and the essential loads. Since the success of this first
installation, three more data centers have been commissioned, of which the
first phase of one is complete (a further 20 MW) as of today.
Further projects are planned at medium voltage, and consulting
engineers are also planning a configuration that combines the benefits
of the IP system with the energy efficiency of natural gas engines.
CONCLUSION
In the form of an IP bus topology, a UPS scheme that combines
high reliability with high efficiency is possible.
High reliability is obtained by virtue of the use of rotary UPS
(with MTBF values in the region of 3-5 times better than static
technology), combined with the elimination of load sharing controls, no mode
switching under failure conditions, load fault isolation and simplified
maintenance.
High efficiency can be obtained with such a high-reliability
system because of the ability to match the System + System fault
tolerance without the penalty of low operating efficiencies. A 20 MW
design load can run with modules that are 94% loaded and yet offer
reliability similar to the S+S scheme, which has a maximum module
loading of just 50%. That can translate into a difference in UPS
electrical efficiency of 3 or 4%, which means a potential waste in
operating costs of $750,000 per year (ignoring additional cooling costs).
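The $750,000 figure is consistent with simple arithmetic over a year of continuous operation; the electricity rate below is an assumption chosen to reproduce it, not a figure from the article.

```python
def annual_loss_cost(load_kw, eff_delta_pct, price_per_kwh):
    """Extra yearly energy cost of an efficiency gap, assuming
    the design load runs continuously (8,760 hours per year)."""
    extra_kw = load_kw * eff_delta_pct / 100.0
    return extra_kw * 8760 * price_per_kwh

# A 20 MW design load, a 4 % UPS-efficiency gap, and an assumed
# utility rate of $0.107/kWh reproduce the ~$750,000 figure.
cost = annual_loss_cost(20_000, 4.0, 0.107)
print(f"${cost:,.0f} per year")
```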
What's more, the solution is not only concurrently maintainable
and fault tolerant with high reliability and high efficiency, but can also
be realized at either low or medium voltages and can be implemented
with DRUPS, separate standby diesel engines or even gas engines for
super-efficient large scale facilities.
For complete information on the invention and history of
IP systems, refer to the Piller Group GmbH paper by Frank Herbener
entitled "Isolated-Parallel UPS Configuration" at www.piller.com.
FACILITY CORNER
MECHANICAL
INSIDE THE DATA CENTER
One of the most challenging tasks of
running a data center is managing
the heat load within it. This requires
balancing a number of factors,
including equipment location
adjacencies, power accessibility and available
cooling. As high-density servers continue to
grow in popularity along with in-row and in-rack
solutions, the need for adequate cooling
in the data center will continue to grow at a
substantial rate. To meet the need for cooling
using a typical under-floor air distribution
system, a manager often adjusts perforated
floor tiles and lets the nearest Computer
Room Air Conditioner (CRAC) unit react
as necessary to each new load. However,
this may cause a sudden and unpredictable
fluctuation in the air distribution system due
to changes in static pressure and air rerouting
to available outlets, which can have a ripple
effect on multiple units. With new outlets
available, air, like water, will seek
the path of least resistance; the
new outlets may starve existing
areas of cooling, causing the existing
CRAC units to cycle the air
faster. This becomes a wasteful
use of fan energy, let alone fluctuations
of cooling load energy
allocation.
Most managers understand
that the air supply plenum needs
to be a totally enclosed space to
achieve pressurization for air
distribution. Oversized or unsealed
cutouts allow air to escape
the plenum, reducing the static pressure and
effectiveness of the air distribution system.
Cables, conduits for power and piping can
also clog up the air distribution path, so
thoughtful consideration and organization
should be an essential part of the data center
operations plan. However, even the best-laid
plan can still end up with areas that are
starved for cooling air.
In a typical layout, there are rows of
computer equipment racks that draw cool air
from the front and expel hot air at the rear.
This requires an overall footprint larger than
the rack itself (Figure 1).
When adding new data center equipment,
data center managers need to manage
unpredictable temperatures and identify a
new perfect balance of how many perforated
tiles to use and where to locate them. They
involve maintenance personnel to adjust
CRAC units, assist with tile layouts, and even
possibly add or relocate the units as necessary.
Due to the predetermined raised floor
height, supply air temperature and humidity
requirements, the volatile air distribution system
becomes an inflexible piece of the overall
puzzle, at the expense of energy and possibly
performance due to inadequate cooling.
Meanwhile, the CRAC units are operating
at variable rates to meet this load, but
mostly they are operating at their maximum
capacity instead of as needed. Why?
One reason is where the air temperature is
measured. Each unit operates on the
return air temperature measured at the unit,
and all units share the same return air.
This means that if the load is irregular in the
racks, the units simply cool for the overall
required capacity. Apply this across a data
center, and the units are generally handling
the cooling load without altering their flow
based on changes happening in any localized
area, which consequently allows that large
variance of temperatures in the rows.
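The masking effect of shared return air can be seen in a toy example (made-up exhaust temperatures, not measurements from any site): units controlling on mixed return air see only the average, so a single hot rack barely registers.

```python
# Illustrative rack exhaust temperatures (deg C); one rack runs hot.
rack_exhaust = [32, 33, 31, 45, 32, 33]

# CRAC units sharing the same return-air path each control on the
# mixed (average) temperature of all exhaust streams combined...
mixed_return = sum(rack_exhaust) / len(rack_exhaust)

# ...while the worst rack sits far above that average.
print(f"mixed return: {mixed_return:.1f} C, hottest rack: {max(rack_exhaust)} C")
```

The control loop sees a modest average and holds back, even though one row needs substantially more airflow, which is the temperature variance the article describes.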
Temperature discrepancy
is the main concern
for most data center
managers. They would like
the air system not to be the
limiting factor when adding
new equipment to racks
and prefer to remove the
variable of fickle air cooling
from the equation of equipment
management. At the
same time, almost behind
the scenes, facility costs
from cooling are increasing
to match the new load,
Optimizing Air Cooling Using
Dynamic Tracking
BY JOHN PETERSON, MISSION CRITICAL FACILITY EXPERT, HP
Dynamic tracking should be considered as a viable method to optimize the effectiveness of cooling
resources in a data center. Companies using a dynamic tracking control system benefit from reduced
energy consumption and lower data center costs.
Figure 1: Overall footprint needed per rack
System is a new offering that enables select
new and existing UPSs to deliver industry-leading
99 percent efficiency, even at low load
levels, while still providing total protection
for critical loads. With this technology, the
UPS operates at extremely high efficiency unless
utility power conditions force the UPS to
work harder to maintain clean power to the
load. The intelligent power core continuously
monitors incoming power conditions
and balances the need for efficiency with the
need for premium protection, to match the
conditions of the moment.
When high-efficiency UPS systems
are deployed, losses through the auto-transformer,
the UPS and the server equipment
produce an overall end-to-end efficiency of
approximately 84 percent.
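The 84 percent end-to-end figure follows from multiplying the per-stage efficiencies along the power path. The stage split below is an assumed illustration that lands near that total, not Eaton's published breakdown:

```python
# Assumed per-stage efficiencies along the power path (illustrative).
stages = {
    "auto-transformer": 0.98,
    "UPS (high-efficiency mode)": 0.99,
    "server power supply": 0.87,
}

# End-to-end efficiency is the product of the stage efficiencies.
end_to_end = 1.0
for name, eff in stages.items():
    end_to_end *= eff

print(f"end-to-end efficiency ~ {end_to_end:.0%}")
```

The multiplicative chain is why the weakest stage dominates: improving the UPS alone moves the total far less than improving the server power supplies.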
400V AC POWERS AHEAD
The 400V AC power distribution
system's lower equipment cost and higher
end-to-end efficiency deliver significant CAPEX,
OPEX and TCO savings as compared
to the 600V AC system. The 400V AC system
running in conventional double conversion
mode offers an average 10 percent first-year
TCO savings and an average 5 percent TCO
savings over its 15-year service life, as compared
to the 600V AC system. When running
the 400V AC UPS in high-efficiency mode,
the first-year TCO savings increase to 16 percent,
and the 15-year TCO savings increase
to 17 percent, minimizing data center cost in
terms of both CAPEX and OPEX.
In CAPEX investment alone, the 400V AC configuration offers an average 15 percent savings over the 600V AC configuration for all system sizes analyzed. The 400V AC system's lower CAPEX gives data center managers a more cost-effective solution for expanding data center capacity. The systems analyzed produced an average annual OPEX savings of 4 percent with the 400V AC system running in double conversion mode, and 17 percent when running in high-efficiency mode. OPEX savings rates are linear across all system sizes, indicating that savings will continue to increase in direct proportion to the size of the system.
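The arithmetic behind those percentages can be checked with a toy TCO model; the dollar figures below are invented, and only the savings rates echo the article:

```python
# Toy TCO model: TCO over N years = CAPEX + N * annual OPEX.
# All dollar amounts are invented for illustration.
def tco(capex, annual_opex, years):
    return capex + annual_opex * years

capex_600, opex_600 = 1_000_000, 200_000   # hypothetical 600V AC system
capex_400 = capex_600 * 0.85               # 15% CAPEX savings
opex_400 = opex_600 * 0.83                 # 17% OPEX savings (high-efficiency mode)

for years in (1, 15):
    saving = 1 - tco(capex_400, opex_400, years) / tco(capex_600, opex_600, years)
    print(f"{years}-year TCO savings: {saving:.1%}")
```

Because CAPEX dominates early and OPEX dominates late, the first-year and 15-year savings differ even with constant percentage advantages, which matches the pattern the article reports.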
Therefore, the 400V AC power distribution system offers the highest degree of electrical efficiency for modern data centers, significantly reducing capital and operational expenditures and total cost of ownership as compared to 600V AC power systems. Recent developments in UPS technology, including the introduction of transformerless UPSs and new energy management features, further enhance the 400V AC power distribution system for maximum efficiency.
This conclusion is supported by IT industry experts who theorize that 400V AC power distribution will become standard as U.S. data centers transition away from 480V AC to a more efficient and cost-effective solution over the next one to four years.
About The Author:
Jim Davis is a business unit manager for Eaton's Power Quality and Control Operations Division. He can be reached at JimRDavis@eaton.com. For more information about the 400V UPS power scheme, visit www.eaton.com/400volt.
Chart 1: 15-year TCO (400V AC Energy Saver System vs. 600V AC double conversion mode)
If the cost savings are half as great for data centers as they have been for the airline industry, we will need to fasten our seatbelts.
Data centers have always been power hogs, but the problem has accelerated in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part. The first step for an existing data center to achieve higher efficiencies is to improve its Power Usage Effectiveness (PUEenergy) ratio. PUEenergy ratios can be used as a guide to define a data center's efficiency or green credentials, and have become the de facto metric in the past year.
A data center with a low PUEenergy of 1.5 that implements lean design and has established measurement data with demonstrable year-on-year improvements can be classified as green or energy efficient. The dream green data center would have a PUEenergy of one, which means that every watt of power at the transformer is delivered directly to the IT equipment without any losses in the site infrastructure. Unfortunately, this is not physically possible, as some infrastructure services, such as cooling, always have energy losses (for the time being).
However, an inefficient data center is recognized as anything with a PUEenergy of greater than two. These are generally based on legacy equipment, not built in a modular way and/or not operated well. So, how does an organization go about optimizing data center efficiency and improving its PUEenergy?
For organizations to reduce their PUE, they need an active focus on three areas: external efficiency, internal efficiency and customer efficiency. They need to monitor their best-practice PUE ratios against the industry standards set by the likes of the Uptime Institute, the Green Grid and the European Code of Conduct.
Although PUEenergy has been adopted by the industry sector, institutions and government bodies alike as an agreed way to measure the energy overhead of a data center, it may distract us from the ultimate goal:
A LOWER TOTAL DATA CENTER ENERGY USE AT A LOW PUEenergy.
If PUE, whether measured in power or energy, were the only benchmark indicator for governments to decide the relative energy efficiency of data centers, and in turn how best to apply a carbon tariff, then many data center owners might decide to switch on servers that were previously earmarked to meet peaks in demand. In reality this would mean lower PUEenergy ratios but higher total energy usage, which defeats the original objective and may be a problem for all.
Data Center Efficiency: It's in the Design
BY LEX COORS, VICE PRESIDENT DATA CENTER TECHNOLOGY AND ENGINEERING GROUP, INTERXION
Most companies undergoing data center projects have the mindset of cutting costs rather than helping the environment; however, they may want to adjust their focus. With data center greenhouse emissions set to overtake the airline industry in the next five to ten years, quadrupling by 2020, it has never been more critical for organizations to optimize their data centers.
Bearing this in mind, taking some of the following steps to achieve a better total site performance in energy usage will help you achieve a more energy efficient operation.
STEP 1:
Measure the transformer (or other main-source) energy usage and the IT energy usage, and calculate the PUEenergy.
STEP 2:
Start harvesting the low-hanging fruit, based on the Uptime Institute guidelines that have been set for many years and are available on their website.
STEP 3:
Measure the transformer and the IT energy usage again and calculate your new PUEenergy. You may observe that while your total energy usage has decreased, your PUEenergy ratio has increased.
STEP 4:
Start switching off unneeded infrastructure, while maintaining your redundancy levels.
STEP 5:
Measure the transformer and the IT energy usage and calculate your PUEenergy. You may now observe that your PUEenergy has decreased, and again that your total energy usage has decreased.
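The ratio behind these steps is simple, and a minimal sketch with invented figures shows why Step 3 can raise the ratio even as total energy falls:

```python
# PUEenergy = total facility energy / IT energy (both in kWh over the
# same period). All figures below are made up for illustration.
def pue_energy(total_kwh, it_kwh):
    return total_kwh / it_kwh

# Step 1: baseline measurement
print(pue_energy(3000, 2000))   # 1.5

# Step 3: IT-side savings (e.g. decommissioned servers) can shrink the
# denominator faster than the numerator, so total energy falls while
# the PUEenergy ratio rises.
print(pue_energy(2600, 1700))   # ~1.53

# Step 5: switching off unneeded infrastructure lowers the overhead,
# so both total energy and the ratio fall.
print(pue_energy(2300, 1700))   # ~1.35
```

This is the same effect, in miniature, as the carbon-tariff concern above: the ratio alone cannot distinguish genuine savings from a larger denominator.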
It comes as no surprise that good design leads to lower capital expenditure (CAPEX) and better efficiency, but what is good design? A model that has proved successful both in terms of efficiency and green credentials is Modular Design. Modular Design was developed by Lex Coors, Vice President of Data Center Technology and Engineering Group, Interxion, and is unique in that it allows for future data center expansion without interruption of services to customers.
Recent research by McKinsey and the Uptime Institute identified five key steps to achieving operational efficiency gains:
- Eliminate decommissioned servers, for an overall gain of 10-25%
- Virtualize, which leads to gains of 25-30%
- Upgrade older equipment, leading to a 10-20% gain
- Reduce demand for new servers, which can also increase efficiency by 10-20%
- Introduce greener, more power-efficient servers and enable power-saving features, which equates to a further 10-20% gain
By following the above steps, an organization can look to achieve an overall efficiency gain of 65%, significantly improving its PUE ratio.
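If each step is treated as an independent reduction, the gains compound multiplicatively rather than add; a sketch using the midpoint of each quoted range (our assumption, not McKinsey's method) lands close to the quoted 65%:

```python
# Treat each step as an independent percentage reduction in energy use;
# compounding multiplicatively avoids double-counting savings.
# Midpoints of the ranges quoted above (an assumption for illustration).
gains = [0.175, 0.275, 0.15, 0.15, 0.15]

remaining = 1.0
for g in gains:
    remaining *= (1.0 - g)

overall_gain = 1.0 - remaining
print(f"{overall_gain:.0%}")  # 63%, close to the quoted 65%
```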
The third and final piece of the efficiency puzzle is customer focus. An efficient data center should have hands-on expert support for energy efficiency implementation efforts, as well as best-practice customer installation checklists. Staff need to be able to advise on how to reduce temperatures and energy usage through things like innovative hot- and cold-aisle designs. They need the tools in place to measure and analyze efficiency, implement the latest efficiency ratings, develop and implement first-phase actions, and integrate figures and ratings with customers' CSR. Without such expertise in place, organizations will find it hard to reach their desired efficiency gains.
Green and efficient data centers are real and achievable, but emissions and the cost of energy are rising fast (although people now and then forget that these costs sometimes decrease temporarily), so we need to do more now. Organizations must work together, especially when it comes to measurement. Vendors should be providing standard meters on all equipment to measure energy usage versus productivity; if you don't know whether you're wasting energy, how can you change it?
But it's not just vendors who are responsible. Data center providers should provide leadership on industry standards and ratings that work, on data center design and operational efficiency steps, and in support of all customer IT efficiency improvements. What is apparent is that the whole industry, from the power suppliers to the rack makers, needs to work together to improve efficiencies and ensure that we are all at the forefront of efficient, green data center design.
PUEenergy measures efficiency over time, using kWh.
Online Backup or Cloud Recovery?
BY IAN MASTERS, UK SALES & MARKETING DIRECTOR, DOUBLE-TAKE SOFTWARE
IT CORNER
Cloud recovery can be a nebulous term, so I would define it based on the solution having the following features:
1. The ability to recover workloads in the cloud
2. Effectively unlimited scalability with little or no up-front provisioning
3. A pay-per-use billing model
4. An infrastructure that is more secure and more reliable than the one you would build yourself
5. Complete protection, i.e. non-expert users should be able to recover everything they need, by default.
If a solution does not meet these five criteria, then it should be called an online backup product. That may be right for your business, but such products typically require more IT knowledge and are based on specific resources.
There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an online account) to restore. You might need a replacement server, new storage, and maybe even a new data centre, depending on what went wrong. Traditionally, you would either keep spare servers in a disaster recovery data centre, or suffer a period of downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud; you want the ability to actually start up applications and use them, no matter what went wrong in your own environment.
The next area where cloud recovery can provide a better level of protection is provisioning. Even using online backup systems, organizations would have to source replacement servers in the event of an outage. The whole point of recovering to the cloud is that the provider already has plenty of servers and additional capacity on tap. If you need more space to cope with a recovery incident, you can add it to your account. Under this model, your costs are much lower than building the DR solution yourself, because you get the benefit of duplicating your environment without the upfront capital cost.
Removing the up-front price and long-term commitment shifts the risk away from the customer and onto the vendor. The vendor just has to keep the quality up to keep customers loyal, which requires great service and efficient handling of customer accounts. The cloud recovery provider takes on all the management effort and constant improvement of infrastructure that is required. A business without in-house staff familiar with business continuity planning may ultimately be much better off paying a monthly fee to someone who specializes in this area.
One area where cloud providers may be held to account is security and reliability, but I think critics hold the providers to the wrong standard. In the end, you have to compare the results that a cloud services provider can achieve, the service levels that they work to, and the cost against doing it yourself. The point is that security and reliability are hard, but they are easier at scale. Companies like Amazon and Rackspace do infrastructure for a living, and do it at huge scale. Amazon's outages get reported in the news, but how does this compare to what an individual business can achieve?
The last area where cloud recovery can deliver better results is usability and protecting everything that a business needs. While some businesses know exactly which files should be protected, most either don't have this degree of control, or haven't got users into the habit of following standard formats or saving documents into specific places. The issues that people normally get bitten by involve databases, configuration changes and weird applications that only a couple of people within the organization use. Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.
Cloud means so many different things to so many people that it sometimes seems not to mean anything at all. If you are going to depend on it to protect your data, it had better mean something specific. These five points may not cover every possible protection goal, but they set a good minimum standard.
Backing up files and data online has been around for quite a while, but it has never really taken off in a big way for business customers. There is also a new solution coming onto the market which uses the cloud for backup and recovery of company data. While these two approaches to disaster recovery appear to be similar, there are some significant differences as well. So which one would be right for you?
IT CORNER
According to a recent Computing Technology Industry Association (CompTIA) survey (see http://www.comptia.org/pressroom/get_pr.aspx?prid=1410), although most respondents still consider viruses and malware the top security threat, more than half (53 percent) attributed their data breaches to human error, presenting another dimension to the rising concern about insider threats. It should serve as a wake-up call to many organizations that inadvertent or malicious insider activity can create a security risk.
For instance, take the recent data breach that impacted the Metro Nashville Public Schools. In this case, a contractor unintentionally placed the personal information of more than 18,000 students and 6,000 parents on an unsecured Web server that was searchable via the Internet. Although this act was largely chalked up to human error and has since been corrected, anyone accessing the information while it was freely available online could create a data breach causing significant harm to these students and parents.
Moreover, the Identity Theft Resource Center (ITRC) recently reported that insider theft incidents more than doubled between 2007 and 2008, accounting for more than 15 percent of data breaches. According to the report, human error breaches, as well as those related to data-in-motion and accidental exposure, accounted for 35 percent of all data breaches reported, even after factoring in that the number of breaches declined slightly during this period.
To significantly cut the risk of these insider breaches, enterprises must have appropriate systems and processes in place to avoid or reduce human errors caused by inadvertent data leakage, sharing of passwords, and other seemingly harmless actions.
One approach to addressing these challenges is digital vault technology, which is especially valuable for users with high levels of enterprise/network access and those handling sensitive information and/or business processes: users with privileged access (including third-party vendors or consultants), executive-level personnel, and those with access to the core applications running within an organization's critical infrastructure.
Instead of trying to protect every facet of an enterprise network, digital vault technology creates safe havens: distinct areas for storing, protecting, and sharing the most critical business information, with a detailed audit trail for all activity associated with these safe havens. This encourages more secure employee behavior and significantly reduces the risk of human error.
Here are some best practices for organizations serious about
preventing internal breaches, be they accidental or malicious, of any
processes that involve privileged access, privileged data, or privileged
users.
1. ESTABLISH A SAFE HARBOR
By establishing a safe harbor, or vault, for highly sensitive data (such as administrator account passwords, HR files, or intellectual property), you build security directly into the business process, independent of the existing network infrastructure. This protects the data from the security threats of hackers and from accidental misuse by employees.
A digital vault is set up as a dedicated, hardened server that provides a single data access channel with only one way in and one way out. It is protected with multiple layers of integrated security, including a firewall, VPN, authentication, access control, and full encryption. By separating the server interfaces from the storage engine, many of the security risks associated with widespread connectivity are removed.
2. AUTOMATE PRIVILEGED IDENTITIES AND ACTIVITIES
Ensure that administrative and application identities and passwords are changed regularly, highly guarded from unauthorized use, and closely monitored, including full activity capture and recording. Monitor and report actual adherence to the defined policies. This is a critical component in safeguarding organizations and helps to simplify audit and compliance requirements, as companies are able to answer questions about who has access and what is being accessed.
As listed among the Consensus Audit Guidelines' 20 critical security controls, the automated and continuous control of administrative privileges is essential to protecting against future breaches. [Editor's note: the guidelines are available at http://www.sans.org/cag/.]
Five Best Practices for Mitigating Insider Breaches
BY ADAM BOSNIAN, VP MARKETING, CYBER-ARK SOFTWARE
Mismanagement of processes involving privileged access, privileged data, or privileged users poses serious risks to organizations. Such mismanagement is also increasing enterprises' vulnerability to internal threats that can be caused by simple human error or malicious deeds.
3. IDENTIFY ALL YOUR PRIVILEGED ACCOUNTS
The best way to start managing privileged accounts is to create a checklist of operating systems, databases, appliances, routers, servers, directories, and applications throughout the enterprise. Each target system typically has between one and five privileged accounts. Add them up and determine which area poses the greatest risk. With this data in hand, organizations can easily create a plan to secure, manage, automatically change, and log all privileged passwords.
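The checklist-and-tally exercise can be sketched in a few lines; the system names and counts below are invented for illustration:

```python
# Hypothetical inventory: (system type, number of instances,
# privileged accounts per instance).
inventory = [
    ("Linux servers",    40, 2),
    ("Windows servers",  25, 3),
    ("Oracle databases",  8, 4),
    ("Network routers",  30, 1),
]

totals = {name: count * per for name, count, per in inventory}
print(totals)
print("Total privileged accounts:", sum(totals.values()))
print("Largest exposure:", max(totals, key=totals.get))
```

Even this crude tally makes the prioritization concrete: the area with the most privileged accounts is usually the first candidate for automated password management.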
4. SECURE EMBEDDED APPLICATION ACCOUNTS
Up to 80 percent of system breaches are caused by internal users, including privileged administrators and power users, who accidentally or deliberately damage IT systems or release confidential data assets, according to a recent Cyber-Ark survey.
Many times, the accounts leveraged by these users are the application identities embedded within scripts, configuration files, or an application. The identities are used to log into a target database or system and are often overlooked within a traditional security review. Even if located, the account identities are difficult to monitor and log because they appear to a monitoring system as if the application (not the person using the account) is logging in.
These privileged application identities are increasingly scrutinized by internal and external auditors, especially during PCI- and SOX-driven audits, and are becoming one of the key reasons that many organizations fail compliance audits. Therefore, organizations must have effective control of all privileged identities, including application identities, to ensure compliance with audit and regulatory requirements.
5. AVOID BAD HABITS
To better protect against breaches, organizations must establish best practices for securely exchanging privileged information. For instance, employees must avoid bad habits (such as sending sensitive or highly confidential information via e-mail or writing down privileged passwords on sticky notes). IT managers must also ensure they educate employees about the need to create and set secure passwords for their computers instead of using sequential password combinations or their first names.
The lesson here is that the risk of internal data misuse and accidental leakage can be significantly mitigated by implementing effective policies and technologies. In doing so, organizations can better manage, control, and monitor the power they provide to their employees and systems, and avoid the negative economic and reputational impacts caused by an insider data breach, regardless of whether it was malicious or human error.
IT OPS
For many shops, this information is unavailable: IT does not receive an energy bill, and does not use, or have, tools to identify its share of energy consumption. In the past, electricity costs, especially in smaller IT shops, were of minor concern; in many cases, the energy bill was simply left in the hands of the facilities director or company accountant to pay and file away.
However, in the same study, Info-Tech finds that 28% of IT departments are now piloting an energy measurement solution of some kind, and an additional one-quarter of shops are planning a measurement project within twelve months. Many converging factors drive interest in measuring and managing energy use, and the major ones are outlined here:
- Increasing energy costs
The US Energy Information Administration (EIA) reports that between 2000 and 2007, the average price of electricity for businesses increased from 7.4 cents per kilowatt-hour (kWh) to 9.7 cents per kWh, an increase of roughly 30%.
- Burgeoning data center energy consumption
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the energy density of typical mid-range server setups increased about fourfold between 2000 and 2009 (from about 1,000 watts per square foot to almost 4,000). Greater server consumption means more waste in the form of heat, so energy consumption of cooling and support systems also spikes simultaneously.
- Green considerations
Energy consumption has an associated carbon footprint. Interest in reducing energy use has increased in IT and senior management ranks.
Ultimately, interest in energy data is driven by the age-old accounting precept: what gets measured gets done. Realizing that energy use will become a compounding issue, a growing number of IT shops seek to quantify energy as an operational cost, just like line items such as staffing and maintenance. Once the cost is accounted for, IT has a number to improve on. In this note, learn about three options for obtaining energy numbers in the data center. A companion Info-Tech Advisor research note, Energy Measurement Methods for End-User Infrastructure, describes how to obtain energy data at the user infrastructure level (workstations, printers, and the like).
CONSIDERATIONS FOR CALCULATION
Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air conditioning, ventilation, lighting, and the like). Changes in one bucket may affect the other, and by tracking both, IT can understand this relationship. These buckets are also necessary for common efficiency calculations; for more information, refer to the Info-Tech Advisor research note, If You Measure It, They Will Green: Data Center Energy Efficiency Metrics. Software for tracking energy use and cost is another consideration. While assessing the need for a full energy management solution, IT shops can use something as simple as an Excel spreadsheet to enter energy figures and track costs over a few months. Specifics on collecting data-serving and support equipment energy data, and on tracking software, are discussed further below.
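Until a full management solution is justified, the two-bucket bookkeeping really can be spreadsheet-simple; a sketch with invented monthly figures and the 2007 US business-average tariff cited above:

```python
# Monthly energy cost from the two buckets, priced at a flat tariff.
# kWh figures are invented for illustration.
RATE_PER_KWH = 0.097  # 9.7 cents/kWh, the 2007 US business average

monthly_kwh = {
    "data_serving": 52_000,  # servers, storage, networking, UPS
    "support":      34_000,  # cooling, ventilation, lighting
}

costs = {bucket: kwh * RATE_PER_KWH for bucket, kwh in monthly_kwh.items()}
total = sum(costs.values())
for bucket, cost in costs.items():
    print(f"{bucket}: ${cost:,.2f}")
print(f"total: ${total:,.2f}")
```

Tracking both buckets month over month is what exposes the relationship the article describes: a drop in server load should show up later as a smaller support-bucket bill.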
OPTION ONE: You May Already Have Access to Energy Data
Depending on the data center setup and the vintage and pedigree of the equipment, some IT shops can already collect energy numbers at the data-serving or support equipment level. The following scenarios are common starting points when beginning data collection:
- Existing software metering
Newer servers, power-distribution units (PDUs) and UPS systems have monitoring built into their included management consoles. For example, newer HP ProLiant blades ship with power tracking features, and the HP Insight Control management console provides energy monitoring capabilities.
- Existing hardware metering
Some server racks and PDUs may have hardwired meters built in. For example, some of APC's more basic rack PDUs have built-in power screens.
Unfortunately, built-in metering is rarer in the support equipment bucket. Many older data center air conditioning units and air handlers do not provide this data. In some cases, one can estimate this energy number by subtracting the data-serving bucket from the total data center energy draw. But since older data centers may not be sub-metered (the draw of the data center is not measured separately from the rest of the building), one cannot always perform this calculation, and installation of a meter is necessary.
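Where the data center is sub-metered, the support bucket falls out by subtraction; a one-line sketch with invented figures:

```python
# With a sub-metered total and metered data-serving equipment, the
# support bucket is the remainder. Figures are illustrative only.
def support_kwh(total_dc_kwh, data_serving_kwh):
    return total_dc_kwh - data_serving_kwh

print(support_kwh(86_000, 52_000))  # 34000
```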
Energy Measurement Methods for the Data Center
A recent Info-Tech study of over 800 mid-sized IT shops found that only
25% have fully adopted an IT energy measurement initiative.
If existing software or hardware metering includes management software for trending, this may be enough to set up a baseline. However, if energy numbers need to be collected manually, IT can record data from consoles, panels, or data files for short periods of time. This data can be entered into spreadsheets or dedicated software. The US Department of Energy has a directory of software packages, such as Energy Lens, a $595 US Excel plug-in, and offers a free assessment tool, the Data Center Energy Profiler.
OPTION TWO: Cheap & Cheerful
If energy numbers are not available through existing equipment or software, IT should invest in this capability. This is a common scenario for smaller or older facilities, and for many shops it is often required to measure energy on the support equipment side. Cheap and cheerful data collection options include:
- Basic watt readers
These measure wattage drawn at the plug. Inexpensive devices provide spot readings only, starting around $20 US. However, a popular line at a slightly higher price point, Watts up, offers energy tracking and PC connectivity with a graphing package, starting around $130 US. These are best suited to smaller server rooms and data centers, but may not be appropriate for larger or mission-critical facilities with aggressive energy needs.
- Industrial-strength meters
The Standard Performance Evaluation Corporation (SPEC) provides a list of heavier-duty energy meters, which typically run from $200 US to more than $2,000 US. These meters, many of which are designed for manufacturing and industrial environments, include data connectivity and are better suited to handling the industrial-grade energy requirements of multiple PDUs and high-voltage components in data centers. SPEC provides free measurement software that is verified as compatible with these devices.
To collect data in both buckets, IT may need to have an electrician or data center professional install sub-meters or dedicated measurement devices. If the organization is not yet ready for such a move, cheap and cheerful options should at least provide a rough cost number for the data-serving bucket to quantify the true operational cost of servers and storage.
Note that options one and two often come with two major disadvantages. First, some solutions model energy use of isolated components in the data center: IT still won't understand how changing the energy consumption of a group of components affects other components; for example, changing server loads affects heat output and thus air cooling needs. Second, measuring total data center energy use at only one or a few points causes flat trending; essentially, IT will have a total energy use/cost number, but won't understand how energy use trends up and down in different areas of the data center. With both of these disadvantages, long-term optimization remains difficult. Options one and two are good for getting an overall handle on energy costs, while major optimizations often require a bigger investment in option three, described next.
OPTION THREE: Professional-Grade Management Solutions
An increasing number of hardware vendors and data center energy equipment providers offer full management packages for data centers, which include integrated hardware and software and extensive reporting and trending options. In addition, data center planners often include these features in new data center plans, since the additional cost within such a project is nominal. Complete management solutions tend to come in two forms:
- As an add-on to an existing facility
Both tier-one and specialized vendors now provide power management capabilities for existing facilities. Sentilla, for example, recently introduced a solution that includes wireless meters feeding software, priced on a per-device basis, starting at $40 US per month and declining as volumes increase. The measurement devices can be installed directly or clamped onto the cables of existing equipment. Sentilla has priced this solution to allow a return on investment of less than one year based on typical optimizations.
- Integrated into equipment upgrades or a new facility
New power equipment, servers, and other data center components often include power
30 | THE DATA CENTER JOURNAL www.datacenterjournal.com
EDUCATION
CORNER
A
fer youve visited hundreds of data centers over the last 20+
years (like your authors), you begin to see problems that are
common to many of them. Were taking this opportunity to
list some of them and to recommend how to correct them.
Please understand that we are focusing on existing older
(aka legacy) data centers that must remain in production.
1 PROBLEM: Leaky raised access floor
Most existing data centers employ raised access floor to route cold air from cooling units to floor air outlet tiles and grilles that discharge the air where needed. However, leaks in the floor waste the cold air and reduce cooling ability.
REMEDY:
Identify the leaks and close them. Typical culprits are misfitted floor tiles, gaps between floor tiles and walls and columns, columns not built out completely to the structural floor beneath, and oversized floor cable cutouts. Unnecessary cutouts should be eliminated and necessary cutouts should be closed with brush-type closures.
Common Mistakes in Existing Data Centers and How to Correct Them
tracking and management features as standard. This may not provide complete data for both data-serving and support equipment buckets; however, if an upgrade is being performed anyway, getting these features without incurring additional costs is a bonus. Have the vendor demonstrate how these features work before buying.
Professional-grade solutions, whether installed independently or included with data center upgrades, obviously cost more than options one and two. These solutions, which automate collection of very granular data, are useful once data center operators and IT leaders fully understand energy use principles and baselines, and when the business is ready to move to energy optimization and reduction. Options one and two are better choices for starting to establish energy cost as an operational line item. Option three is better for long-term energy and cost reduction goals.
BOTTOM LINE
In the data center, options for energy monitoring and measurement are beginning to proliferate. Understand why IT shops are benchmarking energy use now, which components need to be measured in data centers, and the three options for getting started with data collection and trending.
Info-Tech Research Group is a global leader in providing IT research and advice. Info-Tech's products and services combine actionable insight and relevant advice with ready-to-use tools and templates that cover the full spectrum of IT concerns. www.infotech.com
RECOMMENDATIONS
1. Go cheap and cheerful first. Automatic data collection and trending in both data-serving and support equipment is very useful; it allows IT to identify when and why energy use spikes. However, when piloting energy management, it may be sufficient to collect rough data and record energy figures manually, in a spreadsheet or basic tracking software, a few times a day for a month or two. Eventually, a more aggressive solution will be required, especially in organizations responsible for more than 50 servers.
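As a rough illustration of the cheap-and-cheerful approach, the sketch below turns a handful of manually logged spot readings into a monthly cost figure. The readings, utility rate, and assumption of a constant average load are all illustrative, not from the article.

```python
# Hypothetical sketch: convert a few manual kW spot readings per day
# into a rough monthly energy-cost estimate.

def monthly_cost_estimate(readings_kw, hours_per_day=24.0,
                          days_per_month=30, rate_per_kwh=0.10):
    """Average the spot readings, assume that average load runs
    continuously, and price the result at a flat utility rate."""
    avg_kw = sum(readings_kw) / len(readings_kw)
    kwh = avg_kw * hours_per_day * days_per_month
    return kwh * rate_per_kwh

# Three spot readings taken over one day (kW at the service entrance).
readings = [118.0, 125.5, 121.3]
print(round(monthly_cost_estimate(readings), 2))
```

Even a crude number like this is enough to give senior management a real cost figure, which is the point of recommendation 2 below.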
2. Use basic data as a call to action. Tracking energy use for a month or two, cheaply and cheerfully, gives IT a silver bullet. Senior management now has a real number attached to the cost of energy; use this to get their attention. Moreover, a demonstrative energy figure provides a great starting point to build the business case for a comprehensive monitoring solution.
BY CHRISTOPHER M. JOHNSTON, PE AND VALI SORELL, PE
SYSKA HENNESSY GROUP, INC.
2 PROBLEM: Underfloor volume congested with cables
This condition often manifests itself in floor tiles that won't lay flat and floor air outlet tiles that won't discharge air.
REMEDY:
Identify control, signal, and power cables that are not in service, then carefully remove (mine) them. If you don't have this expertise in your staff, then you should engage a skilled IT cabling contractor.
3 PROBLEM: Space temperature too cold
In the past, data center managers liked to keep the room like a meat locker, believing the theory that a colder space would buy a little more ride-through time when the cooling system went off and had to be restarted. The minuscule additional ride-through time (a few seconds) is gained at the high operating cost of keeping the room unnecessarily cold. The current ASHRAE TC9.9 Recommended Thermal Envelope is 64.4 F to 80.6 F dry bulb air at the server inlet; the warmer the air temperature, the lower your operating cost.
REMEDY:
Move the control thermostats in each of your cooling units to the discharge air side if not already located there (one unit at a time) and calibrate the thermostat. Set the thermostat to maintain 60 F discharge air. Once all of the thermostats are on the discharge air side, start raising their setpoints 1 F at a time and monitor the inlet temperature at your warmest servers for a day. If the inlet air temperature at your warmest server is less than 75 F after a day, raise the temperature leaving the cooling units another degree. Continue until the warmest server has 75 F entering air.
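The step-raising procedure above is essentially a simple control loop, and can be sketched as follows. The reading function, the safety cap on the setpoint, and the toy model of inlet temperature are all hypothetical stand-ins for a day of real monitoring.

```python
# Illustrative sketch of the setpoint-raising procedure described above.
# read_warmest_inlet_f is a hypothetical stand-in for a day of
# monitoring the warmest server's inlet temperature at a given setpoint.

def raise_setpoint(read_warmest_inlet_f, start_f=60.0,
                   inlet_limit_f=75.0, max_setpoint_f=70.0):
    """Raise the CRAC discharge setpoint 1 F at a time until the
    warmest server inlet reaches the 75 F limit (or a safety cap)."""
    setpoint = start_f
    while setpoint < max_setpoint_f:
        inlet = read_warmest_inlet_f(setpoint)
        if inlet >= inlet_limit_f:
            break
        setpoint += 1.0
    return setpoint

# Toy model: assume the inlet runs a fixed 12 F above the discharge setpoint.
print(raise_setpoint(lambda sp: sp + 12.0))
```

In practice each step is a full day of observation, not a function call, but the stopping logic is the same.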
4 PROBLEM: Cooling units fight each other
We cannot count how many times we've seen one cooling unit cooling and dehumidifying while the one beside it is humidifying. This is an energy-wasting process that is a relic of the days when the industry consensus design condition was 72 F +/- 2 F and 40% relative humidity +/- 5% (and before that, a relic of the paper punch card days). As mentioned above, today's thermal envelope is 64.4 F to 80.6 F dry bulb. The same thermal envelope specification also includes a recommended range of moisture content. That range is defined as 41 F dew point to 59 F dew point, with a maximum cap of 60% relative humidity. If the entering air temperature is 75 F, then the relative humidity can fall anywhere from 33% to 60%. The days of tight temperature and humidity control bands are past, and the need for simultaneous humidification and reheat is over.
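The dew-point-to-humidity conversion can be checked with a few lines of code. This sketch uses the Magnus approximation for saturation vapor pressure (an assumption on our part; ASHRAE psychrometric tables would give slightly different values), and lands near 29% and 58% RH for the envelope bounds at 75 F, in the same ballpark as the 33% to 60% band quoted above.

```python
import math

# Hedged check of the dew-point numbers using the Magnus approximation.

def sat_vapor_pressure_hpa(t_c):
    """Magnus approximation to saturation vapor pressure (hPa),
    reasonable roughly from -45 C to 60 C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rel_humidity(dry_bulb_f, dew_point_f):
    """Relative humidity (%) from dry-bulb and dew-point temperatures."""
    t = (dry_bulb_f - 32) * 5 / 9
    td = (dew_point_f - 32) * 5 / 9
    return 100.0 * sat_vapor_pressure_hpa(td) / sat_vapor_pressure_hpa(t)

# RH at 75 F dry bulb for the 41 F and 59 F dew-point envelope bounds.
print(round(rel_humidity(75, 41), 1), round(rel_humidity(75, 59), 1))
```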
REMEDY:
Disable humidification and reheat in all cooling units except two in each room (on opposite sides of the room). Change the controls for those units so they operate based on room dew point temperature. If multiple sensors are used, it's important that a single average value be used as the controlled value. This can prevent calibration errors between multiple sensors from forcing CRAC units to fight each other. Set the controls to maintain dew point within the ASHRAE TC9.9 Recommended Thermal Envelope.
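The single-average-value control described above can be sketched in a few lines. The sensor readings and the string action labels are illustrative placeholders; the band bounds are the TC9.9 dew-point range from the text.

```python
# Minimal sketch of averaging room dew-point sensors into one
# controlled value, so sensor-to-sensor calibration error cannot
# make one CRAC humidify while its neighbor dehumidifies.

ASHRAE_DEW_POINT_MIN_F = 41.0
ASHRAE_DEW_POINT_MAX_F = 59.0

def humidity_action(sensor_dew_points_f):
    """Return the single humidity action for the room based on the
    average of all dew-point sensors."""
    avg = sum(sensor_dew_points_f) / len(sensor_dew_points_f)
    if avg < ASHRAE_DEW_POINT_MIN_F:
        return "humidify"
    if avg > ASHRAE_DEW_POINT_MAX_F:
        return "dehumidify"
    return "hold"

# Sensors disagree by several degrees, but the average is in band,
# so neither of the two humidity-enabled units takes action.
print(humidity_action([39.5, 44.0, 46.5]))
```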
5 PROBLEM: Electrical redundancy for cooling units is lower than the mechanical redundancy
This is another one we've lost count of. The typical scenario is that the desired site redundancy is Tier III or Tier IV, and the mechanical engineer has done a good job designing to the desired tier, but the electrical engineer lost focus and branch-circuited every cooling unit to one or two panelboards. The end result is that the redundancy of the site is Tier I, because the electrical redundancy for the cooling units is lower than the mechanical redundancy. For example, assume that the need is for 10 cooling units and 12 are provided, so the mechanical redundancy is N+2. The electrical engineer, however, has circuited all cooling units to one branch circuit panelboard, so the electrical redundancy is N: if the one panelboard fails, then all cooling fails.
REMEDY:
Identify another source to supply backup power for the cooling units; this source may be direct from the standby generator if need be. The main criterion for this Source 2 is that it is available if the original Source 1 fails. Then, add transfer switches for each cooling unit so that Source 2 will supply if Source 1 fails.
6 PROBLEM: No hot aisle/cold aisle cabinet arrangement
This problem becomes more burdensome as the critical load density (watts/square foot) increases. At low critical load densities it is not a problem.
REMEDY:
As time passes and technology refreshes, migrate to a hot aisle/cold aisle arrangement. There is no magic bullet for this, just advance planning and attention to detail.
7 PROBLEM: Too many CRAC units operating
This one may seem counterintuitive, so it's no surprise that this occurs in most legacy data centers. Poor airflow management creates hot spots, i.e. locations where the temperature entering the server cabinets is outside of the TC9.9 thermal envelope. The conclusion most data center managers and facilities managers make is that there is insufficient capacity, so they run more CRAC units.
REMEDY:
Adding more CRAC units when the capacity was already sufficient actually makes the problem worse, especially when using constant-volume CRAC units. The CRAC units will operate less efficiently, using more energy to dehumidify the space, which in turn forces the reheat coils and the humidifiers to run concurrently. The solution is to eliminate the humidifiers in all but two units (see item #4 above) and disconnect all reheat coils. An equally important step is to match the load within the space to the capacity available. It is common to see 300% of the needed capacity actually on and operating at any time. Once the airflow management remedies listed in items #1 through #4 above are implemented, the more appropriate capacity that should be operating at any time is 125% to 150%.
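Matching operating capacity to the load is simple arithmetic. The sketch below applies the 125% to 150% guideline; the unit size and load figures are illustrative numbers, not from the article.

```python
import math

# Back-of-the-envelope sizing per the 125%-150% operating guideline.

def units_to_run(load_kw, unit_capacity_kw, target_ratio=1.25):
    """Smallest number of CRAC units giving at least
    target_ratio x load of operating cooling capacity."""
    return math.ceil(target_ratio * load_kw / unit_capacity_kw)

# A 500 kW sensible load served by 70 kW CRAC units: run 9 of them
# (about 126% of load), not all 12 installed, which would be 168%
# and push the units back toward dehumidify/reheat fighting.
print(units_to_run(500, 70))
```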
8 PROBLEM: The cabinets restrict airflow into the servers contained inside
Sometimes, the data center's worst enemies are the cabinets selected for the space. Legacy data centers often used cabinets with solid glass or panel doors. Even though some breathing holes are provided, they do in fact offer too much resistance to the airflow needed by the computer equipment inside.
REMEDY:
Replace doors with perforated doors of large free area. The larger the free area, the better. This applies to both front and rear doors of the cabinets.
The economy has certainly been tough on all of us these past 12 months. I thought it might be worthwhile to revisit an article we published on DCJ in 2006 concerning technology and its market potential and duration.
We believe that these questions can be easily answered by recalling something learned years ago in Econ 101: the S curve. The basic tenets of the S curve are that 1) all successful products follow a known and predictable path through three stages (innovation, growth, and maturity), and 2) these stages are of equal length.
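The S curve the author is describing is usually modeled as a logistic function. The midpoint year, growth rate, and saturation level below are purely illustrative choices, but the sketch shows the three roughly symmetric stages the article relies on: a slow innovation start, a steep growth middle, and a flattening maturity.

```python
import math

# Illustrative logistic model of S-curve market penetration.

def s_curve(year, midpoint=2005, rate=0.15, saturation=100.0):
    """Percent of the eventual market reached by a given year,
    using a logistic curve with an assumed midpoint and growth rate."""
    return saturation / (1.0 + math.exp(-rate * (year - midpoint)))

# Innovation (slow start), growth (steep middle), maturity (flattening).
for year in (1975, 2005, 2035):
    print(year, round(s_curve(year), 1))
```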
So let's explore the history of the computer and the Internet: events, dates, and time frames.
The first electronic computer was developed for the US military and was first put in use in 1945. By today's standards for electronic computers, the ENIAC was a grotesque monster. It had thirty separate units, weighed over thirty tons, used 19,000 vacuum tubes and 1,500 relays, and demanded almost 200,000 watts of electrical power. ENIAC was the prototype from which most other modern computers evolved.
1960: The first commercial computer with a monitor and keyboard was introduced, Digital's PDP-1.
1962: The first personal computer was introduced. It was called LINC and each unit cost over $40,000.
1969: ARPANET was created to link government researchers scattered across the US at universities and research facilities so that they could share data. This was the start of the Internet.
1976: Apple Computer Company was created, and around 1977 the first Apple computer was introduced. It was a kit that the customer assembled. The next year Apple introduced a factory-assembled version. The volume of sales was small and the costs high. The Apple was followed by an almost endless list of me-too computers: Timex Sinclair, Commodore, Tandy, Pet, etc.
1981: IBM introduced The Personal Computer. The IBM name, open architecture, and DOS operating system enabled other manufacturers to introduce IBM-compatible PCs, also known as clones.
Then in 1985, Microsoft introduced Windows. Windows moved the PC from text-based commands to point and click. This transformed the PC from a tool for only the most dedicated to something that everyone could easily master, and moved the PC from something considered a toy by many to a legitimate business tool. Sales volumes kicked up, competition was fierce, and prices dropped dramatically.
By 2001, the PC was readily available, inexpensive, and standard equipment on almost every desk in corporate America: a commodity product with low margins and slow growth. This could be the end of the story, but the growth of another technology would overshadow the development of the PC, push technology into our everyday lives, and give the PC a new lease on life.
As PCs developed, so did ARPANET. The Internet was largely used by IT professionals, researchers, academia, and other early adopters of technology. It was slow, text based, and difficult to use. In 1994, Jim Clark and Marc Andreessen developed the Netscape browser, and just as Windows had made the PC a practical tool, Netscape made the Internet practical.
There were many other milestones that deserve attention and were perhaps more important than some of the events mentioned here, such as the research performed at Xerox PARC, where modern desktop computing was created: windows, icons, mice, pull-down menus, What You See Is What You Get (WYSIWYG) printing, networked workstations, object-oriented programming, etc. What many don't know is that Xerox could have owned the PC revolution but simply couldn't bring itself to disrupt its core business of making copiers.
Why is all of this so important? Well, depending on your starting point, the innovation phase is likely to have been somewhere between 20 and 30 years, and possibly even longer. This isn't an exact science, since we don't know how large the market will ultimately grow or where the curve really starts. No matter how you draw the curve, we are likely below the 50% penetration level and have a long stretch to go.
The dot-com boom was fueled by the release of significant IT resources and talent as Y2K preparations drew to a close, an investment community that recognized the tremendous technology growth ahead, and significant innovation.
The dot-com bust occurred because an overanxious investment community provided too much money too fast. The buying power of the Early Adopters, people and companies who want to be on the leading edge and are willing to pay high prices, just wasn't significant enough to absorb all of the innovation. This pushed the supply above the curve. As with all economic imbalances, the market forces correction.
Further, many dot-com innovations lacked key infrastructure. Just as the automobile could not have been successful without the development of roads, bridges, gas stations, tire dealers, hotels, and even fast food, many of the services that were introduced during the dot-com boom required significant development in other areas.
For example, hosting applications at remote unmanned data centers or collocation facilities is only practical with remote management applications and inexpensive bandwidth. We may take this for granted today, but bandwidth wasn't inexpensive seven years ago, and remote management tools were not as sophisticated as they are today.
Yes, there have been casualties along the way, but significant advancements were made during the dot-com boom, and early adopters have in many cases reaped many benefits. Managed Service Providers, collocation, and other services have seen significant growth and success since we first published this article and, if our numbers are correct, have quite a run to go.
YOUR TURN
Technology and the Economy
BY KEN BAUDRY
From our Experts Blog:
WWW.DATACENTERJOURNAL.COM/BLOGS
1960
1960 At Cornell University, Frank Rosenblatt builds a computer, the Perceptron, that can learn by trial and error through a neural network.
1960 The Livermore Advance Research Computer
(LARC) by Remington Rand is designed for
scientific work and uses 60,000 transistors.
1960 In November, DEC introduces the
PDP-1, the first commercial computer
with a monitor and keyboard input.
1960 Working at Rand Corp.,
Paul Baran develops the
packet-switching principle for
data communications.
BEGIN
  FILE F(KIND=REMOTE);
  EBCDIC ARRAY E[0:11];
  REPLACE E BY "HELLO WORLD!";
  WHILE TRUE DO
    BEGIN
      WRITE(F, *, E);
    END;
END.
1960 Standards for Algol 60 are established jointly by American and European computer scientists.
http://www.latec.edu/~acm/HelloWorld.shtml
(Photo credits: Digital Equipment Corporation; Rand Corp.)
1962-1963
1962 Atlas, considered the world's most powerful computer, is inaugurated in England on December 7. Its advances include virtual memory and pipelined operations.
1962 The Telstar communications satellite is launched on July 10 and relays the first transatlantic television pictures.
1962 H. Ross Perot founds Electronic Data Systems, which will become the world's largest computer service bureau.
1962 Stanford and Purdue Universities establish the first departments of computer science.
1962 Max V. Mathews leads a Bell Labs team in developing software that can design, store, and edit synthesized music.
1962 The first video game is invented by MIT graduate student Steve Russell. It is soon played in computer labs all over the US.
(Photo credit: The Computer Museum)
1963 On the basis of an idea of Alan Turing's, Joseph Weizenbaum at MIT develops a mechanical psychiatrist called Eliza that appears to possess intelligence.
1969 1970
1969 Bell Labs withdraws from Project MAC, which developed Multics, and begins to develop Unix.
1969 The RS-232-C standard is
introduced to facilitate data exchange
between computers and peripherals.
1969 The US Department of Defense
commissions Arpanet for research
networking, and the first four nodes
become operational at UCLA, UC
Santa Barbara, SRI, and the University
of Utah.
1970 Winston Royce
publishes Managing
the Development of
Large Software
Systems, which
outlines the waterfall
development method.
1970 Shakey,
developed at SRI
International, is
the first robot to
use artificial
intelligence to
navigate.
(Photo credits: The Computer Museum)
1976
1976 IBM develops the
ink-jet printer.
1976 The Cray-1 from Cray Research is
the first supercomputer with a vectorial
architecture.
1976 OnTyme, the first
commercial e-mail service,
finds a limited market
because the installed base
of potential users is too
small.
1976 Steve Jobs and Steve Wozniak design and build the Apple I, which consists mostly of a circuit board.
1976 Gary Kildall develops
the CP/M operating system
for 8-bit PCs.
(Photo credits: The Computer Museum)
1977 The Apple II is
announced in the spring
and establishes the
benchmark for personal
computers.
1977
1977 Steve Jobs and Steve Wozniak
incorporate Apple Computer on January 3.
1977 Bill Gates and Paul Allen found Microsoft,
setting up shop first in Albuquerque, New Mexico.
(Photo credits: Apple Computer, Inc.; Microsoft Archives)
1977 Several companies
begin experimenting
with fiber-optic cable.
1980-1981
1981 The open-architecture IBM PC is launched in
August, signaling to corporate America that desktop
computing is going mainstream.
1981 Japan grabs a big piece of the
chip market by producing chips
with 64 Kbits of memory.
(Photo credit: IBM Archives)
1980 David A. Patterson at UC Berkeley begins using the term "reduced instruction set" and, with John Hennessy at Stanford, develops the concept.
1981 Xerox introduces a
commercial version of the Alto
called the Xerox Star.
1981 Barry Boehm devises Cocomo
(Constructive Cost Model), a
software cost-estimation model.
www.computer.org/computer/timeline/timeline.pdf