
Data Center Efficiency and Design
Volume 13 | November 2009
Green Power Protection spinning into a Data Center near you
Isolated-Parallel UPS Systems: Efficiency and Reliability?
Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems
FACILITY
CORNER
ELECTRICAL
4 ISOLATED-PARALLEL UPS SYSTEMS: EFFICIENCY AND RELIABILITY?
By Frank Herbener & Andrew Dyke, Piller Group
GmbH, Germany
In today's data center world of ever-increasing power demand, the scale of mission-critical business dependent upon uninterruptible power grows ever larger. More power means more energy, and the battle to reduce running costs is increasingly fierce.

MECHANICAL
8 OPTIMIZING AIR COOLING
USING DYNAMIC TRACKING
By John Peterson, Mission Critical Facility
Expert, HP
Dynamic tracking should be considered a viable method to optimize the effectiveness of cooling resources in a data center. Companies using a dynamic tracking control system benefit from reduced energy consumption and lower data center costs.
CABLING
11 PREPARE NOW FOR THE
NEXT-GENERATION DATA
CENTER
By Jaxon Lang, Vice President, Global
Connectivity Solutions Americas, ADC
Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave, one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.
SPOTLIGHTS
ENGINEERING AND
DESIGN
14 GREEN POWER PROTECTION
SPINNING INTO A DATA
CENTER NEAR YOU
By Frank DeLattre, President, VYCON
Flywheel energy storage systems
are gaining strong traction in data
centers, hospitals, industrial and other
mission-critical operations where
energy efficiency, costs, space and
environmental impact are concerns.
This green energy storage technology
is solving sophisticated power problems
that challenge computing operations
every day.
18 POWERING TOMORROW'S
DATA CENTER: 400V AC
VERSUS 600V AC POWER
SYSTEMS
By Jim Davis, business unit manager, Eaton
Power Quality and Control Operations
While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution.
22 DATA CENTER EFFICIENCY
IT'S IN THE DESIGN
By Lex Coors, Vice President Data Center
Technology and Engineering Group, Interxion
Data centers have always been power
hogs, but the problem has accelerated
in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part.
IT CORNER
25 ONLINE BACKUP OR CLOUD
RECOVERY?
By Ian Masters, UK Sales & Marketing Director,
Double-Take Software
There is an old saying in the data
protection business that the whole point
of backing up is preparing to restore.
Having a backup copy of your data is
important, but it takes more than a
pile of tapes (or an on-line account) to
restore.
26 FIVE BEST PRACTICES
FOR MITIGATING INSIDER
BREACHES
By Adam Bosnian, VP Marketing, Cyber-Ark
Software
Mismanagement of processes involving
privileged access, privileged data, or
privileged users poses serious risks to
organizations. Such mismanagement is
also increasing enterprises' vulnerability
to internal threats that can be caused
by simple human error or malicious
deeds.
All rights reserved. No portion of DATA CENTER Journal may be reproduced without written
permission from the Executive Editor. The management of DATA CENTER Journal is not
responsible for opinions expressed by its writers or editors. We assume that all rights in
communications sent to our editorial staff are unconditionally assigned for publication.
All submissions are subject to unrestricted right to edit and/or to comment editorially.
AN EDM2R ENTERPRISES, INC. PUBLICATION ALPHARETTA, GA 30022
PHONE: 678-762-9366 | FAX: 866-708-3068 | WWW.DATACENTERJOURNAL.COM
DESIGN : NEATWORKS, INC | TEL: 678-392-2992 | WWW.NEATWORKSINC.COM
IT OPS
28 ENERGY MEASUREMENT
METHODS FOR THE DATA
CENTER
By Info-Tech Research Group
Ultimately, energy data needs to be
collected from two cost buckets:
data-serving equipment (servers,
storage, networking, UPS) and support
equipment (air conditioning, ventilation,
lighting, and the like). Changes in one
bucket may affect the other bucket, and
by tracking both, IT can understand this
relationship.
EDUCATION
CORNER
30 COMMON MISTAKES IN
EXISTING DATA CENTERS &
HOW TO CORRECT THEM
By Christopher M. Johnston, PE and Vali Sorell,
PE, Syska Hennessy Group, Inc.
After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them.
YOUR TURN
32 TECHNOLOGY AND THE
ECONOMY
By Ken Baudry
An article from our Experts Blog
VENDOR INDEX
Holis-Tech................................. Inside Front
www.holistechconsulting.com
MovinCool ..................................... pg 1
www.movincool.com
PDU Cables .................................. pg 3
www.pducables.com
Piller ............................................... pg 7
www.piller.com
Server Tech ................................... pg 9
www.servertech.com
Snake Tray .................................... pg 10
www.snaketray.com
Binswanger ................................. pg 13
www.binswanger.com/arlington
Upsite ............................ pgs 19, 21, 23
www.upsite.com
Universal Electric ....................... pg 20
www.uecorp.com
Sealeze .......................................... pg 22
www.coolbalance.biz
AFCOM ........................................... pg 24
www.afcom.com
7x24 Exchange ............................ pg 27
www.7x24exchange.org
Info-Tech Research Group ....... pg 29
www.infotech.com/measureit
Data Aire ....................................... Back
www.dataaire.com
CALENDAR
NOVEMBER
November 15 - November 18, 2009
7x24 Exchange International 2009 Fall Conference
www.7x24exchange.org/fall09/index.htm
DECEMBER
December 2 - December 3, 2009
KyotoCooling Seminar: The Cooling Problem Solved
www.kyotocooling.com/KyotoCooling%20Seminars.html
December 1 - December 10, 2009
Gartner 28th Annual Data Center Conference 2009
www.datacenterdynamics.com
FACILITY CORNER
ELECTRICAL

Isolated-Parallel UPS Systems: Efficiency and Reliability?
FRANK HERBENER & ANDREW DYKE, PILLER GROUP GMBH, GERMANY

In today's data centre world of ever-increasing power demand, the scale of mission-critical business dependent upon uninterruptible power grows ever larger. More power means more energy, and the battle to reduce running costs is increasingly fierce. Optimizing system efficiency without compromise in reliability seems like an impossible task... or is it?

Figure 1: Isolated-Parallel System

Table 1: Comparison of UPS scheme topologies

                                  Parallel    System      Isolated    Distributed  Isolated-Parallel
                                  Redundant   Redundant   Redundant   Redundant    Redundant
Fault tolerant                    No          Yes         Yes         Yes          Yes
Concurrently maintainable         No          Yes         Yes         Yes          Yes
Load management required          No          No          Yes         Yes          No
Typical UPS module loading (max)  85%         50%         100%*       85%          94%
Reliability order (1 = best)      5           1           4           3            2

* One module is always completely unloaded.
A parallel redundant scheme usually provides N+1 redundancy to boost reliability, but it suffers from single points of failure, including the output paralleling bus, and the scheme is limited to around 5 or 6 MVA at low voltages. The whole system is not fault tolerant and is difficult to maintain concurrently.
A System + System approach can overcome the maintenance and fault-tolerance issues but suffers from a very low operating point on the efficiency curve. Like the parallel redundant scheme, it, too, is limited in scale at low voltages.
An isolated or distributed redundant scheme can be employed to tackle all these problems, but such schemes introduce additional requirements such as essential load-sharing management and static transfer switches for single-corded loads.
The Isolated-Parallel (IP) rotary UPS system eliminates the fundamental drawbacks of conventional approaches to provide a highly reliable, fault-tolerant, concurrently maintainable and, yes, highly efficient solution.
IP SYSTEM CONFIGURATION
The idea [1] of an IP system is to use a ring bus structure with individual UPS modules interconnected via 3-phase isolation chokes (IP chokes). Each IP choke is designed to limit fault currents to an acceptable level while allowing sufficient load sharing in the case of a module output failure. Load-sharing communications are not required, and the scale of low-voltage systems can be greatly increased.

LOAD SHARING
In normal operation, each critical load is directly supplied from the mains via its associated UPS. When the UPSs are all equally loaded, no power is transferred through the IP chokes. Each unit independently regulates the voltage on its output bus.
In an unbalanced load condition, each UPS still feeds its dedicated load, but the units with resistive loads greater than the average load of the system receive additional active
power from the lower-loaded UPSs via the IP bus (see Figure 2). It is the combination of the relative phase angles of the UPS output busses and the impedance of the IP choke that controls the power flow. The relative phase angles of the UPS must be generated naturally, in correlation with the load level, in order to provide natural load sharing among the UPS modules without the need for active load-sharing controls.

Figure 2: Example of load sharing in an IP system consisting of 16 UPS modules
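The article does not state the governing equation, but the behaviour described above follows the standard power-transfer relation for two voltage sources connected through a reactance (shown here for illustration only):

    P ≈ (V1 × V2 / X_IP) × sin(δ1 − δ2)

where V1 and V2 are the two UPS output-bus voltages, X_IP is the reactance of the IP choke, and δ1 − δ2 is the relative phase angle between the busses. A small, load-dependent phase shift is therefore enough to push active power from a lightly loaded module toward a heavily loaded one.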
The influence of the IP choke should also be considered: with all UPS modules having the same output voltage, the impedance of the IP choke inhibits the exchange of reactive current, so reactive power control is also unnecessary.
Looking at the mechanisms of natural load sharing in an IP system, it is obvious that a normal UPS bypass operation would significantly disturb the system. So, if the traditional bypass operation is not allowed in an IP system, what will happen in the case of a sudden shutdown of a UPS module? To say "absolutely nothing" would be slightly exaggerated, but "almost nothing" is the reality.

Figure 3: Example of redundant load supply in the case that one UPS fails.
The associated load is still connected to the IP bus via the IP choke, which now works as a redundant power source. The load will automatically be supplied from the IP bus without interruption. In this mode, each of the remaining UPS modules feeds an equal share of power into the IP bus (Figure 3). No switching activity is necessary to maintain supply to the load.
An additional breaker between the load and the IP bus allows
connection of the load directly to the IP bus, enabling the isolation of
the faulty UPS under controlled conditions.
UPS TOPOLOGY
The most suitable UPS topology for achieving the aforementioned load-dependent phase angle in a natural way is a rotary or diesel rotary UPS with an internal coupling choke, as shown in Figure 4.

Figure 4: IP system using Piller UNIBLOCK T rotary UPS with bi-directional energy store.
1 Utility bus
2 IP bus
3 IP bus (return)
4 Rotary UPS with flywheel energy store
5 Load bus
6 IP choke
7 Transfer breaker pair (bypass)
8 IP bus isolation breakers

Note that a UPS module without a bi-directional energy store (e.g. battery or induction coupling) can be used, but the system is likely to exhibit lower stability under transient conditions.
FAULT ISOLATION
There are two fault locations that must be evaluated: (a) the IP bus itself and (b) the load side of each UPS.
(a) A fault on the IP bus is the most critical because it results in the highest local fault currents. The fault is fed in parallel by each UPS connected to the IP bus, but each contribution is limited by the sub-transient reactance of the UPS combined with the impedance of its IP choke. This means that the effect on the individual UPS outputs is minimized, and the remaining focal point is the fault withstand of the IP ring itself.
(b) A fault on the load side of a UPS is fed mostly by the associated UPS, limited by its sub-transient reactance only. A current from each of the non-affected UPSs is fed into the fault too, but because there are two IP chokes in series between the fault and each of the non-affected UPSs, this current contribution is very much smaller. As a result, the disturbance at the non-affected loads is very low. This, in combination with the high fault-current capability of rotary UPSs, ensures fast clearing of the fault while effectively isolating it from the other loads.
Figure 5: Example of fault-current distribution in the case of a short circuit on the load side of UPS #2
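To make the two-chokes-in-series argument concrete, here is a minimal per-unit sketch; the reactance values are assumptions chosen for illustration and are not taken from the article or from Piller data.

    # Illustrative fault-current contributions for a fault on one UPS load bus.
    V = 1.0       # pre-fault voltage, per unit
    Xd2 = 0.15    # assumed UPS sub-transient reactance, per unit
    Xip = 0.40    # assumed IP-choke reactance, per unit

    i_associated = V / Xd2            # the associated UPS feeds the fault directly
    i_remote = V / (Xd2 + 2 * Xip)    # every other UPS feeds it through two IP chokes

    print(f"associated UPS: {i_associated:.1f} pu")   # about 6.7 pu
    print(f"each remote UPS: {i_remote:.1f} pu")      # about 1.1 pu

Because each remote contribution is small, the voltage disturbance seen by the other loads stays low, which is the point made in the paragraph above.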
CONTROL
The regulation of voltage, power and frequency, plus any synchronization, is handled by the controls inside each UPS module. The UPS also controls the UPS-related breakers and is able to synchronize itself to different sources. Each system is controlled by a separate system-control PLC, which operates the system-related breakers and initiates synchronization processes if necessary. The system-control PLC also remotely controls the UPS for all operations that are necessary for proper system integration. Redundant master-control PLCs are used to control the IP system as a whole. Additional pilot wires interconnecting the system controls allow safe system operation in the improbable case that both master-control PLCs fail.
MODES OF OPERATION
In the case of a mains failure, each UPS automatically disconnects from the mains and the load is initially supplied from the energy storage device of the UPS. From this moment on, load sharing between the units is handled by a droop function based on a power-frequency characteristic implemented in each UPS. No load-sharing communication between the units is required. After the diesel engines are started and engaged, the loads are automatically transferred from the UPS energy storage device to the diesel engine, so the energy storage can be recharged and is then available for further use.
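The article does not give the droop parameters; the sketch below only illustrates the general idea of a power-frequency droop curve, with assumed values for nominal frequency and droop slope.

    def droop_frequency(p_out, p_rated, f_nom=50.0, droop=0.03):
        """Frequency setpoint for a unit under power-frequency droop.

        An unloaded unit targets nominal frequency; a fully loaded unit targets
        `droop` (here an assumed 3%) below nominal. Because all interconnected
        units must settle at one common frequency, identical units end up at
        equal per-unit loadings without any load-sharing communication.
        """
        return f_nom * (1.0 - droop * (p_out / p_rated))

    print(droop_frequency(400e3, 1000e3))  # 49.40 Hz at 40% load (assumed 50 Hz nominal)
    print(droop_frequency(600e3, 1000e3))  # 49.10 Hz at 60% load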
To achieve proper load sharing in diesel operation as well, each diesel engine is independently controlled by its UPS, whether the engine is mechanically coupled to the generator of the UPS (DRUPS) or an external standby diesel generator is used. A special regulator structure inside the UPS, in combination with the bi-directional energy storage device, allows active frequency and phase stabilization while keeping the load supplied from the diesel engine.
The retransfer of the system to utility is controlled by the master control. The UPS units are re-transferred one by one, thereby avoiding severe load steps on the utility. After the whole system is synchronized and the first UPS system is reconnected to utility, the load sharing of those UPS systems still in diesel operation cannot be handled by the regular droop function. To overcome this, Piller Group GmbH invented and patented Delta-Droop Control (DD-Control). This allows proper load sharing under this condition without relying on load-sharing communications. With DD-Control implemented in the UPS modules, all UPS systems can be reconnected to utility step by step until the whole IP system is in mains operation once more. This removes another problem in large-scale systems: that of step-load re-transfer to utility after a mains failure.
MAINTAINABILITY
The IP bus system is probably the simplest high-reliability system to maintain concurrently, because the loads are independently fed by UPS sources and these sources can readily be removed from and returned to the system without load interruption. Not only that, but the ring bus can be maintained, as can the IP chokes, also without load interruption. All the other solutions with similar maintainability (System + System, Isolated and Distributed redundant) have far greater infrastructure complexity, leading to more maintenance and increased risk during such operations.
PROJECTS
The first IP system was realized in 2007 for a data center in Ashburn, VA. It consists of two IP systems, each equipped with 16 Piller UNIBLOCK UBT 1670 kVA UPS modules with flywheel energy storage (total installed capacity of more than 2 x 20 MW at low voltage). Each UPS is backed up by a separate 2810 kVA diesel generator, which can be connected directly to the UPS load bus and is able to supply both the critical and the essential loads. Since the success of this first installation, three more data centers have been commissioned, and the first phase of one of them (a further 20 MW) is complete as of today.
Further projects are planned at medium voltage, and consulting engineers are also planning a configuration that combines the benefits of the IP system with the energy efficiency of natural gas engines.
CONCLUSION
In the form of an IP bus topology, a UPS scheme that combines high reliability with high efficiency is possible.
High reliability is obtained by virtue of the use of rotary UPSs (with MTBF values in the region of 3-5 times better than static technology), combined with the elimination of load-sharing controls, no mode switching under failure conditions, load fault isolation and simplified maintenance.
High efficiency can be obtained with such a high-reliability system because of the ability to emulate System + System fault tolerance without the penalty of low operating efficiencies. A 20 MW design load can run with modules that are 94% loaded and yet offer reliability similar to that of an S+S scheme, which has a maximum module loading of just 50%. That can translate into a difference in UPS electrical efficiency of 3 or 4%, which means a potential waste in operating costs of $750,000 per year (ignoring additional cooling costs).
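As a rough check on that figure, the arithmetic works out as follows; the electricity tariff is an assumption for illustration and is not stated in the article.

    design_load_kw = 20_000       # 20 MW critical load, from the article
    efficiency_delta = 0.035      # midpoint of the quoted 3-4% efficiency difference
    hours_per_year = 8760
    price_per_kwh = 0.12          # assumed utility tariff, $/kWh

    extra_loss_kw = design_load_kw * efficiency_delta
    annual_cost = extra_loss_kw * hours_per_year * price_per_kwh
    print(f"${annual_cost:,.0f} per year")   # about $736,000, in line with the ~$750,000 quoted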
What's more, the solution is not only concurrently maintainable and fault tolerant with high reliability and high efficiency, but it can also be realized at either low or medium voltages and can be implemented with DRUPS, separate standby diesel engines or even gas engines for super-efficient, large-scale facilities.
For complete information on the invention and history of IP systems, refer to the Piller Group GmbH paper by Frank Herbener entitled "Isolated-Parallel UPS Configuration" at www.piller.com.
FACILITY CORNER
MECHANICAL
INSIDE THE DATA CENTER
Optimizing Air Cooling Using Dynamic Tracking
BY JOHN PETERSON, MISSION CRITICAL FACILITY EXPERT, HP

Dynamic tracking should be considered a viable method to optimize the effectiveness of cooling resources in a data center. Companies using a dynamic tracking control system benefit from reduced energy consumption and lower data center costs.

One of the most challenging tasks of running a data center is managing the heat load within it. This requires balancing a number of factors, including equipment location adjacencies, power accessibility and available cooling. As high-density servers continue to grow in popularity along with in-row and in-rack solutions, the need for adequate cooling in the data center will continue to grow at a substantial rate. To meet the need for cooling using a typical under-floor air distribution system, a manager often adjusts perforated floor tiles and lets the nearest Computer Room Air Conditioner (CRAC) unit react as necessary to each new load. However, this may cause a sudden and unpredictable fluctuation in the air distribution system due to changes in static pressure and air rerouting to available outlets, which can have a ripple effect on multiple units. With new outlets available, air, like water, will seek the path of least resistance; the new outlets may starve existing areas of cooling, causing the existing CRAC units to cycle the air faster. This becomes a wasteful use of fan energy, to say nothing of fluctuations in cooling-load energy allocation.
Most managers understand that the air supply plenum needs to be a totally enclosed space to achieve pressurization for air distribution. Oversized or unsealed cutouts allow air to escape the plenum, reducing the static pressure and effectiveness of the air distribution system. Cables, conduits for power and piping can also clog up the air distribution path, so thoughtful consideration and organization should be an essential part of the data center operations plan. However, even the best-laid plan can still end up with areas that are starved for cooling air.
In a typical layout, there are rows of computer equipment racks that draw cool air from the front and expel hot air at the rear. This requires an overall footprint larger than the rack itself (Figure 1).

Figure 1: Overall footprint needed per rack
When adding new data center equipment, data center managers need to manage unpredictable temperatures and identify a new perfect balance of how many perforated tiles to use and where to locate them. They involve maintenance personnel to adjust CRAC units, assist with tile layouts, and even possibly add or relocate the units as necessary. Due to the predetermined raised-floor height, supply air temperature and humidity requirements, the volatile air distribution system becomes an inflexible piece of the overall puzzle, at the expense of energy and possibly performance due to inadequate cooling.
Meanwhile, the CRAC units are operating at variable rates to meet this load, but mostly they are operating at their maximum capacity instead of as needed. Why? One reason is where the air temperature is measured. Each unit operates on the return-air temperature measured at the unit, and all units share the same return air. This means that if the load is irregular in the racks, the units simply cool for the overall required capacity. Apply this across a data center, and the units are generally handling the cooling load without altering their flow based on changes happening in any localized area, which consequently allows a large variance of temperatures in the rows.
Temperature discrepancy is the main concern for most data center managers. They would like the air system not to be the limiting factor when adding new equipment to racks and prefer to remove the variable of fickle air cooling from the equation of equipment management. At the same time, almost behind the scenes, facility costs from cooling are increasing to match the new load,
driving the need for more efficient use of existing resources. A Gartner report shows that over 63% of respondents to a recent survey indicated that they rely on air systems rather than liquid cooling to cool their data center. Of those same respondents, nearly 45% shared that they are facing insufficient power, which will need to be addressed in the near future.[1]
As IT managers correct their power constraints, they are able to deploy a more demanding infrastructure and subsequently will require additional power and cooling.
DYNAMIC TRACKING
Although the air flow in a data center is complex, an opportunity now exists to optimize the effectiveness of cooling resources and better manage the air system within the data center. There are ways to monitor air temperatures within each row of cooling, and even the temperature entering a specific rack at a particular height. From these temperatures, an intelligent system can react to meet the need for cooling air at that location, eliminating the work of juggling floor tiles and guessing at the air flow.
How is this done? To begin with, the temperature is measured differently. A number of racks are mounted with sensors that measure the supply air temperature at the front of the rack. This information is relayed to a central monitoring system that responds accordingly by adjusting the CRAC units. The units then function as a team and not independently, meeting specific needs as monitored in real time by the sensors. Since the temperature is tracked from the source and adjustments are made based on real-time needs, this method of measurement and control is sometimes referred to as dynamic tracking.
In the initial setup of dynamic tracking, the intelligent control system tests and learns which areas of the data center each CRAC unit affects. Then the units are tested together, and the control system modulates them to provide the most uniform distribution within the constraints of the layout and room architecture. This data allows the air system to gather intelligence on how to compensate for leaks and barriers in the plenum. From there, the system knows how the units interact and can intelligently judge how to respond to changes within the data center. It is also able to rebalance when one of the units fails or is being serviced.
To prevent a large fluctuation, the temperatures are measured over an extended period of time and temperature is adjusted depending on the cooling needs of the space. The CRAC units respond based on the history of how each unit has affected the specific area. The overarching intelligence of the dynamic tracking control system gauges whether an increase in temperature is sustained or a series of momentary heat spikes, and adjusts itself accordingly. This prevents units from cycling out of control from variables such as human error, short peak demands and sudden changes in load.
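The article describes this control behaviour qualitatively; the sketch below is only a minimal illustration of the idea, with an invented sensor/CRAC interface, an assumed rack-inlet setpoint and an assumed gain.

    TARGET_INLET_C = 24.0   # assumed rack-inlet temperature setpoint (not from the article)
    GAIN = 0.05             # assumed proportional gain per degree of sustained error

    def control_step(rack_sensors, crac_output, influence):
        """One pass of a simplified dynamic-tracking loop.

        rack_sensors: dict rack_id -> list of recent inlet temperatures (deg C)
        crac_output:  dict crac_id -> current output fraction (0.0-1.0)
        influence:    dict (crac_id, rack_id) -> weight learned during setup,
                      i.e. how strongly each CRAC affects each rack
        """
        for crac_id, output in crac_output.items():
            error = 0.0
            for rack_id, samples in rack_sensors.items():
                sustained = sum(samples) / len(samples)   # smoothed, ignores momentary spikes
                error += influence.get((crac_id, rack_id), 0.0) * (sustained - TARGET_INLET_C)
            # Units act as a team on supply-side data instead of reacting to their own return air.
            crac_output[crac_id] = min(1.0, max(0.2, output + GAIN * error))
        return crac_output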
Once installed, a dynamic tracking system can show how the CRAC units have operated in the past and how they are currently performing. Most of the time, the units operate at less than peak conditions, which is an opportunity to increase energy efficiency and create significant savings. Also, if the units can measure and meet the load more closely, the cost savings carry directly over to the mechanical cooling plant as well.
Dynamic tracking systems can help transform the air distribution and energy use within a data center, and they should be considered a viable solution for handling variable and complex heat loads. The ability of dynamic tracking to reduce energy use and preserve data center flexibility is a promising factor for driving optimization.
[1] "Power & Cooling Remain the Top Data Center Infrastructure Issues," Gartner Research, February 2009
FACILITY CORNER
CABLING

Prepare Now for the Next-Generation Data Center
BY JAXON LANG, VICE PRESIDENT, GLOBAL CONNECTIVITY SOLUTIONS AMERICAS, ADC

Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave, one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.

The 2009 edition of the annual Cisco Visual Networking Index predicts that the overall volume of Internet Protocol (IP) traffic flowing across global networks will quintuple between 2008 and 2013, with a compound annual growth rate (CAGR) of 40 percent. During that same period, business IP traffic moving on the public Internet will grow by 31 percent, according to the Cisco study, while enterprise IP traffic remaining within the corporate WAN will grow by 36 percent.
Faced with this looming challenge, data center managers know they must prepare now to deploy the solutions necessary to accomplish three tasks: transmit this deluge of information, store it and help lower total cost of ownership (TCO). Specifically, within the next five to seven years, they will need:
more bandwidth
faster connections
more and faster servers and
more and faster storage
Today's data center operations account for up to half of total costs over the life cycle of a typical enterprise, and retrofits make up another 25 percent. Managers want solutions that boost efficiencies immediately while also making future upgrades easier and more affordable.
Among the technologies that promise to provide these solutions are 40 and 100 Gbps Ethernet (GbE); Fibre Channel over Ethernet (FCoE); and server virtualization. Because they directly affect the infrastructure, these technologies will require new approaches to cabling and connectors; higher fiber densities; higher bandwidth performance; and more reliable, flexible and scalable operations. Although managers want to deploy technologies that will satisfy their future requirements, they also want to determine to what extent they can leverage their existing infrastructures to meet those needs. As they do so, many are discovering there are strategies available today that can help them achieve both goals.
40GBE AND 100GBE ARE
COMING
Although most data centers today run
10GbE between core devices, and some run
40GbE via aggregated 10GbE links, they
inevitably will need even faster connections
to support high-speed applications, new
server technologies and greater aggregation.
In response, the Institute of Electrical and
Electronics Engineers (IEEE) is developing a
standard for 40 and 100GbE data rates (IEEE
802.3ba).
Scheduled for ratification next year, the standard addresses multimode and singlemode optical-fiber cabling, as well as copper cabling over very short distances (10 meters, as of publication date). It is helpful to examine the proposed standard and then look at various strategies for evolving the data center accordingly. Currently, IEEE 802.3ba specifies the following:
Multimode Fiber
Running 40 GbE and 100 GbE will require:
1) multi-fiber push-on (MPO) connectors
2) laser-optimized 50/125 micrometer (µm) optical fiber and
3) an increase in the quantity of fiber: 40 GbE requires six times the number of fibers needed to run 10 GbE, and 100 GbE requires 12 times that amount.
MPO Connectors
A single MPO connector, factory-pre-terminated to multi-fiber cables purchased in predetermined lengths, terminates up to 12 or 24 fibers. 40-GbE transmission up to 100 meters will require parallel optics, with eight multimode fibers transmitting and receiving at 10 Gbps, using an MPO-style connector. Running 100 GbE will require 20 fibers transmitting and receiving at 10 Gbps, within a single 24-fiber MPO-style connector.
To achieve 10-GbE data rates for distances up to 300 meters, some managers have used MPO connectors to install laser-optimized multimode fiber cables, either ISO 11801 Optical Multimode 3 (OM3, 50/125 µm) or OM4 (50/125 µm) fiber cables. Thus they have already taken an important step to prepare for 40 and 100GbE transmission rates. Working with their vendors, they can retrofit their 12-fiber MPO connectors to support 40 GbE. It may even be possible to achieve 100GbE rates by creating a special patch cord that combines two of those 12-fiber MPO connectors. Although the proposed standard specifies 100 meters for 40 and 100GbE (a departure from 300 meters for 10GbE), the vast majority of data center links currently cover 55 meters or less.
Those who are not using MPO-style connectors today may have options other than forklift upgrades for achieving 40 and 100GbE data rates. Initially, most data center managers will only run 40 and 100GbE on a select few circuits, perhaps 10 or 20 percent. So, depending on when they will need more bandwidth, they can begin to deploy MPO-terminated, laser-optimized, multimode fiber cables and evolve gradually.
High-performance Cabling
Compliance with the proposed standard will require a minimum of OM3 laser-optimized 50 µm multimode fiber with reduced insertion loss (2.0 dB link loss) and minimal delay skew. As noted earlier, managers who cap their investments in OM1 (62.5/125 µm)
and OM2 (standard 50/125 µm) cabling now
and install high-performance cabling and
components going forward can position the
data center for eventual 40GbE and 100GbE
requirements.
Much More Fiber
Running a 10GbE application requires two fibers today, but running a 40GbE application will require eight fibers, and a 100GbE application will require 20 fibers. Therefore, it is important to devise strategies today for managing the much higher fiber densities of tomorrow. Managers must determine not only how much physical space will be required but also how to manage and route large amounts of fiber in and above racks.
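A back-of-the-envelope sketch of what those per-link fiber counts mean for density planning; the link mix and the assumption that everything lands on 12-fiber MPO connectors are invented for illustration.

    # Fibers per link, as quoted above.
    FIBERS_PER_LINK = {"10GbE": 2, "40GbE": 8, "100GbE": 20}
    MPO_FIBERS = 12                      # fibers terminated by one 12-fiber MPO connector

    def fiber_plan(link_counts):
        """Total fiber strands and a rough count of 12-fiber MPO connectors."""
        total = sum(FIBERS_PER_LINK[speed] * n for speed, n in link_counts.items())
        mpos = -(-total // MPO_FIBERS)   # ceiling division
        return total, mpos

    # Hypothetical row: 200 x 10GbE, 24 x 40GbE and 4 x 100GbE links.
    total, mpos = fiber_plan({"10GbE": 200, "40GbE": 24, "100GbE": 4})
    print(total, "fibers, about", mpos, "MPO connectors")   # 672 fibers, 56 MPOs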
Singlemode Fiber
Running 40GbE over singlemode fiber will require two fibers transmitting at 10 Gbps over four channels using coarse wavelength division multiplexing (CWDM) technology. Running 100GbE with singlemode fiber will require two fibers transmitting at 25 Gbps over four channels using LAN wavelength division multiplexing (WDM).
Although using WDM to run 40GbE and 100GbE over singlemode fiber is ideal for long distances (up to 10 km) and extended reach (up to 40 km), it probably will not be the most cost-effective option for the data center's shorter (100-meter) distances. As the industry finalizes the standard and vendors introduce equipment, managers will have a window of time in which to evaluate the evolving cost differences among singlemode, multimode and copper cabling solutions for both 40GbE and 100GbE.
Typically, the elapsed time between the release of a standard and the point at which the price of associated electronics comes down to a cost-effective level is about five years. For example, the cost of the first 10GbE ports, which emerged right after the standard was adopted in 2002, was roughly $32,000; today, that same port costs about $2,000. If 40GbE and 100GbE ports follow that pattern, managers who already have adopted an MPO connectorization strategy will have until about 2015 to plan for and actually implement the upgrades necessary to access the faster technologies.
Managers who have not opted for MPO connectors but have invested in OM3 multimode fiber that satisfies the length requirements nevertheless may be able to devise a migration path. They could work with vendors to create a cord that combines 12 LC-type connectors into an MPO. However, they would have to test the site for length, insertion loss and delay skew to ensure compliance with the 802.3ba standard.
Copper
The proposed standard specifies the transmission of 40GbE and 100GbE over short distances of copper cabling, with 10 Gbps speeds over each lane: four lanes for 40GbE and 10 lanes for 100GbE. Not intended for backbone and horizontal cabling, the use of copper probably will be limited to very short distances for equipment-to-equipment connections within or between racks.
FIBRE CHANNEL OVER ETHERNET
(FCOE) BOOSTS STORAGE
Because of Fibre Channel's reliability and low latency, most managers use it today for high-speed communications among their SAN servers and storage systems. Yet because they rely on Ethernet for client-to-server or server-to-server transmissions, they have been forced to invest in parallel networks and interfaces, which obviously increase costs and create management headaches.
In response, the industry has developed a new standard (ANSI FC-BB-5), which combines Fibre Channel and Ethernet data transmission onto a common network interface, basically by encapsulating Fibre Channel frames within Ethernet data packets. FCoE allows data centers to use the same cable for both types of transmission and delivers significant benefits, including better server utilization; fewer required ports; lower power consumption; easier cable management; and reduced costs.
To deploy FCoE most cost-effectively, managers may opt to use top-of-rack switches, rather than traditional centralized switching, to provide access to existing Ethernet LANs and Fibre Channel SANs. Although the top-of-rack approach reduces the amount of cabling, it requires more flexible, manageable operations, simply because managers will have to reconfigure each rack. In addition, 40GbE and 100GbE require a higher-speed cabling medium.
As they try to devise workable, affordable strategies for deploying FCoE, managers must take into account several factors. First, they have some time to move to FCoE. Current FCoE deployment rates are less than 5 percent of storage ports sold. The emerging technologies of 40 GbE and 100 GbE certainly make FCoE more enticing.
FCoE can be a two-step approach. Initially, the current investment in Fibre Channel-based equipment (disk arrays, servers and switches) can continue to be utilized. As FCoE equipment becomes more cost effective and readily available, a wholesale change can be made at that time.
FCoE becomes possible due to the advent of Data Center Bridging (DCB), which enhances Ethernet to work in data center environments. By deploying the electronics that support FCoE, which overlays Fibre Channel on top of Ethernet, managers can eliminate the need for, and costs of, parallel infrastructures; reduce the overall amount and cost of required cabling; and reduce cooling and power-consumption levels. If they also begin to invest now in the OM3/OM4-compliant cabling for 40GbE and 100GbE, managers will position their data centers for a smooth upgrade to FCoE-based equipment.
SERVER VIRTUALIZATION
PRESENTS ITS OWN ISSUES
By running multiple virtual operating systems on one physical server, managers are tackling several challenges: accommodating the space constraints created by more equipment; reducing capital expenditures by buying fewer servers; improving server utilization; and reducing power and cooling consumption. Currently, virtualization consolidates applications on one physical server at a ratio of 4:1, but that could increase to 20:1. So many applications running on one server obviously require much greater availability and significantly more bandwidth.
Server virtualization means that downtime limits access to multiple applications. To provide the necessary redundancy, managers are deploying a second set of cables. The additional bandwidth needed to support increased data transmission to and from the servers will require additional services, which, in turn, will demand still more bandwidth. While virtualization theoretically reduces the number of servers and cabling volumes, the redundancy needed to support virtualization, in fact, means the data center needs more cabling.
THE DRIVE TO REDUCE TCO
Although technologies such as FCoE and server virtualization are aimed at reducing TCO, the overall increase in data requirements and equipment is putting a tremendous strain on power, cooling and space requirements. As a result, every enterprise today tries to balance the need to deploy new technologies with the need to reduce TCO. To do so, data center operators are looking for solutions that can handle changing configuration requirements and reduce energy consumption, which inevitably will rise as more equipment comes online.
By devising migration strategies that protect existing investments and simultaneously prepare for the deployment of new, high-speed technologies, managers can enhance the capabilities, scalability and reliability of the data center. In the process, they can reduce TCO through more efficient operations and reduced power consumption.
Green Power Protection
spinning into a Data
Center near You
BY FRANK DELATTRE, PRESIDENT, VYCON
Keeping critical operations, especially computer networks and other vital process applications, up and running during power disturbances has most commonly been handled by uninterruptible power systems (UPSs) and stand-by generators. Whether depending on centralized or distributed power protection, batteries used with UPS systems have been the typical standard, due primarily to their low cost. However, when one is looking to increase reliability and deploy green initiatives, toxic lead-acid batteries are not the best solution. Frequent battery maintenance, testing, cooling requirements, weight, toxic and hazardous chemicals and disposal issues are key concerns. Making matters worse, one dead cell in a battery string can render the entire battery bank useless, which is not good when you're depending on your power backup system to perform when you need it most. And every time the batteries are used (cycled), even for a split second, it becomes more likely that they will fail the next time they are needed.
CLEAN BACKUP POWER
Flywheel energy storage systems are gaining strong traction in data centers, hospitals, industrial and other mission-critical operations where energy efficiency, costs, space and environmental impact are concerns. This green energy storage technology is solving sophisticated power problems that challenge computing operations every day. According to the Meta Group, the cost of downtime can average a million dollars per hour for a typical data center, so managers can't afford to take any risks. Flywheels used with three-phase double-conversion UPS systems provide reliable mission-critical protection against costly transients, harmonics, voltage sags, spikes and blackouts.
A flywheel system can replace the lead-acid batteries used with UPSs and works like a dynamic battery that stores energy kinetically by spinning a mass around an axis. Electrical input spins the flywheel rotor up to speed, and a standby charge keeps it spinning 24/7 until it is called upon to release the stored energy (Fig. 1). The amount of energy available, and its duration, is proportional to the flywheel's mass and to the square of its rotational speed. Specific to flywheels, doubling the mass doubles the energy capacity, but doubling the rotational speed quadruples the energy capacity:

    E = k × M × ω²

where k depends on the shape of the rotating mass, M is the mass of the flywheel and ω is the angular velocity.
Fig. 1 Flywheel Cutaway
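A quick numerical illustration of the relationship above, using the equivalent textbook form E = ½Iω² (the shape factor and radius folded into the moment of inertia I); the rotor inertia and the 250 kW discharge rate are invented for illustration and are not vendor datasheet values.

    import math

    def flywheel_energy_kj(inertia_kg_m2, rpm):
        """Kinetic energy in kJ: E = 0.5 * I * omega^2, with omega in rad/s."""
        omega = rpm * 2.0 * math.pi / 60.0
        return 0.5 * inertia_kg_m2 * omega ** 2 / 1000.0

    I = 5.0  # kg*m^2, hypothetical rotor
    e1 = flywheel_energy_kj(I, 18_000)
    e2 = flywheel_energy_kj(I, 36_000)
    print(round(e1), "kJ at 18,000 rpm")            # ~8,883 kJ
    print(round(e2 / e1, 1), "x at double speed")   # 4.0 -- doubling speed quadruples energy

    # Ride-through at an assumed 250 kW discharge, spinning down to half speed
    # (which leaves a quarter of the energy in the rotor):
    print(round(0.75 * e1 / 250.0, 1), "seconds")   # ~26.6 s, ample time to start a genset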
Today, data center and facility managers have many considerations to evaluate when it comes to increasing energy efficiencies and reducing one's carbon footprint. The challenge becomes how to implement green technologies without disrupting high nines of availability while achieving a low total cost of ownership (TCO). This challenge becomes even more crucial when looking at the power protection infrastructure.
During a power event, the flywheel provides backup power seamlessly and instantaneously. What's nice is that it's not an either/or situation, as the flywheel can be used with or without batteries. When used with batteries, the flywheel is the first line of defense against damaging power glitches: the flywheel absorbs all the short-duration discharges, thereby reducing the number and frequency of the discharges that shorten the life of the battery. Since UPS batteries are the weakest link in the power continuity scheme, flywheels paralleled with batteries give data center and facility managers peace of mind that their batteries are safeguarded against premature aging and unexpected failures. When the flywheel is used with the UPS alone and no batteries, the system will provide instant power to the connected load exactly as it would with a battery string. However, if the power event lasts long enough to be considered a hard outage (rather than just a transient outage), the flywheel will gracefully hand off to the facility's engine generator. It's important to know that, according to the Electric Power Research Institute (EPRI), 80 percent of all utility power anomalies/disturbances last less than two seconds and 98 percent last less than ten seconds. In the real world, the flywheel energy storage system allows plenty of time for the Automatic Transfer Switch (ATS) to determine whether the outage is more than a transient, start the generator and safely manage the hand-off.
SHINING LIGHT ON REAL-WORLD EXPERIENCE
SunGard, one of the world's leading software and IT services companies, serving more than 25,000 customers in more than 70 countries, first tried out flywheels in its data centers three years ago to see how they would perform over a period of time.
"The driver for utilizing flywheels is to reduce the life-cycle cost and maintenance requirements when installing large banks of batteries. In addition, the space savings from using flywheels and fewer batteries mean lower construction costs and allow optimum space utilization," commented Karl Smith, Head of Critical Environments for SunGard Availability Services. Today, SunGard's legacy data centers still have batteries, but as it becomes necessary to replace them, the company plans to reduce the number of battery strings and complement them with a string of flywheels. For future data center builds, SunGard is planning a combination of short-run-time batteries in parallel with a bank of flywheels.
BEATING THE CLOCK
"Many users are under a false sense of security by having 10 or 15 minutes of battery run time. They assume that if the generator does not start, they will have a chance to correct the issue. It is true that batteries provide much longer ride-through time, but the most important ride-through time is the first 30 seconds. We don't need much more than this to have our stand-by generators come online. In most cases, our generators are online and loads are switched over in 30 to 40 seconds. The flywheels are our first line of defense, but should we need a few extra minutes to get a redundant generator online, then the battery can be utilized," said Smith. Having the flywheels discharge first means the batteries are not discharged in normal operation, so their life can be extended.
In various industry studies, such as the IEEE Gold Book, genset start reliability for critical and non-critical applications was measured at 99.5%. For applications where the genset is tested regularly and maintained properly, reliability increases substantially. When the genset fails to start, 80% of the time it is because of failure of the battery used to start the generator. Simply monitoring the starting system, or adding a redundant one, can remove 80% of the non-start issues.
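Taken together, those two figures imply the rough arithmetic below; it assumes the battery-related failures are fully eliminated, which is an illustration rather than a claim from the cited studies.

    p_fail = 1 - 0.995        # 0.5% of starts fail (the IEEE Gold Book figure quoted above)
    battery_share = 0.80      # 80% of those failures are start-battery related
    p_fail_left = p_fail * (1 - battery_share)
    print(f"{1 - p_fail_left:.3%} effective start reliability")   # 99.900%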
Fig. 3: Lifecycle costs of batteries vs. flywheels. Battery costs are based on a 4-year replacement cycle.
Fig. 2: Power protection scheme with UPSs, batteries and flywheel
SunGard Data Center
IT PAYS TO BE GREEN
The latest flywheel designs sold by world leaders in 3-phase UPS systems take advantage of higher speeds and full magnetic levitation, packing more green energy storage into a much smaller footprint and removing any kind of bearing maintenance requirement. As shown in Figure 3, over a 20-year design lifespan, cost savings from a hazmat-free flywheel versus a 5-minute valve-regulated lead-acid (VRLA) battery bank are in the range of $100,000 to $200,000 per flywheel deployed.
These figures (Figure 3) are based on a typical installation of a 250 kVA UPS using 10-year design-life VRLA batteries housed in a cabinet. The yearly maintenance for the batteries is based on a recommended quarterly check of battery health to provide some predictability about their availability. Moreover, these figures don't include the floor space or cooling cost savings that can be achieved by using flywheel energy storage instead of batteries.
BATTERIES' UNPREDICTABLE FAILURES
While UPS systems have long used banks of lead-acid batteries to provide the energy storage needed to ride through a power event, the batteries are, as stated earlier, notoriously unreliable. In fact, according to the Electric Power Research Institute (EPRI), "Batteries are the primary field failure problem with UPS systems." Predicting when one battery in a string of dozens will fail is next to impossible, even with regular testing and frequent individual battery replacements. The truth is that engineering personnel don't test them as often as they should, and may not have testing/monitoring systems in place to do so properly. Since flywheel systems are electro-mechanical devices, they can constantly self-monitor and report, to assure the user that they are ready for use or to advise of the need for service. This is nearly impossible to accomplish in a chemically based system. Every time a battery is used, it becomes less responsive to the next event. Batteries generate heat, and heat reduces battery life. If operated 10°F above their optimum setting of 75°F, lead-acid batteries have their lifespan cut in half. If operated at colder temperatures, chemical reactions are slowed and performance is affected. Batteries can also release explosive gases that must be ventilated away.
Battery reliability is always in question. Are they fully charged? Has a cell gone bad in the battery string? When was the last time they were tested? Some facility managers resist testing their batteries, as the battery test in itself depletes battery life. By contrast, flywheel systems provide reliable energy storage instantaneously to assure a predictable transition to the stand-by genset.
Hazmat permits, acid-leak containment, floor-loading issues, slow recharge times, lead disposal compliance and transportation are causing facility managers to look closely at alternative energy storage technologies.
Protecting critical systems against costly power outages in a manner that is energy efficient and environmentally friendly and that provides a low total cost of ownership is a priority for most data center and facility managers. Double-conversion UPSs paired with flywheels (Figure 4) are the next step in greening the power infrastructure.
BENEFITS OF FLYWHEEL TECHNOLOGY
From 40 kVA to over a megawatt, flywheel systems are increasingly being used to assure the highest level of power quality and reliability for mission-critical applications. The flexibility of these systems allows a variety of configurations that can be custom-tailored to achieve the exact level of power protection required by the end user based on budget, space available and environmental considerations. In any of these configurations, the user will ultimately benefit from the many unique benefits of flywheel-based systems.
Flywheels today comply with the highest international standards for performance and safety, including those from UL and CE. Some units, like those from VYCON, incorporate a host of advanced features that users expect to make the systems easy to use, maintain and monitor, such as self-diagnostics, log files, adjustable voltage settings, RS-232/485 interface, alarm status contacts, soft-start precharge from the DC bus and push-button shutdown. Available options include DC disconnect, remote monitoring, Modbus and SNMP communications and real-time monitoring software.
Data center managers throughout the U.S. and around the world
are evaluating technologies that will increase overall reliability while
reducing costs. While the highest level of nines is the frst require-
ment, being environmentally-friendly is certainly an added bonus. By
enhancing battery strings or eliminating them altogether with the use
of fywheels, managers take one more step in greening their facilities
and lowering TCO.
Fig 4. VYCON's VDC Flywheel Energy Storage System paired with Eaton's three-phase double-conversion UPS.
BENEFITS OF FLYWHEEL TECHNOLOGY
• No cooling required
• High power density - small footprint
• Parallel capability for future expansion and redundancy
• Fast recharge (under 150 seconds)
• 99% efficiency for reduced operating cost
• No special facilities required
• Front access to the flywheel eliminates space issues and opens up installation site flexibility in support of future operational expansions and re-arrangements
• Low maintenance
• 20-year useful life
• Simple installation
• Quiet operation
• Wide temperature tolerance (-4°F to 104°F)
Industry reports show that data center energy costs as a percent of total revenue are at an all-time high, and data center electricity consumption accounts for almost 0.5 percent of the world's greenhouse gas emissions. As a result, data center managers are under pressure to maximize data center performance while reducing cost and minimizing environmental impact, making data center energy efficiency critical.
According to a 2007 Frost & Sullivan survey of 400 information technology (IT) and facilities managers responsible for large data centers, 78 percent of respondents indicated that they were likely to adopt more energy efficient power equipment in the next five years, a solution that's often less costly and more quickly and easily implemented than data virtualization or cooling systems.
While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution. However, today's 480V AC power distribution systems, standard in most U.S. data centers and IT facilities, are not optimized for efficiency. Of the several alternative power distribution systems currently available, 400V AC and 600V AC systems are generally accepted as the most viable. While both have been proven reliable in the field, conform to current National Electrical Code (NEC) guidelines, and can be easily deployed into existing 480V AC infrastructure, there are important differences in efficiency and cost that must be carefully weighed.
This article offers a quantitative comparison of 400V AC and 600V AC power distribution configurations at varying load levels using readily available equipment, taking into account the technology advancements and installation and operating costs that drive total cost of ownership (TCO).
THE TRADITIONAL U.S. DATA
CENTER POWER SYSTEM
In most U.S. data centers today, after power is received from the electrical grid and distributed within the facility, the UPS ensures a reliable and consistent level of power and provides seamless backup power protection. Isolation transformers step down the incoming voltage to the utilization voltage, and power distribution units (PDUs) feed the power to multiple branch circuits. The isolation transformer and PDU are normally combined in a single PDU component, many of which are required throughout the facility. Finally, the server or equipment internal power supply converts the utilization voltage to the specific voltage needed. Most IT equipment can operate at multiple voltages. Losses through the UPS, the isolation transformer/PDU and the server equipment produce an overall end-to-end efficiency of approximately 76 percent.
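A minimal sketch of how those chained losses multiply out is shown below; the individual stage efficiencies are assumptions chosen only to reproduce the roughly 76 percent figure, since the article does not break the number down by component.

```python
# End-to-end efficiency is the product of per-stage efficiencies along the
# power path. The stage values below are illustrative assumptions only.
from functools import reduce

def end_to_end_efficiency(stages):
    return reduce(lambda acc, eff: acc * eff, stages, 1.0)

stages_480v = {
    "UPS (double conversion)": 0.92,      # assumed
    "isolation transformer/PDU": 0.965,   # assumed
    "server power supply": 0.86,          # assumed
}

overall = end_to_end_efficiency(stages_480v.values())
print(f"End-to-end efficiency: {overall:.0%}")  # roughly 76%
```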
Data center efficiency is often evaluated using the efficiency ratings of the server and IT equipment alone. Despite recent advances in energy management and server technology, maximum efficiency can be achieved only by taking a holistic view of the power distribution system. Each component impacts the end-to-end cost and efficiency of the system. The entire system must be optimized in order for the data center to fully realize the efficiency gains offered by new server technologies.
Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems
BY JIM DAVIS, BUSINESS UNIT MANAGER, EATON POWER QUALITY AND CONTROL OPERATIONS
Figure 1: End-to-end efficiency in the 400V AC power distribution system
A growing demand for network bandwidth and faster, fault-free data processing has driven an
exponential increase in data center energy consumption, a trend with no end in sight.
THE 400V AC POWER SYSTEM
The 400V AC power distribution model offers a number of advantages in terms of efficiency, reliability and cost, as compared to the 480V AC and 600V AC models. In a 400V system, the neutral is distributed throughout the building, eliminating the need for PDU isolation transformers and delivering 230V phase-neutral power directly to the load. This enables the system to perform more efficiently and reliably, and offers significantly lower overall cost by omitting multiple isolation transformers and branch circuit conductors.
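For context, the 230V figure follows from the standard three-phase relationship between line-to-line and phase-to-neutral voltage (general electrical background rather than something spelled out in the article):

$$V_{\text{phase-neutral}} = \frac{V_{\text{line-line}}}{\sqrt{3}} = \frac{400\,\text{V}}{\sqrt{3}} \approx 230\,\text{V}$$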
Figure 1 shows that losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 80 percent.
THE 600V AC POWER SYSTEM
The 600V AC power system, while offering certain advantages over both the 480V AC and 400V AC systems, carries inherent inefficiencies that make it an impractical solution for most U.S. data centers. The 600V AC system offers a small equipment cost savings over the 480V AC and 400V AC systems, requiring less copper in the feeder wiring and carrying lower currents, which reduces energy cost.
In unique circumstances where larger data centers deploy multi-module parallel redundant UPS systems, 600V AC power equipment can support more modules with a single 4000A switchboard than in a 400V AC system, allowing data center managers to add a small amount of extra capacity at a nominal cost and with no increase in footprint.
With 600V AC power, the distribution system requires multiple isolation transformer-based PDUs to step down the incoming voltage to the 208/120V AC utilization voltage, adding significant cost and reducing overall efficiency. Some UPS vendors create a 600V AC UPS using isolation transformers in conjunction with a 480V AC UPS, reducing efficiency even further.
As shown in Figure 2, losses through the UPS, the isolation transformer/PDU, and the server equipment produce an overall end-to-end efficiency of approximately 76 percent, comparable to the efficiency of today's traditional 480V AC power distribution system.
COMPARING TOTAL COST OF
OWNERSHIP
TCO for the power distribution system is determined by adding capital expenditures (CAPEX), such as equipment purchase, installation and commissioning costs, and operational expenditures (OPEX), which include the cost of electricity to run both the UPS and the cooling equipment that removes heat resulting from the normal operation of the UPS.
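A minimal sketch of that TCO arithmetic is below. The load, electricity rate, capital costs and the assumption that each kW of loss needs about 0.5 kW of cooling energy are all illustrative placeholders, not figures from the article.

```python
# TCO = CAPEX + OPEX, where OPEX is the electricity consumed by distribution
# losses plus the cooling needed to reject those losses as heat. All numbers
# below are assumed for illustration.

HOURS_PER_YEAR = 8760

def annual_opex(it_load_kw, end_to_end_eff, rate_per_kwh, cooling_kw_per_kw_loss=0.5):
    loss_kw = it_load_kw / end_to_end_eff - it_load_kw
    cooling_kw = loss_kw * cooling_kw_per_kw_loss
    return (loss_kw + cooling_kw) * HOURS_PER_YEAR * rate_per_kwh

def tco(capex, it_load_kw, end_to_end_eff, rate_per_kwh, years=15):
    return capex + years * annual_opex(it_load_kw, end_to_end_eff, rate_per_kwh)

# Compare an 80%-efficient 400V AC system with a 76%-efficient 600V AC system.
print(f"400V: ${tco(500_000, 500, 0.80, 0.097):,.0f}")
print(f"600V: ${tco(520_000, 500, 0.76, 0.097):,.0f}")
```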
The end-to-end efficiency of the 400V AC power distribution system is 80 percent versus 76 percent efficiency in the 600V AC system, with both systems running in conventional double conversion mode. The 400V AC system's higher efficiency drives significant OPEX savings over the 600V AC system, substantially lowering the data center's TCO both in the first year of service and over the 15-year typical service life of the power equipment.
Figure 2: End-to-end efficiency in the 600V AC power distribution system
To further reduce OPEX, many UPS manufacturers offer high-efficiency systems that use various hardware- and software-based technologies to deliver efficiency ratings between 96 and 99 percent, without sacrificing reliability.
The Energy Saver System is a new offering that enables select new and existing UPSs to deliver industry-leading 99 percent efficiency, even at low load levels, while still providing total protection for critical loads. With this technology, the UPS operates at extremely high efficiency unless utility power conditions force the UPS to work harder to maintain clean power to the load. The intelligent power core continuously monitors incoming power conditions and balances the need for efficiency with the need for premium protection, to match the conditions of the moment.
When high-efficiency UPS systems are deployed, losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 84 percent.
400V AC POWERS AHEAD
The 400V AC power distribution system's lower equipment cost and higher end-to-end efficiency deliver significant CAPEX, OPEX and TCO savings as compared to the 600V AC system. The 400V AC system running in conventional double conversion mode offers an average 10 percent first-year TCO savings and an average 5 percent TCO savings over its 15-year service life, as compared to the 600V AC system. When running the 400V AC UPS in high-efficiency mode, the first-year TCO savings increase to 16 percent, and the 15-year TCO savings increase to 17 percent, minimizing data center cost in terms of both CAPEX and OPEX.
In CAPEX investment alone, the 400V AC configuration offers an average 15 percent savings over the 600V AC configuration for all system sizes analyzed. The 400V AC system's lower CAPEX gives data center managers a more cost-effective solution for expanding data center capacity. The systems analyzed produced an average annual OPEX savings of 4 percent with the 400V AC system running in double conversion mode, and 17 percent when running in high-efficiency mode. OPEX savings rates are linear across all system sizes, indicating that savings will continue to increase in direct proportion to the size of the system.
Therefore, the 400V AC power distribution system offers the highest degree of electrical efficiency for modern data centers, significantly reducing capital and operational expenditures and total cost of ownership as compared to 600V AC power systems. Recent developments in UPS technology, including the introduction of transformerless UPSs and new energy management features, further enhance the 400V AC power distribution system for maximum efficiency.
This conclusion is supported by IT industry experts who theorize that 400V AC power distribution will become standard as U.S. data centers transition away from 480V AC to a more efficient and cost-effective solution over the next one to four years.
About The Author:
Jim Davis is a business unit manager for Eaton's Power Quality and Control Operations Division. He can be reached at JimRDavis@eaton.com. For more information about the 400V UPS power scheme, visit www.eaton.com/400volt.
Chart 1: 15-year TCO (400V AC Energy Saver System vs. 600V AC double conversion mode)
If the cost savings are half as great for data centers as they have been for the airline industry, we will need to fasten our seatbelts.
Data centers have always been power hogs, but the problem has accelerated in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part. The first step for an existing data center to achieve high(er) efficiencies is to improve its Power Usage Effectiveness (PUEenergy) ratio. PUEenergy ratios can be used as a guide to define a data center's efficiency or green credentials and have become the de facto metric in the past year.
A data center that has a low PUEenergy of 1.5, implements lean design and has established measurement data with demonstrable year-on-year improvements can be classified as green or energy efficient. The dream green data center would have a PUEenergy of one, which means that every watt of power at the transformer is delivered directly to the IT equipment without any losses in the site infrastructure. Unfortunately, this is not physically possible, as some infrastructure services, such as cooling, always have energy losses (for the time being).
However, an inefficient data center is recognized as anything with a PUEenergy of greater than two. These are generally based on legacy equipment, not built in a modular way and/or not operated well. So, how does an organization go about optimizing data center efficiency and improving its PUEenergy?
For organizations to reduce their PUE, they need to have an active focus on the following three areas: external efficiency, internal efficiency and customer efficiency. They need to monitor their PUE ratios against best-practice industry standards set by the likes of the Uptime Institute, the Green Grid and the European Code of Conduct.
Although PUEenergy has been adopted by the industry sector, institutions and government bodies alike as an agreed way to measure the energy overhead of a data center, it may distract us from the ultimate goal:
A LOWER TOTAL DATACENTER ENERGY USE AT A LOW PUEenergy.
If PUE, whether in power or energy terms, were the only benchmark indicator for governments to decide the relative energy efficiency of data centers, and in turn how best to apply a carbon tariff, then many data center owners might decide to switch on servers that were previously earmarked to meet peaks in demand. This in reality would mean lower PUEenergy ratios but a higher total energy usage, which defeats the original objective and may be a problem for all.
Data Center Efficiency: It's in the Design
BY LEX COORS, VICE PRESIDENT DATA CENTER TECHNOLOGY AND ENGINEERING GROUP, INTERXION
Most companies undergoing data center projects have the mindset of cutting costs rather than helping the environment; however, they may want to adjust their focus. With data center greenhouse emissions set to overtake the airline industry in the next five to ten years, quadrupling by 2020, it has never been more critical for organizations to optimize their data center.
Bearing this in mind, taking some of the following steps to achieve a better total site energy performance will help you run a more energy efficient operation.
STEP 1:
Measure the transformer (or other main-source) energy usage and the IT energy usage, and calculate the PUEenergy (a simple calculation of this ratio is sketched after these steps).
STEP 2:
Start harvesting the low-hanging fruit based on the Uptime Institute guidelines that have been established for many years and are available on their website.
STEP 3:
Measure the transformer and the IT energy usage again and calculate your new PUEenergy. You may observe that while your total energy usage has decreased, your PUEenergy ratio has increased.
STEP 4:
Start switching off unneeded infrastructure, while maintaining your redundancy levels.
STEP 5:
Measure the transformer and the IT energy usage and calculate your PUEenergy. You may now observe that your PUEenergy has decreased and, again, that your total energy usage has decreased.
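A minimal sketch of the calculation behind these steps is below, assuming kWh readings taken over the same period; the sample readings are invented to illustrate the Step 3 effect, where total energy falls but the ratio rises.

```python
# PUEenergy = total facility energy (at the transformer or other main source)
# divided by IT energy, both in kWh over the same period. Readings are assumed.

def pue_energy(total_facility_kwh, it_kwh):
    if it_kwh <= 0:
        raise ValueError("IT energy must be a positive number of kWh")
    return total_facility_kwh / it_kwh

# Before any changes (assumed one-month readings):
print(round(pue_energy(total_facility_kwh=180_000, it_kwh=100_000), 2))  # 1.8

# After harvesting low-hanging fruit on the IT side (Step 3): total energy is
# lower, but the ratio can rise because the infrastructure overhead is unchanged.
print(round(pue_energy(total_facility_kwh=156_000, it_kwh=85_000), 2))   # 1.84
```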
It comes as no surprise that good design leads to lower capital expenditure (CAPEX) and better efficiency, but what is good design? A model that has proved successful both in terms of efficiency and green credentials is Modular Design. Modular Design was developed by Lex Coors, Vice President of Data Center Technology and Engineering Group, Interxion, and is unique since it allows for future data center expansion without interruption of services to customers.
Recent research by McKinsey and the Uptime Institute identified five key steps to achieving operational efficiency gains:
• Eliminate decommissioned servers, which will equal an overall gain of 10-25%
• Virtualize, which leads to gains of 25-30%
• Upgrade older equipment, leading to a 10-20% gain
• Reduce demand for new servers, which can also increase efficiency by 10-20%
• Introduce greener and more power efficient servers and enable power saving features; this also equates to a 10-20% gain
By following the above steps, an organization can look to achieve an overall efficiency gain of 65%, significantly improving its PUE ratio.
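One way to see how those individual steps can add up to roughly 65% is to treat them as compounding gains, each applied to the energy left after the previous step. The sketch below uses the midpoint of each quoted range; the compounding interpretation and the midpoints are assumptions, since the article states only the combined figure.

```python
# Compounding the five efficiency gains listed above (range midpoints assumed).

def combined_gain(individual_gains):
    remaining = 1.0
    for gain in individual_gains:
        remaining *= (1.0 - gain)
    return 1.0 - remaining

midpoints = [0.175, 0.275, 0.15, 0.15, 0.15]
print(f"Combined gain: {combined_gain(midpoints):.0%}")  # about 63%, near the 65% cited
```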
The third and final piece of the efficiency puzzle is customer focus. An efficient data center should have hands-on expert support for energy efficiency implementation efforts, as well as best-practice customer installation check lists. Staff need to be able to advise on how to reduce temperatures and energy usage through things like innovative hot and cold aisle designs. They need to have the tools in place to measure and analyze efficiency, implement the latest efficiency ratings, develop and implement first-phase actions, and integrate figures and ratings with customers' CSR. Without such expertise in place, organizations will find it hard to reach their desired efficiency gains.
Green and efficient data centers are real and achievable, but emissions and the cost of energy are rising fast (although people now and then forget these costs sometimes decrease temporarily), so we need to do more now. Organizations must work together, especially when it comes to measurement. Vendors should be providing standard meters on all equipment to measure energy usage versus productivity; if you don't know whether you're wasting energy, how can you change it?
But it's not just vendors who are responsible. Data center providers should provide leadership for industry standards and ratings that work, data center design and operational efficiency steps, and support for all customer IT efficiency improvements. What is apparent is that the whole industry, from the power suppliers to the rack makers, all need to work together to improve efficiencies and ensure that we are all at the forefront of efficient, green data center design.
PUEenergy measures efficiency over time using kWh.
Online Backup or
Cloud Recovery?
BY IAN MASTERS, UK SALES & MARKETING DIRECTOR,
DOUBLE-TAKE SOFTWARE
ITCORNER
Cloud recovery can be a nebulous term, so I would define it based on the solution having the following features:
1. The ability to recover workloads in the cloud
2. Effectively unlimited scalability with little or no up-front provisioning
3. Pay-per-use billing model
4. An infrastructure that is more secure and more reliable than the one you would build yourself
5. Complete protection - i.e. non-expert users should be able to recover everything they need, by default.
If a solution does not meet these five criteria, then it should be called an online backup product. This may be right for your business, but typically such products require more IT knowledge and are based on specific resources.
There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an on-line account) to restore. You might need a replacement server, new storage, and maybe even a new data centre, depending on what went wrong. Traditionally, you would either keep spare servers in a disaster recovery data centre, or suffer a period of downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud, you want the ability to actually start up applications and use them, no matter what went wrong in your own environment.
The next area where cloud recovery can provide a better level of protection is around provisioning. Even using online backup systems, organizations would have to use replacement servers in the event of an outage. The whole point of recovering to the cloud is that the provider already has plenty of servers and additional capacity on tap. If you need more space to cope with a recovery incident, then you can add this to your account. Under this model, your costs are much lower than building the DR solution yourself, because you get the benefit of duplicating your environment without the upfront capital cost.
Removing the up-front price and long-term commitment shifts the risk away from the customer, and onto the vendor. The vendor just has to keep the quality up to keep customers loyal, which requires great service and efficient handling of customer accounts. The cloud recovery provider takes on all the management effort and constant improvement of infrastructure that is required. A business without in-house staff familiar with business continuity planning may ultimately be much better off paying a monthly fee to someone who specializes in this area.
One area where cloud providers may be held to account is around security and reliability, but I think critics hold the providers to the wrong standard. In the end, you have to compare the results that a cloud services provider can achieve, the service levels that they work to, and the cost comparison to doing it yourself. The point is that security and reliability are hard, but they are easier at scale. Companies like Amazon and Rackspace do infrastructure for a living, and do it at huge scale. Amazon's outages get reported in the news, but how does this compare to what an individual business can achieve?
The last area where cloud recovery can deliver better results is through usability and protecting everything that a business needs. While some businesses know exactly which files should be protected, most either don't have this degree of control, or haven't got users into the habit of following standard formats or saving documents into specific places. The issues that people normally get bitten by are with databases, configuration changes and weird applications that only a couple of people within the organization use. Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.
Cloud means so many different things to so many people that it sometimes seems not to mean anything at all. If you are going to depend on it to protect your data, it had better mean something specific. These five points may not cover every possible protection goal, but they set a good minimum standard.
Backing up files and data online has been around for quite a while, but it has never really taken off in a big way for business customers. There is also a new solution coming onto the market which uses the cloud for backup and recovery of company data. While these two approaches to disaster recovery appear to be similar, there are some significant differences as well. So which one would be right for you?
ITCORNER
According to a recent Computing Technology Industry Association (CompTIA) survey (see http://www.comptia.org/pressroom/get_pr.aspx?prid=1410), although most respondents still consider viruses and malware the top security threat, more than half (53 percent) attributed their data breaches to human error, presenting another dimension to the rising concern about insider threats. It should serve as a wake-up call to many organizations that inadvertent or malicious insider activity can create a security risk.
For instance, take the recent data breach that impacted the Metro Nashville Public Schools. In this case, a contractor unintentionally placed the personal information of more than 18,000 students and 6,000 parents on an unsecured Web server that was searchable via the Internet. Although this act was largely chalked up to human error and has since been corrected, anyone accessing the information while it was freely available online could cause significant harm to these students and parents.
Moreover, the Identity Theft Resource Center (ITRC) recently reported that insider theft incidents more than doubled between 2007 and 2008, accounting for more than 15 percent of data breaches. According to the report, human error breaches, as well as those related to data-in-motion and accidental exposure, accounted for 35 percent of all data breaches reported, even after factoring in that the number of breaches declined slightly during this period.
To significantly cut the risk of these insider breaches, enterprises must have appropriate systems and processes in place to avoid or reduce human errors caused by inadvertent data leakage, sharing of passwords, and other seemingly harmless actions.
One approach to address these challenges is digital vault technology, which is especially valuable for users with high levels of enterprise/network access as well as those handling sensitive information and/or business processes, such as users with privileged access -- including third-party vendors or consultants and executive-level personnel -- or access to the core applications running within an organization's critical infrastructure.
Instead of trying to protect every facet of an enterprise network, digital vault technology creates safe havens -- distinct areas for storing, protecting, and sharing the most critical business information -- and provides a detailed audit trail for all activity associated within these safe havens. This encourages more secure employee behavior and significantly reduces the risk of human error.
Here are some best practices for organizations serious about
preventing internal breaches, be they accidental or malicious, of any
processes that involve privileged access, privileged data, or privileged
users.
1. ESTABLISH A SAFE HARBOR
By establishing a safe harbor or vault for highly sensitive data (such as administrator account passwords, HR files, or intellectual property), you build security directly into the business process, independent of the existing network infrastructure. This will protect the data from the security threats of hackers and the accidental misuse by employees.
A digital vault is set up as a dedicated, hardened server that provides a single data access channel with only one way in and one way out. It is protected with multiple layers of integrated security, including a firewall, VPN, authentication, access control, and full encryption. By separating the server interfaces from the storage engine, many of the security risks associated with widespread connectivity are removed.
2. AUTOMATE PRIVILEGED IDENTITIES AND ACTIVITIES
Ensure that administrative and application identities and passwords are changed regularly, highly guarded from unauthorized use, and closely monitored, including full activity capture and recording. Monitor and report actual adherence to the defined policies. This is a critical component in safeguarding organizations and helps to simplify audit and compliance requirements, as companies are able to answer questions associated with who has access and what is being accessed.
As listed among the Consensus Audit Guidelines' 20 critical security controls, the automated and continuous control of administrative privileges is essential to protecting against future breaches. [Editor's note: the guidelines are available at http://www.sans.org/cag/.]
Five Best Practices for Mitigating Insider Breaches
BY ADAM BOSNIAN, VP MARKETING, CYBER-ARK SOFTWARE
Mismanagement of processes involving privileged access, privileged data, or privileged users poses serious risks to organizations. Such mismanagement is also increasing enterprises' vulnerability to internal threats that can be caused by simple human error or malicious deeds.
3. IDENTIFY ALL YOUR PRIVILEGED ACCOUNTS
The best way to start managing privileged accounts is to create a checklist of operating systems, databases, appliances, routers, servers, directories, and applications throughout the enterprise. Each target system typically has between one and five privileged accounts. Add them up and determine which area poses the greatest risk. With this data in hand, organizations can easily create a plan to secure, manage, automatically change, and log all privileged passwords.
4. SECURE EMBEDDED APPLICATION ACCOUNTS
Up to 80 percent of system breaches are caused by internal users, including privileged administrators and power users, who accidentally or deliberately damage IT systems or release confidential data assets, according to a recent Cyber-Ark survey.
Many times, the accounts leveraged by these users are the application identities embedded within scripts, configuration files, or an application. The identities are used to log into a target database or system and are often overlooked within a traditional security review. Even if located, the account identities are difficult to monitor and log because they appear to a monitoring system as if the application (not the person using the account) is logging in.
These privileged application identities are being increasingly scrutinized by internal and external auditors, especially during PCI- and SOX-driven audits, and are becoming one of the key reasons that many organizations fail compliance audits. Therefore, organizations must have effective control of all privileged identities, including application identities, to ensure compliance with audit and regulatory requirements.
5. AVOID BAD HABITS
To better protect against breaches, organizations must establish best practices for securely exchanging privileged information. For instance, employees must avoid bad habits (such as sending sensitive or highly confidential information via e-mail or writing down privileged passwords on sticky notes). IT managers must also ensure they educate employees about the need to create and set secure passwords for their computers instead of using sequential password combinations or their first names.
The lesson here is that the risk of internal data misuse and accidental leakage can be significantly mitigated by implementing effective policies and technologies. In doing so, organizations can better manage, control, and monitor the power they provide to their employees and systems and avoid the negative economic and reputational impacts caused by an insider data breach, regardless of whether it was done maliciously or by human error.
ITOPS
For many shops, this information is unavailable: IT does not receive an energy bill, and does not use, or have, tools to identify its share of energy consumption. In the past, electricity costs, especially in smaller IT shops, were of minor concern; in many cases, the energy bill was simply left in the hands of the facilities director or company accountant to pay and file away.
However, in the same study, Info-Tech finds that 28% of IT departments are now piloting an energy measurement solution of some kind, and an additional one-quarter of shops are planning a measurement project within twelve months. Many converging factors drive interest in measuring and managing energy use, and the major ones are outlined here:
• Increasing energy costs
The US Energy Information Administration (EIA) reports that between 2000 and 2007, the average price of electricity for businesses increased from 7.4 cents per kilowatt-hour (kWh) to 9.7 cents per kWh, an increase of 30%.
• Burgeoning data center energy consumption
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the energy density of typical mid-range server setups has increased about four times between 2000 and 2009 (from about 1,000 watts per square foot to almost 4,000). Greater server consumption means more waste in the form of heat, so energy consumption of cooling and support systems also spikes simultaneously.
• Green considerations
Energy consumption has an associated carbon footprint. Interest in reducing energy use has increased in IT and senior management ranks.
Ultimately, interest in energy data is driven by the age-old accounting precept: what gets measured gets done. Realizing that energy use will become a compounding issue, a growing number of IT shops seek to quantify energy as an operational cost, just like line items such as staffing and maintenance. Once the cost is accounted for, IT has a number to improve on. In this note, learn about three options for obtaining energy numbers in the data center. A companion Info-Tech Advisor research note, "Energy Measurement Methods for End-User Infrastructure," describes how to obtain energy data at the user infrastructure level (workstations, printers, and the like).
CONSIDERATIONS FOR CALCULATION
Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air conditioning, ventilation, lighting, and the like). Changes in one bucket may affect the other bucket, and by tracking both, IT can understand this relationship. These buckets are also necessary for common efficiency calculations; for more information, refer to the Info-Tech Advisor research note, "If You Measure It, They Will Green: Data Center Energy Efficiency Metrics." Software for tracking energy use and cost is another consideration. While assessing the need for a full energy management solution, IT shops can use something as simple as an Excel spreadsheet to enter energy figures and track costs over a few months. Specifics on collecting data-serving and support equipment energy data, and tracking software, are discussed further below.
OPTION ONE: You May Already Have Access to
Energy Data
Depending on the data center setup and the vintage and pedigree of equipment, some IT shops can already collect energy numbers at the data-serving or support equipment levels. The following scenarios are common starting points when beginning data collection:
• Existing software metering
Newer servers, power-distribution units (PDUs) and UPS systems have monitoring built into the included management consoles. For example, newer HP ProLiant blades ship with power tracking features, and the HP Insight Control management console provides energy monitoring capabilities.
• Existing hardware metering
Some server racks and PDUs may have hardwired meters built in. For example, some of APC's more basic PDUs for racks have built-in power screens.
Unfortunately, built-in metering is rarer in the support equipment bucket. Many older data center air conditioning units and air handlers do not provide this data. In some cases, one can estimate this energy number by subtracting the data-serving bucket from the total data center energy draw. But, since older data centers may not be sub-metered (the draw of the data center is not measured separately from the rest of the building), one cannot always perform this calculation, and installation of a meter is necessary.
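The estimate described here is simple subtraction, as the sketch below shows; the readings are assumed values, and the approach only works where the data center is sub-metered.

```python
# Support-equipment energy (cooling, ventilation, lighting) estimated as total
# data center draw minus the data-serving bucket. Readings are assumed kWh.

def support_bucket_kwh(total_dc_kwh, data_serving_kwh):
    support = total_dc_kwh - data_serving_kwh
    if support < 0:
        raise ValueError("data-serving energy cannot exceed the total draw")
    return support

print(support_bucket_kwh(total_dc_kwh=42_000, data_serving_kwh=26_500))  # 15500
```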
Energy Measurement
Methods for the Data Center
A recent Info-Tech study of over 800 mid-sized IT shops found that only
25% have fully adopted an IT energy measurement initiative.
If existing software or hardware metering includes management software for trending, this may be enough to set up a baseline. However, if energy numbers need to be collected, IT can record data from consoles, panels, or data files manually for short periods of time. This data can be entered into spreadsheets or dedicated software. The US Department of Energy has a directory of software packages, such as Energy Lens, a $595 US Excel plug-in, and offers a free assessment tool, the Data Center Energy Profiler.
OPTION TWO: Cheap & Cheerful
If energy numbers are not available through existing equipment or software, IT should make an investment in this capability. This is a common scenario for smaller or older facilities, and is often required to measure energy on the support equipment side for many shops. Cheap and cheerful data collection options include:
• Basic watt readers
These measure wattage drawn from the plug. Inexpensive devices provide spot readings only, starting around $20 US. However, a popular line at a slightly higher price point, Watts up, offers energy tracking and PC connectivity with a graphing package, starting around $130 US. These are best suited to smaller server rooms and data centers but may not be appropriate for larger or mission-critical facilities with aggressive energy needs.
• Industrial-strength meters
Standard Performance Evaluation Corporation (SPEC) provides a list of heavier-duty energy meters, which typically run $200 US to more than $2000 US. These meters, many of which are designed for manufacturing and industrial environments, include data connectivity and are better suited to handling the industrial-grade energy requirements of multiple PDUs and high-voltage components in data centers. SPEC provides free measurement software that is verified as compatible with these devices.
To collect data in both buckets, IT may need to have an electrician or data center professional install sub-meters or dedicated measurement devices. If the organization is not yet ready for such a move, cheap and cheerful options should at least provide a rough cost number for the data-serving bucket to quantify the true operational cost of servers and storage.
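As a rough illustration of turning spot watt readings from such a meter into a cost number, the sketch below averages a few readings and applies the EIA business rate cited earlier; the readings themselves are assumed.

```python
# Convert a handful of spot watt readings into an approximate monthly cost.
# Sample readings are assumed; 9.7 cents/kWh is the 2007 EIA business average
# mentioned earlier in the article.

def monthly_cost(spot_readings_watts, rate_per_kwh=0.097, hours_per_month=730):
    average_kw = sum(spot_readings_watts) / len(spot_readings_watts) / 1000.0
    return average_kw * hours_per_month * rate_per_kwh

rack_readings = [4_200, 4_350, 4_100, 4_500]  # watts, sampled a few times a day
print(f"Approximate monthly cost: ${monthly_cost(rack_readings):.2f}")  # about $304
```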
Note that options one and two often come with two major disadvantages. First, some solutions model energy use of isolated components in the data center. IT still won't understand how changing the energy consumption of a group of components affects other components in the data center; for example, changing server loads affect heat output and thus air cooling needs. Second, measuring total data center energy use at only one or a few points causes flat trending; essentially, IT will have a total energy use/cost number, but won't understand how energy use trends up and down in different areas of the data center. With both of these disadvantages, long-term optimization remains difficult. Options one and two are good options to get an overall handle on energy costs, while major optimizations often require a bigger investment in option three, described next.
OPTION THREE: Professional-Grade Management Solutions
An increasing number of hardware vendors and data center energy equipment providers offer full management packages for data centers, which include integrated hardware and software and extensive reporting and trending options. In addition, data center planners often include these features in new data center plans, since the additional cost in such a project is nominal. Complete management solutions tend to come in two forms:
• As an add-on to an existing facility
Both tier one and specialized vendors now provide power management capabilities for existing facilities. Sentilla, for example, recently introduced a solution that includes wireless meters which feed software, priced on a per-device basis, starting at $40 US per month and declining as volumes increase. The measurement devices can be installed directly or clamp onto cables of existing equipment. Sentilla has priced this solution to allow a return on investment of less than one year based on typical optimizations.
• Integrated into equipment upgrades or a new facility
New power equipment, servers, and other data center components often include power tracking and management features as standard. This may not provide complete data for both data-serving and support equipment buckets; however, if an upgrade is being performed anyway, getting these features without incurring additional costs is a bonus. Have the vendor demonstrate how these features work before buying.
Professional-grade solutions, whether installed independently or included with data center upgrades, obviously cost more than options one and two. These solutions, which automate collection of very granular data, are useful once data center operators and IT leaders fully understand energy use principles and baselines, and when the business is ready to move to energy optimization and reduction. Options one and two are better choices for starting to establish energy cost as an operational line item. Option three is better for long-term energy and cost reduction goals.
BOTTOM LINE
In the data center, options for energy monitoring and measurement are beginning to proliferate. Understand why IT shops are benchmarking energy use now, which components need to be measured in data centers, and three options for getting started with data collection and trending.
Info-Tech Research Group is a global leader in providing IT research and advice. Info-Tech's products and services combine actionable insight and relevant advice with ready-to-use tools and templates that cover the full spectrum of IT concerns. www.infotech.com
RECOMMENDATIONS
1. Go cheap and cheerful first. Automatic data collection and trending in both data-serving and support equipment is very useful; it allows IT to identify when and why energy use spikes. However, when piloting energy management, it may be sufficient to collect rough data and record energy figures manually, in a spreadsheet or basic tracking software, a few times a day for a month or two. Eventually, a more aggressive solution will be required, especially in organizations responsible for more than 50 servers.
2. Use basic data as a call to action. Tracking energy use for a month or two, cheaply and cheerfully, gives IT a silver bullet. Senior management now has a real number attached to the cost of energy; use this to get their attention. Moreover, a demonstrative energy figure provides a great starting point to build the business case for a comprehensive monitoring solution.
EDUCATION CORNER
Common Mistakes in Existing Data Centers and How to Correct Them
BY CHRISTOPHER M. JOHNSTON, PE AND VALI SORELL, PE, SYSKA HENNESSY GROUP, INC.
After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them. Please understand that we are focusing on existing older (aka legacy) data centers that must remain in production.
PROBLEM 1: Leaky raised access floor
Most existing data centers employ raised access floor to route cold air from cooling units to floor air outlet tiles and grilles that discharge the air where needed. However, leaks in the floor waste the cold air and reduce cooling ability.
REMEDY:
Identify the leaks and close them. Typical culprits are misfitted floor tiles, gaps between floor tiles and walls and columns, columns not built out completely to the structural floor beneath, and oversized floor cable cutouts. Unnecessary cutouts should be eliminated and necessary cutouts should be closed with brush-type closures.
PROBLEM 2: Underfloor volume congested with cables
This condition often manifests itself in floor tiles that won't lay flat and floor air outlet tiles that won't discharge air.
REMEDY:
Identify control, signal, and power cables that are not in service, then carefully remove (mine) them. If you don't have this expertise on your staff, then you should engage a skilled IT cabling contractor.
PROBLEM 3: Space temperature too cold
In the past, data center managers liked to keep the room like a meat locker, believing the theory that a colder space would buy a little more ride-through time when the cooling system went off and had to be restarted. The minuscule additional ride-through time (a few seconds) is gained at the high operating cost of keeping the room unnecessarily cold. The current ASHRAE TC9.9 Recommended Thermal Envelope is 64.4°F to 80.6°F dry bulb air at the server inlet; the warmer the air temperature, the lower your operating cost.
REMEDY:
Move the control thermostats in each of your cooling units to the discharge air side if not already located there (one unit at a time) and calibrate the thermostat. Set the thermostat to maintain 60°F discharge air. Once all of the thermostats are on the discharge air side, start raising their setpoints 1°F at a time and monitor the inlet temperature at your warmest servers for a day. If the inlet air temperature at your warmest server is less than 75°F after a day, raise the temperature leaving the cooling units another degree. Continue until the warmest server has 75°F entering air.
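The remedy is a manual procedure, but its logic can be written down as a simple loop. In the sketch below the read/set functions and the 70°F safety ceiling are assumed abstractions; the 60°F starting point, the 1°F steps, the one-day wait and the 75°F target come from the remedy itself.

```python
# Sketch of the setpoint-raising procedure above. The callables that read the
# warmest server inlet and set the discharge setpoint are assumed abstractions,
# as is the ceiling; the targets and step size follow the remedy.

TARGET_INLET_F = 75.0
STEP_F = 1.0

def raise_setpoints(read_warmest_inlet_f, set_discharge_setpoint_f, wait_one_day,
                    start_setpoint_f=60.0, ceiling_f=70.0):
    setpoint = start_setpoint_f
    set_discharge_setpoint_f(setpoint)
    while setpoint < ceiling_f:
        wait_one_day()                       # monitor for a day before each change
        if read_warmest_inlet_f() >= TARGET_INLET_F:
            break                            # warmest server is at 75°F entering air
        setpoint += STEP_F
        set_discharge_setpoint_f(setpoint)
    return setpoint
```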
PROBLEM 4: Cooling units fight each other
We cannot count how many times we've seen one cooling unit cooling and dehumidifying while the one beside it is humidifying. This is an energy-wasting process that is a relic of the days when the industry consensus design condition was 72°F +/- 2°F and 40% relative humidity +/- 5% (and before that, a relic of the paper punch card days). As mentioned above, today's thermal envelope is 64.4°F to 80.6°F dry bulb. The same thermal envelope specification also includes a recommended range of moisture content. That range is defined as 41°F dew point to 59°F dew point, with a maximum cap of 60% relative humidity. If the entering air temperature is 75°F, then the relative humidity can fall anywhere from 33% to 60%. The days of tight temperature and humidity control bands are past, and the need for simultaneous humidification and reheat is over.
REMEDY:
Disable humidification and reheat in all cooling units except two in each room (on opposite sides of the room). Change the controls for those units so they operate based on room dew point temperature. If multiple sensors are used, it's important that a single average value be used as the controlled value. This can prevent calibration errors between multiple sensors from forcing CRAC units to fight each other. Set the controls to maintain dew point within the ASHRAE TC9.9 Recommended Thermal Envelope.
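A minimal sketch of the single-averaged-value idea is below; the sensor readings are assumed, and the 41°F to 59°F dew point band comes from the thermal envelope described above.

```python
# Average several dew point sensors into one controlled value so calibration
# differences cannot drive neighbouring CRAC units to humidify and dehumidify
# against each other. Sensor readings are assumed example values.

DEW_POINT_LOW_F = 41.0
DEW_POINT_HIGH_F = 59.0

def controlled_dew_point(sensor_readings_f):
    return sum(sensor_readings_f) / len(sensor_readings_f)

def humidity_action(average_dew_point_f):
    if average_dew_point_f < DEW_POINT_LOW_F:
        return "humidify"
    if average_dew_point_f > DEW_POINT_HIGH_F:
        return "dehumidify"
    return "hold"

readings_f = [47.2, 48.1, 46.8]
print(humidity_action(controlled_dew_point(readings_f)))  # "hold"
```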
PROBLEM 5: Electrical redundancy for cooling units is lower than the mechanical redundancy
This is another one we've lost count of. The typical scenario is that the desired site redundancy is Tier III or Tier IV, the mechanical engineer has done a good job designing to the desired tier, but the electrical engineer lost focus and branch circuited every cooling unit to one or two panelboards. The end result is that the redundancy of the site is Tier I because the electrical redundancy for the cooling units is lower than the mechanical redundancy. For example, assume that the need is for 10 cooling units and 12 are provided, so the mechanical redundancy is N+2. The electrical engineer, however, has circuited all cooling units to one branch circuit panelboard, so the electrical redundancy is N: if the one panelboard fails, then all cooling fails.
REMEDY:
Identify another source to supply backup power for the cooling units; this source may be direct from the standby generator if need be. The main criterion for this Source 2 is that it is available if the original Source 1 fails. Then, add transfer switches for each cooling unit so that Source 2 will supply power if Source 1 fails.
PROBLEM 6: No hot aisle/cold aisle cabinet arrangement
This problem becomes more burdensome as the critical load density (watts/square foot) increases. At low critical load densities it is not a problem.
REMEDY:
As time passes and technology refreshes, migrate to a hot aisle/cold aisle arrangement. There is no magic bullet for this; just advance planning and attention to detail.
PROBLEM 7: Too many CRAC units operating
This one may seem counterintuitive, so it's no surprise that this occurs in most legacy data centers. Poor air flow management creates hot spots, i.e. locations where the temperature entering the server cabinets is outside of the TC9.9 thermal envelope. The conclusion most data center managers and facilities managers make is that there is insufficient capacity, so they run more CRAC units.
REMEDY:
Adding more CRAC units when the capacity was already sufficient actually makes the problem worse, especially when using constant volume CRAC units. The CRAC units will operate less efficiently, using more energy to dehumidify the space, which in turn forces the reheat coils and the humidifiers to run concurrently. The solution is to eliminate the humidifiers in all but two units (see item #4 above) and disconnect all reheat coils. An equally important step is to match the load within the space to the capacity available. It is common to see 300% of the needed capacity actually on and operating at any time. Once the air flow management remedies listed in items #1 through #4 above are implemented, the more appropriate capacity that should be operating at any time is 125% to 150%.
PROBLEM 8: The cabinets restrict airflow into the servers contained inside
Sometimes, the data center's worst enemies are the cabinets selected for the space. Legacy data centers often used cabinets with solid glass or panel doors. Even though some breathing holes are provided, they do in fact offer too much resistance to the air flow needed by the computer equipment inside.
REMEDY:
Replace doors with perforated doors of large free area. The larger the free area, the better. This applies to both front and rear doors of the cabinets.
The economy has certainly been tough on all of us these past 12 months. I thought it might be worthwhile to revisit an article we published on DCJ in 2006 concerning technology and its market potential and duration.
We believe that these questions can be easily answered by recalling something learned years ago in Econ 101: the S curve. The basic tenets of the S curve are that 1) all successful products follow a known and predictable path through three stages: Innovation, Growth and Maturity; and 2) that these stages are of equal length.
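A minimal sketch of the S curve, assuming the common logistic form; the article gives no formula or data, so the market size, midpoint and steepness below are purely illustrative.

```python
# Logistic S curve: cumulative adoption over time through the innovation,
# growth and maturity stages. All parameter values are assumptions.
import math

def s_curve(t_years, market_size=100.0, midpoint_year=30.0, steepness=0.15):
    return market_size / (1.0 + math.exp(-steepness * (t_years - midpoint_year)))

for year in (0, 15, 30, 45, 60):
    print(year, round(s_curve(year), 1))
```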
So let's explore the history of the computer and the Internet: events, dates and time frames.
The first electronic computer was developed for the US military and was first put in use in 1945. By today's standards for electronic computers, the ENIAC was a grotesque monster. It had thirty separate units, weighed over thirty tons, used 19,000 vacuum tubes and 1,500 relays, and demanded almost 200,000 watts of electrical power. ENIAC was the prototype from which most other modern computers evolved.
1960: The first commercial computer with a monitor and keyboard was introduced, Digital's PDP-1.
1962: The first personal computer was introduced. It was called LINC and each unit cost over $40,000.
1969: ARPANET was created to link government researchers scattered across the US at universities and research facilities so that they could share data. This was the start of the Internet.
1976: Apple Computer Company was created, and around 1977 the first Apple computer was introduced. It was a kit that the customer assembled. The next year Apple introduced a factory-assembled version. The volume of sales was small and the costs high. The Apple was followed by an almost endless list of me-too computers: Timex Sinclair, Commodore, Tandy, Pet, etc.
1981: IBM introduced The Personal Computer. The IBM name and open architecture and DOS operating system enabled other manufacturers to introduce IBM Compatible PCs, also known as clones.
Then in 1985, Microsoft introduced Windows. Windows moved the PC from text-based commands to point and click. This transformed the PC from a tool for only the most dedicated into something everyone could easily master, and it moved the PC from something many considered a toy to a legitimate business tool. Sales volumes kicked up, competition was fierce and prices dropped dramatically.
By 2001, the PC was readily available, inexpensive, and standard equipment on almost every desk in corporate America: a commodity product with low margins and slow growth. This could be the end of the story, but the growth of another technology would overshadow the development of the PC, push technology into our everyday lives, and give the PC a new lease on life.
As PCs developed, so did ARPANET. The Internet was largely used by IT professionals, researchers, academia and other early adopters of technology. It was slow, text based and difficult to use. In 1994, Jim Clark and Marc Andreessen developed the Netscape browser, and just as Windows had made the PC a practical tool, Netscape made the Internet practical.
There were many other milestones that deserve attention and were perhaps more important than some of the events mentioned here, such as the research performed at Xerox PARC, where modern desktop computing was created: windows, icons, mice, pull-down menus, What You See Is What You Get (WYSIWYG) printing, networked workstations, object-oriented programming and more. What many don't know is that Xerox could have owned the PC revolution but simply couldn't bring itself to disrupt its core business of making copiers.
Why is all of this so important? Well, depending on your starting point, the innovation phase is likely to have been somewhere between 20 and 30 years, and possibly even longer. This isn't an exact science, since we don't know how large the market will ultimately grow or where the curve really starts. No matter how you draw the curve, we are likely below the 50% penetration level and have a long stretch to go.
The dot-com boom was fueled by the release of significant IT resources and talent as Y2K preparations drew to a close, an investment community that recognized the tremendous technology growth ahead, and significant innovation.
The dot-com bust occurred because an overanxious investment community provided too much money too fast. The buying power of the early adopters, the people and companies who want to be on the leading edge and are willing to pay high prices, just wasn't significant enough to absorb all of the innovation. This pushed the supply above the curve. As with all economic imbalances, the market forces a correction.
Further, many dot-com innovations lacked key infrastructure. Just as the automobile could not have been successful without the development of roads, bridges, gas stations, tire dealers, hotels and even fast food, many of the services introduced during the dot-com boom required significant development in other areas.
For example, hosting applications at remote unmanned data centers or collocation facilities is only practical with remote management applications and inexpensive bandwidth. We may take this for granted today, but bandwidth wasn't inexpensive seven years ago, and remote management tools were not as sophisticated as they are today.
Yes, there have been casualties along the way, but significant advancements were made during the dot-com boom, and early adopters have in many cases reaped real benefits. Managed Service Providers, collocation and other services have seen significant growth and success since we first published this article, and if our numbers are correct, they have quite a run to go.
1960
1960 At Cornell University, Frank Rosenblatt builds a computer, the Perceptron, that can learn by trial and error through a neural network.
1960 The Livermore Advance Research Computer (LARC) by Remington Rand is designed for scientific work and uses 60,000 transistors.
1960 In November, DEC introduces the PDP-1, the first commercial computer with a monitor and keyboard input.
1960 Working at Rand Corp., Paul Baran develops the packet-switching principle for data communications.
BEGIN
  FILE F (KIND=REMOTE);
  EBCDIC ARRAY E [0:11];
  REPLACE E BY "HELLO WORLD!";
  WHILE TRUE DO
    BEGIN
      WRITE (F, *, E);
    END;
END.
1960 Standards for Algol 60 are established jointly by American and European computer scientists.
http://www.latec.edu/~acm/HelloWorld.shtml
[Image credits: Digital Equipment Corporation; Rand Corp.]
1962-1963
1962 Atlas, considered the world's most powerful computer, is inaugurated in England on December 7. Its advances include virtual memory and pipelined operations.
1962 The Telstar communications satellite is launched on July 10 and relays the first transatlantic television pictures.
1962 H. Ross Perot founds Electronic Data Systems, which will become the world's largest computer service bureau.
1962 Stanford and Purdue Universities establish the first departments of computer science.
1962 Max V. Mathews leads a Bell Labs team in developing software that can design, store, and edit synthesized music.
1962 The first video game is invented by MIT graduate student Steve Russell. It is soon played in computer labs all over the US.
[Image credit: The Computer Museum]
1963 On the basis of an idea of Alan Turing's, Joseph Weizenbaum at MIT develops a mechanical psychiatrist called Eliza that appears to possess intelligence.
1969-1970
1969 Bell Labs withdraws from Project MAC, which developed Multics, and begins to develop Unix.
1969 The RS-232-C standard is introduced to facilitate data exchange between computers and peripherals.
1969 The US Department of Defense commissions Arpanet for research networking, and the first four nodes become operational at UCLA, UC Santa Barbara, SRI, and the University of Utah.
1970 Winston Royce publishes Managing the Development of Large Software Systems, which outlines the waterfall development method.
1970 Shakey, developed at SRI International, is the first robot to use artificial intelligence to navigate.
[Image credits: The Computer Museum]
1976
1976 IBM develops the ink-jet printer.
1976 The Cray-1 from Cray Research is the first supercomputer with a vectorial architecture.
1976 OnTyme, the first commercial e-mail service, finds a limited market because the installed base of potential users is too small.
1976 Steve Jobs and Steve Wozniak design and build the Apple I, which consists mostly of a circuit board.
1976 Gary Kildall develops the CP/M operating system for 8-bit PCs.
[Image credits: The Computer Museum]
1977
1977 Steve Jobs and Steve Wozniak incorporate Apple Computer on January 3.
1977 The Apple II is announced in the spring and establishes the benchmark for personal computers.
1977 Bill Gates and Paul Allen found Microsoft, setting up shop first in Albuquerque, New Mexico.
[Image credits: Apple Computer, Inc.; Microsoft Archives]
1977 Several companies begin experimenting with fiber-optic cable.
1980-1981
1981 The open-architecture IBM PC is launched in August, signaling to corporate America that desktop computing is going mainstream.
1981 Japan grabs a big piece of the chip market by producing chips with 64 Kbits of memory.
1980 David A. Patterson at UC Berkeley begins using the term reduced-instruction set and, with John Hennessy at Stanford, develops the concept.
1981 Xerox introduces a commercial version of the Alto called the Xerox Star.
1981 Barry Boehm devises Cocomo (Constructive Cost Model), a software cost-estimation model.
[Image credit: IBM Archives]
Source: www.computer.org/computer/timeline/timeline.pdf
Get your free Building Owner's Guide to precision cooling at www.DataAire.com.
For over 40 years, we've been the industry innovator in precision environmental control, specializing in:
• Precision cooling units built to your specifications
• Short lead times
• Advanced control systems
• Ultra-reliable technology
82°: Perfect for a picnic. Fine for a jog. Agony for a computer.
Data Aire, the reliable choice in precision cooling equipment. 714-921-6000