
Dear Readers,

We are proud to introduce the May issue of Enterprise IT Security magazine!
This issue's cover story is related to Cloud Computing security, which has become a hot and important topic lately.
Best practices for using a cloud environment are presented in the article by Chris Poulin on pages 6-8.
It is essential to understand what Cloud Computing really is and how it works. Its main ideas and features are described by Gary S. Miliefsky on pages 10-12.
After familiarizing yourself with those two articles, you can move on to Ajay Porus's article on pages 14-15 to learn how the cloud affects security.
We also recommend the further articles written by professionals in the IT security field. I hope that you will find the articles presented in this issue useful and that you will spend some pleasant time reading Enterprise IT Security magazine.
We put a great effort into this magazine, collecting the best articles for our readers. We hope you appreciate the content of Enterprise IT Security magazine. In order to keep developing, the magazine has to be paid for. The subscription cost is $15 per issue. Everybody interested in the enterprise IT security field will find that our publication is worth the price.
You are welcome to contact us with your feedback.
Enjoy reading!
Best regards,
Łukasz Kośka
Publisher:
Paweł Marciniak
Editor in Chief:
Ewa Dudzic
ewa.dudzic@enterpriseitsecuritymag.com
Managing Editor:
Łukasz Kośka
lukasz.koska@software.com.pl
Art Director:
Marcin Ziółkowski
Graphics & Design Studio
www.gdstudio.pl
Production Director:
Andrzej Kuca
andrzej.kuca@software.com.pl
Marketing:
Kinga Połyńczuk
kinga.polynczuk@software.com.pl
Editorial Office:
Software Press
www.enterpriseitsecuritymag.com
Enterprise Advisory Board:
Rishi Narang, CEH www.wtfuzz.com
Contributing Editors:
Wayne Tufek, William McBorrough
Beta Testers:
Wayne Tufek, William McBorrough,
Manu Zacharia
Proofreaders:
Wayne Tufek, William McBorrough,
Manu Zacharia
Publisher:
Software Press Sp. z o.o. SK
02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
Whilst every effort has been made to ensure the high qual-
ity of the magazine, the editors make no warranty, express
or implied, concerning the results of content usage.
All trade marks presented in the magazine were used only
for informative purposes.
All rights to trade marks presented in the magazine are
reserved by the companies which own them.
DISCLAIMER!
The techniques described in our articles may only be used
in private, local networks. The editors hold no responsibil-
ity for misuse of the presented techniques or consequent
data loss.
Contents: Issue 01/2011
CLOUD COMPUTING
06 Defining Best Practices for IT Security
Within a Cloudy World
by Chris Poulin
Much noise is made about the security infrastructure that cloud providers invest in, but the aphorism "trust, but verify" is warranted. Public cloud providers are looking at it from a traditional perimeter security framework [...].
10 Cloud Computing - Is it Worth the Move?
by Gary S. Miliefsky
[...] does anyone know what the Cloud really is? How does it differ from the Web or the Internet, and why is it so important? Once we have a grasp of what the Cloud is, then we can better understand why it is a Hacker Haven and a Malware Magnet.
14 Cloud Computing
and Its Security Benefits
by Ajay Porus
Cloud computing is a new type of computing, and it differs from normal computing in many ways. According to the definition, cloud computing is described by its five characteristics, four deployment models and three service models.
16 Cloud Security Advancement and Its
Effects on Data-Centric Persistence Layer
by Stephen Mallik
Clouds provide easy access to IT resources, but we have to be aware of the accompanying security baggage. Cloud computing security is typically grouped into three areas: Security and Privacy, Compliance, and Legal or Contractual Issues.
18 Evaluating the Security of Multi-Tenant
Cloud Architecture
by David Rokita
As businesses evaluate opportunities to capitalize on the benefits of cloud computing, it's important to understand how the cloud architecture either increases or decreases vulnerability to potential threats.
22 How Application Intelligence
Solves Five Common Cloud
Computing Problems
by Patrick Sweeney
While cloud-based Web applications can offer business
benefits in certain scenarios, they have the potential to
rob companies of bandwidth, productivity, and confiden-
tial data, and subsequently put them at risk of regula-
tory noncompliance.
24 Cloud Computing Standards
by Justin Pirie
The cloud itself is a competent and established busi-
ness tool that solves a range of security issues and
drives efficiency. However once one delves deeper into
the world of cloud computing, one finds that there are
some issues that still need resolving.
MONITORING & AUDITING
26 Firewall, IPS... What's Next?
by Pavel Minarik
The frequency and sophistication of IT infrastructure attacks are increasing, but, fortunately, network protection tools are rapidly developing too. Firewalls are the first line of protection. It was believed that the second and last line of computer network defense was the Intrusion Prevention System (IPS). But even those are not enough.
IDENTITY MANAGEMENT
30 Privileged Identity Management
Demystified
by Jim Zierick
Privileged Identity refers to any type of user that holds
special permissions within the enterprise systems.
Privileged identities are usually categorized into the fol-
lowing types: Generic/Shared Administrative Accounts,
Privileged Personal Accounts, Application Accounts and
Emergency Accounts.
ATTACKS & RECOVERY
36 Defence-In-Depth: The Onion Approach
by Matthew Pascucci
It is impossible for any industry to be completely secure, but there are ways to mitigate potential risks and breaches to an organization. By adopting a defense-in-depth strategy, you can continue to strive towards being secure. A successful defense-in-depth program consists of three equal legs: People, Technology and Operations.
40 Ready or Not, Industrial Cyber Warfare Comes
by Itzik Kotler
Viruses and malware are nothing new, but Stuxnet, a Microsoft Windows computer worm discovered in July 2010, is not just any virus: it's a cyber weapon, one that pushes the concept of cyber warfare into the realm of the possible.
SECURITY IMPLEMENTATION
42 Enterprise IT Security Management
by the Numbers
by Shayne Champion
Because computer security involves the enterprise's total set of exposures, from the local workstation or server to the intranet and beyond, it cannot be attained by simply implementing a magic-bullet software product solution or by installing a firewall. Computer security must
be implemented by reliable mechanisms that perform
security-related tasks at each of several levels in the
environment. Implementation also involves applying se-
curity procedures and policies at each of these levels.
46 Management of Knowledge
Based Grids
by Siân Louise Haynes & Stilianos Vidalis
Grid computing is a technology that enables people and machines to effectively capture, publish, share and manage resources. The information overload is so great that human beings are not able to analyse the data in a timely manner and extract the much-sought knowledge that will allow us to further science and better our lives.
50 Security Testing
by Mark Lohman
Every organization has to protect its customer information, employee data, and assets. This protection should start with a well-written information security program with supporting procedures. Security testing is the way to confirm that everything is being followed as it should be.
52 Simplifying IT Security Management
by Richard Stiennon
Another phase of rapid change and evolution of our IT environments has recently begun. The two major innovations are cloud computing and an explosion of intelligent devices. Therefore, ensuring that security is maintained has become even more daunting.
MANAGER CENTER
54 Security Challenges Facing
Enterprises in 2011
by Amit Klein
At the end of 2010 there were warning signs of how threats are developing and what the weapons of choice are shaping up to be. Mobile phone security, the blurred perimeter, financial malware, cloud computing security, platform diversification, consumerisation of IT, browser threats to mainframes and social networks seem to be at the top of the list.
TECH CORNER
56 Top 8 Firewall Capabilities for Effective
Application Control
by Patrick Sweeney
Today's leading companies use a Next-Generation
Firewall that can deliver comprehensive intelligence,
control, identification and visualization of all the ap-
plications on their networks. This is effective because
Next-Generation Firewalls can tightly integrate applica-
tion control with other intrusion prevention and malware
protection features.
LET'S TALK
58 YOU As a Password
by Tom Helou
Your palm, your face, and your typing pattern are all
unique characteristics that cannot be replicated, lost
or stolen as more traditional methods of identification
increasingly can. Instead, you become the key to your
data. You become the password.
CLOUD COMPUTING

Defining Best Practices for IT Security Within a Cloudy World

The rise of cloud computing has been one of the highlights of the last few years. The notion that information technology can be delivered in a highly scalable utility model is generating excitement for customers and vendors alike. However, there is still a lot of concern regarding security.
For example, the Eighth Annual Global Information Security Survey, conducted jointly by CSO and CIO magazines and PricewaterhouseCoopers, found that sixty-two percent of respondents have little to no confidence in their ability to secure all the data within the cloud. Even among the 49 percent of respondents that have already deployed cloud computing, more than a third (39 percent) have major qualms about security. The survey, which reached over 12,000 business and technology executives around the world, is one of many similar studies that all highlight the same perceived security concerns around cloud computing.
New technology, old-school security
The biggest fear in embracing a cloud strategy is loss of control. But the extent to which this is a factor depends on whether you're the cloud provider or the consumer. In many cases you'll be both, since virtualization was in the Gartner top 3 for 2010 and is projected to have a strategic impact in 2011 and help pay for IT projects.
If you're a consumer, there is some degree of trust you have to place in the provider's hands. Of course, you should do your research, ensure the provider has implemented the appropriate security and privacy controls with adequate protection for the sensitivity of your data, and have a contract in place to hold them responsible.
Much noise is made about the security infrastructure that cloud providers invest in, but the aphorism "trust, but verify" is warranted. Public cloud providers are looking at it from the old-school perimeter security framework: provide protection around the cloud with firewalls, IDS/IPS, host hardening and other point solutions. These are critical security controls, to be sure, but defense-in-depth security doesn't go far enough in a cloud environment; we can't rely solely on protection mechanisms, no matter how layered.
If nothing else proves the point, cloud computing has blasted a gaping hole in the network perimeter, and the game is now about visibility and security intelligence. Information security in the cloud needs to protect the data as well as the infrastructure. This requires an understanding of the data and its value to the customer's business. Cloud 1.0 isn't taking that into account as market forces drive rapid build-out and early adoption. Cloud providers haven't fully defined a comprehensive and mature public cloud security model.
One central issue is the collocation of customer data, which poses a number of interesting questions. For example, how do cloud providers react when one bad client attracts a criminal investigation?
There is the potential for data to get mixed in with the malefactor's and subjected to scrutiny by law enforcement. What happens if one client is running a service that heightens the risk of attracting an APT or a DoS attack? How do cloud providers understand the context around customer data and provide targeted mitigating controls? There are many difficult questions to answer, but they are not unsolvable.
Always start with a risk assessment
Although the cloud is perceived as a new technology, you could make the case that it's just an evolution of managed services or web application hosting injected with a large dose of Internet. Like any shift, such as the move to the client-server model, under closer examination it turns out the cloud requires the same thought process as any other new technology: organizations have to define the benefits and risks of the cloud, and implement processes and technology to mitigate risk and enforce their chosen risk management policy.
Let's agree on a basic assumption: before you can define an enforceable security policy that spans internal IT, cloud, and hosted environments, you need to understand the details of your data, business processes, and information flow; you have to conduct a risk assessment. Only after careful risk analysis can you define your cloud strategy.
Phase one: Discovery
The first step in adopting the cloud is determining which data stays completely within your control and which data can be housed in the cloud. This process begins with eDiscovery: finding the data you have and where it lives. eDiscovery can be an arduous process and many organizations are shocked by the results: data is often co-mingled and spread far and wide. Personally Identifiable Information (PII), such as employee records, may be stored on servers alongside orders for marketing activities, with security controls inadequate for protecting the more sensitive data; financial records or source code may be stored on laptops without encryption.
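As a rough illustration of what an automated discovery pass can look like, here is a minimal sketch (not any particular eDiscovery product) that walks a file share and flags files containing strings that resemble US Social Security or payment card numbers. The mount point and patterns are assumptions made for the example.

```python
import os
import re

# Hypothetical patterns: SSN-like and 16-digit card-like strings.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_share(root):
    """Walk a file share and report files that appear to contain PII."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip it
            if SSN_RE.search(text) or CARD_RE.search(text):
                findings.append(path)
    return findings

if __name__ == "__main__":
    # "/mnt/fileshare" is a placeholder mount point for this example.
    for hit in scan_share("/mnt/fileshare"):
        print("possible PII:", hit)
```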
Phase two: Classify
Once you know where your data lives, you have to classify it. Your classification scheme could be based on the Unclassified, Confidential, Secret, and Top Secret scheme used by the US DoD, or be as simple as PII and Not PII, or even Green, Yellow, Red. Then organize the data onto systems hosting data of the same classification and configured with the associated security controls. This step alone can improve your organization's security posture and get you much of the way toward meeting many compliance mandates.
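To make the idea concrete, here is a toy sketch of how a simple scheme can drive placement decisions; the labels and handling rules are invented for the example and are not a prescribed standard.

```python
# Hypothetical classification policy: label -> handling requirements.
POLICY = {
    "green":  {"cloud_allowed": True,  "encryption_required": False},
    "yellow": {"cloud_allowed": True,  "encryption_required": True},
    "red":    {"cloud_allowed": False, "encryption_required": True},
}

def placement_for(label):
    """Return where data with this label may live and how it must be protected."""
    rules = POLICY[label.lower()]
    location = "cloud-eligible" if rules["cloud_allowed"] else "on-premises only"
    crypto = "encrypt at rest and in transit" if rules["encryption_required"] else "standard controls"
    return location + "; " + crypto

print(placement_for("yellow"))  # cloud-eligible; encrypt at rest and in transit
print(placement_for("red"))     # on-premises only; encrypt at rest and in transit
```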
Phase three: Data transit
After going through the exercise of classifying your data, it's essential to define a data transit policy to decide which data stays at home and which is cloud-bound. Security Information and Event Management (SIEM) can watch your endpoints, firewalls, DLP solutions, users, content, and network activity to monitor data on its way to the cloud and determine whether it should be allowed.
With firewall and proxy logs or network activity, this may be through simple source/destination combination rules or on-the-fly content identification. For example, you may create a rule that states that PCI servers should not send data to the cloud. With content-aware network profiling, you can also watch for Social Security or credit card numbers on the wire and alert on matches, features typically found in Data Loss Prevention (DLP) solutions. And if you have a purpose-built DLP application, you can get even more granular and feed its results to the SIEM to perform complex correlations against other intelligence feeds, including those built into the SIEM. Through Windows object auditing or endpoint protection software, SIEM can monitor file and directory access and correlate it to firewall logs or network activity. There are plenty of use cases that support security and the cloud, with SIEM as the cornerstone providing the overarching security intelligence.
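Here is a minimal sketch of the kind of source/destination rule and content match described above; a real SIEM or DLP product would express this in its own rule language, and the PCI server addresses and cloud address block used here are placeholders.

```python
import ipaddress

# Hypothetical inventory: hosts classified as PCI, and a provider address block.
PCI_SERVERS = {"10.0.5.10", "10.0.5.11"}
CLOUD_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # example range only

def luhn_ok(digits):
    """Luhn checksum, used to cut down false positives on card-like numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def violates_policy(src_ip, dst_ip, payload=""):
    """Alert when a PCI-classified host sends data toward the cloud range,
    or when a Luhn-valid 16-digit card number appears in the payload."""
    dst = ipaddress.ip_address(dst_ip)
    to_cloud = any(dst in net for net in CLOUD_NETWORKS)
    if src_ip in PCI_SERVERS and to_cloud:
        return True
    digits = "".join(c for c in payload if c.isdigit())
    return len(digits) == 16 and luhn_ok(digits)

print(violates_policy("10.0.5.10", "203.0.113.25"))                        # True: PCI host -> cloud
print(violates_policy("10.0.9.9", "203.0.113.25", "4111 1111 1111 1111"))  # True: card number on the wire
```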
Building in visibility
In a traditional on-premises implementation of IT, organizations can define, implement, and monitor operational and security policies. The flow of information is controlled from end to end. In a cloud environment, security information is more difficult to gather. Even though security in the cloud is mostly opaque to consumers, SIEM can still be used to monitor data in transit and detect policy violations.
If you're a provider, consider building in consumer-facing visibility. This means giving consumers access to the application, identity, and system events relevant to their tenancy and, to the extent possible, infrastructure intelligence from network activity.
Next-generation SIEM solutions will even offer a virtual version of the application profiling capability built for virtual environments. And depending on how you compartmentalize your infrastructure, you can give customers access to view their data and charge for the service.
SIEM as a cloud
Looking at it from a different angle, SIEM itself provides cloud capability. I submit that in medium to large enterprises, SIEM should be managed as an internal security intelligence cloud. The role of a SIEM is to gather vast amounts of data from many internal groups, and even across organizational divides, with different interests:
The firewall management group may feed logs into the SIEM to be alerted on security events such as port scanning across multiple firewalls, which may indicate a low-and-slow attempt to breach the perimeter.
The systems management group may feed Microsoft Windows Active Directory events into the SIEM to be alerted on user login failures signaling a brute-force password attack or an attempt to escalate privileges (see the sketch after this list).
The network management group may feed flow data into the SIEM to detect denial-of-service attacks or troubleshoot asymmetric routing problems.
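As one concrete illustration of the login-failure correlation mentioned in the list above, here is a minimal sketch. It assumes events have already been normalized into dictionaries carrying an account name, a Unix timestamp, and the Windows failed-logon event ID 4625; any real SIEM would do this in its own correlation engine with its own thresholds.

```python
from collections import defaultdict

FAILED_LOGON = 4625   # Windows Security log event ID for a failed logon
WINDOW_SECONDS = 300  # correlation window: five minutes
THRESHOLD = 10        # failures per account before raising an alert

def brute_force_alerts(events):
    """events: iterable of dicts such as
    {"event_id": 4625, "account": "jsmith", "timestamp": 1302000000}.
    Returns the accounts whose failed logons exceed the threshold inside the window."""
    failures = defaultdict(list)
    alerts = set()
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        if ev["event_id"] != FAILED_LOGON:
            continue
        acct = ev["account"]
        cutoff = ev["timestamp"] - WINDOW_SECONDS
        # Keep only the failures that fall inside the sliding window.
        failures[acct] = [t for t in failures[acct] if t >= cutoff] + [ev["timestamp"]]
        if len(failures[acct]) >= THRESHOLD:
            alerts.add(acct)
    return alerts
```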
While many groups feed data into the SIEM, the security or risk management group often manages it. This is analogous to cloud services, which have consumers and a provider. When the two are in separate organizations, there's a clear dividing line between roles and responsibilities. Providers understand that the data belongs to the consumer, their customers, and providers have an obligation to:
Protect the data under their care, employing the widely adopted Confidentiality, Integrity, and Availability model.
Segment data between customers.
Provide appropriate controls to protect customer data from unauthorized access by external entities and by other customers sharing the same cloud.
Avoid accessing customer data for the provider's use or benefit unless specifically allowed by the customer.
Respond to the needs of their customers, such as creating reports or adding new users.
Think of SalesForce.com or Google Mail: they provide the cloud service and have many customers. They must adhere, contractually, to the tenets above. Where this model differs between traditional public cloud services and an internal SIEM cloud is that the SIEM provider generally has an overarching security responsibility that spans the data from all groups. For example, security and risk management need to correlate firewall logs with IPS alerts and network activity to detect threats. The difference is only slight, though, because while Google would not read customer emails, they do provide anti-spam filtering, track usage statistics, and look for intrusion attempts. While privacy advocates may see this as a violation, most consumers are fine with this level of access.
What consumers would not tolerate, however, is if Google
decided to start forwarding customer emails to other custom-
ers out of their cloud. With a SIEM providing total context,
or Security Intelligence, the security and risk management
group may detect security threats, policy violations, or other
actionable incidents that need to be escalated.
When different groups are responsible for different areas
and manage and consume data from different user commu-
nities, the escalation process has to be treated with a level
of diplomacy and maturity.
Just like a public cloud provider, the escalation process
needs to be clearly defined and the procedure must in-
clude involving the data owners. This ensures that a chain
of responsibility is followed and allows the issue to be re-
solved closest to the group responsible for managing the
incident.
The point is SIEM can bridge the gap between security
silos in an organization. There has to be a clear contract be-
tween the operational management function and the SIEM
consumers. The contract has to separate the duties of the
managing entity and prescribe a process for handling inci-
dents and policy violations that empowers the data owners,
just like a cloud provider would be obligated to do. When
managed properly, SIEM as a cloud engenders trust and co-
operation, and ultimately yields a benefit to the SIEM con-
sumers and the business at large.
The SIEM-ready cloud
It may not be a popular message in our industry, but some experts contend that SIEM is not cloud-ready. I disagree. If you can get the telemetry to the SIEM, you can use it to provide effective security visibility. At this stage in the game it's not SIEM that's holding up security intelligence in the cloud; it's that clouds are not yet SIEM-ready. For that, we'll have to wait for Cloud 2.0. In the meantime, secure all the data.
Chris Poulin,
Chief Security Officer,
Q1 Labs
Chris Poulin brings a balance of management experience and technical skills encompassing his 25 years in IT, information security, and software development to his role as Chief Security Officer at Q1 Labs.
As a key member of the company's Security Council, Poulin is responsible for the continual evolution of the QRadar family of solutions to keep pace with emerging security threats, customer needs, and industry trends, as well as evangelizing QRadar to strategic partners and customers.
Prior to joining Q1 Labs in July 2009, Poulin spent eight years in the U.S. Air Force managing global intelligence networks and developing software. He left the Department of Defense to leverage his leadership and technical skills to found and build FireTower, Inc., a successful information security consulting practice.
Cloud Computing: Is It Worth the Move?

Once we have a grasp of what the Cloud is, then we can better understand why it is a Hacker Haven and a Malware Magnet. With this understanding, we will be able to make intelligent judgments about whether this ecosystem is one in which we will shift portions of risk for our own organizations, and how to ensure the risk is as minimal as possible.
When it comes to regulatory compliance, if your cloud provider is not SAS 70 audited regularly (most are NOT), then don't expect them to be responsible for your compliance posture. If there is a breach in the cloud, the bottom line is that it's your responsibility, if you are using Cloud Computing to host servers or services used for your outward-facing business or if you store confidential customer records in the cloud.
I would argue that it increases your risk, and there can be no shift of blame for a successful Cloud attack and breach of confidential data stored in the Cloud. You are ultimately responsible. So before you make the move, let's get a better understanding of what the Cloud is, and then you can decide if it is worth the move.
Cloud Computing is the concept of offloading data storage, software applications and computing resources to one or more remote locations using various internet protocols. The big problem with the Cloud is that you shift risk and lose control to gain flexibility, availability and the cost savings of shared, remote resources. This, of course, opens the doors wide open for hackers, cybercriminals and their malware. I'll give you some ideas on how to deal with this problem later in this article.
For a more in-depth understanding of Cloud Computing, I've decided to use the definition provided by my friends at the National Institute of Standards and Technology (NIST.gov), as it is the best and most comprehensive, so why try to recreate a good thing?
According to NIST, "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
This cloud model promotes availability and is composed of:
A. Five essential characteristics,
B. Three service models, and
C. Four deployment models.
Essential Characteristics:
1. On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider.
2. Broad network access. Capabilities are available over the
network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client plat-
forms (e.g., mobile phones, laptops, and PDAs).
3. Resource pooling. The provider's computing resources
are pooled to serve multiple consumers using a multi-ten-
ant model, with different physical and virtual resources dy-
namically assigned and reassigned according to consum-
er demand. There is a sense of location independence in
that the customer generally has no control or knowledge
over the exact location of the provided resources but may
be able to specify location at a higher level of abstraction
(e.g., country, state, or datacenter). Examples of resources
include storage, processing, memory, network bandwidth,
and virtual machines.
4. Rapid elasticity. Capabilities can be rapidly and elastically
provisioned, in some cases automatically, to quickly scale
out and rapidly released to quickly scale in. To the con-
sumer, the capabilities available for provisioning often ap-
pear to be unlimited and can be purchased in any quantity
at any time.
5. Measured Service. Cloud systems automatically control
and optimize resource use by leveraging a metering capa-
bility at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active
user accounts). Resource usage can be monitored, con-
trolled, and reported providing transparency for both the
provider and consumer of the utilized service.
Service Models:
1. Cloud Software as a Service (SaaS). The capability pro-
vided to the consumer is to use the provider's applications
running on a cloud infrastructure. The applications are ac-
cessible from various client devices through a thin client
interface such as a web browser (e.g., web-based email).
The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating
systems, storage, or even individual application capabili-
ties, with the possible exception of limited user-specific application configuration settings.
2. Cloud Platform as a Service (PaaS). The capability pro-
vided to the consumer is to deploy onto the cloud infra-
structure consumer-created or acquired applications cre-
ated using programming languages and tools supported
by the provider. The consumer does not manage or control
the underlying cloud infrastructure including network, serv-
ers, operating systems, or storage, but has control over the
deployed applications and possibly application hosting en-
vironment confgurations.
3. Cloud Infrastructure as a Service (IaaS). The capability
provided to the consumer is to provision processing, stor-
age, networks, and other fundamental computing resourc-
es where the consumer is able to deploy and run arbitrary
software, which can include operating systems and appli-
cations. The consumer does not manage or control the un-
derlying cloud infrastructure but has control over operat-
ing systems, storage, deployed applications, and possibly
limited control of select networking components (e.g., host
firewalls).
Deployment Models:
1. Private cloud. The cloud infrastructure is operated solely
for an organization. It may be managed by the organization
or a third party and may exist on premise or off premise.
2. Community cloud. The cloud infrastructure is shared by
several organizations and supports a specific community
that has shared concerns (e.g., mission, security require-
ments, policy, and compliance considerations). It may be
managed by the organizations or a third party and may ex-
ist on premise or off premise.
3. Public cloud. The cloud infrastructure is made available to
the general public or a large industry group and is owned
by an organization selling cloud services.
4. Hybrid cloud. The cloud infrastructure is a composition of
two or more clouds (private, community, or public) that re-
main unique entities but are bound together by standard-
ized or proprietary technology that enables data and ap-
plication portability (e.g., cloud bursting for load-balancing
between clouds).
Cloud software takes full advantage of the cloud paradigm by
being service oriented with a focus on statelessness, low cou-
pling, modularity, and semantic interoperability.
Some of the largest software and computing companies in the
world have joined in the excitement and are now offering Cloud
services. Here are a few I'm sure you'll recognize:
Amazon Web Services (AWS),
found at http://aws.amazon.com
Google Cloud Services,
found at http://www.Google.com/Apps/Business
Microsoft Windows Azure Platform,
found at http://www.microsoft.com/windowsazure/
RackSpace,
found at http://www.rackspacecloud.com/
SalesForce,
found at http://www.salesforce.com/platform/
When becoming a Cloud consumer, there are also security ad-
vantages to consider:
1. Reducing internal risk exposure to sensitive data, because it's no longer in your data center, on your server, in your building or on your network; it's offsite and hosted elsewhere.
2. Homogeneous Clouds mean that security audits and vulnerability tests of these Clouds are easier; what works for one private Cloud service for one customer also works for another customer.
3. Cloud computing enables automated security management. Auditing, patch management, virus scanning and deep packet inspection can all be part of an automated security tools portfolio across the entire Cloud service operation.
In other words, if the Cloud security is strong, multiple users
benefit from the common security practices, tools and counter-
measures. You also gain the advantage of redundancy and dis-
aster recovery requirements necessary to deploy a more secure
and stable data warehouse.
However, one cannot simply assume that the Cloud is secure. Do you trust your Cloud vendor's security model, and what are their best practices? Have they been audited? Can you review the results of the audit? How did they respond to audit findings? You will lose physical control over your applications, data, servers and services when you shift them to the Cloud. In addition, if the vendor you choose claims to have a proprietary solution, be it the application or an internal encryption or security methodology, how can you trust the security of their implementation?
So, before you take the chance and move to the Cloud, you should consider which Cloud Computing model will best fit your needs; maybe a Private Cloud is the best answer, because you can maintain a greater level of confidentiality than with a Public Cloud service.
I've found the following areas of risk that you can address
by ensuring you are completely satisfied with the service level
agreement (SLA) of your Cloud Computing service provider:
1. Confidentiality
2. Availability
3. Integrity
4. Reporting
5. Alerting
6. Compliance
7. Policies
8. Quality of Service
If you can get some level of guarantee in these eight areas that meets your own internal self-assessment requirements for best practices in providing uptime or access and the quality you expect, you'll be better positioned to make the right decision on which Cloud Computing service provider is best for you.
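One lightweight way to make that assessment repeatable is to record, per provider, whether each of the eight areas above is actually covered by the SLA. The sketch below is only an illustration of that bookkeeping, with made-up answers for a hypothetical provider.

```python
# The eight risk areas listed above, checked against a hypothetical provider SLA.
SLA_AREAS = [
    "confidentiality", "availability", "integrity", "reporting",
    "alerting", "compliance", "policies", "quality_of_service",
]

provider_sla = {  # example answers only, not a real assessment
    "confidentiality": True, "availability": True, "integrity": True,
    "reporting": False, "alerting": True, "compliance": True,
    "policies": True, "quality_of_service": False,
}

gaps = [area for area in SLA_AREAS if not provider_sla.get(area, False)]
print("SLA gaps to negotiate:", gaps)  # ['reporting', 'quality_of_service']
```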
Here's a quick side story. Netflix launched an amazing video-on-demand streaming service over the internet, following their successful DVD distribution model. For only $8.00 USD per month, you can stream any video you want from a collection of over 6,000 titles (and growing daily). Netflix decided to move this service to Amazon's AWS Cloud Computing service offering because Amazon offered nearly unlimited bandwidth and could help guarantee little to no lag in video streaming quality. This is one of the elasticity features of the AWS offering: the more customers, the more replicated virtual video streaming servers and bandwidth, worldwide.
It seemed great on paper. Then, as Netflix grew, Amazon thought: what a great idea, let's offer our own TV and movie streaming service. So, once Netflix put their service in Amazon's Public Cloud, Amazon was able to study it in detail and build their own similar service. It seems there was no non-compete agreement here.
Later, Comcast, a large internet service provider (ISP) in the USA, was tired of some users sucking up all their bandwidth. When Comcast realized it was Netflix video streaming, they allegedly decided to send out TCP/IP session resets to slow down these videos (causing them to halt in the middle of the best scenes, of course), and they are planning to charge customers who demand these video streams for a higher Quality of Service (QoS).
The battle rages on, but the story is very interesting: competing service providers, one Cloud Computing host, and an ISP, all affecting the Quality of Service (QoS) of the real customer in this story, Netflix. You don't want to become the next Netflix of the Cloud Computing model; learn from this story. Ensure you have non-disclosure agreements (NDAs) in place, get an SLA from your provider that you are comfortable with, and ensure you can frequently test them and review that they are actually providing you the security and compliance posture you need, so that if you ever do have a problem, you have the paperwork to show your own due care and due diligence in protecting the most treasured assets that you've moved to the Cloud.
Conclusion
The Cloud has many benefits, but like all paradigm shifts, it opens up new doors and new possibilities for both increased rewards and risks. If you are certain that the benefits far outweigh the risks, make sure you can back it up with an enforceable agreement from your Cloud Computing service provider.
Remember, it's up to you to document the proper steps for securing your data in the Cloud and complying with regulations, no matter whom you trust. The Cloud Computing provider is an extension of your own IT service offerings to your own organization, so do not hand over the keys to the castle without knowing who you've given them to and how they will guard your most critical and confidential assets once you've moved the data into the Cloud.
Gary S. Miliefsky,
FMDHS, CISSP
Gary S. Miliefsky is a frequent contributor to various publications including Hakin9 Magazine, NetworkWorld, CIO Magazine and SearchCIO. He is the founder and Chief Technology Officer (CTO) of NetClarity, Inc., where he can be found at http://www.netclarity.net. He is a 20+ year information security veteran and computer scientist. He is a member of ISC2.org and a CISSP. Miliefsky is a Founding Member of the US Department of Homeland Security (http://www.DHS.gov), serves on the advisory board of MITRE on the CVE Program (http://CVE.mitre.org) and is a founding Board member of the National Information Security Group (http://www.NAISG.org).
Cloud Computing and Its Security Benefits

Cloud computing is an evolution and collaboration of many different technologies, together moving towards a new type of computing.
Evolution of Cloud Computing
Cloud computing is an evolution and collaboration of many different technologies, together moving towards a new type of computing. There are many differences between cloud computing and normal computing, as it differentiates the cloud owner from the cloud processors, as well as applications and information resources from the infrastructure, and the delivery and processing methods. This all began with the first mainframe architecture by IBM, then client-server architecture, and finally cloud computing, which is now taking the lead in the industry. Cloud computing is predominantly comprised of applications, services, information, infrastructure, operating systems and processes, which are composed of network, storage and computational power. Cloud computing enhances agility, scaling, elasticity, cost-effective computing, utility of computing, availability, resilience and collaboration.
According to the NIST Cloud Computing Definition, cloud computing is described by its five characteristics, four deployment models and three service models.
Characteristics of Cloud Computing
On-demand self-service. A consumer can unilaterally provi-
sion computing capabilities, such as server time and network
storage, as needed, automatically, without requiring human in-
teraction with a service provider.
Broad network access. Capabilities are available over the
network and accessed through standard mechanisms that pro-
mote use by heterogeneous thin or thick client platforms (i.e.,
mobile phones, laptops, and PDAs), as well as, other traditional
or cloud-based software services.
Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources that are dynamically assigned and reassigned according to the consumer's demand.
There is a degree of location independence since the customer
generally has no control or knowledge over the exact location
of the provided resources, but may be able to specify location at
a higher level of abstraction (i.e., country, state, or data center).
Examples of resources include storage, processing, memory,
network bandwidth, and virtual machines. Even private clouds
tend to pool resources between different parts of the same or-
ganization.
Rapid elasticity. Capabilities can be rapidly and elastically
provisioned, in some case automatically, to quickly scale up or
rapidly released to quickly scale down. To the consumer, the ca-
pabilities available for provisioning often appear to be unlimited
and can be purchased in any quantity at any time.
Measured service. Cloud systems automatically control and
optimize resource usage by leveraging a metering capability
at some level of abstraction appropriate to the type of service
(i.e., storage, processing, bandwidth, or active user accounts).
Resource usage can be monitored, controlled, and reported,
providing transparency for both the provider and consumer of
the service.
Cloud Deployment Models:
There are four deployments models
Public Cloud. The cloud infrastructure is made available to
the general public or a large industry group and is owned
by an organization selling cloud services.
Private Cloud. The cloud infrastructure is operated solely
for a single organization. It may be managed by the organ-
ization or a third party, and may exist on-premises or off-
premises.
Community Cloud. The cloud infrastructure is shared by
several organizations and supports a specific community that has shared concerns (i.e., mission, security require-
ments, policy, or compliance considerations). It may be
managed by the organizations or a third party, and may
exist on-premises or off-premises.
Hybrid Cloud. The cloud infrastructure is a composition of
two or more clouds (private, community, or public) that re-
main unique entities, but are bound together by standard-
ized or proprietary technology, which enables data and ap-
plication portability (i.e., cloud bursting for load-balancing
between clouds).
Cloud Service Models:
There are three Cloud Service Models
Cloud Software as a Service (SaaS). The capability provided
to the consumer is to use the provider's applications running
on a cloud infrastructure. The applications are accessible from
various client devices through a thin client interface, such as
a web browser (i.e., web-based email). The consumer does not
manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, or even individual
application capabilities, with the possible exception of limited
user-specific application configuration settings.
Cloud Platform as a Service (PaaS). The capability pro-
vided to the consumer is to deploy onto the cloud infrastructure
consumer-created or acquired applications created using pro-
gramming languages and tools supported by the provider. The
consumer does not manage or control the underlying cloud in-
frastructure including network, servers, operating systems, stor-
age, but has control over the deployed applications and possibly
application hosting environment configurations.
Cloud Infrastructure as a Service (IaaS). The capability
provided to the consumer is to provision processing, storage,
networks, and other fundamental computing resources, where
the consumer is able to deploy and run arbitrary software, which
can include operating systems and applications. The consumer
does not manage or control the underlying cloud infrastructure,
but has control over operating systems, storage, deployed ap-
plications, and possibly limited control of select networking com-
ponents (i.e., host firewalls).
All these factors differentiate normal computing from Cloud Computing, but as every technology has its pros and cons, so does Cloud Computing.
One of the major concerns in adopting or implementing Cloud Computing is security, which includes all aspects, from administrative to legal and from compliance to technological. There are many security issues with Cloud Computing, but considerable research is ongoing on these security issues. Next, we are going to talk about some of the benefits of Cloud Computing.
Cloud Computing Security Benefits. There are many factors in information security: Confidentiality, Integrity, Availability, Privacy and Non-Repudiation of data. There are many benefits of cloud computing; some of them are as follows:
Monetary benefits of cloud computing pertaining to security: Using the best security measures available in the industry can cost considerable amounts of money, which small or medium businesses cannot afford. Large enterprises usually have higher budgets, which allows them to stay compliant with various standards. In the case of cloud computing, the service provider is usually a large enterprise that is required to maintain the highest level of security to protect its customers' data.
Another benefit of cloud computing is its massive scalability, multiple geographic locations, edge networks, and time effectiveness in responding to incidents and attacks and providing countermeasures. Accessing the cloud is very easy, and it can be accessed in various ways and on different platforms, including hand-held devices.
Standardization. Standardization is a good practice for any
industry. Maintaining standard products for employees and a company's customers reduces the overall provisioning and deployment costs. This is a service that Cloud Computing companies can offer to reduce their customers' implementation expenses.
Scalability and elasticity. This is another factor for using
Cloud Computing. During peak times of demand, a company's own resources could become depleted trying to manage the increased load. This could lead to slow program operation, long web site access times, server crashes or buffer overflows. Cloud Computing services can rapidly scale up to meet the increased demand and scale back down once the demand returns to normal.
Auditing and Evidence-Gathering. Auditing is the process of ensuring that controls are in place for security compliance. In Cloud Computing services, centralized logging and auditing can be provided. Forensic images of Virtual Machines (VMs) can also be provided as necessary, at a company's request, while continuing to maintain the operation of the Cloud resources.
Resiliency in the Cloud. Cloud providers use mirroring technologies to protect their customers in case of a computing failure. In the event of a natural disaster or catastrophic failure, mirrored images located in different geographic locations provide the resiliency and capacity to ensure sustainability through the unexpected event.
Centralized Data. Centralized Data is a type of data ware-
housing, where a single data warehouse serves the needs of
several separate businesses simultaneously, or one large cus-
tomer with multiple company locations, using a single data mod-
el that spans the needs of all business divisions. Businesses
that have multiple locations do not have to spread their data
across multiple company-owned servers and attempt to keep
the data in-sync across all servers. Cloud Computing synchro-
nizes the data into a single virtual location so that all company
locations and customers have access to the latest information.
Monitoring Benefits. Cloud Computing providers maintain
24x7 monitoring of their systems and data access. They main-
tain the latest virus protections and firewalls to attempt to keep
the data safe from inappropriate access. The Cloud Comput-
ing clients do not have to add additional expenses into their
budgets to maintain constant security or have personnel avail-
able on a constant basis to respond to these types of security
emergencies.
Password or key security testing. Cloud Computing is about increased computing power and scalability, which makes it a perfect environment for password assurance testing. If an organization wishes to perform password strength testing by employing password strength verification tools, computing power can be increased for the time it takes to crack a password and then decreased when the task has completed. This makes Cloud Computing a perfect environment for password security testing. The same applies to encryption keys as well. But we also need to consider that for all of these activities we need dedicated, non-production computing instances, rather than distributed resources, which can bring sensitive data into that environment.
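A very small sketch of the kind of password-assurance check described above: given a stored hash (unsalted SHA-256 here, purely for brevity) and a wordlist, it reports whether the password falls to a simple dictionary pass. As the article notes, this belongs on dedicated, non-production instances, and real password stores should use salted, slow hashes.

```python
import hashlib

def sha256_hex(text):
    return hashlib.sha256(text.encode()).hexdigest()

def dictionary_check(stored_hash, wordlist):
    """Return the matching word if the stored hash falls to the wordlist, else None."""
    for word in wordlist:
        if sha256_hex(word) == stored_hash:
            return word
    return None

# Example: a weak password is recovered immediately; a stronger one is not.
words = ["password", "letmein", "summer2011", "qwerty"]
print(dictionary_check(sha256_hex("summer2011"), words))   # summer2011
print(dictionary_check(sha256_hex("V$9k!wq7#Lz"), words))  # None
```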
Logging. A company's data center usually uses its server storage for development and production programs and data. Disk space is sometimes limited, as managers approve budgets for company growth according to projected data projects or services. Logging is not usually considered a high priority, and thus disk space for it can end up being limited. When using Cloud Computing, data storage is usually less expensive and clients are charged on a pay-per-use basis. Additional disk storage for essential logs is easier to get approved. Logging of e-mail, business transactions and other audit components is necessary because of various legal requirements to protect this data. Cloud Computing can be used to provide the extra disk storage space necessary to maintain and protect this data.
Secure builds. Cloud Computing also enables a company to use more secure systems by using pre-hardened VM images prior to the installation of company data and programs. This is one of the major benefits of virtualization: keeping the hardened and secured images in a secure location and off of the network. Installing and maintaining secure operating systems across an enterprise can be a daunting task. Testing a system for application compatibility and security can take months to complete. Allowing a Cloud Computing organization to provide this service lowers the cost to a company, since it does not have to provide the resources necessary for penetration testing and checking for security holes to ensure that the operating system is secure. Once a VM image has been proven secure, it can be cloned across many systems for implementation. It is the responsibility of the Cloud Computing corporation to maintain the security of these images and to properly update them for known viruses or security holes. This can usually be performed faster than an enterprise can do it.
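To make the known-good image idea concrete, here is a minimal sketch that compares a VM image file against a manifest of approved SHA-256 digests before it is cloned out; the image name and digest are placeholders, and real platforms typically offer signed or managed image catalogs for this purpose.

```python
import hashlib

# Hypothetical manifest of approved, hardened images and their digests.
APPROVED_IMAGES = {
    "hardened-web-2011.img": "0" * 64,  # placeholder digest for the example
}

def file_sha256(path):
    """Stream the image file and compute its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_clone(path, name):
    """Only allow cloning when the image digest matches the approved manifest."""
    expected = APPROVED_IMAGES.get(name)
    return expected is not None and file_sha256(path) == expected
```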
Ajay Porus
Founder and Director, Cloud Security Alliance Hyderabad Chapter
Lead Implementer, Honeynet Project India
Cloud Security Advancement and Its Effects on Data-Centric Persistence Layer

Cloud computing is substantially changing the IT landscape because of its affordable computing resources, instantly available to anyone. Clouds suddenly provide easy access to any kind of IT resource. All this indicates that this modern delivery model, i.e. cloud-based services, is a reality, and we need to be aware of the accompanying security baggage. Cloud computing security is essentially an evolving sub-domain of information security, which is typically grouped into three areas: Security and Privacy, Compliance, and Legal or Contractual Issues. The recent WikiLeaks retaliation attempt on Amazon.com, and an isolated advanced attack on RSA Systems, which resulted in compromising RSA's SecurID two-factor authentication products, are just the tip of the iceberg.
Thus many universities, government agencies, and information technology advisory and strategy consulting organizations are proactively researching the topic of Cloud Security. Not to forget the much-glorified IBM and Google multi-university project, the Academic Cloud Computing Initiative (ACCI), designed to address unknown challenges of cloud computing. It was quickly followed by Open Cirrus (a massive global test bed), a combined initiative of HP, Intel Corporation and Yahoo, designed to explore all aspects of cloud computing. However, HP's Government Cloud Theatre, an intellectual-property-based, flexible cloud computing demonstration facility, sets the perfect stage for addressing the security concerns and the always-rumored unknown vulnerabilities. Obviously the aim of HP's facility and other similar labs is to confront the universal fears about the security of the cloud.
The cloud has evolved and matured to a point where even smaller companies are now coming up with cloud platforms without many security concerns. This is mostly because server-side security in the cloud is already addressed by the infrastructure providers, who are available for a few dollars per day. Another reason is the advancement and maturity of Service-Oriented Architecture, which is being adopted and welcomed globally. On the browser side of things, though, there is still wide-open vulnerability.
In this article I want to talk about how the cloud's security maturity on the server side (Infrastructure as a Service) has led to technology advancement in Database as a Service (DBaaS). Then again, cloud database services are not just a concept anymore; for example, Amazon RDS, Cloudant, MongoHQ, and Redis To Go provide fully managed MySQL Community Edition, CouchDB, MongoDB, and Redis database services, respectively. If you have noticed, the DBaaS offering trend is mostly toward NoSQL database systems rather than traditional RDBMSs.
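From an application's point of view, consuming one of these managed services amounts to connecting to an endpoint the provider operates. The sketch below assumes the widely used redis-py client and placeholder connection details; it is only an illustration of the pattern, not a reference to any specific provider's setup.

```python
import redis  # third-party client (pip install redis), assumed here for illustration

# Placeholder endpoint and credentials for a hosted Redis service.
r = redis.Redis(host="example-instance.hosted-redis.example.com",
                port=6379, password="REPLACE_ME")

# The application only sees a connection string; provisioning, patching and
# replication of the database itself are the provider's responsibility.
r.set("session:42", "alice")
print(r.get("session:42"))  # b'alice'
```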
With the success of Internet-based companies like Google and Amazon, it is becoming obvious that existing unwieldy RDBMSs are not capable of handling complex, high-traffic sites. As a consequence, today's enterprise-grade RDBMSs have evolved into systems attempting to solve everything for all types of applications, and have thus also become less manageable. In fact, such situations forced most relational database vendors to focus development efforts on improving the ease and efficiency of routine system administration of on-premise databases: management-free cloud database offerings. However, administration is not the only problem, as today's relational database servers (designed in the 1970s) are not intended to run in virtualized data centers that scale horizontally across tens or hundreds of small machines in reaction to varying application workloads. So the bigger internet players were forced to reinvent the database. These companies not only invented newer ways to persist and handle large data, they also open-sourced these technologies. The NoSQL movement was thus literally formed. New technologies available today, such as MapReduce, key-value stores, tuple stores, tabular stores, ordered key-value stores, multivalue databases and so on, are a few examples. Essentially, a new trend is beginning to evolve
where there is a marriage between the NoSQL movement and cloud technology. That's exactly why we see a lot of cloud-based database services and solutions being launched.
Some examples are:
database.com (API-based RDBMS technology for the cloud)
Arrivu (API-based Knowledge Base technology for the cloud)
AWS (API-based RDBMS and NoSQL technology for the cloud)
Google App Engine (MapReduce software framework for the cloud)
Windows Azure Platform (Microsoft cloud platform)
While most have database services as one of their enabling technologies, there are companies offering exclusively cloud-based databases, like database.com, Arrivu and AWS. Since the cost involved in infrastructure setup is minimal, newer players are taking advantage and coming up with true next-generation technologies. For example, Arrivu is a startup which utilizes the concept of a Knowledge Base. It's basically a new kind of database that uses plain English instead of traditional SQL. These new kinds of databases are enabling newer ways to develop web applications.
The Cloud is maturing one layer at a time. We can look at it as three layers: the infrastructure layer, the data persistence layer and the application construction layer. The infrastructure layer can be declared well matured, as we now have API-based cloud infrastructures that can scale automatically. Security is also on its way to maturity; in fact, we have highly configurable options for security in the hardware and software at the infrastructure level. The data persistence layer is still maturing. In my opinion, the security side of things there is no different from that of the infrastructure layer. However, because of the advancement of server-based and SOA security, all layers in the cloud are being exposed using the same tried and tested techniques. The application construction layer is currently all over the place and totally lacks any kind of focus, standard or innovation. We at Arrivu Technologies are leading the way in innovation in the application construction layer. Arrivu lets a user build web applications using plain English. We use Artificial Intelligence to achieve this. Again, the security concerns are the same at this layer.
The thorn here is browser-based security, which is still open to vulnerability. Browser standards are evolving, and with HTML5 we are talking about storage on the client side. It remains to be seen how browser-based security solutions are going to affect server-side security. One thing to note is that once server-side security is compromised, all layers are affected.
The author (creator of Arrivu) is writing from the experience
of creating cloud platform solutions.
Stephen Mallik
Stephen Mallik is a technology evangelist currently focusing on cloud tech-
nologies. He is the visionary and Architect of Arrivu (www.arrivu.com) and
Knowledge Base. He has also created RulesEngine and ProcessEngine (a tool
that combines EAI, BPM & CEP) in the past.
Evaluating the Security of Multi-Tenant Cloud Architecture

As businesses evaluate opportunities to capitalize on the benefits of cloud computing, it's important to understand how the cloud architecture either increases or decreases vulnerability to potential threats.
C
loud computing exists at the forefront of technology mod-
ernization, widely accepted as the obvious path toward
IT efficiency, yet security concerns continue to be a sig-
nificant hurdle for mainstream cloud adoption. Despite those
concerns, the compelling economic and operational benefits
drive more businesses to the cloud every day. As businesses
evaluate opportunities to capitalize on the benefits of cloud computing, it is important to understand how the cloud architecture either increases or decreases vulnerability to potential threats. Although many of these same security concepts can be applied
to on-premises private cloud models, this article focuses on
evaluating the cloud provider controlled security versus busi-
ness user controlled security within a public, multi-tenant cloud
environment.
This article highlights security problems that exist within
a majority of cloud infrastructures as most people know them
today, and introduces an alternative configuration to achieve
a more secure cloud architecture. With the information provided,
IT professionals will have a cursory understanding of the differ-
ence between flat and ideal cloud architectures to better evalu-
ate the achievable level of security. Once businesses know that
the secure three-tier architecture trusted in the physical IT world
can be replicated within the cloud environment, they know what
to demand from cloud providers and can rest assured that cloud
adoption will not change their existing business, infrastructure,
security, or Service Level Agreement (SLA) models in the ways
previously feared.
Scope
When it comes to cloud computing discussions (especially on the topic of security), there is a tendency to unintentionally boil the ocean. To properly frame this discussion, scope must be defined.
fined. This article will discuss security concerns as they apply
to Infrastructure as a Service (IaaS). Specifically it will tackle
IaaS as it is used in public cloud or otherwise shared environ-
ments, but strong parallels can be drawn to dedicated or private
clouds. When using the generic term cloud providers, this article
is referring to providers like Amazon, Rackspace, OpSource and
Figure 1. Flat Cloud Architecture (the Internet, a provider-managed firewall, and a shared provider LAN hosting the virtual machines of Customers 1 through 4)
a multitude of smaller cloud providers that deliver IaaS servic-
es. These providers are not all the same, but they share one or
more of the characteristics discussed in this article. This is the
only place where cloud providers are identified by name.
Current Landscape
In January 2010, Gartner predicted that a fifth of enterprises will hold no IT assets by 2012. Even at the time it was a bold prediction; as of March 2011 it appears mistaken. The majority of companies spent 2010 dabbling in on-premises virtualization or simply watching from the sidelines to see how the market unfolds.
There are many factors inhibiting adoption. One of the most
commonly cited is general concern with the security model of
various cloud providers. Apart from security concerns, many
executives have anxiety about the impacts of cloud computing
on existing business models. Both concerns are real, but they
are not universal. Options exist that provide peace of mind for
enterprise customers.
Architecture Concerns
In March of 2011, Context Information Security Ltd (Context) released a whitepaper titled Assessing Cloud Node Security (http://bit.ly/ezZrXP). It contains the following quote:
In a traditional hosted environment any attacker from the Internet must start at the outer firewall and work their way through... But in the cloud all the systems within the virtualised network reside next to each other.
According to Context, cloud provider networks are flat. Figure 1 de-
picts a flat cloud where the provider manages both the edge of
the cloud and the network in which the virtual machines (VMs)
reside. Essentially, this flat network provides no isolation be-
tween multiple tenants sharing the cloud. There are two major
problems with this approach.
The first problem is that the provider-managed firewall is the
security equivalent of a screen door. The provider must accom-
modate a huge variety of services for a multitude of custom-
ers. There is no way to control whether or not a customer will
be building web servers, ftp servers or private web services
that could span nearly any network port. Furthermore, Layer 7
(application-level) technologies are useless because the pro-
vider has almost no understanding of the underlying services
exposed by the customers.
Another major problem with this configuration involves the lack of isolation between the VMs of different customers (Figure 2). Since there is no isolation between clients, attacks are easily staged from within the same cloud by other tenants (intra-cloud attacks). A potential attacker may not even have to know what he is looking for, because the public nature of this network makes all traffic easily obtainable by others. Even more concerning, if the attacker shares the cloud, an offensive can be launched directly at weaknesses in the customer's virtual machines.
To defend against this, one must install a firewall directly on the
virtual node itself. Unfortunately, this approach does not scale.
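As an illustration of what per-node defense involves, the following sketch (assuming a Linux guest and standard iptables syntax; the node inventory and port policies are invented) generates host-firewall rules for each virtual machine. Every new node and every policy change must be repeated across the whole fleet, which is exactly why the approach does not scale.

# Hypothetical sketch: generating iptables-style host-firewall rules per VM.
NODES = {
    "web-01": {"allow_tcp": [80, 443]},
    "app-01": {"allow_tcp": [8080]},
    "db-01":  {"allow_tcp": [5432]},
}

def rules_for(policy):
    """Return the iptables commands needed to lock down a single node."""
    cmds = [
        "iptables -P INPUT DROP",                      # default deny
        "iptables -A INPUT -i lo -j ACCEPT",           # allow loopback
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for port in policy["allow_tcp"]:
        cmds.append(f"iptables -A INPUT -p tcp --dport {port} -j ACCEPT")
    return cmds

if __name__ == "__main__":
    # Every node must be configured (and re-configured on every change) individually.
    for node, policy in NODES.items():
        print(f"# {node}")
        for cmd in rules_for(policy):
            print(cmd)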
It should be noted here that there are other vectors of at-
tack. There is always concern that attacks could be launched
directly through the hypervisors that host the VMs. Furthermore, the opportunity exists for attacks to be launched from the provider's side of the hypervisor. These concerns are legitimate
and should be weighed when considering any cloud comput-
ing strategy. The threat is acknowledged here, but will not be
discussed in detail.
Support for Custom Appliances
As previously mentioned, another looming security issue concerns the provider's inability to support customer-provided images. The customers must choose from a limited list of pre-configured images that may or may not meet the security demands of the customer. Furthermore, the cloud consumer must trust the cloud provider to properly lock down the virtual image so that it can be used securely. This ties back to the previously stated concerns about isolation between multiple customers.
The only way to secure against this threat is to build a firewall on the device itself, but since this is a raw image, the software must be installed and configured each time a node is created. For environments where the end users are deploying cloud infrastructure, the practice is difficult if not impossible to enforce without direct involvement from the security organization. It is also possible for some operating systems to be compromised even before these security measures can be implemented. This approach is neither scalable nor desirable.
Figure 2. Attack Vectors (a compromised VM on the shared provider LAN can attack the other customers' VMs behind the provider-managed firewall)
Figure 3. Flat User Hierarchies (Organisation 1 is split across User Account 1, User Account 2 and User Account 3; each account is a no-visibility silo managing its own cloud servers)
Flat or Non-Existent User Hierarchies
The untenable nature of how cloud providers handle access
and control of cloud computing environments can be even more
challenging. Most, if not all, cloud providers extend their cloud
services as a function of a single user account. In many cases
there is no chain of custody or ability to empower administra-
tive users to deploy their own resources with limited access and
control. Giving another individual access to the cloud manage-
ment console means giving total access to all user-managed
systems.
Most security administrators will bristle at the thought of losing
audit-trail accountability through generic accounts. To empower
multiple users, multiple accounts are absolutely required; how-
ever, there is no visibility or continuity between these silos in
a flat user hierarchy. Even more vexing is the lack of account-
ability, visibility or control that can be managed from a higher
vantage point. Today, almost all IT resources are provisioned through some form of internal supply chain. Services flow down this chain while usage flows back up. Most cloud computing models today threaten this ecosystem.
Trust is Good, Control is Better
Trust is required to use the cloud: enterprises must trust their cloud provider with their business. This trust has been required
since the advent of the co-location facility and continues on-
ward through cloud computing. Trust is great if and when you
can find it, but control provides peace of mind. If given the op-
tion to trust a service providers firewall configuration or to re-
tain complete control of the firewall product and configuration,
security-minded organizations will choose the latter every time.
Using the co-location facility example, trust was required, but
a customer-managed firewall at the top of the rack was the
rule. Why should cloud computing change this dynamic? Quite
simply, it shouldnt. In fact, cloud computing should not change
any of the following:
The business model
The security model
The architecture model
The service level agreement
It seems straightforward, but to date very few cloud providers
see it that way. Either through technical inability, cost or unwill-
ingness to accommodate multiple configurations, these cloud
providers demand change to one or more of these aforemen-
tioned criteria. Regardless, these cloud models misrepresent
core cloud consumers and cloud deliverables. These models
consider the user to be a single individual and the deliverables
to be virtual machines. The expectations of the enterprise cus-
tomer deviate from this considerably. For the enterprise, the
consumer of cloud is the organization and the deliverable is
a dynamic workspace for creating complex infrastructure.
Meeting Enterprise Security Requirements
Since the beginning of distributed computing, 3-tier architecture
has been the rule. This model should be familiar to nearly all security experts and system architects. It creates a security
model by creating independent computing layers designed to
force intruders to penetrate multiple defense mechanisms be-
fore compromising the data. Since security is never absolute,
these multiple layers create ample opportunity for security ad-
ministrators to detect the attack and neutralize the threat before
it becomes a calamity.
Not All Clouds are Flat
When providing IaaS to the enterprise, it might be surprising to
hear that less security is sometimes better. Essentially, if the
provider firewall services insufficiently support the enterprise
use case, it calls into question whether or not there is any value
at all in providing that service. This is another example of trust
versus control. The enterprise customer wants to control the se-
curity of their cloud computing environment. When securing its
border against the edge of the Internet, trust is not enough.
Sure, the enterprise customer may trust the provider, but the
nature of threats from the Internet can never be completely un-
derstood. Furthermore, the customer wants to be free to act independently from their provider when it comes to securing the
border. A better model is for the service provider to provide an
unconfigured public facing LAN as a service with no inherent
security. At first the concept of no security seems counterintuitive, but if the cloud consumer can install a security device of their own choosing and configuration, it goes much further toward meeting the enterprise security requirement.
In this model, as shown in Figure 4, there are both public and
private networks. It should be noted here that the public LAN is
not shared among other cloud members. It is dedicated to that
customer and defines a single broadcast domain with which no
other customer can interfere. Of course the customer is free to create as many public servers as they wish, but it is a best practice
to secure the border with a single security device and control all access through this device. All other servers are created on one or more private networks that the enterprise controls. This is the way it has always been done and cloud computing should not change this simple practice.
Figure 4. Tiered Cloud Infrastructure (each customer's VMs sit behind their own customer-controlled firewalls and private LAN, with only the firewall layer exposed to the Internet)
Figure 5. n-Tier Architecture (edge firewalls facing the Internet, a DMZ containing web servers and user desktops, business tier firewalls in front of CRM, e-mail and Java application servers, and a data tier firewall in front of the data tier)
When the cloud gives unfettered access to both public and pri-
vate networks under the strict ownership of the customer, n-tier
architecture is possible. Complex network topologies are now
possible and the services can be exposed securely while other
servers are strictly private in nature. Of course a cloud provider
delivering this type of service will have provisions for accessing
the consoles of the completely private servers. A typical setup
in this type of environment will have separate, demilitarized segments for both web-based applications and user desktops. Behind both of these network segments are further access control layers separating these tiers from the business tier.
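As a simple illustration, the sketch below expresses an n-tier segmentation policy of the kind shown in Figure 5 as data and checks whether a given flow is permitted. The tier names follow the figure; the ports and rules themselves are invented for the example.

# Hypothetical n-tier segmentation policy, loosely following Figure 5.
# Only the flows listed here are permitted; everything else is denied.
ALLOWED_FLOWS = {
    ("internet", "dmz"):            {80, 443},   # web traffic into the DMZ
    ("dmz", "business_tier"):       {8080},      # web servers talk to application servers
    ("business_tier", "data_tier"): {5432},      # application servers talk to the database
}

def is_allowed(src_tier: str, dst_tier: str, port: int) -> bool:
    """Return True only if the policy explicitly allows this tier-to-tier flow."""
    return port in ALLOWED_FLOWS.get((src_tier, dst_tier), set())

if __name__ == "__main__":
    print(is_allowed("internet", "dmz", 443))          # True
    print(is_allowed("internet", "data_tier", 5432))   # False: must traverse the tiers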
The advantages of the tiered security model in the cloud are
obvious, but there are additional benefits. Most important is the ability to have complete control over the edge security device. This means that any service can be controlled at whatever level the security model requires. For a service provider to manage a layer 7 firewall device in front of customer applications would be, at best, impractical. Creating layer 7 policies requires intimate knowledge of the services that sit behind them.
As a service provider managing a multi-tenant cloud, this level
of knowledge is just not possible. Also, VPN tunnels and intel-
ligent load-balancing can be implemented using the very same
technologies used in the physical datacenter.
Custom Appliances
The ability to install and manage customer-provided appliances defines another trait of the enterprise-ready cloud. When an enterprise
customer is able to bring their own appliance to the cloud, it can
be pre-hardened to meet the security requirements of the cus-
tomer. It makes even more sense to bring an appliance directly
from the enterprises on-premises datacenter. Since the logical
network can mimic the network topology of the enterprise data-
center, it becomes much simpler to use existing images as is.
Once uploaded to the cloud, the user community can deploy the
images as required by the business. Of course these images
remain private to the enterprise customer that uploads the im-
ages. Because they are pre-built with roles, ACLs, and policies,
direct intervention from the security team may not be required
to implement a server.
Tiered User Hierarchy
As important as the architectural details are to the enterprise
acceptance of cloud, the method in which users access the
cloud can be even more challenging. As stated before, the typi-
cal cloud hierarchy is completely flat and featureless. In fact, the
majority of cloud computing environments tie all infrastructure
to a single user account. From an enterprise perspective this is
simply not adequate.
Enterprises allocate computing resources through internal
supply channels. Sometimes these supply channels are de-
fined by high-level business relationships or compliance require-
ments, and other times those channels are a matrix across IT-managed disciplines. Auditing and delegating control must be possible. A tiered cloud computing user hierarchy supports these relationships. Once complex business relationships are facilitated in the cloud, it becomes possible to align the cloud computing model with the business model.
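A minimal sketch of what a tiered user hierarchy could look like as data, assuming a simple parent/child delegation model. The department and resource names mirror the tiered user hierarchy figure at the end of this article; the permission-resolution rule is illustrative only.

# Hypothetical tiered user hierarchy: groups nest under the enterprise,
# and control over resources is delegated down the tree.
HIERARCHY = {
    "enterprise":         {"parent": None,         "resources": {"users", "servers", "networks", "acls", "roles"}},
    "it_operations":      {"parent": "enterprise", "resources": {"web_servers", "email"}},
    "network_operations": {"parent": "enterprise", "resources": {"routing", "lans"}},
    "security":           {"parent": "enterprise", "resources": {"ids", "firewall", "ldap"}},
    "marketing":          {"parent": "enterprise", "resources": {"project_1", "project_2"}},
}

def owner_of(resource):
    """Find the group that directly owns a resource."""
    for group, info in HIERARCHY.items():
        if resource in info["resources"]:
            return group
    return None

def can_manage(group, resource):
    """A group may manage its own resources and anything owned by groups below it."""
    owner = owner_of(resource)
    while owner is not None:
        if owner == group:
            return True
        owner = HIERARCHY[owner]["parent"]   # walk up the delegation chain
    return False

if __name__ == "__main__":
    print(can_manage("security", "firewall"))    # True: owned directly
    print(can_manage("enterprise", "firewall"))  # True: delegated downward
    print(can_manage("marketing", "firewall"))   # False: a different silo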
Conclusion
Despite the security risks, compelling economic and operational
benefits drive more businesses to the cloud every day. With an
ever increasing pool of cloud vendors touting solution reliability
and efficiency, businesses should look to cloud architecture as
the telltale sign of a more secure multi-tenant cloud offering.
With technology advancing at lightning speed and sales-hype
like blinding fog, understanding the security differences between
flat and ideal multi-tenant cloud architectures provides a solid
foundation for successful cloud computing strategies.
Business executives and IT professionals should be relieved
to learn that cloud architecture today allows the virtual world to
replicate the same three-tier architecture that secures our physi-
cal infrastructure today. In this ideal cloud architecture, busi-
nesses no longer conform to cloud. Instead, the cloud adapts to
accommodate existing business models, infrastructure models,
security models, and Service Level Agreements (SLAs).
Armed with this critical insight, readers need not feel com-
pelled to compromise business standards to gain cloud com-
puting benefits. Know what's possible, know your options, and demand that cloud vendors deliver solutions that best suit the security and organizational requirements of your business. Knowledge is power, and cloud security can be the reward.
David Rokita
With nearly 15 years of experience in IT operations management, system design and architecture, David Rokita provides technical leadership in developing Hexagrid's state-of-the-art cloud computing solutions to match real-world enterprise IT requirements. For Union Pacific Railroad, David participated in managing mission-critical systems deployments, 24x7 critical issue response, and datacenter facilities and services migration between Saint Louis and Omaha. As a Senior Architect for MasterCard Worldwide, David specialized in secure Internet-facing infrastructure, PCI compliance, real-time fraud detection, secure payment systems, and global outsourcing initiatives. David was also the Founding Principal and Architect for Precision Storage Solutions, a firm specializing in design and implementation of data storage, backup, document management, and disaster recovery solutions.
Figure 6. Tiered User Hierarchy (the Enterprise at the top owns Users, Servers, Networks, ACLs and Roles, and delegates separate accounts to the Marketing Department for Project 1 and Project 2, the Security Department for IDS, Firewall and LDAP, IT Operations for Web Servers and E-mail, and Network Operations for Routing and LANs)
CLOUD COMPUTING
How Application Intelligence Solves Five Common Cloud-Computing Problems
Cloud-based computing is on the rise. Over 90 percent of recently surveyed companies expect to be using cloud computing in the next three years.1 Still, securing access to the cloud poses significant challenges for IT departments.
Mission-critical cloud-based business applications (e.g., Salesforce.com, SharePoint, and SAP) are often prime targets for continuous, persistent criminal attack from sophisticated profit-driven and even politically
motivated hackers.
Today's workers also go far beyond traditional applications in
their cloud-based computing. They routinely transfer informa-
tion via personal e-mail accounts such as Yahoo or Gmail;
use peer-to-peer (P2P) applications like LimeWire and BitTor-
rent; download files from Web 2.0 social networking sites such
as Facebook; and stream rich media from YouTube. While
these cloud-based Web applications can offer business benefits
in certain scenarios, they have the potential to rob companies of
bandwidth, productivity, and confidential data, and subsequently
put them at risk of regulatory noncompliance.
Finally, traditional approaches to network security become
less effective in the public cloud. WiFi-enabled laptops and
3G/4G cellular smartphones, dynamic port selection, and traf-
fic encryption have undermined traditional perimeter-based net-
work controls over application access. Moreover, it is critical to
prioritize and manage bandwidth for all of these applications in
order to ensure network throughput and business productivity.
Five Common Problems in Cloud-Computing
These emerging cloud-computing trends present a host of new
security concerns for IT. We've seen five of the most common problems to include:
P2P Traffic. P2P applications can steal bandwidth and introduce malware. These applications can be particularly difficult to control, as developers frequently update new builds specifically designed to evade firewall defenses by alternating port usage.
Streaming Media. Streaming music and video traffic can place a heavy burden on network performance and overwhelm mission-critical application traffic. For example, one IT administrator was perplexed why it took over an hour and a half to download a patch file that should have taken only a few minutes. He could not figure out what was bottlenecking his recently expanded Internet pipe. He then realized it was the first day of the NCAA tournament: a large number of employees had tuned in to online streaming video and audio commentary, killing network throughput and company productivity.
Confidential Data Transmittal. Confidential, sensitive, and proprietary information can be maliciously or unintentionally transmitted over FTP uploads or as email attachments. Job insecurity, whether actual or rumored, can cause employees to download customer, order, and payment histories. One study2 found over half of employees anticipating rumored layoffs had downloaded competitive corporate data.
Third-Party Email. Third-party email presents another channel for potential malware infection and data leakage. Not only can employees and contractors transfer confidential information over corporate SMTP and POP3 email, but also over personal Web mail services such as Hotmail and Gmail.
Large File Transfers. Without effective control, large file transfers, whether over FTP or P2P applications, can bog down network bandwidth.
Applying Application Intelligence to Cloud Com-
puting Scenarios
To resolve these common problems in cloud computing, IT
requires a new approach to security: application intelligence.
Utilizing application intelligence goes beyond the port- and
address-blocking of traditional firewalls to intelligently detect,
categorize, and control application traffic. With application de-
tection, categorization and control, IT can block, restrict, or pri-
oritize any specific application, whether it is SAP, YouTube or
LimeWire.
IT can then effectively apply application intelligence solutions
to each of the five common problems:
P2P Traffic. Because it can detect and categorize traffic by specific application signatures rather than port or address, an application intelligence gateway is especially useful in controlling variable-port P2P applications. For example, a university IT department could have the flexibility and granular control to restrict student access to LimeWire to only 10 percent of available bandwidth, thereby protecting throughput while discouraging non-productive behavior.
Streaming Media. An application intelligence gateway can provide IT with granular control over streaming media and social networking applications. For instance, an administrator might permit members of a predefined Active Directory group for marketing staff to have access to YouTube sites for promotional activities, while restricting access to all others.
Confidential Data Transmittal. IT could create and enforce application intelligence policy to detect and block email attachments carrying a watermark indicating sensitive or proprietary information.
Third-Party Email. Filling a security gap left by most firewalls and email security solutions, IT could use application intelligence to identify, scan, and control any third-party Web mail traffic traversing the gateway, such as Hotmail and Gmail.
Large File Transfers. To restrict excessive-size file transfers, IT could configure application intelligence policy to identify and restrict FTP and P2P file transfers based upon predetermined size limitations. (A simple sketch of such an application-aware policy follows this list.)
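The following is a minimal, hypothetical sketch of how an application-aware policy like the scenarios above might be expressed and enforced in software. The application names come from this article, while the signature matching and actions are simplified placeholders rather than any vendor's actual implementation.

# Hypothetical application-intelligence policy: classify traffic by application
# signature (not port), then block, rate-limit or prioritize it.
POLICY = {
    "LimeWire": {"action": "limit", "bandwidth_pct": 10},    # throttle P2P
    "YouTube":  {"action": "allow_group", "group": "marketing"},
    "Hotmail":  {"action": "scan"},                           # inspect Web mail
    "SAP":      {"action": "prioritize"},
}

def classify(payload: bytes) -> str:
    """Toy signature matching; real gateways use deep packet inspection."""
    signatures = {b"X-Limewire": "LimeWire", b"youtube.com": "YouTube",
                  b"hotmail.com": "Hotmail", b"SAPGUI": "SAP"}
    for sig, app in signatures.items():
        if sig in payload:
            return app
    return "unknown"

def decide(payload: bytes, user_groups: set) -> str:
    """Return the action the gateway should take for this traffic."""
    rule = POLICY.get(classify(payload))
    if rule is None:
        return "allow"                        # default for unclassified traffic
    if rule["action"] == "allow_group":
        return "allow" if rule["group"] in user_groups else "block"
    return rule["action"]

if __name__ == "__main__":
    print(decide(b"GET /watch HTTP/1.1 Host: youtube.com", {"engineering"}))  # block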
Used in combination with traditional firewall features, applica-
tion intelligence can provide greater protection against new and evolving channels for emerging threats. For example, a compromised Facebook page might suggest a friend click a link
to launch a YouTube video, which is actually a link to a mal-
ware file. Because application intelligence can detect this link
and file over the application traffic, it could enable anti-malware
and content filtering policy to prevent the malicious file from
downloading, thereby protecting both the user and the corpo-
rate network.
Conclusion
The growth of cloud-based application traffic has exceeded the
security capabilities of traditional firewalls. Fortunately, new ap-
plication intelligence technology can address the most com-
mon problems that come with these emerging trends. When
deployed effectively on high-performance platforms, application
intelligence gateways offer IT a viable solution for cloud-based
application security.
Patrick Sweeney,
VP of Product Management
Patrick Sweeney has over 18 years of experience in high tech product marketing, product management, corporate marketing and sales development. Currently, Mr. Sweeney is SonicWALL's Vice President of the Network Security Business Unit. Previous positions include Vice President of Worldwide Marketing at Minerva Networks, Senior Manager of Product Marketing & Solutions Marketing for Silicon Graphics Inc., Director of Worldwide Sales & Marketing for Articulate Systems, and Senior Product Line Manager for Apple Computer. Mr. Sweeney holds an MBA from Santa Clara University, CA.
1. The Goldman Sachs Group, Inc. (Jan. 10, 2010). Mapping 2010: Key Tech Trends to Watch
2. Cyber-Ark Software Inc., 2008
CLOUD COMPUTING
Cloud Computing Standards
Justin Pirie, Director of Communities and Content for Mimecast, takes a look at the ongoing debate around cloud standards.
Recent research conducted by Mimecast has found that a large proportion of businesses are now using some form of cloud service, with a further 30 percent planning
on adopting more cloud services in the future. Fashionable new
architectures within the technology industry are not unusual.
However, even allowing for a certain amount of bandwagon
jumping, this rate of cloud adoption has been considerable.
The cloud itself is a competent and established business tool that
solves a range of security issues and drives efficiency. However
once one delves deeper into the world of cloud computing, one finds
that there are some issues that still need resolving. One problem is
that as more organisations turn to the cloud, the need for an effec-
tive set of industry standards is becoming ever more pressing.
There is a clear divide between those who argue for implementa-
tion of cloud standards and those who argue against. At the heart
of this debate is a clear need to balance the benefits of having a standard with the call for a sustained pace of innovation.
The argument against cloud computing standards relies on the premise that standards just aren't necessary. In this sense, industry-wide uniformity and standardisation is seen as something which would stifle innovation and distract focus from more specific problems. According to this train of thought, different providers need to be free to evolve solutions that best fit distinctive domain and customer needs.
The alternative one voice, one system argument sees the lack of standards in the cloud industry as a serious problem. With the industry devoid of any commonly accepted standards, vendors have nothing to hold them to account, and as a result potential and existing customers have little objective information on which to base their buying decisions. A lack of homogeneity can cause a range of issues. For instance, a deficiency of inter-cloud standards means that if people want to move data around, or use multiple clouds, the lack of fluency between vendors creates a communication barrier which is near impossible to overcome. Surely companies should be
able to move their data to whichever cloud provider they want to
work with without being tied in for the foreseeable future?
Another issue is that there is considerable confusion around the
term cloud itself. Among vendors there is a definite trend of cloud
washing whereby less scrupulous companies re-label their products
as cloud computing too readily, overselling the benefits and obscur-
ing the technical deficiencies of what they have to offer.
This cloud washing is in some areas leading to a mistrust of cloud.
Furthermore, with the market becoming increasingly crowded and
no clear standards in place, it is hard for customers to tell the differ-
ence between a cloud vendor with a properly architected delivery
infrastructure and one that has patched it together and is merely
using cloud as a badge. All of this makes it increasingly difficult for
customers to navigate their way through the maze of cloud services
on offer and, of course, it is the customer who should be the priority
throughout these discussions. Moving forwards, there are a range
of bodies that are pursuing some form of resolution to the standardi-
sation debate. However for these organisations to have a genuine
impact on the industry, companies and individuals need to rally be-
hind them and actively support their calls for universal standards.
The first standard that needs to be tackled is security. It's the number one customer concern about transferring data to the cloud and needs to be addressed as soon as possible. The reality is that this concern is mirrored by vendors, who are similarly wary of any potential security breaches and, as a result, in most cases go to extreme lengths to protect their customers' data. In fact, one cloud security firm recently estimated that cloud vendors spend 20 times more on security than an end user organisation would. Security
breaches would inevitably mean the reputation of a company falling
into disrepute and in worst cases mark the end of their business al-
together. Moreover, the creators of any new cloud based technology
do not want to see their project fail for obvious reasons. It is those
vendors that do not apply strict standards to their business that need
to be called into question. An industry standard is the only way to
manage this and good vendors would welcome one because they
have nothing to fear from rules of best practice.
The second standard that needs to be tackled is the Cloud Data
Lifecycle. In previous years, when a customer bought software they
installed it directly on their premises. Therefore if the vendor went
away they could keep running the software until they found an al-
ternative. With an increasing number of people flocking to the cloud,
how can a customer ensure they continue to have access to their
data if the vendor goes out of business? It is for this reason that we
need Data Lifecycle standards because currently the onus is on the
customer to check the financial health of their provider.
The good news for cloud users is that there is light at the end of
the tunnel. The issue of standards is no longer being sidelined but
instead being addressed on a large number of platforms with con-
tributions from some of the industry's top decision-makers and influ-
encers. For most, if not all conversations, it is simply a question of
when, not if, cloud standards are established. However while the de-
bate continues, customers will need to ensure that they are aware
of the dangers and pitfalls associated, albeit rarely, with adopting
a cloud service. Carrying out their own due diligence and research
to ensure that their chosen technology is robust, properly architect-
ed and secure will remain an essential practice until that time.
Mimecast is exhibiting at Infosecurity Europe 2011, the No. 1 industry event in Europe, where information security professionals address the challenges of today whilst preparing for those of tomorrow. Held from 19th to 21st April at Earls Court, London, the event provides an unrivalled free education programme, with exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk
MONITORING & AUDITING
Firewall, IPS... What's Next?
The capabilities of attackers grow every day. The frequency and sophistication of IT infrastructure attacks is increasing, and sensitive company data is constantly threat-
ened. Fortunately, hand in hand with this situation, the network
protection tools are rapidly developing too. Certainly, there's no need to introduce firewalls, which are the first line of protection.
For a long time it was believed that the second and last line of
the computer network defense are IPSs (Intrusion Prevention
Systems). However, even those are not enough to face the new
threats requiring different approaches to protect the IT infra-
structure and sensitive data.
Figure 1. The principle of firewalls and IDS/IPS systems: they work like guards who filter the traffic according to predefined rules and patterns. A firewall checks the packet header only; IDS/IPS looks inside the packet.
This fact has been pointed out since 2007 by Gartner, which defines the best combination of network security tools as follows:
firewall + IPS + NSM/NBA. NSM is an acronym for Network Se-
curity Monitoring and NBA means Network Behavior Analysis.
This trend is also confirmed in a study called Network Behavior
Analysis: Protecting the Predicting and Preventing, published in
November 2009 by the Aberdeen Group. These new technolo-
gies are not only about security, because solutions based on NSM or NBA are also used to detect network misconfigurations and operational problems, or as tools for increasing the overall
IT infrastructure efficiency. In this article, we would like to explain
the basic principles of the NSM/NBA technology and, in particu-
lar, its contribution to the protection and management of data
networks. For completeness we should add that the outputs of
network monitoring offer a wider use, especially in the area of
network infrastructure optimization or application performance.
Products of this category are offered by the world-known manu-
facturers such as HP, Cisco, IBM and CA.
The influence of IT infrastructure in organizations is increasing: it is gradually becoming their nervous system. This entails increasing demands on the extent and quality of IT infrastructure management. Problems are recognized late, and their removal has a serious negative impact on the operations of the organization. An infected computer sending spam will cause the black-listing of a company, and before the situation is clarified (and, thanks to various caches, even a long time after), its e-mail communication is blocked by other servers. People constantly complain about the performance of the network or applications, and the blame flips between suppliers and the IT department. As a consequence, people are kept from their work. Investments in IT infrastructure development do not reflect the actual situation and needs. Difficult and insufficient monitoring of a network also tempts employees to abuse it for personal purposes and, last but not least, leaves the door open for various amateur and professional attackers. Sooner or later, as a network administrator or IT manager, you will face all these problems. Without proper tools, you won't be able to deal with them,
document them or even detect them. An analysis by the independent association Network Security Monitoring Cluster says that an inefficiently managed and maintained network costs a medium-sized company (approximately 250 PCs) $50,000 to $100,000 every year (not counting the costs of IT infrastructure security). Thanks to NSM/NBA technology, more than 50% of these costs can be saved.
For many years SNMP was a synonym for the monitoring and supervision of computer networks. However, this protocol provides only a traffic summary and doesn't really show what is actually happening in the network (what the traffic distribution in time is, who the top users are, etc.).
Figure 2. The principle of SNMP technology: it only provides information about the traffic volume, but the structure and possible failures remain hidden
Flow data monitoring
Flows in a computer network are similar to individual records on a phone call listing. You can find out who talked with whom, when and how long it lasted. The content of the conversation re-
mains hidden. The flow-based monitoring technology is rep-
resented by a number of industry standards the most widely
used are sFlow, NetFlow and IPFIX. The flow is defined as
a sequence of packets with the same five entries: destination/
source address, destination/source port and protocol number.
For each flow the following data is recorded: time of creation,
duration, number of transmitted packets and bytes, and other
information (TCP flags and other header fields of transport
protocols). From now on, we will mostly talk about NetFlow,
which is the most widely used standard developed by Cisco
and supported across other manufacturers (Enterasys, Juni-
per, Nortel, ...). Flow statistics were until recently the domain
of advanced and expensive routers and switches. However,
the usage of the basic IT infrastructure elements for generat-
ing the flow statistics faced a number of barriers and perform-
ance limitations.
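As a rough illustration of the flow concept, the sketch below aggregates packets into flows keyed by the five entries described above. The packet records are made up for the example and no real NetFlow export format is implemented.

from collections import defaultdict

# Each packet is (src_ip, dst_ip, src_port, dst_port, protocol, bytes).
packets = [
    ("10.0.0.5", "192.0.2.10", 51512, 80, "TCP", 1500),
    ("10.0.0.5", "192.0.2.10", 51512, 80, "TCP", 900),
    ("10.0.0.7", "192.0.2.20", 33211, 53, "UDP", 80),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, sport, dport, proto, size in packets:
    key = (src, dst, sport, dport, proto)   # the five-tuple that defines a flow
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size

for key, stats in flows.items():
    print(key, stats)   # e.g. two packets and 2400 bytes for the first flow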
Figure 3. The scheme and principle of NetFlow technology
The barriers and performance limitations are eliminated by the use of specialized equipment: network probes. These
devices are capable of generating NetFlow statistics from any
point in the network. The generated flow statistics are exported
to the collector, where they are stored and prepared for visuali-
zation and analysis, or automatically evaluated. A NetFlow probe is deployed the same way as a firewall or IPS, in the form of an appliance: a server including software that ensures the export of statistics, remote configuration via a web interface, user management, etc. In the case of a hardware-accelerated model, it is also equipped with special hardware based on a field-programmable gate array, which guarantees the processing of packets up to speeds of 10 Gbps. The probes are typically placed at the central node of the network, or at critical points or lines with the highest data transfer. Such a probe is deployed by connecting it to the mirror (SPAN) port of a router or switch, or by direct insertion into the line using an optical or metallic splitter (TAP).
Figure 4. Special hardware based on an FPGA for generating NetFlow data on high-speed networks
Modern collectors for NetFlow data storage and processing always include an application for displaying statistics about
network traffic in graphs and tables with different time resolu-
tions, generating so-called top N statistics, data filtering accord-
ing to the required criteria, user profile creation and implemen-
tation of manual traffic analysis enabling investigation even to
the level of individual flows.
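A toy example of the kind of top N statistic such a collector produces; the flow records and the ranking criterion (total bytes per source) are chosen purely for illustration.

# Hypothetical flow records as a collector might store them:
# (source IP, destination IP, destination port, bytes transferred).
flow_records = [
    ("10.0.0.5", "192.0.2.10", 80, 1_200_000),
    ("10.0.0.5", "198.51.100.9", 443, 300_000),
    ("10.0.0.7", "192.0.2.10", 80, 4_500_000),
    ("10.0.0.9", "203.0.113.3", 21, 9_000_000),
]

def top_talkers(records, n=3):
    """Rank source IPs by total bytes sent, a typical 'top N' collector view."""
    totals = {}
    for src, _dst, _port, size in records:
        totals[src] = totals.get(src, 0) + size
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)[:n]

if __name__ == "__main__":
    for ip, total in top_talkers(flow_records):
        print(f"{ip}: {total} bytes")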
Figure 5. Example of NetFlow visualization (the screenshot comes from
INVEA-TECH FlowMon Collector)
Manually analyzing and processing millions of daily traffic records is certainly not an ideal way to reveal problems in the IT infrastructure. The next step is automatic and autonomous processing and evaluation of NetFlow data, generating alerts (events) triggered by undesirable situations, attacks, configuration problems and anomalies in general. That's the job of Network Behavior Analysis technology.
Network Behavior Analysis
The processing and evaluation of statistics in the network traf-
fic is an endless, repetitive process of searching for patterns
of undesirable network behavior and states, updating behavior profiles and comparing them with the normal state in order to detect anomalies. And what results can we expect? A wide range of issues, such as network scanning, dictionary
attacks, Denial of Service attacks, attacks based on applica-
tion protocols, peer-to-peer network activities, spyware, vi-
ruses or botnets.
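As a simplified, hypothetical illustration of behavior-based detection, the sketch below flags a possible SSH dictionary attack when a single source opens an unusually large number of very short flows to port 22. The threshold and flow data are invented; real NBA products use far richer models.

from collections import Counter

# Hypothetical flow records: (source IP, destination IP, destination port, packets).
flows = [("203.0.113.50", "10.0.0.22", 22, 6)] * 120 + [
    ("10.0.0.5", "192.0.2.10", 443, 900),
    ("10.0.0.7", "10.0.0.22", 22, 4000),   # one legitimate, long SSH session
]

THRESHOLD = 100   # short SSH flows per source before we raise an alert

def detect_ssh_dictionary_attack(flow_records):
    """Count short flows to port 22 per source and flag suspicious sources."""
    short_ssh = Counter(
        src for src, _dst, port, packets in flow_records
        if port == 22 and packets < 20        # failed logins produce tiny flows
    )
    return [src for src, count in short_ssh.items() if count >= THRESHOLD]

if __name__ == "__main__":
    print(detect_ssh_dictionary_attack(flows))   # ['203.0.113.50']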
Figure 6. Detection of an attacker performing a dictionary attack by
analyzing the network behavior
And this is just the beginning: behavior analysis can help detect unwanted applications, misconfigured devices, the usage of anonymization services and top network users, or even identify the source of network latency. An interesting result of the behavioral analysis is the profiles, which are de facto a living configuration database of all active devices in the network. With profiles of behavior, we are able to distinguish between servers and clients in the network and gain an overview of the services that are used and provided. What are the concrete benefits of network behavior analysis for you? Firstly, we should mention more efficient processes, because problems are detected and uncovered before they cause unplanned night shifts, downtimes and resentful users or customers. Another benefit is better protection against new security threats (social engineering attacks from the inside, data leaks, ...), network abuse by staff, the use of unwanted network services or the distribution of illegal software. From an economic perspective, this approach leads to significant cost savings, primarily by reducing the labor intensity of IT infrastructure management, minimizing the costs of dealing with security incidents, and supporting SLA fulfillment on the side of suppliers.
Figure 7. The detection of anomalies based on a change in the network device profile (e.g. a new service occurrence on the server or an increase of communication partners)
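A minimal sketch of the profile-change idea from Figure 7, assuming a device profile is simply the set of service ports the device has been seen providing; the baseline and observed data are invented for illustration.

# Baseline behavior profiles: which service ports each device normally provides.
baseline = {
    "web-server":  {80, 443},
    "mail-server": {25, 143},
}

# Ports observed as being served in the latest monitoring interval.
observed = {
    "web-server":  {80, 443, 3389},   # RDP suddenly appears on a web server
    "mail-server": {25, 143},
}

def profile_anomalies(baseline_profiles, observed_profiles):
    """Report services that appear in the observation but not in the baseline."""
    alerts = {}
    for device, ports in observed_profiles.items():
        new_services = ports - baseline_profiles.get(device, set())
        if new_services:
            alerts[device] = new_services
    return alerts

if __name__ == "__main__":
    print(profile_anomalies(baseline, observed))   # {'web-server': {3389}}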
Finally, the question of success evaluation and comparison to IDS/IPS arises. Each manufacturer, of course, argues that his system has the highest success rate. This can be true in terms of a specific methodology. But by far the most important factor (indeed, every manufacturer adds that there's no guarantee of future results) is the overall concept and philosophy of the specific tool and the number of evaluation methods it applies to the traffic.
In the context of the growth of outsourcing and the Software as a Service model, remote processing and evaluation of traffic statistics can also be used. A prerequisite for such services is the ability to generate statistics about network traffic (NetFlow data). That makes it possible to obtain an on-line solution for detecting undesirable behavior and anomalies without having to install or deploy any specific software or hardware in your network. With the Software as a Service model, virtually everyone can at least try the NBA technology.
Figure 9. The principle of traffic analysis as an online service: statistics are securely delivered to the evaluation center, which provides network administrators with immediate results (the scheme comes from the website www.nethound.eu; the NetHound service is offered by the Czech company AdvaICT)
Conclusion
The deployment of an advanced solution for security monitoring
and network behavior analysis enables organizations to pre-
vent losses due to unavailability of the network, reduce costs
for operations, increase network security, protect investments
into the network infrastructure, improve reliability and network
availability and maximize the satisfaction of their users and customers. These findings are also confirmed by Gartner in its Network
Behavior Analysis Update released in 2007, identifying NSM/
NBA technology as an additional level of data network protec-
tion needed in the 21st century. Aberdeen Group, engaged in
the benchmarking of IT solutions, sees the future (according to
the report issued in November 2009) particularly in the integra-
tion of NBA/NSM outputs with outputs of other systems and
increasing the credibility of detected attacks and anomalies us-
ing the correlation of these outputs. In the United States there are well-known probes from companies such as Endace, nPulse or Napatech. There are many collectors for statistics storage: NetFlow Tracker, Calligare Flow Inspector or plenty of open-source solutions. In the U.S., a widely deployed platform is StealthWatch by Lancope (www.lancope.com), which offers a complete solution including behavioral analysis. In the segment of Internet providers, a well-known product is Arbor Peakflow, designed to detect massive anomalies. Outside the US, the only manufacturers of NBA/NSM solutions are in the Czech Republic: INVEA-TECH and AdvaICT. These companies offer their own solution called FlowMon, which includes a probe, a collector with a variety of extensions and an anomaly detection system plug-in called ADS. More information is available at www.invea.cz and www.advaict.com.
Pavel Minarik
Pavel Minarik received a master's degree in computer science in 2005 at the Faculty of Informatics of Masaryk University in Brno. Currently he works as Chief Technology Officer at AdvaICT. He is the main architect of AdvaICT's ADS (Anomaly Detection System) and its other products. His main focus is network traffic analysis and anomaly detection. He has participated in several research projects (mainly for the U.S. and Czech Army) as a senior researcher at the Institute of Computer Science of Masaryk University. He is a co-author of two technology transfers (2010) from the University and co-author of 7 published research papers in the domain of network behavior analysis (2007-2009).
IDENTITY MANAGEMENT
Privilege Identity Management Demystified
Privileged Identity refers to any type of user or account that holds special or extra permissions within the enterprise systems. Privileged identities are usually categorized into the following types:
Generic/Shared Administrative Accounts. The non-personal accounts that exist in virtually every device or software application. These accounts hold super user privileges and are often shared among IT staff (e.g., the Windows Administrator user, the UNIX root user, and the Oracle SYS account). The most critical accounts on servers, desktops and databases are all accessed with privileged identities.
Privileged Personal Accounts. The powerful accounts that are used by business users and IT personnel. These accounts have a high level of privilege and their use (or misuse) can significantly affect the organization's business (e.g., a CFO user, a DBA user).
Application Accounts. The accounts used by applications to access databases and other applications. These accounts typically have broad access to underlying business information in databases.
Emergency Accounts. Special generic accounts used by the enterprise when elevated privileges are required to fix urgent problems, such as in cases of business continuity or disaster recovery.
Access to these accounts frequently requires managerial approval (e.g., fire-call IDs, break-glass users, etc.). Privileged
identities touch upon virtually every commercial sector. This
is because every enterprise has a critical component in cy-
berspace that is accessible by end users, applications, de-
vices, and accounts within this highly-complex collaborative
ecosystem.
US government and private sector information, once unreach-
able or requiring years of expensive technological or human
asset preparation to obtain, can now be accessed, inventoried,
lost or stolen with comparative ease either by accident or delib-
erately using sophisticated privileged identity attack tools. In an
effort to improve business security, compliance and productiv-
ity, privilege authorization policies must be redesigned and user
permissions more granularly managed.
Traditional Privileged Identity Management (PIM) tools ac-
count for a significant portion of Identity Access Management
(IAM) tools, yet feature a plethora of shortcomings:
Fail to enable desktop users to effectively do their job as a standard user (80% of employees log in with administrator rights);
Fail to control super user access to critical servers, giving users complete and unchecked access (80% of all security breaches are committed by those working within an organization);
Force organizations to choose between productivity and security when implementing a privileged identity management solution.
While these challenges may have been historically acceptable, they are no longer good enough. It is time for businesses to expect more from their privileged identity management solution in order to improve security, compliance and overall productivity.
Age of Authorization
Technology is an ever-changing and evolving aspect of modern
business. Most agree that the use of technology is essential to
achieving many of the milestones critical to business reform.
Identity and Access Management (IAM) governs three significant areas when ensuring proper identity security: authorization, access and authentication.
Authorization
Authorization management is a significant pillar in identity se-
curity, mainly due to the fact that industries are moving from pa-
per to electronic records. Authorization is the process of giving
someone permission to perform certain tasks, or obtain certain
information.
More formally, to authorize is to define permission policies.
For example, human resources staff is normally authorized to
access employee records, and this policy is usually formalized
as permission brokering rules in a computer system. During
operation, the system uses the permission brokering rules to
decide whether permission requests from (authenticated) users
shall be granted or rejected. Resources can include an individual file, a task, or item data.
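A minimal, hypothetical sketch of permission brokering rules of the kind described above; the roles, actions, resources and rule format are illustrative only.

# Hypothetical permission brokering rules: which role may perform which
# action on which resource. Anything not listed is rejected.
PERMISSION_RULES = {
    ("human_resources", "read",  "employee_records"),
    ("human_resources", "write", "employee_records"),
    ("finance",         "read",  "payment_history"),
}

def authorize(role: str, action: str, resource: str) -> bool:
    """Decide whether an (already authenticated) user's request is granted."""
    return (role, action, resource) in PERMISSION_RULES

if __name__ == "__main__":
    print(authorize("human_resources", "read", "employee_records"))  # True
    print(authorize("marketing", "read", "employee_records"))        # False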
Access
Access includes the process of centrally provisioning role-based, time-bound credentials for privileged access to IT assets in order to facilitate administrative tasks. Super User Privilege Management (SUPM) and Shared Account Password Management (SAPM) are two focal points for proper access controls.
Super User Privileged Management (SUPM) &
Shared Account Password Management (SAPM)
When it comes to crashing your enterprise systems, destroying data, deleting or creating accounts and changing passwords, it's not just malicious hackers you need to worry about.
Anyone inside your organization with superuser privileges has the potential to cause similar havoc, either through accidental, intentional or indirect misuse of privileges.
Superusers may well also have access to confidential information and sensitive personal data they have no business looking at, thus breaching regulatory requirements and risking fines.
The trouble is that accounts with superuser privileges, including shared accounts, are necessary: You can't run a corporate IT system without granting some people the privileges to do system-level tasks.
Global leaders appear to be protecting information security from budget cuts but also place it under intensive pressure to perform. PricewaterhouseCoopers
Implementing controls over shared and super user accounts is essential to security & compliance.
This is where SUPM and SAPM methodologies come into play. So what's the best way to manage personal and shared accounts with superuser privileges in a controlled and auditable manner? That was a key question Research Vice President Ant Allan addressed at the Gartner Information Security Summit 2009 in London back in September. When it comes to best practices for managing personal accounts with superuser privileges, Allan recommended creating three types of accounts:
Personal accounts with full, permanent superuser privileges
Personal accounts with full (or restricted) temporary superuser privileges
Personal accounts with limited, temporary superuser privileges
Superuser activity on any of these accounts should be moni-
tored, logged and reconciled, Allan recommended. The first two
types are intended for full-time system administrators, and the
number of these accounts should be minimized.
However, it's important not to make the number too small, Allan warned. Otherwise there might not be enough people available at a given time to take required action when it is needed. It's also prudent to consider limiting the scope of the superuser privileges across the organization's infrastructure by asking yourself: Does a given administrator need to be a superuser on all the systems in the organization?
The third type of account, the one with limited, temporary su-
peruser privileges, is intended for application developers and
database administrators. The superuser privileges of these ac-
counts should be limited to the applications or other areas that
they might reasonably need to access. Allan recommended us-
ing superuser privilege management (SUPM) tools to control
these three account types:
By privilege (e.g., by regulating the commands available)
By scope (by resources or systems, perhaps)
By time (either by providing privileges for a fixed time period or by time windows)
Allan also recommended using shared account password management (SAPM) tools to control these three account types (a simple sketch of such controls follows the list below):
By privilege (e.g., by regulating the commands available)
By form factors (checksum, license code, IP address)
By scope (by resources or systems, perhaps)
By time (either by providing privileges for a fixed time period or by time windows)
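Below is a minimal, hypothetical sketch of how privilege, scope and time constraints like those above might be checked when a superuser command is requested. The grant structure and command list are invented and do not reflect any specific SUPM or SAPM product.

from datetime import datetime, timedelta

# A hypothetical temporary superuser grant, restricted by command, scope and time.
grant = {
    "user": "jsmith",
    "allowed_commands": {"systemctl restart httpd", "tail /var/log/messages"},
    "allowed_hosts": {"web-01", "web-02"},                 # scope restriction
    "valid_until": datetime.now() + timedelta(hours=4),    # time bound
}

def may_execute(grant, user, host, command, now=None):
    """Grant the request only if user, command, host and time all match the grant."""
    now = now or datetime.now()
    return (
        user == grant["user"]
        and command in grant["allowed_commands"]
        and host in grant["allowed_hosts"]
        and now <= grant["valid_until"]
    )

if __name__ == "__main__":
    print(may_execute(grant, "jsmith", "web-01", "systemctl restart httpd"))  # True
    print(may_execute(grant, "jsmith", "db-01", "rm -rf /"))                  # False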
Organizations continue to struggle with excessive user privilege
as it remains the primary attack point for data breaches and un-
authorized transactions. Mark Diodati, Burton Group
Authentication
Authentication is the process of determining whether some-
one or something is, in fact, who or what it is declared to be.
In private and public computer networks (including the Inter-
net), authentication is commonly done through the use of logon
passwords.
Knowledge of the password is assumed to guarantee that
the user is authentic. Each user registers initially (or is regis-
tered by someone else), using an assigned or self-declared password. On each subsequent use, the user must know and
use the previously declared password. The weakness in this
system for transactions that are significant (such as the ex-
change of money) is that passwords can often be stolen, ac-
cidentally revealed, or forgotten. For this reason, Internet busi-
ness and many other transactions require a more stringent
authentication process.
Identifying the Misuse of Privilege
Problems that manifest themselves in an organization due to
misuse of privilege stem from intentional, accidental, or indirect
causes. The number of threats posed to organizations has increased faster than security professionals can effectively address them, opening targeted organizations up to greater risk.
Intentional
Intentional misuse of privilege often stems from insider attacks.
An insider attack is defined as any malicious attack on a corpo-
rate system or network where the intruder is someone who has
been entrusted with authorized access to the network, and also
may have knowledge of the network architecture.
A 2010 CSO Cyber Security Watch Survey published findings that demonstrate the significant risks posed by insider attacks.
Cyber criminals now operate undetected within the very walls erected to keep hackers out. Technologies include rogue devices plugged into corporate networks, polymorphic malware, and key loggers that capture credentials and give criminals privileged authorization while evading detection. In 2008, the White House issued the Cyber Security Policy Review, which profiled systemic loss of U.S. economic value from intellectual property and data theft as high as $1 trillion.
The Computer Security Institute and FBI report states that an
insider attack costs an average of $2.7 million per attack. CSO
magazine cites the following points regarding this threat:
Organizations tend to employ security-based, wall-and-fortress approaches to address the threat of cybercrime, but this is not enough to mitigate the risk
Risk-based approaches hold potentially greater value than traditional security-based, wall-and-fortress approaches
Organizations should understand how they are viewed by cyber criminals in terms of attack vectors, systems of interest, and process vulnerabilities, so they can better protect themselves from attack
Economic hardships spawned by the 2008-2009 recession may generate resentment and financial motivations that can drive internal parties or former employees to crime
International consultancy agency Deloitte stated the survey conducted by CSO magazine reveals a serious lack of awareness and a degree of complacency on the part of IT organizations, and perhaps security officers. Organizations may focus on unsophisticated attacks from hackers or insiders because they are the noisiest and easiest to detect. Yet that focus can overlook stealthier attacks that can produce more serious systemic and monetary impacts.
Accidental
Though difficult for many to admit, humans are fallible. We are
not perfectly consistent in our principles personally or profes-
sionally. Accidental misuse of privileges on desktops and serv-
ers does happen, and it does have a measurable impact on the
organization as a whole. For example, desktop configuration
errors cost companies an average of $120/PC, according to the IDC report, The Relationship between IT Labor Costs and Best Practices for IAM.
In September 2004, HFC Bank, one of the largest banks in the United Kingdom, sent 2,600 customers an e-mail that, due to an internal operator error, exposed recipients' e-mail addresses to everyone on the list. The problem was compounded when out-of-office messages containing home and mobile phone numbers automatically responded to the mailing.
As one famous hacker said, "The weakest link in any network is its people." The most fortified network is still vulnerable if users can be tricked into undermining its security -- for example, by giving away passwords or other confidential data over the phone, or by performing some activity that allows malware to hijack admin rights on desktops. For this reason, user education should be one cornerstone of a corporate site security policy, in addition to privilege authorization management. Make users aware of potential social engineering attacks, the risks involved, and how to respond.
By controlling and auditing superuser access 10% of incidents
can be averted, saving over $113,000 in prevented breaches
annually.
Encourage them to report suspected violations immediately.
In this era of phishing and identity theft, security is a responsi-
bility that every employee must share.
Indirect
Indirect misuse of privileges is when one or more attack types
are launched from a third party computer which has been taken
over remotely. A startling statistic revealed by Gartner is that
67% of all malware detections ever made were detected in
2008. Gartner also estimates managed desktops, or users who
run without admin rights, produce on average a $1,237 savings
per desktop and reduce the amount of IT labor for technical
support by 24%.
The Georgia Tech Information Security Center (GTISC) host-
ed its annual summit on emerging security threats on October
15 and published its annual attack forecast report. According to
their research, the electronic domain will see greater amounts
of malware attacks and various security threats in the coming
year.
Data will continue to be the primary motive behind future cy-
bercrime, whether targeting traditional fixed computing or mo-
bile applications. According to security expert George Heron, "It's all about the data," so he expects data to drive cyber-attacks
for years to come. This motive is woven through all five emerg-
ing threat categories.
Best Practices for Evaluating PIM
While understanding the standard definition of PIM is simple,
privilege identity management can mean very different things
to different business units within an enterprise:
Evaluating PIM – CFO
Chief Financial Officers (CFOs) relate to PIM in financial terms.
From a CFO perspective, authorization most likely impacts the
cost of a company budget (i.e., productivity, security) or costs
incurred due to misuse of privileges (i.e., compliance, fraud).
A primary advantage in directing attention to PIM best practices is the reduction in costs that results from improving the efficiency of handling information and accessing exactly what you need in order to perform your job.
Real risks and potential costs to an enterprise due to poor management of security and authorization will also be of greater meaning to a CFO. Information technology will be the medium of choice for all exploitation of privileges in an enterprise. In fact, the future has already arrived: 67% of all malware detections ever made were detected in 2008, and annual losses from viruses, intrusions, and data breaches are estimated by some entities to be in the millions of dollars.
It is especially important that a CFO understands the risks as-
sociated with unsecured systems due to improper authorization.
Otherwise, management choices may unwittingly jeopardize the company's reputation, proprietary information, and financial results. A CFO does not need to be a security expert, but
understanding the basics behind authorization will lend itself to
implementing best practices.
The Most Important Strategy for Meeting Security Objectives
CEO CFO CIO CISO
Increasing focus on data protection X X
Prioritizing security based on risk X X
Evaluating PIM – CIO
Most CIOs stress the importance of security to senior managers. In order to ensure an enterprise is implementing proper
PIM policies, a CIO should look to ensure the ability to collect
user activity and authorization information from a variety of re-
sources, associate this data with candidate roles and responsi-
bilities, propose alternative roles and leverage decisions made
about the data on an ongoing basis. Without a standardized so-
lution in place, productivity can be impacted as this takes many
resources and time to complete.
Improper PIM practices can lead to serious problems due to
misuse of privileges. Security initiatives have a higher success rate when they are tied to business initiatives and corporate goals. When PIM security is built into business initiatives, funding will come. Likewise, by tying PIM to the corporate security goal (i.e., ensuring the integrity of company data), implementing such policies will show an executive commitment to security.
Evaluating PIM – Administrator
Security policies are the first line of defense to an IT environ-
ment. Without them, an enterprise will be at war. Not only will
there be battles between the different support organizations,
but administrators could be battling hackers (internally or ex-
ternally). There will be no politics behind misuse of privileges, just a raw desire to change, steal, or accidentally destroy
data. Additionally, proper authorization security empowers ad-
ministrators to eliminate the risk of misuse of privilege by no
longer requiring the distribution of administrative rights or root
passwords.
Executives often hand off the responsibility for security to sys-
tems administrators without providing adequate resources to
deploy the authorization controls needed to secure and maintain
privileged access. As with CIOs, demonstrating the tie-in to busi-
ness initiatives and/or corporate goals will help an administrator
meet their objectives as well.
Evaluating PIM – Auditor
Compliance, compliance, compliance. Mandates that require greater privilege authorization control include, but are not limited to, SOX, HIPAA, GLBA, and PCI DSS. Auditors are well aware of the policies that must be in place to comply with federal, state and industry regulations. Non-compliance can result in fines, severe financial losses, data breaches, and damage to a company's reputation. Sound authorization security will help auditors validate corporate compliance. Proper authorization detection and audit-friendly logs to track privilege use help an auditor perform the complex duties of this position.
Audits have become so important that they command board-
level attention. The advantage of using an identity and au-
thorization management tool is that it provides the ability to
log, control, audit and report on which users have privileges
to what information assets. Regulatory and compliance issues
are among the main drivers behind identity and authorization
brokering tools. Organizations require the ability to demon-
strate that account administration and authorization controls
are performing according to policy. A good tool should serve
as the cornerstone of enterprise governance, risk and overall
compliance strategy. Some of the finer points a solution should
deliver are: always knowing who is accessing what; when they
are doing it and if they are authorized; automatic provisioning
of accounts; and integration with enterprise applications; to
name a few.
An auditor is interested in seeing proof of compliance. Most of
these tools create an audit trail that auditors should accept for
a general controls audit and proof of compliance. A basic identity
and authorization management tool should help organizations
comply with most of the challenges that regulations like HIPAA
and PCI DSS place on our organizations.
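To illustrate the kind of audit trail such a tool produces, this small sketch appends one JSON record per privileged action, capturing who did what, on which asset, when, and whether it was authorized. The field names and log format are hypothetical; commercial products define their own schemas and tamper-evidence mechanisms.

import json
from datetime import datetime, timezone

AUDIT_LOG = "privilege_audit.jsonl"  # hypothetical append-only log file

def record_privileged_action(user, command, asset, authorized):
    """Append one audit record per privileged action (who, what, where, when)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "command": command,
        "asset": asset,
        "authorized": authorized,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record_privileged_action("dba_amartin", "ALTER TABLE patients", "hipaa-db01", True)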
In fact, HIPAA may have a new enforcement mechanism
because of the HITECH Act signed into law in February 2009
as part of the American Recovery and Reinvestment Act. The
new law gives government officials more power when enforcing
HIPAA policy, especially when dealing with companies that do
business in multiple states. An identity and authorization man-
agement tool would be the perfect solution for this kind of com-
pany, creating a common reporting framework.
Privileged Access Lifecycle Management
Banks, insurance companies, and other institutions are faced
with the monumental task of managing authorization to mission-
critical systems. These organizations have large numbers of
internal and external users accessing an increasing number of
applications, with each user requiring a different level of secu-
rity and control requirements. In addition, these organizations
must also address identity management concerns that arise
from compliance issues related to regulations like SOX, HIPAA,
GLBA, and PCI DSS.
High administrative costs due to account maintenance,
password resets, inconsistent information, inflexible informa-
tion technology (IT) environments, silos due to mergers and
acquisitions, and aging IT infrastructures make this even more
challenging for organizations. Together, these factors are pro-
pelling the adoption of privileged lifecycle access management
solutions across all industries. Privileged Access Lifecycle
Management (PALM) is a technology architecture framework
consisting of four continual stages running under a centralized
automated platform: access to privileged resources; control of
privileged resources; monitoring of actions taken on privileged
resources; and remediation to revert changes made on privi-
leged IT resources to a known good state.
Access
Access includes the process of centrally provisioning role-based, time-bound credentials for privileged access to IT assets in or-
der to facilitate administrative tasks. The process also includes
automation for approval of access requests and auditing of ac-
cess logs.
Control
Control includes the process of centrally managing role based
permissions for tasks that can be conducted by administrators
once granted access to a privileged IT resource. The process al-
so includes automation for approval of permission requests and
auditing of administrative actions conducted on the system.
Monitor
Monitor includes audit management of logging, recording and
overseeing user activities. This process also includes automat-
ed workflows for event and I/O log reviews and acknowledge-
ments and centralized audit trails for streamlined audit support
and heightened security awareness.
Remediation
Remediation includes the process of refining previously assigned permissions for access and/or control to meet security or compliance objectives, and the capability to centrally roll back system configuration to a previous known acceptable state if required. Automation of the Privileged Access Lifecycle Management includes a central unifying policy platform, coupled with an event review engine, that provides controls for and visibility into each stage of the lifecycle.
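The minimal sketch below strings the four stages together for one credential: it is issued with an expiry (access), checked against a list of permitted actions (control), every decision is recorded (monitor), and it can be revoked to return to a known state (remediation). The class and method names are invented for illustration and do not correspond to any particular PALM product.

from datetime import datetime, timedelta, timezone

class PrivilegedCredential:
    """Toy model of one credential moving through the PALM stages."""

    def __init__(self, user, actions, lifetime_minutes=60):
        self.user = user
        self.actions = set(actions)                      # control: permitted tasks
        self.expires = datetime.now(timezone.utc) + timedelta(minutes=lifetime_minutes)  # access
        self.revoked = False                             # remediation flag
        self.audit = []                                  # monitor: decision trail

    def attempt(self, action):
        allowed = (
            not self.revoked
            and action in self.actions
            and datetime.now(timezone.utc) < self.expires
        )
        self.audit.append((datetime.now(timezone.utc), self.user, action, allowed))
        return allowed

    def revoke(self):
        """Remediation: pull the credential back to a known good state."""
        self.revoked = True

if __name__ == "__main__":
    cred = PrivilegedCredential("ops_lchen", {"restart service", "read logs"}, lifetime_minutes=30)
    print(cred.attempt("restart service"))  # True while valid
    cred.revoke()
    print(cred.attempt("restart service"))  # False after remediation
    print(len(cred.audit), "audited decisions")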
How to cost-justify Privileged Access Lifecycle Management
• Security: Privileged Access is critical for smooth ongoing administration of IT assets. At the same time, it exposes an organization to security risks, especially insider threats.
• Compliance: Privileged Access to critical business systems, if not managed correctly, can introduce significant compliance risks. The ability to provide an audit trail across all stages of Privileged Access Lifecycle Management is critical for compliance, and is often difficult to achieve in large, complex, heterogeneous IT environments.
• Reduced Complexity: Effective Privileged Access Lifecycle Management in large heterogeneous environments
with multiple administrators, managers and auditors can be an immensely challenging task.
• Heterogeneous Coverage: An effective PALM solution supports a broad range of platforms including Windows, UNIX, Linux, AS/400, Active Directory, databases, firewalls, and routers/switches.
Beginning Steps Before Implementing PALM
Set Security as a Corporate Goal
Enterprises may have trouble maintaining security be-
cause everyone is too busy trying to reach other goals. If
you have problems maintaining security in your company,
consider adding security as a goal for every level of man-
agement.
Provide or Enlist in Training as Required
For security to work, everyone needs to know the basic
rules. Once they know the rules, it doesn't hurt to prompt them to follow those rules.
Ensure All Managers Understand Security
It is especially important that all members of management understand the risks associated with unsecured systems. Otherwise, management choices may unwittingly jeopardize the company's reputation, proprietary information, and financial results.
Communicate to Management Clearly
Too often, system administrators complain to their terminals instead of their supervisors. Other times, system administrators find that complaining to their supervisors is remarkably like complaining to their terminals.
If you are a manager, make sure that your people have access to your time and attention. When security issues come up, it is important to pay attention. The first line of defense for your network is strong communication with the people behind your machines.
If you are a system administrator, try to ensure that talking to your immediate manager fixes the problems you see from potential or realized misuse of privilege. If it doesn't, you should be confident enough to reach higher in the management chain to alert for action.
Delineate Cross-Organizational Security Support
If your company has a security group and a system administration group, the organization needs to clearly define their roles and responsibilities. For example, are the system administrators responsible for configuring the systems? Is the security group responsible for reporting non-compliance? If no one is officially responsible, nothing will get done, and accountability for resulting problems will many times be shouldered by the non-offending party.
Freeware vs. Licensed Software
To be able to understand pros and cons of Open Source soft-
ware, one must first understand the philosophy in which it is
rooted. Suppose for a moment you're a student reading a physics book which explains the Theory of Relativity.
Now, you are able to read the book, use the notorious formula E=mc² to solve all of your exercises and, if you're a particularly brilliant student, why not, even start from there to come up with a new formula leading to a new scientific discovery.
In other words, the scientific knowledge is in the public domain, free for everybody to use, modify and redistribute - you don't have to pay a royalty to Einstein's nephew every time you solve a difficult physics exercise or daydream about time-space travel.
In this sense, freeware may be regarded as an attempt at
making the world of technology much more similar to that of
science, particularly in the field of computer software. Any software distributed with an open source license grants everybody the rights to disassemble, rebuild, manipulate and personalize the product, making it possible to understand its inner mechanisms and adapt the product to the user's needs.
However, using open source software for IT security purposes is generally discouraged simply because the entire source code being available for free download would usually make it much easier for a malicious user to find an exploitable bug in the program in order to bypass all protections.
Using proprietary software in this case, by contrast, tends to make things more difficult for the potential attacker, as he would have to reverse-engineer a large part of the program in order to achieve the same result.
Sure, open source software may be free, but the propellerheads you need to actually get it working, customized, and supported aren't.
Spending time customizing a software product just because it's open source doesn't mean that time is well spent. Business owners should stick to the boring, off-the-shelf stuff for now.
Conclusion
Let's face it: organizations cannot simply build walls to protect vital information anymore. However, in the process of adapting to this new virtual collaborative environment comes the enormous challenge of ensuring that privileged access to critical information is not misused. Walls that may have worked a decade ago are now practically irrelevant, as users seek ways around, over, or under these obstructions because they interfere with their main job duties. As we move forward in this evolving era, it's important to develop an awareness of how to protect our resources, whatever they may be, using boundaries to guide us, not walls.
Having a well-defined awareness of boundaries enables end users and applications to communicate freely within an IT environment without worry of intentional, accidental or indirect misuse of privilege. Boundaries allow a more productive and compliant dialogue to take place between users and the IT department and proactively deter attempts at misuse. If boundaries are respected, then IT remains in control of security, compliance and productivity, and has the authority to take proactive steps to protect the enterprise.
Privileged identity management is critical for business systems and, if not managed correctly, can introduce significant compliance risks. Privileged authorization is critical for smooth ongoing administration of IT assets. At the same time, it exposes an organization to security risks, especially insider threats.
JIM ZIERICK
Mr. Zierick brings more than 25 years of enterprise experience building technology companies in operations and sales to BeyondTrust, where he will be responsible for directing the company's global initiatives to drive product growth and technical thought leadership in the Privilege Identity Management market, as well as adjacent markets. He will be responsible for development methodology, process and management for the entire BeyondTrust product suite.
ATTACKS & RECOVERY
36
1/2011
A successful defense-in-depth program consists of three equal legs: People, Technology and Operations. These legs should be treated equally; without proper balance, you could leave a potential gap in your security posture. Certain legs, like technology, might take longer to implement
and maintain, but it does not mean that it is any more important
than the other two. These legs need to walk in stride in order to
arrive at your goal, and that goal is to protect your data. A company's data is what hackers are after, either to steal, change,
deny or delete. The data is what a defense-in-depth initiative is
aimed to protect.
The taxonomy of a hacker could fall into multiple categories
with numerous motivations. The individuals attacking you could
be in the form of a nation or government, cyber-crime gang,
competitor, malicious insider or a kid in his mother's basement.
They could be looking for financial data to sell, government se-
crets to exploit, intellectual property, or to embarrass and black-
mail. At this point it does not matter who or why they are at-
tacking, but to verify that your data and systems are safe from
their assault.
A defense-in-depth program cannot be successful or protect
you from attack without the CIA triad of confidentiality, integ-
rity and availability. These three roles are at the heart of what
this program is attempting to achieve, without this framework
the three legs cannot move. Included with the CIA framework
a successful program needs to have the ability to alert, protect,
react and report on the attacks that have occurred.
There have been many analogies of an onion when describ-
ing a defense-in-depth strategy, and I think this is a great way
of thinking about it. You have to peel back multiple layers of the
onion until you reach the core, and this is exactly what we are
trying to achieve here with defense-in-depth. We want some-
one to pass through multiple layers of security before they get
to the core of our data. You need to have fail safes in place at
other layers to catch the intruder if they happen to get through
a previous layer. The Onion Approach is how you apply a suc-
cessful defense-in-depth program. This makes sure that you are
not relying on one leg of the program too heavily and creates
a robust approach to securing your data.
Now let's take a closer look at each one of the three legs that
make up our defense onion.
The People Layer
Many times people fail to realize that people are actually the
problem. People are the ones that are hacking into your sys-
tems, people are the ones that are accidentally or purposefully
releasing confidential data outside your network, people are the
ones that are not configuring the systems properly and leaving
security holes in your network. Long story short, people are the
problem. But there is a bright side, people are also the solution.
We need to be working this leg of our defense-in-depth program
to our advantage. This part of the program is more focused on
the tangible aspect of securing data, and this is not just by lock-
ing a server in a closet.
The security of your data is directly related to the caliber of
people that are working at your company. Doing proper background checks, credit checks and drug screening allows employers to become more selective with their on-boarding process. This does not mean that an employee with a clean record
is not a threat; it just means that the organization is performing
its due-diligence to protect itself against a potential unwanted
situation. Hiring good people and rewarding them appropriately
allows for people to become more involved and protective of the
company they work for. On the flip side, if there is inappropriate
behavior, an employee should be disciplined appropriately to
enforce the company's policies & procedures, and establish
authority.
Physical security is another area that falls into The People
Layer sector of our program. Having the appropriate levels of
security on your physical perimeter deters a malicious person from gaining physical access into your facility. Equipping entrances with swipe cards and biometric access will stop the majority of people from entering a physical location;
having pictured badges also helps identify intruders. Another
form of physical security is having cameras setup on key parts
of the building recording information that can be reviewed at
a later time. Being able to rewind the recording to a particular
point in time is key to this security, as well as having an alarm
system that can alert the proper authorities when needed or
having an on-site security guard. Lastly, having physical locks
on file cabinets and server racks to protect the physical data
from being stolen is a form of security that should not be over-
looked. How many companies have a third party cleaning crew
with full access to the building? As we mentioned in the begin-
ning of this article, we need a way to have the ability to alert,
protect, and react on incidents. By having an alarm system in
place, we are able to alert on a physical breach of the perimeter; by having locks and access cards with appropriate permissions, we are able to protect our physical infrastructure and data; and by hiring security guards and installing camera systems, we are able to react to incidents in real time.
Another role of The People Layer is user training and aware-
ness. People are not secure by nature and will indubitably make
errors and mistakes throughout their time in the company. This
is human nature and there is no way to stop this issue with 100%
certainty, but there is a way to mitigate some of the pain. By
having a successful security awareness program that focuses
on the do's and don'ts of information security, you can keep a user from making a potentially dangerous decision. Having an
educated user population might be the greatest form of secu-
rity that a company can bolster in its information security arse-
nal. This can be accomplished with in-person training, monthly
newsletters, signs and handouts, etc. The main focus is to have
users understand that it is part of their responsibility to assist with
keeping the company secure.
The last responsibility of our People Layer is the policy and
procedure layer. This is the glue of the previous People Layer components, and of many other layers to follow. Having secure policies and procedures in place to let people know how to act and achieve certain tasks is paramount. Examples of this would be a policy on
separation of duties within departments to segregate data from
being treated insecurely, or having a third party vendor policy
to review all external vendors before doing business with them.
Without this layer the other layers will tend to act independently
of each other. The policy and procedure layer ends up being
the laws of the layers and will keep the people, technology and
operations on a path towards security.
The Technology Layer
There is a wide breadth of information security technologies
available to assist with securing your systems, network, and
applications. This can be the most difficult layer to implement
due to the large amount of planning, implementation and inte-
gration with other areas of technology. The Technology Layer
is here primarily to alert, protect, react and report on cyber-
security related incidents. This layer heavily bleeds into both
the people and operations layers, since both of those layers rely on the technology.
Before technology can actually assist with the protection of
data it needs to be researched, reviewed and implemented. If
there is a need for a technology to help secure a particular part
of the organization, there should be thorough investigation into
the products that are on the market that can assist with filling
this need. Third party reviews by reputable organizations should
take place to verify that a niche or fly-by-night product is not se-
lected. After choosing a handful of potential vendors that can fit
the bill, a Request for Proposal (RFP) should be created with
the needs that you are looking to fill with their products. Once
the vendors return these RFPs a choice can be determined
based on their input and you can start reviewing their product.
A proof-of-concept should be determined in the RFP to review
the product before purchase. With this phase you will decide if
this technology will actually perform the way it was intended to
in your environment. During this phase the technology should
be placed in a staging environment and tested to make sure
that it protects and works as desired; it is here that you need to determine if this technology will actually improve your security
posture. If you cannot get the desired effects or resources to
run the technology, it will never run properly and could poten-
tially give you a false sense of security, leaving the company
vulnerable to attack. If after you run through these steps and
you determine that you have the right technology, it does what
it is intended to do, and you have the resources to run it, then it's
time to purchase and implement the system. Before implemen-
tation can begin a detailed architecture discussion should begin
with updated network drawings, resources and project plans.
Preparing for an implementation is important, because missing
details could lead to a misconfigured product or cause other ar-
eas of the network to fail. There needs to be a diverse layered
approach of where the technology is implemented throughout
the organization. This is mainly to resist attack by having many fail-safes installed within the architecture. There are 3 ma-
jor locations that technology can be implemented for security in
your infrastructure: border systems, externally facing systems,
and internally facing systems.
Border systems are the first line of defense from outsiders
trying to gain unauthorized access to your data or cause the
organization harm. The technology that many people think of if
you say the word security is firewall. Firewalls should be stood
up on the boundary of your networks with only the necessary
ports open for access. Locking down the policies on your fire-
wall to allow only traffic that is needed and deny all that is not is
a recommended standard. The firewall in many cases is one of
the first lines of defense in your network, and limiting what can get through the firewall will help take pressure off the tech-
nology layer beneath it. Firewalls do not have to be used on just
the outside of your network, and should be in place to separate
any major segments of the architecture. The next layer of tech-
nology that would be hit on the external network would be an
Intrusion Prevention System (IPS). Depending on your network
architecture you could actually have an IPS positioned before
the firewall, behind the firewall, or implemented within the fire-
wall. In our case we are going to have it implemented behind it
to watch the flow of traffic come through. An IPS in our situation
blocks malicious traffic that made it past the firewall, because it
acts on intelligence and scans traffic for signatures and known
intrusion attempts at a higher OSI layer. The IPS and firewall
walk hand-in-hand as the first real defense against malicious at-
tacks coming inbound towards your company; they could also
be used to catch malicious traffic leaving your network. These
two technologies complement each other by securing traffic
from different standpoints.
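As a toy model of the default-deny stance described above, the sketch below allows traffic only if it matches an explicit rule and drops everything else. The rule set and packet fields are invented for illustration; real firewalls express this in their own rule languages rather than in application code.

# Illustrative allow-list: (destination port, protocol) pairs that are explicitly needed.
ALLOW_RULES = {
    (443, "tcp"),  # HTTPS to the web DMZ
    (25, "tcp"),   # SMTP to the mail relay
}

def filter_packet(dst_port, protocol):
    """Default deny: permit only traffic matching an explicit allow rule."""
    if (dst_port, protocol.lower()) in ALLOW_RULES:
        return "ALLOW"
    return "DENY"

if __name__ == "__main__":
    print(filter_packet(443, "tcp"))   # ALLOW
    print(filter_packet(3389, "tcp"))  # DENY - not explicitly needed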
Even with a firewall and IPS installed and configured properly,
it is still very possible for an intruder to get past this technology.
It is for this reason that an organization should be concerned
with their externally facing systems and install a demilitarized
zone (DMZ) within their network. A DMZ is a network segment that exposes public-facing systems for outside use. An example of this
would be a web DMZ, where all publicly accessible applications
are available over the Internet. Even though they are behind the
firewall and IPS they are still considered a high risk and should
have proper procedures and polices created on how to harden
these systems from attack by intruders. Systems within the DMZ
should not have direct access back into your internal network
and should, for the most part, be segregated for the company's protection. The majority of systems that are publicly available on
the DMZ are web applications. These systems are on the DMZ
for the purpose of being accessed by people outside of the com-
pany. The use of encryption to access personal data on these
systems should be used to provide confidentiality and integrity
of the transmission. This will secure the flow of data from the
company's systems to the intended recipient without the risk of
it being monitored by an attacker. This is for the protection of
data into your network by an intruder who was looking to steal
or cause damage within your systems, but what if the purpose of
an attacker was to knock your presence off the Internet? Denial-
of-Service attacks are directed toward a company's services to prevent them from performing their intended service. When these attacks are sourced from multiple locations, it's considered
a Distributed Denial-of-Service and can effectively take down a
company from serving up web content, e-mail, or anything that
involves the Internet. Having a DDOS service established to
either scrub or absorb the overload of malicious traffic is your
best bet in attempting to mitigate this type of malicious traffic
from overloading your systems and bandwidth. Even with all this
technology in place an intruder could still gain access to your
internal systems, and you have to be as diligent with protecting
your internal systems as you are with your external ones.
With a defense in depth framework, you cannot rely on your
perimeter technologies to protect your internal systems; espe-
cially when the threat is occurring from the inside. Internal sys-
tems historically are not as hardened as their external counter-
parts, for the simple reason being that they are there to serve
up data or access for internal use with a certain amount of trust.
This does not mean that the security on these systems should
be lax, and they should be treated with the same security mind-
set as a system in your DMZ. There are systems and standards
that should be followed to catch either internal abuse or external
attacks that have slipped past your first layer. In order to secure
your LAN, there are network-based and node-level technologies that can assist as you continue to focus your security efforts down to the node level.
The protection of the internal systems starts with the access to
the systems themselves. Applying access control to networks and hosts will limit the ability for attackers to easily gain control
or at least slow them down. Utilizing an identity management
system helps with the monitoring of users and assists with the
accidental permission creep that a user can accumulate over
time. Using a centralized authentication system to limit the need for multiple local user accounts to manage equipment is also recommended. Access to a system is an integral
part of security, and this should be closely guarded.
From here the next layer down is implementing a Network Ac-
cess Control (NAC) across your LAN which will limit the misuse
of nodes gaining access to your network. This system forces
your guidelines and standards to nodes connecting to your net-
work before they are allowed to connect internally. This stops
rouge PCs and insecure nodes from causing damage before
they are able to connect to your data or internal systems.
Installing a Security Incident and Event Management (SIEM)
on your network is a technology that touches all other systems in an attempt to pull needed information from each. This sys-
tem essentially can collect logs from all your devices and have
custom correlated rules to look for suspicious and known mali-
cious traffic. A SIEM is constantly searching logs from your sys-
tems for hostile activity, and when done correctly will be a major
layer of defense between the other technology layers.
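To show what a simple correlation rule might look like, the hedged sketch below counts failed logins per source address inside a sliding time window and raises an alert above a threshold. The event format, window, and threshold are assumptions; commercial SIEMs ship far richer correlation engines.

from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed correlation window
THRESHOLD = 5                   # assumed failed-login threshold

failures = defaultdict(deque)   # source IP -> timestamps of failed logins

def ingest(event):
    """event is a dict like {'time': datetime, 'type': 'login_failure', 'src': '10.0.0.5'}."""
    if event["type"] != "login_failure":
        return None
    queue = failures[event["src"]]
    queue.append(event["time"])
    # Drop failures that fell out of the correlation window.
    while queue and event["time"] - queue[0] > WINDOW:
        queue.popleft()
    if len(queue) >= THRESHOLD:
        return f"ALERT: {len(queue)} failed logins from {event['src']} within {WINDOW}"
    return None

if __name__ == "__main__":
    base = datetime(2011, 1, 10, 9, 0)
    for i in range(6):
        alert = ingest({"time": base + timedelta(seconds=30 * i),
                        "type": "login_failure", "src": "10.0.0.5"})
        if alert:
            print(alert)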
Another technology that is designed for internal use is Data
Leakage Prevention (DLP). With the biggest threat to your net-
work being inside abuse, DLP is one of the only tools that is going to detect and protect against this type of attack. If a user has permission to a file, no other system is going to stop them from accessing it. By utilizing DLP and seeing who, what and where your data is going, you gain a step up on protecting your information and can potentially stop insider abuse.
Web content filtering is a technology that allows for organiza-
tions to block unwanted web traffic from being accessed. This
also helps with data loss, but can assist with protecting your
user population from accessing known malicious web content.
This system is in place to proactively stop threats from coming
down into your LAN.
There are also preventions that can be placed on single sys-
tems like anti-virus and encryption that are aimed to protect
the individual node. Encryption technologies like tokens and
full disk encryption are aimed to keep the confidentiality and
integrity of the data either at rest or in transit, while anti-virus/
anti-spyware/anti-malware are aimed towards keeping the node
clean from malicious software running on the system. These
technologies are really focused to prevent unauthorized activ-
ity at the node level.
Utilizing security technologies on both your internal and external systems that provide countermeasures and unique obstacles for intruders to pass will be most successful in securing the organization. Each technology adds another hoop for a hacker to jump through in order to gain access. You can think of this security as a siphon: you should have your most rigid security at the top of the siphon, in an attempt to catch as much as possible, and then filter the access and security down to the particular node. Having more stringent security at the top of the network will hopefully protect your workstations at the bot-
tom. With the security technology now in place, there needs to
be a group of people managing, monitoring, and responding to
threats that your systems are now taking. This brings us to our
next major layer.
The Operations Layer
The operations layer is the piece of the onion that ties all the
others layers together and focuses on the day-to-day tasks that
are needed to uphold a company's security posture. This layer
is the adhesive for the other layers. Without this layer an organi-
zation can have all the policy and technology in place without
fully taking advantage of what each layer has to offer. The efforts
that were put in place by the other layers are not fully achieved,
until the operations layer is put to work enforcing them. Let's review how the operations layer reaches out and touches the
other layers.
Within The People Layer, the operations layer is fully re-
sponsible for getting the word out about information security.
This layer needs to have people educated to know what poli-
cies and procedures have been created and how to act accord-
ingly. There should be a location that employees can go to get
updated informational policy and procedure, which should be
vetted on a regular basis by the appropriate personnel. Without
updating the policies and procedures in visible locations, people
will not be educated properly.
Also, going through a risk management assessment of your systems, so that you are aware of where the potential threats and risks lie within your network and organization, should be done on a regular basis. Risk management will give you an education on where your data is most vulnerable and how to achieve a better understanding of the risks that your data is open to.
Another standard to follow in the operations layer is strict
change control policies. Scheduling changes to your systems based on criticality and risk gives the decision makers a better understanding of what is in store for a particular
change. Making unauthorized changes to the network is dan-
gerous and leaves the systems open for potential vulnera-
bilities. Having procedures in place for change control limits
these types of errors.
There is another group in the organization that assists with
keeping the people that operate the technology honest, and they
are internal audit. An internal audit department should be verify-
ing that the operations teams are doing what they are supposed
to, when they are supposed to. They should make the appro-
priate departments accountable for their actions and regularly
verify that procedures are being followed.
Lastly, the final group in the operations team is the incident
response team. This team is involved when an incident has occurred, and brings in the appropriate people to resolve the issue.
An incident could be both technical and non-technical, and the
teams should be organized to have the appropriate resources
called upon when needed.
Within The Technology Layer, the operations layer is respon-
sible for keeping the technology running efficiently, while at
the same time monitoring and reacting to attacks against the
company. These proactive tasks include, but are not limited to, verifying up-to-date anti-virus and IPS signatures on
a regular basis, patching all systems in a scheduled manner,
verifying renewals of equipment, performing audits of access
lists, etc. There should also be scheduled vulnerability scans,
either internally or by a third party, against the organization's web
and network infrastructure in attempts to find compliance and
security holes in their security architecture.
Having monitoring against your systems is essential to hav-
ing a successful defense-in-depth program. Without proactive monitoring, your systems could be under attack constantly, or successfully breached, and you would not even know. Reviewing alerts from your technology and creating baselines for your systems are ways to proactively look for attacks against them. Being
able to tell when something is not right and review it in a timely
manner is important for the security of your systems.
Lastly, creating a security metrics program to deal with weekly, monthly, quarterly and yearly statistics is a way that allows you
to gauge your improvement in certain areas of your security
posture. These metrics can also be used to show patterns that
might have gone by unnoticed. Having a successful metric pro-
gram that shows thoughtful metrics displayed in an easily under-
stood format can also be used to justify and verify your security
program to management.
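As a very small illustration of such a metrics roll-up, the sketch below groups hypothetical incident records by month and category so trends can be charted or reported upward. In practice these figures would come from the SIEM, the ticketing system, or the vulnerability scanner rather than a hard-coded list.

from collections import Counter

# Hypothetical incident records pulled from a ticketing system.
INCIDENTS = [
    {"month": "2011-01", "category": "malware"},
    {"month": "2011-01", "category": "phishing"},
    {"month": "2011-02", "category": "malware"},
    {"month": "2011-02", "category": "malware"},
]

def monthly_metrics(incidents):
    """Count incidents per (month, category) pair for trend reporting."""
    return Counter((i["month"], i["category"]) for i in incidents)

if __name__ == "__main__":
    for (month, category), count in sorted(monthly_metrics(INCIDENTS).items()):
        print(f"{month}  {category:<10} {count}")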
With all the layers now unfolded, a defense-in-depth program
should not rely on one layer alone. It takes multiple layers, de-
partments, and technology to have defense-in-depth actually
brought through to fruition. When one layer fails, this model
should be constructed in a way that another layer is there to
catch the failure. These layers are hand in hand with each oth-
er and work both independently and cooperatively at the same time. There is a relationship between each layer and its partner
layers. In order to be successful at this approach, you cannot
rely on one layer too heavily and assume that the other layers
are just going to pick up the slack. Just because you have all the newest technology implemented does not mean that you are secure. You need to have proper steps taken to secure the human
aspect of your organization, while also operating the systems to
their full potential. Without this, that new technology will not get
you too far. Defense-in-depth is a team sport and needs all the
players in the game playing to their full potential.
MATTHEW PASCUCCI
Matthew Pascucci is an Information Security Analyst in the financial sector with over 10 years of experience in the technology field. He holds numerous technical certifications and a degree in Network Security. You can follow his blog at www.frontlinesentinel.com.
ATTACKS & RECOVERY
40
1/2011
In July 2010, a Microsoft Windows computer worm called Stuxnet was discovered. Stuxnet targets industrial software and equipment, and is the first malware to include a programmable logic controller rootkit. Speculation is that it caused a month's setback to the Iranian nuclear program.
Viruses and malware that do damage are nothing new, but Stuxnet is not just any virus; it's a cyber-weapon. A cyber-weapon that pushes the concept of cyber warfare into the realm of the possible. Today it's a country that seeks to destroy another nation, and tomorrow it's a commercial company that seeks to make a rival company go out of business.
Goodbye World!
Can software cause damage to hardware? Yes. Software controls hardware, and it can make it perform damaging operations. Software can damage other software on the hardware that makes it work. And last, but not least, software controls hardware and can make it perform operations that will result in damaging a different piece of hardware. This all leads toward Permanent Denial-of-Service, an attack that damages hardware so badly that it requires replacement or reinstallation of the hardware.
One Permanent Denial-of-Service attack method is known as Phlashing. Phlashing is an attack that exploits security flaws which allow remote administration on the management interface of hardware such as routers, printers, network-attached storage, or other networking hardware. The attack replaces the device's firmware with a modified, corrupt, or defective firmware image. A successful Phlashing attack bricks the device, rendering it unusable until it is repaired or replaced.
The Phlashing attack can also be applied to internal hardware
devices such as CD-ROM and DVD-ROM drives that can have
their firmware updated or replaced by running an application on
the system that they are connected to. External hardware de-
vices such as Mobile Phones, Tablets, and PDAs that are con-
nected by USB or FireWire to a system can also be attacked.
Taking the process of Jailbreaking of an iPhone, iPod or iPad as
an example, Jailbreaking is conceptually the same as Phlash-
ing. One of the common side effects of an unsuccessful Jailbreaking of an iPhone, iPod or iPad is bricking it; now imagine that every time you connect an iPhone, iPod or iPad for syncing purposes, it gets bricked.
Other Permanent Denial-of-Service attacks are Overvolting,
Overclocking, Power cycling, and Overusing. Overvolting is the
act of increasing the CPU, GPU, RAM or other component volt-
age past manufacturer specification. Although it may sound like
a bad idea, overvolting is sometimes done in order to increase
the computer performance. Taking the process of overvolting
a CPU as an example, it is possible to change the CPU core voltage (aka Vcore), which is the power supply voltage supplied to the CPU, from the BIOS settings. Increasing the voltage creates more
heat, and more heat means higher temperatures. Computer hardware, and transistors specifically, can sustain damage from lengthy exposure to high temperatures. Standard cooling measures are usually ineffective at such high temperatures, and so it's possible
to cause hardware damage. Another byproduct of increased volt-
age is electromigration. Electromigration is a phenomenon that manifests in the transport of material caused by the gradual movement of the ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. In other words, electromigration increases the resistance of the metal interconnects, which can interfere with the processor's operation and can also cause interconnects to break entirely. Once a CPU interconnect is broken, it cannot be used to send a voltage-high signal down through it, and there is hardware damage.
Overclocking is the act of running the CPU, GPU, RAM or
other components at settings that are faster than the manufac-
turer originally intended. Much like overvolting, overclocking can
also be done from the BIOS settings or by running software (as in the case of a GPU) on the system the hardware is installed in. Overclocked hardware creates more heat due to its operating
at a higher frequency and standard cooling countermeasures
are not always effective. The result is lengthy exposure to high
temperatures which leads to hardware damage.
Power cycling is the act of turning a piece of equipment, usu-
ally a computer, off and then on again. This simple yet effective attack damages hardware by causing temperature fluctuations and spikes. Whenever the hardware is on, electric current passes through it, causing it to heat up; when it is turned off, the current stops and the hardware cools down. Thus power cycling can cause wear on the hardware, as well as keep humidity levels from lowering. Power cycling can also result in spikes, sharp momentary increases in voltage or electric current that can damage an electronic circuit.
Last, but not least there is the overusing attack. Overusing
is yet another Permanent Denial-of-Service that is simple yet
effective. Overusing can be used against hardware with mechanical parts as well as solid-state parts. Starting with an overusing attack on mechanical parts: hardware such as a CD-ROM drive or DVD-ROM drive usually contains mechanical parts to either eject the CD or open the tray. Repetitively opening and closing the tray is a good example of overusing that will wear down the mechanical parts to the point where they no longer work.
Keeping the same type of thinking, let's see how a similar concept works on solid-state hardware such as flash. Flash memory is a non-volatile computer storage chip that can be electrically erased and reprogrammed. It is primarily used in memory cards, USB flash drives, MP3 players and solid-state drives for general storage and transfer of data between computers and other digital products. Flash memory is vulnerable to memory wear, a form of overusing attack that prevents further information from being written. Flash memory has a finite number of program-erase cycles, meaning there is a limit on the amount of writing a flash memory is capable of accepting. Running an application that excessively writes to a flash memory will wear it out, and in devices such as tablets, thin clients, and routers, where it is not always physically possible to replace the memory, this can result in downtime and a complete bricking.
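The back-of-the-envelope sketch below estimates how quickly a worst-case write loop could exhaust a flash device's program-erase budget. The capacity, endurance, and write-rate figures are illustrative assumptions only; real devices complicate the picture with wear leveling and over-provisioning.

def days_until_worn_out(capacity_gb, pe_cycles, write_rate_mb_per_s):
    """Rough estimate: total writable bytes divided by sustained write rate."""
    total_writable_bytes = capacity_gb * 1e9 * pe_cycles   # ignores wear leveling overhead
    seconds = total_writable_bytes / (write_rate_mb_per_s * 1e6)
    return seconds / 86400

if __name__ == "__main__":
    # Assumed figures: a small 4 GB embedded flash rated for 10,000 P/E cycles,
    # hammered at a sustained 50 MB/s by a malicious write loop.
    print(f"{days_until_worn_out(4, 10_000, 50):.0f} days")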
It's important to remember that not all Permanent Denial-of-Service attacks are easy to execute, have an immediate effect, or cause the same degree of damage, but all Permanent Denial-of-Service attacks cause sabotage that gets in the way.
Wrong Browser, Wrong Website
Depending on the Permanent Denial-of-Service attack, it can be mounted either remotely or locally. For the local part, a Permanent Denial-of-Service attack payload can be embedded into a malware much like Stuxnet. Getting infected with malware is usually much easier than getting rid of it, or detecting it. An attacker has multiple ways to infect the company's employees, starting with exploiting client-side vulnerabilities such as in Web
Browsers and Browser plug-ins. Vulnerabilities in Adobe Read-
er, Internet Explorer, Java, Firefox and more are exploited by
attackers when the company's employees visit infected web sites. A common misconception is that infected web sites are either illegal or bad, but the truth is that nowadays malware is delivered using advertisements on legitimate web sites such as Yahoo, Google, Fox and more. Another infection vector is social networks: social networks have an increased amount of malware and online scams embedded in them, and as such they can be used and directed toward infecting the company's employees. Most social media networks do not check the links in users' posts to see if they could lead to malware or other threats. Also, much like web sites, social networks do not seem to screen their advertisements for malware. Last, but not least, there is the social
engineering infection vector. Social engineering attacks such
as phishing, spear phishing and whaling are still very effec-
tive. The latter describes going after a bigger fish which surely
would be an executive. When someone is being whaled, their
initial contact might not be some generic Dear sir/madam mes-
sage, but might actually include their specific name, job title, or
more. Once contact is made, the victim is most often tricked into
opening some kind of file attachment that contains the malware.
Once the malware is executed, it does not have to go off immediately; rather, it can wait and be part of an orchestrated attack. The trigger to start the Permanent Denial-of-Service attack can be anything from a predefined date to a command sent by the attacker. Such a campaign is known as an Advanced Persistent Threat (APT), and as in the case of an APT, the detection rate of anti-virus products will be lower.
Putting All the Pieces of the Puzzle Together
The following is a fictional scenario for an Advanced Persistent
Threat that includes a Permanent Denial-of-Service in it. You walk into your office on Monday morning, plug your laptop into the power and go get some coffee while it boots up. You come back to your desk and click on your favorite e-mail client. You start writing e-mails, replying to others, making phone calls, and soon enough it's lunch time. You grab your jacket and join a few colleagues on their way out. After an hour or so you're making your way back to the office, ready to do some more work, when you find out your laptop is frozen. You reboot it and an error message appears, saying the hard drive crashed and it is unable to boot from it. You are puzzled: it's a brand-new laptop and there were no signs of any hard drive problems. You are about to pick up the phone to call IT when one of your colleagues steps into your office and asks if you are also experiencing problems with your hard drive. That day the whole company had problems with their hard drives, and it took a few more days until the IT department replaced all the damaged hard drives; in the meantime the employees could not work properly, and as a result the company suffered both operational and financial losses.
Not All Threats Are Equal, and Not All Risks Are Worth the Time and Money to Deal With
There is no silver bullet for this; the threat requires threat modeling that reflects not only technological understanding but also business understanding. The company needs to know where its assets are and how to protect them. Not all threats are equal, and not all risks are worth the time and money to deal with. A targeted attack means there are one or more failure points that the attacker can exploit, and the company should be ready to detect and mitigate any attempts to strike them. It's a challenge: the company needs to protect against 100% of the attacks, while the attacker needs to succeed only once, and that one time is enough to cause considerable damage. It is time to move beyond casual website penetration testing to a more methodical, in-depth risk survey and penetration testing that will test for such attacks and their outcomes on the company: a genuine red team test.
Hardware manufacturers play a major role in this threat, and they should address these attacks in their products. Software that is either embedded in hardware or controls it should implement safety checks and controls to avoid being abused. In addition, it is mandatory to digitally sign firmware updates or patches to prevent an attacker from using them as an entry point. The hardware itself should also be able to detect and mitigate attacks such as overvolting, overclocking, etc., to avoid sustaining permanent damage. Perhaps these protections would only be able to reduce the impact to the level of a Denial-of-Service, but even that is better than permanent damage to the hardware. Finally, the general security of the products is also a factor: if the attacker manages to exploit a vulnerability and gain access to the product's filesystem or internal memory, it can then be leveraged to perform a cyber warfare attack.
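As a minimal illustration of that signing requirement, the sketch below refuses to apply a firmware image whose SHA-256 digest does not match a vendor-published value. A real implementation would verify a cryptographic signature with the vendor's public key rather than a bare digest; the file name and digest here are placeholders.

import hashlib

# Placeholder digest the vendor would publish alongside the firmware image.
EXPECTED_SHA256 = "0" * 64

def firmware_is_trusted(image_path, expected_digest):
    """Reject any firmware image whose digest does not match the published value."""
    sha256 = hashlib.sha256()
    try:
        with open(image_path, "rb") as image:
            for chunk in iter(lambda: image.read(8192), b""):
                sha256.update(chunk)
    except FileNotFoundError:
        return False  # no image found, nothing to trust
    return sha256.hexdigest() == expected_digest

if __name__ == "__main__":
    if firmware_is_trusted("router_fw_2.1.bin", EXPECTED_SHA256):
        print("Digest matches - proceed with the update.")
    else:
        print("Digest mismatch or missing image - refuse to flash.")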
This is IT
Cyber warfare is expected to hit the commercial market in the next few years, and we will see more and more companies being attacked by APTs that will blow up in their face. Today it's a nation that seeks to harm another nation, and tomorrow it's a commercial company that seeks to make a rival company lose business. Make sure it's not your company.
Itzik Kotler
Itzik Kotler serves as Security Art's Chief Technology Officer and brings more than ten years of technical experience in the software, telecommunications and security industries. Early in his career, Itzik worked at several start-up companies as a Security Researcher and Software Engineer. Prior to joining Security Art, Itzik worked for Radware (NASDAQ: RDWR), where he managed the Security Operations Center (SOC), a vulnerability research center that develops update signatures and new techniques to defend against known and undisclosed application vulnerabilities. Itzik has published several security research articles, and is a frequent speaker at industry events including Black Hat, RSA Conference, DEFCON and Hackito Ergo Sum.
Enterprise IT Security Management by the Numbers

We take for granted that everybody reading this magazine probably knows that Information Security is
about ensuring the Confidentiality, Integrity, and Avail-
ability (CIA; called the CIA Triad) of your organization's information. However, for the average person Information Security can be an overwhelming concept to grasp. Mention it in the local shopping mall and you will get a mixed set of reactions ranging from fear to confusion to amusement. Yet for those of us who spend time reinforcing our tin-foil hats with chicken wire (after all, everybody KNOWS that Faraday cages are the best way to stop signals from leaking), the everyday duties required to keep an enterprise computing environment safe are a staggering ballet of firewalls, intrusion detection, policies, and encryption algorithms. The correct answer to what Information Security is will depend on whom you ask - the firewall administrator sees it as a set of complex and interlaced rules, while all the Active Directory team may see is users and their associated groups.
As the new manager of an Information Security program, you must ask which of those answers is correct. The answer is that both of them are right. Each of these answers is but part of a greater system consisting of an interlocked series of layered controls which provide defense-in-depth against all potential threats to your computing environment. As cryptologist Bruce Schneier said, "Security is... more than designing strong cryptography into a system; it's designing the entire system such that all security measures, including cryptography, work together."
Unfortunately, Information Security solutions are complex,
diverse, and are not usually of the plug-and-play variety. How
do you make all of these processes, technologies, and prod-
ucts fit together? How do you make sure that your Information
Security plan is comprehensive? Deciding how to start a pro-
gram is one of the most fundamental complexities of managing
an Enterprise Information Security Program (EISP). According
to the International Information Systems Security Certification
Consortium, Inc. ((ISC)²), there are ten domains [1] in Information Security:
1. Access Control
2. Application Development Security
3. Business Continuity and Disaster Recovery Planning
4. Cryptography
5. Information Security Governance and Risk Management
6. Legal, Regulations, Investigations and Compliance
7. Operations Security
8. Physical (Environmental) Security
9. Security Architecture and Design
10. Telecommunications and Network Security
Covering all of these areas
with an effective and system-
atic strategy is complicated
and can be overwhelming.
NIST INFOSEC DOCUMENTS
The Federal Information Processing Standards (FIPS) Publication Series is the official series of publications relating to standards and guidelines adopted and promulgated under the provisions of the Federal Information Security Management Act (FISMA) of 2002.
The Special Publication 800-series reports on ITL's research, guidelines, and outreach efforts in information system security, and its collaborative activities with industry, government, and academic organizations.
ITL Bulletins are published by the Information Technology Laboratory (ITL). Each bulletin presents an in-depth discussion of a single topic of significant interest to the information systems community. Bulletins are issued on an as-needed basis.
Source: Guide to NIST Security Documents [2]
Figure 1. NIST Information Security structure (the security program rests on the NIST 800-series and FIPS publications: planning guidance such as 800-12, 800-14, 800-18, 800-27, 800-59, 800-60, 800-100 and FIPS 199; implementation guidance such as 800-16, 800-50, 800-30, 800-34, 800-35, 800-36, 800-37, 800-53, 800-61, 800-47, 800-55 and 800-64; assessment guidance such as 800-26 and 800-53; technical and IT infrastructure specific guidance such as FIPS 140-2, 800-42, 800-44, 800-56, 800-57, 800-61 and 800-63; supported by internal and external audits and reviews)
Fortunately there is help. The United States (US) National Insti-
tute of Standards and Technology (NIST) Special Publications (SP)
800 series and the US Federal Information Processing Standards
(FIPS), also published by NIST (which is part of the U.S. Department of Commerce), provide a solid basis for any security program, whether inside or outside of government space, that can scale to meet the unique needs of your organization. NIST currently has about 300 documents which are the security guidelines used by most U.S. Federal agencies. These documents are provided free to the public at the NIST Computer Security Division's Computer Security Resource Center (CSRC), at http://csrc.nist.gov [3].
"Knowing where the data is helps us close the gaps."
ISABELLE THEISEN
Chief Security Officer,
First Advantage Corp. [5]
It is important to note that the NIST framework is based on a Data-Centric Security Model (DCSM). DCSM operates on the principle that not all information is created equal: some has more inherent value to your organization than the rest. According to IBM's experts, the primary goal of data-centric security is to drive security controls from a business requirements perspective [4]. Effectively, this level of security is driven by questions that determine the significance of data in the context of its use within the agency, i.e. an information risk management approach.
There are a number of guidelines and frameworks which provide direction on how to categorize information. For example, FIPS 199 is the definitive guideline for Federal organizations to determine data classification; it weighs information's value against the inherent threats to it in order to determine the risk to the organization [6]. Regardless of which framework you use, DCSM should drive the decisions that determine appropriate security controls: physical restriction, encryption, least-privileged access, logging, etc.
One additional general note on NIST documentation: many of the documents are aging (for example, 800-12 was written in 1995!), but do not let this fool you. Where NIST or FIPS publications get into specific technological concepts, they are updated. However, many of them are tech-agnostic, choosing instead to provide sets of general best-practice principles that apply to almost every organization. Consider NIST's website your own resource for almost two decades of time-proven, highly researched, and historically effective lessons learned that your organization may benefit from when starting your EISP strategy.
Planning
When beginning the strategic planning process, there are some
practical issues that should be considered for your program.
A great place to start is SP 800-14, which provides a foundation up-
on which organizations can establish and review information tech-
nology security programs. The eight Generally Accepted System
Security Principles in SP 800-14 are designed to provide the public
or private sector audience with an organization-level perspective
when creating new systems, practices, or policies. [6]
Another key resource is SP 800-12 An Introduction to Computer
Security [7]. 800-12 is a great reference regardless of your security
experience and background, because it provides technology-neu-
tral concepts, security program structure guidance, common-sense
considerations, as well as supporting resources that you might use
to develop security controls. For example, section 2 details the eight
elements which are the foundation of the document's general approach to computer security:
Computer security should support the mission of the organization.
Computer security is an integral element of sound management.
Computer security should be cost-effective.
Computer security responsibilities and accountability should be made explicit.
System owners have computer security responsibilities outside their own organizations.
Computer security requires a comprehensive and integrated approach.
Computer security should be periodically reassessed.
Computer security is constrained by societal factors.
Other guidance can be found in other documents like SP 800-27
Revision (rev) A, Engineering Principles for Information Technol-
ogy Security (A Baseline for Achieving Security) [8] which provides
security management principles in six categories:
Security Foundation
Risk Based
Ease of Use
Increase Resilience
Reduce Vulnerabilities, and
Design with Network in Mind
800-27 goes into greater detail, providing a total of 33 sub-principles beneath the six categories. Each sub-principle discusses the system development life cycle, the stage at which the principle applies, and the importance of each item.
It is all about Risk!
In order to know if you covered your bases when developing your
strategy, you must determine (as with all Information Security de-
cisions) where the risk lies. The core of the NIST system is
a Risk Management Framework (RMF), which analyzes risk at
three tiers: the organization, mission, and Information System
view and is described in NIST SP 800-39, Managing Information
Security Risk [9]. The RMF is focused on the entire security life
cycle, and is comprised of six steps:
Security Categorization
Select Security Controls
Implement Security Controls
Assess Security Controls
Authorize Information Systems
Monitor Security State

Figure 2. NIST Risk Management Framework [10] (the SP 800-39 security life cycle shown as a loop through the six steps: Categorize, FIPS 199/SP 800-60, as the starting point; Select, FIPS 200/SP 800-53; Implement, SP 800-70; Assess, SP 800-53A; Authorize, SP 800-37; Monitor, SP 800-37/SP 800-53A)
Risk is the underlying driver for all Information Security systems, processes, and yes, even budgets. Risk is critical because it explains, in empirical terms, why Information Security activity x matters to the business unit it is designed to support by protecting the CIA of its information resources. In order to better understand this process, let's review each of the NIST RMF steps (categorize, select, implement, assess, authorize, and monitor) and apply them to your Information Security program.
Categorize FIPS 199, SP 800-60
As the DCSM concept suggests, categorization is a critical task in Information Security. According to NIST, "Security categorization provides a structured way to determine the criticality and sensitivity of the information being processed, stored, and transmitted by an information system. The security category is based on the potential impact (worst case) to an organization should certain events occur that jeopardize the information and information systems needed by the organization to accomplish its assigned mission, protect its assets and individuals, fulfill its legal responsibilities, and maintain its day-to-day functions." [11]
FIPS 199 provides guidelines for the information owner to classify (categorize) information as High, Moderate, or Low value, but you can substitute the appropriate terms for your organization (e.g., Public, Internal, and Sensitive) based on the culture, function, and political environment of the company.
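As a worked illustration of that categorization step (a simplified sketch, not an official NIST tool; the information types and impact ratings below are invented for the example), each security objective is rated and the highest rating is taken as the overall category, the high-water mark approach commonly used when rolling categories up to the system level:

# Simplified FIPS 199-style categorization: rate confidentiality,
# integrity and availability per information type, then take the
# highest ("high-water mark") as the overall security category.
IMPACT_ORDER = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def overall_category(ratings):
    """ratings: dict like {'confidentiality': 'LOW', ...}"""
    return max(ratings.values(), key=lambda level: IMPACT_ORDER[level])

# Hypothetical information types and per-objective impact ratings.
payroll = {"confidentiality": "MODERATE", "integrity": "HIGH", "availability": "MODERATE"}
public_web_content = {"confidentiality": "LOW", "integrity": "MODERATE", "availability": "LOW"}

for name, ratings in [("payroll", payroll), ("public web content", public_web_content)]:
    print(name, "->", overall_category(ratings))  # payroll -> HIGH, public web content -> MODERATE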
Additionally, SP 800-60, Volume 1: Guide for Mapping Types of Information and Information Systems to Security Categories [12] is practical guidance that explains the categorization process functionally, by describing the lower-level processes used to categorize enterprise information.
Select FIPS 200, SP 800-53
One of the documents that has been around for years and con-
stantly updated is NIST SP 800-53 rev 3, Recommended Security
Controls for Federal Information Systems and Organizations [13].
This has been combined with FIPS 200, Minimum Security Requirements for Federal Information and Information Systems [14] to en-
sure that appropriate security requirements and security controls
are applied to all Federal information and information systems. An
organizational assessment of risk validates the initial security con-
trol selection and determines if any additional controls are needed
to protect organizational operations (including mission, functions,
image, or reputation), organizational assets, individuals, other or-
ganizations [13].
NIST 800-53 breaks its Information Security requirements into
seventeen control families:
(AC) Access Control
(AT) Awareness and Training
(AU) Audit and Accountability
(CA) Certification, Accreditation and Security Assessments
(CM) Configuration Management
(CP) Contingency Planning
(IA) Identification and Authentication
(IR) Incident Response
(MA) System Maintenance
(MP) Media Protection
(PE) Physical and Environmental Protection
(PL) Security Planning
(PS) Personnel Security
(RA) Risk Assessment
(SA) System and Services Acquisition
(SC) System and Communications Protection
(SI) System and Information Integrity
As with many of these Federal documents, they are largely driven
by the Federal Information Security Management Act (FISMA)
of 2002. While your organization may not be directly subject
to FISMA, do not let this stop you from considering how these
guidelines might apply in your environment. 800-53 provides a
simplified RMF, revised and updated security controls, security
baseline configurations, and a variety of other useful tools.
Implement SP 800-70
SP 800-70 is deceptively narrow at first glance. While it appears to be directed solely at the provisioning and use of Government security checklists, it is a very extensible concept for every enterprise. 800-70 [15] pro-
vides directions on how to use security checklists, as well as a link to
the National Checklist Repository (NCR), which is located at http://
checklists.nist.gov [16]. The NCR is part of the National Checklist
Program (NCP).
These checklists are step-by-step security configuration guidelines
for most major technologies, and they are currently being migrated to
align with the Security Content Automation Protocol (SCAP). SCAP
provides a common standard which allows you to import NCR con-
figuration files and run them natively within many major security
tools. With a SCAP-compatible tool and a few clicks, your security
program could provide reporting and analysis on the security con-
figurations of most of the organization's technology systems.
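To give a flavour of the automation SCAP enables, the sketch below (an assumption-laden example rather than part of any NIST guidance; the results file name is hypothetical and the element matching is deliberately loose because namespace details vary between tools) uses only the Python standard library to tally pass/fail results from an XCCDF results file of the kind SCAP-compatible scanners produce:

# Tally rule results from an XCCDF (SCAP) results file.
import xml.etree.ElementTree as ET
from collections import Counter

def tally_results(path):
    counts = Counter()
    tree = ET.parse(path)
    for elem in tree.iter():
        # XCCDF result files contain <rule-result> elements whose
        # <result> child holds pass, fail, notapplicable, etc.
        if elem.tag.endswith("rule-result"):
            for child in elem:
                if (child.tag.endswith("}result") or child.tag == "result") and child.text:
                    counts[child.text.strip()] += 1
    return counts

if __name__ == "__main__":
    print(tally_results("xccdf-results.xml"))  # e.g. Counter({'pass': 180, 'fail': 12})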
While not a NIST publication, security practitioners might also consider the SANS (SysAdmin, Audit, Network and Security) resources available to assist in security control implementation. SANS provides a wide array of Information Security resources like their extensive training seminars, the Internet Storm Center (for new patches and alerts), podcasts, and security white papers.
However, one of the most popular SANS resources is the SANS Top 20 Security Controls [17], which is available online (http://www.sans.org/critical-security-controls). This details the 20 most critical security controls based on the research of a consortium sponsored by the Center for Strategic and International Studies. SANS provides details on each of the top 20 controls, guidance on implementation, and tested resources proven to be an effective way for organizations to implement continuous monitoring. For example, when the U.S. State Department's CISO John Streufert implemented the SANS top 20 cyber-security controls, they realized an 80% reduction in measured security risk [17].
NIST 800-53 may also be useful for reducing your compliance
overhead as related to other security requirements that impact your
organization (ISO, SOX, CObIT, PCI, etc.). By meeting the NIST
requirements, you may be able to address other requirements by
cross-referencing to other security controls.
For example, a simple and free resource was created by Symantec, called the IT Controls Reference [18]. This docu-
ment can be downloaded as a PDF or even printed at poster
size. It cross-references ISO 17799, CObIT 4.0, SOX/COSO,
HIPAA, PCI, GLBA, NERC CIP, and PIPEDA. You can download
a copy of the IT Controls Reference from several online sources.
A more comprehensive solution is the Unified Compliance
Framework (UCF) [19], which harmonizes hundreds of authori-
tative compliance requirements in a cross-referenced Rosetta
Stone spreadsheet. The UCF allows your organization to select
applicable requirements, and creates a single register to clarify
control conflicts, with four updates per year. Paid subscriptions are available online at http://www.unifiedcompliance.com.
Assess SP 800-53A
800-53A [20], published in June 2010, is a highly adaptive docu-
ment wherein NIST describes how to test the effectiveness of the
controls listed in 800-53.
More importantly, 800-53A provides guidance on how to provide
assurance of the effectiveness of your security controls, including
penetration testing, and step-by-step instructions for each of the
controls and control enhancements (the number varies between
versions, but usually around 200). If you need to have the frame-
work to evaluate your controls, this is the place to look. It provides
information on what constitutes successful implementation of the
security requirements, what type of documentation would be valid
testing evidence, and other useful evaluation tips. These guidelines
can be used to establish management self-reporting of controls as-
signed to management, and even to develop your own repository of
audit evidence that can be used to meet audit requests for control
documentation.
Also note that recent feedback from FISMA pushes the impor-
tance of automated controls. There is no doubt that automation can
increase the effectiveness of some controls, but it can also make
them very predictable. Keep in mind that automation can be effec-
tive, but it cannot totally replace the analysis of a skilled Information
Security professional.
Authorize SP 800-37
This section is easy: if you do not know what it means, it probably
does not apply to you. Authorization is the process by which Federal
organizations or their contractors certify a system has appropriate
Information Security controls in place, and is ready to be author-
ized. External assessors review and validate the systems controls,
and if the agency agrees with the evaluation the system gets offi-
cially blessed by the responsible agency. This process is designed
to confirm that the system is ready to go into a live or production
status.
Monitor SP 800-37, SP 800-53A
Monitoring is the part of Information Security most EISP programs
forget to account for, but is critical in the Total Cost of Ownership
(TCO) calculation. You cannot just stand up a piece of equipment
in your environment and expect it to work forever; the threats, risks,
and even your own environment are far too dynamic for that. The
monitoring process should be fully funded to provide for the care
and feeding of all of the controls you implemented in steps 1-5. Fail-
ure to do so will ultimately result in a failure of your tools to live up to
the full functionality you touted during your business justification.
This is particularly true for staffing concerns. Many Information
Security managers keep buying all-encompassing, enterprise-class
tools, yet maintain the same level of staffing. These solutions even-
tually reach a point of diminishing returns as your staff may be-
come so overworked that they end up being ineffective with any of
the tools they administer. Shepherding your staff's time not only helps ensure that you do not get blindsided during your next audit, but may also help you make sure you don't get owned by the next
hacker or virus.
Conclusion
In conclusion, as the manager of your EISP there are a few items
you should keep in mind. Firstly, while all of these concepts are important, the most important factor, not yet discussed, is culture. Create a culture of Information Security-aware employees, who understand that compliance with security rules and regulations is not the Information Security department's role alone - it is the charge of every employee. The corollary to this
rule is that culture is driven by the tone at the top. Without the
support of your executive leadership, your Information Security
program will develop a Russian culture (characterized by check-
offs and mark-offs).
Secondly, keep in mind that all of these security controls are not the bar that you must jump over, but the floor your organization jumps from. They represent the minimum you must do, but not necessarily everything that you should be doing for your company's information. By implementing your program and be-
ing an evangelist for a data-centric & risk-based security, you
should ensure that risk management becomes an inextricable
part of your organization's culture.
Finally, for all of this guidance, remember: your expert opinion IS THE KEY. Unless you are legally bound to NIST's guidelines, they should be only guidelines; use them where they make business sense in your organization and drop anything that does not fit your business, unique risks, or technology footprint. Every dollar you spend, or packet your program inspects, should be based on ensuring the CIA of your organization's information in order to fully support your business operations. Use the NIST and FIPS documents to help provide a standard framework that is the backbone of your company's robust EISP strategy.
Shayene Champion
Financial & Organizational Compliance Advisory
Tennessee Valley Authority
schampion@tva.gov
Management of Knowledge Based Grids

Fujitsu is set to bring high-performance computing (HPC) to Wales. They will provide a distributed grid, a project set over five years costing up to £40 million. The grid will include over 1,400 nodes spread across more than eight sites, linked using Fujitsu's middleware technology SynfiniWay, which will deliver an aggregated performance of more than 190 petaflops. (A petaflop is a unit of computing speed equal to one thousand trillion floating-point operations per second.)

Grid computing is a technology that enables people and
machines to effectively capture, publish, share and
manage resources. There are several types of Grids
but the main types are: Data Grids, Computational Grids and
Knowledge Grids.
Data and Computational Grids are quite similar in the fact that
they are used to manage and analyse data. With technology in-
creasing and developing at such a dramatic rate, average com-
puters cannot cope with the amount of data or the calculations
they are being asked to perform. A complicated set of data could take a standard computer a few days or even weeks to analyse, whereas if a grid was used to perform the same analysis it could take considerably less time, because the grid would harness the computational power available to it, parallelise the load and allow the calculations to be performed with a small turnaround time.
Knowledge Grids are self-explanatory; their purpose is to share knowledge. In this day and age we have come to a point where we are using computers to create vast amounts of data. The information overload is so big that human beings are not able to analyse that data in a timely manner and extract the much sought-after knowledge that will allow us to further science and better our lives. We are at a point where we now have to teach computers how to extract knowledge from raw data.
Going back to the HPC Wales Project: the project is set across
a minimum of nine sites including Swansea, Cardiff, Aberyst-
wyth, Bangor, Glamorgan, Swansea Met, Newport, Glyndwr and
a range of other sites. The grid will allow all of the sites to share
and distribute resources freely. A number of pilot applications
will be sponsored via HPC Wales to test the capabilities of the
grid. In Newport we are considering G4CP.
Grid for Crime Prevention
The Grid for Crime Prevention, also known as G4CP, will stim-
ulate, promote and develop horizontal methods and tools for
strategically preventing and fighting cyber-crime and guaran-
teeing security and public order in the Welsh cyber-space. The
application was originally called Inter-Organisational Intrusion
Detection System (IOIDS). Furthermore, G4CP will promote
a coherent Welsh strategy in the fields of cyber security through
the exploitation of the projects artefacts, and will play an ac-
tive role in the establishment of global standards in the areas of
cyber-crime prevention, identification and prosecution.
The centre of gravity of G4CP is to design and implement an
application that promotes collaborative working, using data fu-
sion and data mining techniques, and allows knowledge discovery from raw security incident data.
Trying to defend the European cyber-space against organ-
ised cyber-crime can be seen as a complex problem. One of
the problems is that companies are afraid to report incidents because they feel their reputation will be damaged. This means that many private organisations and law enforcement agencies are forced to face cyber crime with next to no help from other organisations in the same supply chain. The G4CP team felt that there was a need for the defenders of the European Information Infrastructure to come together to form a number of virtual
communities in order to take actions collectively against the
perpetrators of cyber-crimes and promote a culture of security
amongst and across the members of these communities. These
communities should allow for secure information sharing and help organisations to be proactive in defending their networks against ongoing cyber attacks.
G4CP will make grid technology attractive to establishments
across the cyber-crime fighting field. It will help the uptake of
grid type architectures and extend their concept from computa-
tion grids to knowledge grids.
The application that G4CP developed would manage to effectively police the cyberspace and minimise the threats against computing infrastructures. This will promote a coherent European strategy in the field of crime prevention through the exploitation of the project's products, and will play an active role in the establishment of global standards in the area of crime prosecution through the dissemination of the project's results.

Figure 1. G4CP Centre of Gravity (crime prevention built on generic enabling application technologies, i.e. tools and environments for data mining, knowledge discovery and collaborative working, which in turn rest on a next-generation grid architecture addressing security, business models, open source standards and interoperability)

The G4CP raises a lot of questions around grid management. For example, if a user wanted to know about Denial of Service attacks against web servers in Wales, what would happen with the result once they received it? The knowledge would be stored, so that if another user needs it then it is available; instead of using up resources creating the same query, the grid can simply locate the knowledge and deliver it to the user. However, if this occurred with every search then the system could become overloaded with information and slow down, because the storage would quickly run out; storing every query would take thousands of terabytes at least. The only practical solution is to keep the information available for a short period of time and, if the query is not called for during that time frame, to delete it. If it occurs again outside of the time frame then the query will be run again and held on the server. The demand for information, and harnessing the power of the grid to deliver information and knowledge faster, is the key.

Managing a Knowledge Grid
In a communication network, a node is a connection point, either a redistribution point or a communication end point; however, the definition of a node does depend on the network and protocol layer referred to. The main goal of grid management is to measure and publish the state of resources at a particular point in time. To be effective, monitoring must be done from one end to the other, meaning that the entire environment and its components must be monitored.

Understandably this is no easy task. If we take HPC Wales as an example, it will provide to G4CP 1,400 nodes set across nine different sites. It is a huge task on its own just to manage all the components that are required to control that grid and its environment.
User Management
There has to be some form of security and authentication on the grid to ensure that the users on the grid are accessing material which is appropriate to them. There are two methods which can help control and monitor users on the grid and their security: public and private key cryptography, along with X.509 certificates.
Public and private key cryptography is used regularly in many different kinds of computing projects and environments as a secure authentication method. The main reason it is still in use is that it helps indicate the true author of a piece of information. For example, if Sian wanted to prove to Stelios that a message came from her, she would sign it with her private key; when Stelios receives the message he can verify the signature with her public key, and because only Sian holds that private key he knows the message is from Sian. (Note that this provides authenticity rather than confidentiality; to keep the content secret, Sian would encrypt it with Stelios's public key.) There are flaws in public and private key cryptography, as with all methods of authentication; however, this method of secure authentication is put in place as a contract of trust between the user and the manager of the grid.
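As a concrete sketch of that signing flow, using the third-party Python cryptography package (the key size and message text are arbitrary choices for the example), the sender signs with the private key and anyone holding the matching public key can verify who produced the message:

# Sign a message with a private key and verify it with the public key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Grid job request from Sian"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# verify() raises InvalidSignature if the message or signature was tampered with.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified - the message really came from the private key holder")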
To ensure that the grid is accessed only from appropriate applications or web browsers, an X.509 certificate could be issued to the authorities who use the grid. X.509 is a standard for a public key infrastructure, single sign-on and privilege management infrastructure. It specifies (amongst other things) standard formats for public key certificates, certificate revocation lists, attribute certificates and a certification path validation algorithm.
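To show what such a certificate carries in practice, the short sketch below (again assuming the Python cryptography package and a hypothetical PEM file name; exact attribute names can vary slightly between library versions) loads a certificate and prints the fields a grid administrator would care about, such as the subject and the expiry date:

# Inspect the subject and expiration date of an X.509 certificate.
from cryptography import x509

with open("user-grid-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:  ", cert.subject.rfc4514_string())
print("Issuer:   ", cert.issuer.rfc4514_string())
print("Not after:", cert.not_valid_after)  # the user must re-apply before this date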
If we go back to the example of G4CP, we could manage the users on the grid via an X.509 certificate (similar to an E-Science Certificate). For example, when a user joins the G4CP (or attempts to join) they will have to meet a member of the G4CP authority management team, who will then run through an application process to identify whether they have a genuine need for accessing the grid. If they are suitable, an X.509 certificate with the correct permissions for accessing the grid could be issued to them. The certificate will have permissions built in stating which sections they are allowed to access, and IP rights stating that it can only be installed on one computer; if anyone tries to install it a second time, an error message would display saying that the certificate is currently in use and has already been applied. All certificates will have public and private key encryption, so if they are
stolen or attempted to be copied, they won't be able to be used without the two keys. There will also be an expiration date on the certificate, whereby the user has to re-apply for a certificate close to the time of expiration. Therefore, if a user doesn't wish to continue having access to the grid, or doesn't have a need for it any more, there won't be any rogue accounts left behind which could become vulnerable and be used maliciously.
Figure 2 (E-Science Certificate, X.509) shows a typical E-Science certificate; it displays the registered owner and the expiration date for the certificate.
The IOIDS subsystem must be connected to the underlying G4CP communication platform; in order to allow integration with other platforms, this issue will be solved using its own module too. A dispatcher will perform the processing of incoming messages sent over the Grid for Digital Security.
How to manage the nodes
Managing the users has been covered but what can you do
to manage the nodes? As previously mentioned, G4CP could be using up to 1,400 nodes running across nine different sites. If one of the sites goes down and there is no one there to manage it, then what happens when a user needs access to that specific piece of information?
There are several software packages available which can
be used to manage the nodes on a network. One specific
software package is Conga; it is an integrated set of software
components that provides centralised configuration and man-
agement of clusters and storage.
It has features such as one web interface which manages
clusters and storage, automated deployment of cluster data
and supporting packages, easy integration with existing clus-
ters with no need to re-authenticate them, integration of clus-
ter status and logs along with fine grained control over user
permissions.
This software basically manages the clusters and the nodes
of the grid without altering the user permissions already in
place. The user permissions would be set by the authority
figure or manager of the grid using the Public & Private Key
infrastructure combined with X.509 Certificates which manage
the users already on the grid. The expiration date of the cer-
tificate could be set by the authority figure after an interview.
The time length could vary dependent upon the relevance of
the knowledge grid to the user.
The primary components in Conga are luci and ricci, which can be installed separately. Luci is a server that runs on one computer and communicates with multiple clusters and computers via ricci. Ricci is an agent that runs on each computer, which in turn is managed by Conga.
Luci can manage the nodes on the grid; there is an ad-
ministrative menu which can do the following options: make
a node leave or join a different cluster, fence a node, reboot
a node and delete a node.
All these options would be extremely helpful for managing the grid, especially if its nodes are spread worldwide. For example, if a node broke in Germany and the main administrative authority was based in Wales, it could take anywhere from a few hours to a few days to arrange for someone either to instruct somebody on site how to fix it or to travel to Germany to fix it. With Conga, it is all done over the network and remote login, which means that the node can be managed from Wales (or wherever the headquarters is) near instantaneously, with next to no disruption.
Figure 3 (Conga GUI) shows a screen from Conga running on Red Hat, a Linux-based operating system.
What does this mean?
The High Performance Computing Wales project (HPC Wales) will mainly be based in Swansea and Cardiff, which will then be connected to the remaining sites. This means that each site will have a taste of the high-performance computing power that can be used for research.
Grids are developing every day and are predicted to become the norm in the future, similar to the internet, which is widely used and spread across the world. Grids will give scientists and researchers the power to get results faster and to expand the knowledge they already have. The demand for information is increasing, as is the speed at which that information is expected.
Siân Louise Haynes
Siân Haynes is currently a final year student at the University of Wales, Newport studying BSc (Hons) Forensic Computing, due to complete her studies in June 2011. She is aspiring towards a job as a digital evidence technician or a role within the forensic evidence technician field. She is an active member of the British Computer Society and is hoping to develop her education further in the future.
Stilianos Vidalis
Dr. Stilianos Vidalis was born in Athens, Greece, and was raised on an island in the Aegean Sea. He moved to Wales in 1995 where he did his undergraduate and postgraduate studies. He received his PhD in Threat Assessment in July 2004 from the University of Glamorgan. He joined the Department of Computing of the University of Wales, Newport in 2006, where he is currently the Head of the Centre for Information Operations. He is the program leader for the BSc Computer Forensics and BSc Information Security. Dr. Vidalis is a member of the BCS South Wales Committee, a member of the E-Crime Wales Steering Group, and a member of the Military Educational Committee of the Welsh OTC. His research interests are in the areas of information operations, digital forensics, threat assessment, and effective computer defense mechanisms. Dr Vidalis is involved in a number of R&D projects with the Welsh Assembly Government, Law Enforcement Agencies, and private companies.
Security Testing

Every organization is responsible for protecting their customer information, employee data, and assets. This is a daunting task in today's security environment, full of vulnerabilities and attack vectors. A starting point for this protection is a well-written information security program with supporting procedures. But how do organizations confirm that everything is being followed as it should be? Security testing!

There are a variety of ways to confirm the security health
of an environment but the two most common are vulner-
ability assessments and penetration tests. A vulnerability
assessment is performed to identify any holes in the security
of an environment, but does not attempt to exploit any of the
discovered issues. A penetration test takes a vulnerability as-
sessment one step further and actually attempts to exploit any
discovered vulnerability.
Organizations should perform some sort of vulnerability assessment on a regular basis. The frequency of the assessments depends on the size of the company and what the infrastructure is protecting. In addition, publicly traded companies and companies governed by financial regulations should receive an annual, independent review from an outside company that specializes in security testing. The testing company should utilize up-to-date tools which are different from the tools used in house. The rest of the article describes the steps to complete an independent security review project.
Contract
It is important that the two parties negotiating the contract
are familiar with security testing and understand the required
scope. Additionally the testing company should understand
why the testing is taking place. It could be to comply with regu-
latory requirements, a requirement for partners or customers,
or just internal checks and balances. This will guide the testing
plan to the most appropriate and relevant options.
The contract needs to include the scope of the testing. This
will define the internal and external devices to be tested as
well as the type of test, vulnerability assessment only or pen-
etration testing. The organization should identify any device it
considers out-of-bounds. It is important here to note if testing
will be white box (administrators are aware and supportive of
testing) or black box (responsible administrators are unaware
of the testing engagement). The benefit of black box testing is
it tests not only the vulnerabilities of the system, but also the
response mechanism.
If the organization being tested has any outsourced or host-
ed devices that will be part of the testing it is their responsibil-
ity to confirm they are permitted to authorize such tests. This
is a common issue with hosted websites. Most web-hosting
companies have many customer sites on a single box and
do not allow testing out of concern for the effect on other
customers. Also without prior knowledge of the test, there is
nothing to separate your testing traffic from an actual attack,
so the hosting company will treat you as a hostile party first
and foremost.
Once all the details are finalized both parties should sign
the contract and retain originals. Copies should be made and
provided to the staff members that will be responsible for per-
forming or overseeing the security tests.
Before the Scan
The organization being tested should assign a staff member to
oversee the testing scope of work. This employee should be
familiar with the contract and all it entails. They will be respon-
sible for working with the testing company and providing any
needed information and access. The employee or team that
will be performing the testing should likewise be familiar with
the contract, especially the identified out-of-bounds devices.
Communication between the assigned points of contact for
both organizations is essential to a successful security test.
The employee of the company being tested should share any negative experiences from previous scans, such as "The last scan performed took down our phone system" or "The last time we ran a scan the switches were acting up." Likewise the testing company employee can share any potential issues they foresee based on the environment, such as "Our testing tools
typically crash Cisco phones or HP switches with specific firmware versions." Divulging these potential issues before
testing will enhance the scanning experience for both parties
and avoid potential down-time for the organization.
Scheduling of the test is also an important aspect that needs
to be driven by the organization being tested. It would not turn
out well if the testing occurred during the time payroll is being
processed and the testing took down the payroll server.
The tester should also share any IP addresses they will be
testing from. This will allow the organization to differentiate
between the contracted security test and a malicious attack
that happens to be occurring simultaneously.
During the Scan
The company employee's main job during the scan is monitoring the various aspects and impacts of the scan. They should be aware of any negative consequences to the systems and receive any third-party vendor security notices. If it is a black box test, they should also monitor the incident identification and response.
This is where the fun begins for the tester. The tester should follow their methodology and document everything along the way. The testing methodology should closely follow what a malicious attacker would perform: fingerprinting, footprinting, service identification, vulnerability identification, and penetration (if the contract allows).
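To make the service identification step concrete, here is a deliberately small sketch using only the Python standard library; the target address is a documentation placeholder and, in line with the disclaimer at the front of this magazine, such probes belong only on systems you are contracted and authorized to test:

# Toy service identification: connect to a few common ports and read banners.
import socket

TARGET = "192.0.2.10"          # placeholder address (TEST-NET), replace per contract
PORTS = [21, 22, 25, 80, 443]  # a handful of common services

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=2) as sock:
            sock.settimeout(2)
            try:
                banner = sock.recv(128).decode(errors="replace").strip()
            except OSError:
                banner = "(no banner)"
            print(f"{TARGET}:{port} open  {banner}")
    except OSError:
        print(f"{TARGET}:{port} closed or filtered")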
Documentation is most critical during the scan. The value of
the security testing will come with the report provided to the
organization and having supporting documentation will greatly
enhance the quality of the report.
After the Scan
When the scan is complete, the tester should notify the com-
pany point of contact. If there are any gaping security holes
discovered during the test, it would be appropriate for the
tester to share those at this point so the company can quick-
ly address them. For example, if the test identified a router open on a public IP address with default credentials, the company should fix that immediately and take steps to con-
firm that a compromise has not already occurred. The com-
pany can also share with the tester any issues that came up
during the scan.
Reporting
Now for the not-so-fun part of security testing: the report!
This may not be the most exciting part for the engineer that
is performing the tests, but it is by far the most important.
The report is the main vehicle to share the findings with the
organization. The report should point out the vulnerabilities
discovered and discuss the potential impact to the business.
It is essential that the report be written in business terms so
the organization truly appreciates and understands how each discovered vulnerability could impact production activities or lead to the loss of confidential information. Simply providing the cus-
tomer a stock report from the testing tool with a list of found
vulnerabilities is not sufficient. The engineer should provide
recommendations on how to improve the network and how it
would improve the security posture of the organization.
The report should have an executive summary section that
provides a high level overview of the risks for upper manage-
ment. It should also provide a technical summary for IT man-
agement to understand where the policy or procedures need
to be improved.
It is a good practice to provide the customer with a draft
of the report after it is completed for them to review and ask
the tester any questions. This is where discovered issues can be discussed and false positives identified, and where the organization can pick the brain of the tester to ensure they understand each vulnerability and its associated risk. This step
ensures that the company being tested is comfortable and
understands the various aspects of the report, adding value
to the overall experience for them as well as providing ex-
cellent feedback to the testing company that can be applied
to later reports.
Perfect Report? Not Likely!
Anyone who is tasked with keeping all devices in an environ-
ment up-to-date understands the challenges that go along
with it. End users are constantly looking for and installing new
software and even the basic everyday software is in need of
updates - sometimes daily! These challenges make it very
unlikely that any organization can patch to zero. For the tester writing the report, or the organization reading it, it is important to view the findings from a systemic standpoint.
What are the procedures in place for patch management? Is it
a manual process or does the organization have software that
centralizes and automates the process? These systemic is-
sues should be the focus of the results, not each vulnerability
on each device.
What Else?
Security testing is becoming more complex as environments
are becoming more intricate and diversified. Mobile and cloud
computing are pushing the organization's perimeter further
out, and in some cases obliterating it altogether. It is important
that both administrators and security testers understand these
implications and work together to develop a testing methodol-
ogy that is capable of identifying any weaknesses.
Organizations also need to remember that the human is
often the weakest link in the security chain. As locked-down
as the perimeter may be, an employee browsing the Internet
and clicking on an interesting link could introduce malware
to the environment compromising all the security controls
in place. Ensuring that employees are properly trained and
adding social engineering testing to annual security testing is
a good starting point.
Conclusion
Security testing is an essential part of maintaining a healthy
computing environment. Regular scans by internal staff and
annual, or more frequent, contracted scans will help identify
any gaps in the security procedures. It is important for ad-
ministrators and security testers to stay on top of the findings
because we know the malicious attackers sure are!
Mark Lohman, CISSP, C|EH, MCITP
Mark Lohman is an Information Security Assessment Engineer with NETBankAudit. NETBankAudit provides a wide range of IT Audit and vulnerability assessment services to customers in the financial services sector.
www.netbankaudit.com
Contact the author: mlohman@netbankaudit.com
Simplifying IT Security Management

We have recently entered yet another phase of rapid change and evolution of our IT environments. The two major innovations are cloud computing and an explosion of intelligent devices. Public and private cloud initiatives are needed to enhance availability and scalability at reduced cost.

The number of intelligent devices deployed exceeded PCs
in Q4 of 2010. IT environments were already struggling
with increased complexity; adapting to remotely hosted
applications and an influx of iPads, Android-based platforms, and even always-connected eBook readers will only complicate
things more. Ensuring that security is maintained has become
even more daunting.
Keeping it simple is a security mandate. Complicated policies, multiple solutions from multiple vendors, and in many cases different security solutions deployed in different regions or even for different projects, all add to complexity and often introduce vulnerabilities.
Choosing security solutions and organizing your IT security
operations should be made as simple as possible.
I use three simple rules to evaluate security solutions and as
a basis for organizing security operations. These are:
A secure network assumes the host is hostile
A secure host assumes the network is hostile
Secure applications assume the user is hostile
These three simple rules help to make sense of the thousands of different security solutions available. Products and practices that conflict with these three simple rules might not be the best solution. Many security operations are already organized around these three principles. I propose below one additional group responsible for countering targeted attacks.
A secure network assumes the host is hostile
It has been years since a firewall that enforces policies based only on source-destination-service has been sufficient. Trusted end points harbor malware, are controlled by attackers, and are launching points for attacks. Network security solutions must be in-line and inspect all the traffic that passes through them. They must look for viruses, worms, exploit traffic, and even unusual behavior. IDC dubs these solutions complete content inspection firewalls. Many vendors refer to them as UTM, Unified Threat Management. I have written extensively on UTM as a simplifying security solution.
One aspect of a secure network that is often overlooked is that
the computers on the inside of the network are often the danger.
It could be an infected computer brought in by an employee or contractor, or it could be a poorly patched server that has been compromised by an outside attacker.
Even the smallest organizations have to invest in network security solutions to block attacks from devices on the inside of the network. This is accomplished through network segmentation and deploying content inspection capabilities internally. As threats multiply, watch for solutions that either sit on top of the access switch or incorporate the switch in their configuration.
The network security team within IT operations is tasked with configuring network devices and ensuring that policies are uniformly deployed without introducing conflicts. There are modern policy management solutions available from companies like AlgoSec, FireMon, and Tufin that can evaluate firewall policies for redundancies, un-used rules, etc. and consolidate rules between multiple vendors' products.
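To give a flavour of what that policy analysis involves, the toy sketch below flags a rule that is completely shadowed by an earlier, broader rule; it is a deliberate simplification (real products model zones, address objects, NAT and service groups), and the rule base shown is hypothetical:

# Flag firewall rules that are shadowed by an earlier, broader rule.
ANY = "*"

def covers(a, b):
    """True if field pattern a covers field pattern b."""
    return a == ANY or a == b

def shadowed_rules(rules):
    """rules: list of dicts with src, dst, service, name (first match wins)."""
    findings = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if all(covers(earlier[f], later[f]) for f in ("src", "dst", "service")):
                findings.append((later["name"], earlier["name"]))
                break
    return findings

policy = [  # hypothetical rule base
    {"name": "allow-web", "src": ANY, "dst": "dmz-web", "service": "tcp/443"},
    {"name": "allow-any-dmz", "src": ANY, "dst": "dmz-web", "service": ANY},
    {"name": "old-web-rule", "src": "lan", "dst": "dmz-web", "service": "tcp/443"},
]

for rule, by in shadowed_rules(policy):
    print(f"rule '{rule}' is redundant: already covered by '{by}'")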
A secure host assumes the network is hostile
This is another way of stating the requirement for a layered de-
fense model. A laptop, desktop, notebook, or server cannot rely
on the network to keep it safe. AV, firewalls, and anti-spyware
solutions have to be installed and up-to-date. Patches for critical
applications and OS have to be installed as quickly as possible.
Browsing shields should be turned on and Microsoft IE should
not be used if at all possible.
I believe we passed a critical inflection point just in the past
twelve months where the number of viruses, Trojans, and
worms introduced every day (some vendors report evaluating
over 200,000 pieces of code every 24 hours), has exceeded the
ability of anti-virus vendors to cost effectively protect end points.
Whitelisting solutions from Bit9, CoreTrace, Lumension, and Sa-
vant Protection and others are becoming mature enough to war-
rant deployment. In particular, as new platforms running on new
operating systems (Android, iOS, OSX, Linux) are deployed it is
highly recommended that you start with whitelisting rather than
repeat the morass of virus update infrastructure needed to deal
with Windows platforms.
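The core idea behind whitelisting can be pictured in a few lines, although this bare-bones sketch is not how any of the named products work internally; the allowlist hash and file path are placeholders. The binary's cryptographic hash is computed and execution is allowed only if the hash appears on the approved list:

# Bare-bones application allowlisting check by SHA-256 hash.
import hashlib

APPROVED_SHA256 = {
    # placeholder hash of an approved binary
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path):
    return sha256_of(path) in APPROVED_SHA256

print(may_execute("/usr/local/bin/example-tool"))  # placeholder path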
Secure applications assume the user is hostile
This is where authentication and authorization come in. One of the best deterrents of malicious behavior is the end user's awareness that their actions are associated with them (strong authentication) and logged (behavior monitoring). Many online services have failed to protect themselves from their customers. This applies to internal file sharing and community services as well.
The identity and access management space is growing quick-
ly to accommodate the plethora of new applications and deliv-
ery models. Access management should belong to a separate
group within IT operations.
Countering targeted attacks
New threats and new measures to counter them call for a re-
organization of IT security teams so that they can focus on de-
fending the organization from targeted attacks.
It is only ten years since most enterprises established sepa-
rate security teams to address vulnerabilities and deploy and
maintain patches and virus signature updates as well as con-
figure and maintain firewalls. To ensure that policies were cre-
ated and enforced most organizations also created the position
of Chief Information Security Officer (CISO) who enacted those
policies and became responsible for ensuring that the organi-
zation was in compliance with standards and regulations. The
rise of targeted attacks must be met by similar organizational
enhancements.
The terminology and titles are not important but the roles and
responsibilities described here are required to mount an effec-
tive cyber defense.
Countering targeted attacks calls for new measures. One of
those measures is creation of specialized teams that are not
bogged down in the day to day tasks of blocking viruses and
cleaning up machines. Here is my proposal for such an organi-
zation.
Team Lead: Cyber Defense Commander
The title may evoke a too martial image. Perhaps cyber de-
fense team lead, or director of cyber defense, will be a better fit.
But the idea of one-throat-to-choke in establishing a leadership
role is an effective way to motivate a team and its leadership
with the seriousness of its task. They must be instilled with the
idea that they are targeted, under attack daily, and engaged
in a battle to protect the organization from a malicious adver-
sary. The cyber defense team replaces the traditional computer
emergency response team (CERT) and will probably incorpo-
rate most of the same people.
The cyber defense commander is responsible for establish-
ing the cyber defense team, assigning and directing roles, mak-
ing sure the correct tools and defenses are deployed, putting
in place controls and audit processes, and reporting to upper
management on the results of those processes, and audits. The
cyber defense commander would also be the primary point of
contact for communicating to law enforcement and intelligence
agencies when the inevitable situation arises that requires out-
side help or communication.
A large organization with divisions spread around the globe
or separate large business units may well have cyber defense
teams deployed in each division with their own leaders who
report up to the cyber defense commander. (Call them lieuten-
ants if you must but I am not going to take the military command
structure that far.)
The cyber defense team should have three primary roles: an
outward looking role, an operational role, and an inward looking
role. Each of those roles is described next.
Cyber defense analysts are the intelligence gatherers. They study the threatscape with an eye towards emerging threats to the organization. Most organizations assume that, because they have so many people in IT security, someone is looking out for the latest attack methodologies or tools, and even keeping tabs on the various groups that engage in cyber attacks. Unfortunately, the operational aspects of IT security are too consuming to allow this type of outward looking focus.
The second role within the cyber defense team is the op-
erational role. Members of the cyber defense operations team
must:
Select and deploy network and host based tools to monitor
activity, alert on unusual activity, block attacks, and assist
in removing infections that have made it through all of the
cyber defenses.
Interact with the rest of IT operations to ensure that infec-
tions are quickly snuffed out and cleaned up.
Engage in forensics activities to perform post mortems on
successful attacks, gather evidence, and improve future
operations.
The members of the internal cyber defense team supplement
the rest of IT operations. They are not responsible for the daily
updating of servers and desktops or the distribution of AV sig-
natures or maintaining firewalls. Their job is to discover and
mitigate attacks as they occur.
The third component of the cyber defense group is the Red
Team. They look inward. They scan the network for holes in the
defenses and new vulnerabilities. They engage in attack and
penetration exercises to test defenses. They evaluate new IT
projects to ensure that authentication, authorization, and de-
fenses are included in the initial design all the way through to
deployment.
The organization and duties of the Cyber Defense Team arise
from the new threat of targeted attacks. There is a fundamental difference between defending against random attacks from viruses, worms, and botnets and defending against targeted attacks. When viruses and worms are written to specifically infect an enterprise's systems and gain control of internal processes, communications, and data, traditional tools are ineffective and traditional organizations are at a loss. By assigning responsibility to a core team of cyber defense specialists, the enterprise can begin to address its vulnerability to targeted attacks.
Conclusion
Good security is simple security. Applying these three rules will
help any organization establish a more secure operating envi-
ronment. It will also be easier to assign responsibility and avoid cross-department finger pointing. Incorporating a cyber defense team will offload the day-to-day operations to specialists who can react quickly to incursions and build up an organization's defenses.
Richard Stiennon
Chief Research Analyst
IT-Harvest
March 24, 2011
MANAGER CENTER
Security Challenges Facing Enterprises in 2011
With the last mince pie eaten, and the frolics of New Year's Eve a distant memory, it's time to get back to business. As 2011 stretches out before us, Amit Klein, CTO for Trusteer, has given his crystal ball a good firm rub and has the following predictions of what's hot, and what's not, in the coming twelve months:
At the end of 2010, for those astute enough to look for them,
there were warning signs of how threats are developing
and what the weapons of choice are shaping up to be. Top
of my list are:
Mobile phone security
The more cynical among you may argue that there have been issues with mobile phones introducing insecurities into the enterprise for many years, but this is not what's bothering me - it's the increase in mobile phone malware that I'm concerned about.
I believe it is a matter of time before traditional PC malware, such as Zeus, makes the leap to smartphones, as this is a growing area offering increased opportunities to criminals. While some may argue that malware is already targeting mobiles, which is true, a PC is still involved in the infection process, but I think this is set to change.
First, let's look at why. An individual's mobile phone is increasingly being used as a means of verifying a user's identity, most prominently by financial institutions. Banks will send an SMS to the registered mobile phone containing a code which the individual then uses in an online verification process when completing transactions. However, what we have already seen happen is malware on the PC intercepting the registration process and diverting subsequent SMS messages to a phone controlled by criminals, who are then free to make what appear to be legitimate transactions.
I believe over the coming year we will see smartphone verification increase, combined with mobile banking apps becoming more widely available and adopted. At the same time we will also see the various disparate mobile operating systems begin to converge towards a more open platform. These three alignments will attract the criminals' interest; they will use the experience they have gained enhancing their PC malware to adapt it to directly infect smartphones, in an effort to follow and intercept the money trail.
To overcome this threat users will need to develop the same hygiene for their mobile phones as they do for their PCs. Links and attachments sent to their phones should be treated with caution, as many already do with PCs and emails. Organisations wishing to rely on mobile technology need to educate their stakeholders to the risks, laying out exactly how they will - and, more importantly, won't - contact them, and what they will and won't ask them to do. It's not just criminals who can learn from the PC environment; organisations can too. Ultimately the weak link will still be the browser, so organisations need to protect themselves, and their users, as they would in the desktop world.
The Blurred Perimeter
While ten years ago the only devices that would hook up to the enterprise were corporate owned, this is no longer true and, it is fair to say, hasn't been for a long time. Employees seeking a more flex-
ible working environment use their own personal devices to link
up, often utilising the corporate VPN. External partners are also
granted access to the system to complete tasks and collaborate
on projects. In fact, some organisations are considering opening
up the virtual doors to allow customers to link directly into systems.
All of this means the perimeter line of defence has become blurred
as machines outside of the enterprise are embraced into the en-
terprise via the VPN.
Although this may have been happening for a number of years,
attacks have usually been initiated by individuals hacking into the
system. However, this threat is now evolving with malware that re-
sides inside the browser that sniffs out and modifies the traffic into
the intranet. In fact, I have recently seen Zeus malware specifically
designed to target enterprises by capturing the credentials from
VPN gateways. Criminal gangs can then use these credentials to
unlock the door and gain unrestricted access to all areas - CRM
systems, financial accounts, and anything else they can translate
into monetary gain.
To protect themselves, organisations need to view any user connecting through a VPN as a potential malware carrier. While restricting users' access to more sensitive parts of the enterprise is one solution, it might not always be feasible. It's a fine balancing act between granting users access and the risks this access poses - I don't claim to have all the answers!
Financial Malware
Yes, Zeus, and other malware like it, existed before 2011, but what I believe will be significant in the next 12 months is how it will continue to evolve and include more operating systems and browsers. Criminals have invested so much time and money in the malware, and it has proved so resilient, that I can't see it being replaced any time soon. Instead, its attack methods will become increasingly sophisticated, with tweaks made to the way it surgically injects into banks' web pages. In fact, I predict it will be the leading platform for financial fraud in 2011.
What can we do about that? Well, fight fire with fire is my view.
While we continue to make improvements in the ability of organisations, such as banks, to detect Zeus on the server side, I feel an area that needs to be improved is the procedures within the bank to use the intelligence these solutions provide to identify, track and prevent fraud in real time. While today it might be impossible to shut down Zeus command servers, we should still strive for law enforcement collaboration to bring these criminals to justice when we finally create the ideal world.
The Cloud
No 2011 prediction would be complete without a look to the cloud.
However, I believe there is a degree of hype surrounding insecurities in the cloud which, I attest, are based purely on speculation.
In fact, in my humble opinion, I think it can be as secure as regular hardware applications. That said, I wouldn't recommend moving every application you use to the cloud, but those that require scalability could certainly benefit.
It just requires a degree of sensibility and a secure approach - after all, you wouldn't step out into the road without looking both ways first. If you wouldn't do it on the desktop, don't do it in the cloud.
Platform Diversification
Whereas consumers, at present, use a PC to connect and complete transactions online, I think this is set to change dramatically this year. With smartphone apps progressively more sophisticated and tablets in every smart briefcase this might seem an obvious statement, but I envisage this will evolve even further to include computer enabled devices as customisable options for new cars, a service available as part of in-flight entertainment and even touch screens at bus stops - anytime, anywhere, anyhow will be the mantra of tomorrow. This will allow the market to develop so that the services we all use in our everyday lives, like online banking, will be offered over many different channels. The technology exists; it's just waiting for the investment to develop and launch it.
While it is true that the threat posed to the enterprise from any particular device may be trivial, the more you open up your services the more threats you face, and it's just going to get more complicated to prevent them all.
While we may not see the full fruition of this diversity in 2011 - perhaps this decade is a more realistic timeframe - organisations still need to start thinking ahead to make sure what they do today prepares them for tomorrow.
Consumerisation of IT
Again, the sceptics among you may argue that this has been around for a while now and is nothing new; in my view it's the type and capability of the devices that is changing and shouldn't be ignored.
As well as the usual array of electronic devices with serious memory capacity that are wrapped up in shiny paper waiting to come into work and steal information, users may have a new weapon in the bag. Wireless access points present a real danger, with users choosing to plug them in in an effort to make their life a little bit easier! Consumers have been using them at home to link up their printers, PCs, TVs and even games consoles, so it's only natural that, blissfully unaware of the risks these transmitters pose, they may try to create the same flexibility in the workplace.
One way of overcoming this problem is to control the technology users are allowed to utilise in the workplace, although telling them not to do something won't necessarily stop them. The best form of protection is to fully consider the damage they could inflict and develop a strategy to negate the risk - I didn't say it was all going to be easy either!
Browser Threat to Mainframes
It's true that this isn't an issue for every organisation, but the scale of the problem and its potential effect on us all makes it worth examining, hence its inclusion in my list of security threats to watch.
Traditional organisations, such as banks, insurance companies and government organisations - particularly healthcare - have relied on ancient legacy mainframe systems and applications to store and manipulate their data records. Green screens were used to communicate with the mainframe, display information and update records. However, these entities have started to migrate their interfaces to modern web based terminals. No longer hidden from view, browser infecting malware, such as Zeus and SpyEye, has quickly sniffed out this fresh blood and gained an insight into areas of the enterprise that it couldn't hook into and target before, furthering fraudulent activity.
Organisations who either find themselves in this predicament, or are considering taking the leap to web based services, need to focus their attention on securing the browser, which is where these attacks are occurring. Some banks have already heeded the warning and are becoming increasingly proficient at doing so - it is time for others to learn the lesson and lock down these databases before we all get hurt.
Social Networks
Finally, where would we be without a quick glance at the damage social networks pose? Again, this is nothing new, but it is growing at an alarming rate and almost impossible to mitigate against. My feeling on this subject is that perhaps we all need to start thinking differently about how we tackle this growing phenomenon.
Employees should be educated about the dangers that displaying their personal lives on social network sites poses, both to them personally and to the organisation. For example, they may be befriended by someone who is purely interested in where they are employed. By monitoring what your employees are doing and saying, and even who they are friends with, you can work together to limit the risk their online behaviour poses - you might even prevent them becoming victims of identity theft.
While some of these threats might not be new per se, I think I've provided original food for thought on all of them. 2011 is a promising year, with talk of a recovery, albeit a fragile one in its very early days. What I'd like to see is it being a secure one, where we start to fight back against the malware controllers who, at the moment, I feel have the upper hand in too many battles. Let's work together to block them out and take them down. Be safe out there.
About Trusteer
Trusteer is the world's leading provider of Secure Web Access services. The company offers a range of services that detect, block and remove attacks launched directly against endpoints such as Man in the Browser, Man in the Middle and Phishing. Trusteer services are being used by leading financial organizations and enterprises in North America and Europe, and by tens of millions of their employees and customers to secure web access from mobile devices, tablets and computers to sensitive applications such as webmail, online payment, and online banking. HSBC, Santander, The Royal Bank of Scotland, SunTrust, Fifth Third, ING DIRECT, and BMO Financial Group are just a few of the companies using Trusteer's technology. Trusteer is a privately held corporation led by former executives from RSA Security, Imperva, and Juniper. Follow us on www.Twitter.com/Trusteer. For more information about our services, please visit www.trusteer.com.
Amit Klein
Noted malware researcher and CTO of web browser security specialist Trusteer, Amit Klein is an expert on Internet and endpoint security technologies. Prior to Trusteer he was Chief Scientist at Cyota, Inc. (now part of RSA Security), a leading provider of layered authentication solutions. In this role, Mr. Klein researched technologies that prevent online fraud, phishing, and pharming. Previously, he was director of security and research at application security vendor Sanctum, Inc. (now Watchfire), where he was responsible for the security architecture of all Sanctum products. Mr. Klein spent almost 7 years in the Israeli Army as a research officer and project manager. He has published over two dozen articles, papers and technical notes on the topic of Internet security. Mr. Klein is a graduate of the prestigious Talpiot programme of the Israeli Army. He holds a B.Sc. (cum laude) in Mathematics and Physics from the Hebrew University (Jerusalem).
TECH CORNER
Top 8 Firewall Capabilities for Effective Application Control
IT administrators try to deliver critical corporate solutions efficiently, but also have to deal with employees using wasteful and often dangerous applications. In order to increase network and user productivity, IT needs to prioritize critical application bandwidth and throttle or completely block social media and gaming applications.
Unfortunately, the stateful packet inspection firewalls used by many organizations just don't cut it. They rely on ports
and protocols, and are not able to identify cloud and SaaS applications, along with many of the Web 2.0 services that rely on the browser for the delivery of applications. Therefore, they can't weed out the good from the bad, productive from unproductive. As a result, IT is left with a binary approach to traffic control - block or allow. Should you block ports or entire protocols just to block a few undesirable applications? Or do you open the floodgates and allow access to any application that might be useful, even at the risk of sapping productivity and exposing your organization to threats? Neither is a satisfactory choice.
Today's leading companies avoid this dilemma with a Next-
Generation Firewall that can deliver comprehensive intelligence,
control, identification and visualization of all the applications on
their networks. This is effective because Next-Generation Fire-
walls can tightly integrate application control with other intrusion
prevention and malware protection features.
To manage applications effectively, your Next-Generation
Firewall must meet each of the following criteria:
Scan all application traffic
First, your Next-Generation Firewall needs the capability to scan
all traffic, including network layer and application layer traffic.
This requires going beyond simple stateful inspection to con-
duct deep packet inspection, regardless of port and protocol.
Additionally, the firewall's deep packet inspection engine should
be updated dynamically to identify the latest intrusion threats,
malware attacks, spyware, and Web sites that could affect the
security of your network. Most importantly, the firewall should be
able to block those security threats without introducing latency
and degrading the network to unusable levels.
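To make the distinction concrete, here is a minimal sketch of the idea behind deep packet inspection, written in Python purely for illustration; the signature table, the application names and the 64-byte inspection window are my own simplifying assumptions, not any vendor's engine.

# Illustrative sketch only: port-agnostic application identification by
# payload signature, the core idea behind deep packet inspection.
# The signature table and application names are hypothetical examples.
import re

APP_SIGNATURES = {
    "http":       re.compile(rb"^(GET|POST|PUT|HEAD) \S+ HTTP/1\.[01]"),
    "tls":        re.compile(rb"^\x16\x03[\x00-\x03]"),   # TLS handshake record
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "ssh":        re.compile(rb"^SSH-2\.0-"),
}

def identify_application(payload: bytes) -> str:
    """Classify a flow by inspecting its first payload bytes, ignoring ports."""
    for app, signature in APP_SIGNATURES.items():
        if signature.search(payload[:64]):
            return app
    return "unknown"

# A BitTorrent handshake is recognized even if it is tunnelled over port 80,
# which a port-based stateful firewall would simply report as web traffic.
print(identify_application(b"\x13BitTorrent protocol" + b"\x00" * 8))        # bittorrent
print(identify_application(b"GET /index.html HTTP/1.1\r\nHost: x\r\n"))      # http

The point of the example is that the application is recognized by what it sends rather than by the port it happens to use.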
Fingerprint and show applications coming through the firewall
To allow you to create and adjust application policy controls
based upon critical observation, your Next-Generation Firewall
must let you monitor and visualize all your network application
traffic. To do this effectively, the device needs to fingerprint the
specific applications running on your network, and understand
for whom the traffic is destined. It needs to present this infor-
mation in an intuitive graphical form, allowing you to observe
real-time application activity, aggregate trend reporting on ap-
plications, ingress and egress bandwidth, Web sites visited, and
all other user activity.
Create granular application control policy
Your Next-Generation Firewall must let you create application-related policies easily and flexibly, based on contextual criteria, such as by user, group, application, or time of day. For example, you might grant access to a particular application based upon the business need of the person in the organization using it. Someone in your marketing group might have legitimate reasons to access Twitter and Facebook for social media campaigns, while someone in your accounting group might not. In addition, for effective and easy management, a policy should be centralized, unified, and object-based. Next-Generation Firewalls with application intelligence and control allow you to create granular, application-based firewall policy, helping you to regain full control over application traffic by managing bandwidth. This increases productivity, prevents data leakage and protects against application-borne malware.
Manage application bandwidth
To help you manage application bandwidth, your Next-Gen-
eration Firewall must let you prioritize bandwidth allocated to
essential and latency-sensitive applications (e.g., Salesforce.
com, LiveMeeting, or VoIP). At the same time, it needs to let
you limit bandwidth allocated to non-essential applications (e.g.,
YouTube, MySpace or Facebook). In addition, your firewall
should help you increase productivity further by controlling ac-
cess to Web-based application sites (e.g., ESPN). At the least,
it should allow you to limit access to specific feature sets within
applications. For example, you could allow access to Facebook,
but block access to Farmville and other gaming features.
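The sketch below illustrates the kind of bandwidth table this implies, again as a generic example rather than any specific product's configuration; the link size, traffic classes and percentages are invented for the illustration.

# Illustrative bandwidth-management table: guaranteed vs. maximum share of
# the link per application class. All figures are hypothetical examples.
LINK_MBPS = 100

BANDWIDTH_POLICY = {
    # class              (guaranteed %, max %)
    "voip":              (20, 40),   # latency-sensitive, protected
    "salesforce":        (15, 60),   # business-critical SaaS
    "streaming-video":   (0, 10),    # allowed but throttled
    "social-gaming":     (0, 0),     # blocked outright (e.g. in-app games)
}

def shaping_for(app_class: str) -> tuple[float, float]:
    """Return (guaranteed_mbps, max_mbps) for a traffic class; default best effort."""
    guaranteed_pct, max_pct = BANDWIDTH_POLICY.get(app_class, (0, 5))
    return LINK_MBPS * guaranteed_pct / 100, LINK_MBPS * max_pct / 100

print(shaping_for("voip"))             # (20.0, 40.0)
print(shaping_for("streaming-video"))  # (0.0, 10.0)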
Block application-borne malware
Malware no longer requires user intervention to run. Distribu-
tion of malware has evolved from simply sending virus-laden
executables and attacking systems on local networks to exploit-
ing documents, files and browser features traditionally consid-
ered safe. For example, Adobe PDF files and Flash are now
prime targets for exploits due to their ubiquity and the invis-
ibility of attacks embedded inside of them. These threats come
into networks through various channels, and can only be pre-
vented by devices that support dynamic security services and
that continuously receive malware intelligence from dedicated
research labs.
Control distributed applications
Once you have upgraded to a Next-Generation Firewall at your
central gateway, your next logical phase is to apply application
control and bandwidth management policy at any distributed
branch sites. Because today's branch networks connect directly
to the Internet, you need to be equally vigilant in securing ap-
plication traffic to and from branch sites. Managing bandwidth is
also crucial to optimizing distributed network performance and
remote employee productivity. Application controls enable you
to set policy based upon any unique geographic or site-specific
needs (e.g., a retail branch location requiring prioritized band-
width for a cloud-based transactional application). The same
granular controls also ease administration by enabling you to
push standardized policy for object-based roles and groups
across distributed sites from a centralized console. Moreover,
robust visualization capabilities are critical to widely distributed
network security, as they let you monitor and track usage, traffic
and performance trends, and adjust policy accordingly across
the globe.
Deliver optimal performance
Finally, none of this matters if your firewall doesn't have the
horsepower to get the job done. Your firewall needs the per-
formance capability to control applications fully, without bogging
down your network throughput. Performance technology (e.g.,
multi-core architecture and non-buffering reassembly-free scan-
ning) can dramatically increase the viability of your application
intelligence and control solution.
In summary, your firewall needs to keep up with the times.
It must fully control the application layer (not only the network
layer), and provide the capability to:
Scan all application traffic
Fingerprint and show applications coming through the firewall
Create granular application control policy
Manage application bandwidth
Block application-borne malware
Control distributed applications
Deliver optimal performance
Application intelligence and control, along with real-time visuali-
zation, should be integral components of your Next-Generation
Firewall. They help manage both business and non-business
applications, and help increase network and user productivity.
Patrick Sweeney
VP of Product Management
Patrick Sweeney has over 18 years' experience in high tech product marketing, product management, corporate marketing and sales development. Currently, Mr. Sweeney is SonicWALL's Vice President of the Network Security Business Unit. Previous positions include Vice President of Worldwide Marketing, Minerva Networks, Senior Manager of Product Marketing & Solutions Marketing for Silicon Graphics Inc, Director of Worldwide Sales & Marketing for Articulate Systems, and Senior Product Line Manager for Apple Computer. Mr. Sweeney holds an MBA from Santa Clara University, CA.
LET'S TALK
You As a Password
Can a parallel be drawn between freedom to cross the Berlin Wall
and the freedom to post on a Facebook wall? Can such borderless
liberty be defined in a 140 character Twitter feed or does it require
an updated Declaration of Independence? Theres no question the prolifera-
tion of technology around the globe has opened new portals for expression
to those otherwise silenced by their governments and as a means toward
equality for the oppressed. Call it Freedom 2.0.
Yet while this expressive new flame flourishes, our collective failure to protect such channels threatens both the integrity and usability of online forums. Last year's hacking of Google's source code (the secret instruction manual, if you will, of the search engine's inner workings) should serve as a warning sign to the international community of cyberspace's next stage of growth. First built as a tool for convenience, the Internet is simply not equipped to ward off today's sophisticated attacks. If this modern marvel is to continue to thrive and serve as a beacon for all things innovative, security must now be our top priority.
The Google attack was not an isolated event. As the global recession
drags on, sensitive information only becomes more valuable and more
vulnerable. Former employees, upset over a recent layoff in these hard
economic times, have insider information that can be used to access com-
pany networks and obtain corporate data. Depending on how big their axe
to grind is, it's now all too easy for the disgruntled former staffers to plaster sensitive intelligence all over the cyberworld.
Corporations are hardly the sole victims. Consumer records can be left
uncovered in the process of a breach, and the virtual identities of millions are
left for the taking. In 2008 alone, 285 million consumer records - or nearly one per American - were compromised.
Is the problem intractable? It is if we continue the same, static approach
to fixing it. For centuries, humans secured data primarily through two meth-
ods of identification: what you have (house keys, car keys, key fobs) and
what you know (the combination to a lock, your Social Security number, your
password). These methods typically work in limited and controlled environ-
ments. However, with the proliferation of the Internet and the abundance
of data sharing sites, such identification tools are hardly secure. Today, the
Internet has 1.7 billion users, a number that is increasing at the rate of nearly
1 million per day. Facebook itself would boast the third largest population on
Earth if it were an autonomous nation. Ashton Kutcher tells his 4.25 million
Twitter followers what he eats every day for breakfast. Our world has moved
into a virtual dimension, and as such, security requires an upgrade.
To do so, we must examine what makes an item secure through an en-
tirely different lens, moving from simple identification to complete authenti-
cation. In addition to what you have and know, network access points must
be able to authenticate who you are. Your palm, your face, and your typing
pattern are all unique characteristics that cannot be replicated, lost or stolen
as more traditional methods of identification increasingly can. Instead, you
become the key to your data. You become the password. While it may sur-
prise some, these advances are no longer the subject of sci-fi movies and
are ready to be applied today. The question is how do we use them?
Must retinal scanners be immediately installed on every computer? Prob-
ably not. However, new ways to secure login portals should at least be con-
sidered. More importantly, we all should take the initiative to become better
educated on the state of cybersecurity. As we gain more exposure to cyber-
space resources through e-mail, Facebook and online banking, for example,
it becomes easier to trust the security and privacy of such applications. It is
critical we resist this, however, remaining continually aware the information
could be intercepted somewhere between send and receive.
Further, this new paradigm of security - where users become their passwords - is only effective if the concept is ingrained system wide. Google is a notable example of the many global businesses continuously under cyber-attack. As the Internet matures into the primary forum to exchange sensitive information, we will probably enter into a cyber-arms race of sorts - a race not only between private-sector competitors, but also foreign governments and agents (Al Qaeda) who would seek to collapse Freedom 2.0. We didn't choose this ground, but our banks, critical infrastructures, and government agencies now line the battlefield of an iGen Cold War. But unfortunately, prevention is simply not working well enough. We are losing the war.
One approach towards solving the dilemma could be the use of smarter security technologies that focus on the root of the issue, eradicating the ability to use stolen information. For example, AuthenWare is
a software-based, strong security solution that protects against identi-
ty theft, web fraud and other system intrusions. It incorporates a break-
through, multi-dimensional approach towards validating user identity through
a series of biometric security algorithms that record and measure how
a person uniquely types their credentials.
This means that AuthenWare can distinguish one person from another, ensuring that rightful users are granted access to the appropriate internal, remote or web application, while stopping thieves by rendering stolen credentials completely useless. And it does so without the need for expensive hardware, tokens or certificates - in fact, the user doesn't even need to be aware that it is there at all. The security algorithms are intelligent, so that they learn and adapt to nuances in the user's typing behavior - even those caused by physical injury, medication, stress or fatigue.
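The general technique behind this sort of keystroke biometrics can be sketched in a few lines; the following is a deliberately simplified illustration of the concept, with invented timings and thresholds, and is not a description of AuthenWare's actual algorithm.

# Simplified keystroke-dynamics sketch: compare the inter-key timing of a login
# attempt against an enrolled profile. This is a generic illustration, not
# AuthenWare's algorithm; thresholds and data are invented.
from statistics import mean

def intervals(key_times_ms: list[float]) -> list[float]:
    """Inter-keystroke intervals from absolute key-press timestamps (ms)."""
    return [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]

def enroll(samples: list[list[float]]) -> list[float]:
    """Average the per-position intervals across several typing samples."""
    return [mean(column) for column in zip(*map(intervals, samples))]

def matches(profile: list[float], attempt: list[float], tolerance: float = 0.35) -> bool:
    """Accept if the attempt's rhythm deviates from the profile by less than
    `tolerance` on average (relative difference per interval)."""
    observed = intervals(attempt)
    deviations = [abs(o - p) / p for o, p in zip(observed, profile)]
    return mean(deviations) < tolerance

# Enrollment: three samples of the same password typed by the genuine user.
profile = enroll([[0, 120, 260, 420], [0, 130, 250, 410], [0, 125, 255, 430]])
print(matches(profile, [0, 118, 262, 425]))   # familiar rhythm       -> True
print(matches(profile, [0, 300, 340, 900]))   # very different rhythm -> False

Even when the password itself is correct, a rhythm that deviates too far from the enrolled profile is rejected, which is what renders stolen credentials useless on their own.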
We must face opponents as one united front with both private and
public sectors aligned. The online universe is successfully tearing down
the walls that have, until now, separated and confined vast populations of
the globe. Yet in its openness, danger lies. Too much money, too much
proprietary information, and indeed, too many freedoms hinge on too little
security. The Internet is moving forward. Will we?
Tom Helou
President & COO
Tom Helou is the president and Chief Operations Officer of AuthenWare Corporation, responsible for global field operations including corporate strategy, consulting, marketing, sales, alliances, and customer programs.
Prior to joining AuthenWare, Mr. Helou was with Fuego, as VP of International Sales, and upon its acquisition by BEA Systems in 2006, was placed in charge of BEA's sales for the Business Interaction Division (BID), the fastest growing unit of BEA for 2007 and 2008.
Prior to Fuego, Mr. Helou was the VP of Latin America for Business Objects, and in 2002, he and his team achieved the highest sales growth in the company's history. Mr. Helou also held managerial positions working for PeopleSoft and MRO Software where he was responsible for the development of International Markets for each business.
Mr. Helou began his professional career at Intersoft in Argentina, working close-
ly with Felix Racca. Mr. Helou has a degree in marketing as well as in agronomical
engineering, and he continues to participate in graduate courses in marketing,
human resources, management and business administration, both as an atten-
dee and speaker.