
INFORMATION SECURITY

BASICS:
FUNDAMENTAL READING FOR INFOSEC
INCLUDING THE CISSP, CISM, AND CCNA-SECURITY CERTIFICATION EXAMS
RON MCFARLAND, PH.D.

http://www.wrinkledbrain.net

Copyright 2014-2017 by Ron McFarland Ph.D.

WHY I WROTE THIS BOOK


I wrote this book because many of my readers and students have requested preliminary information about information security as it pertains to the CISSP (Certified Information Systems Security Professional), the CISM (Certified Information Security Manager), the CCNA-Security (Cisco Certified Network Associate Security), or other information security exams. Other readers and learners who were either new to or less familiar with the information security world have asked for a fairly easy read that covers many of the information security topics that an information technologist or an information systems professional should know.
This book is written in a broad-brush manner. That is, it is my intention to introduce many of the topics covered in the CISSP, CISM, and CCNA-Security realms. Some readers might be planning to take one of these certification exams; others simply want a quick read on many security topics so as to become more familiar with the overall field. If this describes you, this eBook is for you.

As a further note, while this eBook does cover many (not all) of the CISSP, CISM, and CCNA-Security topics and also discusses many Information Security (InfoSec) topics in general, more detailed books on this subject are available (at an appropriately higher cost). However, in the next several months, I plan on publishing additional InfoSec, Networking, and Ethical Hacking eBooks that can supplement your studies.

WHY YOU SHOULD READ THIS BOOK


This book will help you understand many of the preliminary topics involved in the very rich and ever-changing world of Information Security (InfoSec). I wrote this eBook to assist you, the reader, in understanding Information Security from the ground up. The topics in this eBook cover many of those found in the CISSP (Certified Information Systems Security Professional), the CISM (Certified Information Security Manager), the CCNA-Security (Cisco Certified Network Associate Security), and other information security exams.
If you are new to the information security world or wish to brush up on many of the topics in the InfoSec world, this easy-to-read eBook is for you. It was my intention to make this an easy and enjoyable read as we discuss many information security topics while you, perhaps, prepare for an interview or for the more extensive studying required by many of the security-related certifications.

TABLE OF CONTENTS
Why I Wrote This Book
Why You Should Read This Book
Table of Contents
Introduction
Chapter 1. Introduction to Encryption
Chapter 2. Introduction to Symmetric Key Algorithms
Chapter 3. Malware
Chapter 4. Firewalls
Chapter 5. Denial of Service Attacks
Chapter 6. Cryptographic Tools
Chapter 7. Wireless Security
Chapter 8. Operating System Security
Chapter 9. Database Security
Chapter 10. Computer Auditing
Summary
References
About The Author
Other Books By Ron McFarland

INTRODUCTION
Information Security is a hot topic and, for the professional information technologist, is an important set of skills to be proficient in. Further and more importantly, since we are in this ever-expanding and ever-growing field of information technology and information systems, having a certification or two in the information security area can be a boost to our careers. In a recent Internet search from a reputable source (Global Knowledge: http://www.globalknowledge.com/training/generic.asp?pageid=3430), the CISSP (Certified Information Systems Security Professional) offered by (ISC)² was ranked as the second top-paying industry certification (followed by the Project Management Professional, PMP), which further emphasizes the importance of the information security field in the technology industry. Likewise, other security-related certifications are just as important in other aspects of the information technology and information security fields.

This eBook starts out with a discussion of encryption and a discussion of various malware, as one focus of our look at securing systems. Also, we'll discuss various types of system attacks as well as several methods to prevent them. Further, we'll discuss a few relevant hardware- and software-related items like firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), honeypots, and methods of control for our systems. We'll also discuss database and programming-related security topics like SQL injection and good programming practice, along with the relevance of auditing and secure configuration as it pertains to information systems in general, with a security focus in mind.
And, as a reminder, if any or all of these topics are not familiar, or if you feel a bit rusty with several (or all) of these concepts, again, this eBook is for you. We'll take a measured approach to these topics (and a few more) as we go through our discussion on information security. Hang on for an interesting ride!

CHAPTER 1. INTRODUCTION TO
ENCRYPTION
I know that talking about encryption isn't a sexy topic. Try talking about encryption at your next social event and count the number of people who roll their eyes or who immediately change the topic! However, if you're reading this eBook, I'm going to assume that you're one of three types of people: you're a geek (like me), you're a wanna-be geek (as I once was), or a student/learner interested in information security (I suppose I fit this category early on as well). We'll start our discussion with encryption since it is the basis of much of the work done in information security.
In general, encryption (of a few flavors) of digitally stored data (also known as static data) and transmitted data (also known as data in motion) has been important since computers and networks were first used. Recently, the subject of information security and encryption has become a topic of high public interest following the release of documents by Edward Snowden, which detailed the various eavesdropping programs by the NSA (National Security Agency) and other intelligence agencies that have been collecting organizational and personal data for a number of years (the NSA can be found at: http://www.nsa.gov/).
First, let's generally discuss what encryption is and why we do encryption. Many already know, either very generally or more specifically, what encryption is, but let's go through the specifics so as not to leave any stone unturned. Next, let's talk about what a message is, in terms of information systems or information technology. This leads into a discussion about what a key is, especially when we want to encrypt a message. As a result of encryption, we'll derive a cipher, which is essentially a scrambled message. After we get some of these basic concepts handled, we'll talk about symmetric key encryption and, later, about asymmetric encryption so that we can get these important terms understood. We'll discuss the application of both symmetric and asymmetric encryption throughout this book, so let's get started with the details.


WHAT IS ENCRYPTION
Before we discuss encryption and what it is in terms of information technology, let's chat a bit (pun intended, I suppose) about where encryption comes from. Encryption comes from the field of cryptography. That's another one of those terms that you can share at your next party! Cryptography is the field of "secret writing" that has its origins in the ancient world. Julius Caesar, for example, would scramble the messages sent to his generals in case the messages were inadvertently intercepted. With scrambled writing, messages could not be interpreted by the enemy. There's an entire history of cryptography that you can, optionally, read up on. While there are textbooks galore about the history of cryptography, you can find a bit of brief (and interesting) reading about cryptography on one of the premier technology websites: Wikipedia!
Encryption is one aspect of cryptography that is relevant to our discussion in this book. Encryption is the process of encoding (scrambling) information so that unauthorized parties can't read it (other than, perhaps, the NSA!).
Encryption relies on an algorithm (a mathematical process). Sometimes this is referred to as an encryption scheme. The information or message, also known as the plaintext (since it is not encrypted), is run through the encryption algorithm, resulting in the unreadable (and scrambled) output, which is known as the cipher text.
We have a lot of terms here already. To recap, the clear text message (the plaintext) is used as input to an algorithm (essentially a program) that converts the readable text into cipher text (the scrambled output). That's it in a nutshell. So, if this is a new concept, think about it for a bit before we move on. We'll use this basic concept in encryption and will present a few of the many ways that encryption can be handled in an information system.
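To make the plaintext-in, cipher-text-out idea concrete, here is a minimal sketch in Python. It uses a toy Caesar-style letter shift, in the spirit of Caesar's scrambled messages mentioned above; the function names and the shift key of 3 are my own illustrative choices, and real systems use vetted algorithms (like AES, which we'll meet later), not letter shifts.

```python
# A toy "encryption algorithm": shift each letter by a numeric key.
# Illustration only -- real systems use vetted algorithms like AES.

def encrypt(plaintext: str, key: int) -> str:
    """Shift each letter forward by `key` positions (a Caesar cipher)."""
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

def decrypt(cipher_text: str, key: int) -> str:
    """Reverse the shift by applying the negative key."""
    return encrypt(cipher_text, -key)

plaintext = "ATTACK AT DAWN"
cipher_text = encrypt(plaintext, 3)
print(cipher_text)              # DWWDFN DW GDZQ
print(decrypt(cipher_text, 3))  # ATTACK AT DAWN
```

Notice that the same three ingredients from the recap appear here: the plaintext, the algorithm, and a key.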
By the way, when I mention an information system in this book, I'm referring to the entire information system: the end users, the networks, the file and database servers, and all elements in between. So, I'm using "Information Systems" in a very broad and generic sense. Whenever the conversation needs to be narrowed down to a specific detail, I'll provide more of an explanation so that you're clear about the context and scope of the discussion.

THE MESSAGE
To emphasize the message portion of our discussion on encryption a bit more, think of the message that a typical web page will form. If you're on Amazon ordering a new book (or an eBook), the screen may have quite a few fields on it: your name, address, city, state, zip code, and all the fields for the book information. When you press Enter, you send this information bundled in a large message (for our example, it could be as large as 5,000 bytes). This message will ultimately be sent to various networking components and sliced up into smaller packets to be delivered from the sending end (your side of the transaction) to the receiving side (Amazon, in our example). During this process, the message (the plaintext) not only will be sliced up into smaller pieces for delivery, but will be encrypted to provide you, the user, with security for your order. To scramble or encrypt a message, the encryption algorithm uses both the message and a unique key to scramble the plaintext message into a cipher text (scrambled) message. Since we've discussed what a message is, let's talk about the second necessary item that the encryption process needs: the key.

THE KEY
Whenever I think about keys, I think about the lock on my storage shed and all of the stuff that I want to keep secure behind the locked door. I know that I keep a lot of both needed and useless items in my storage shed, but that's for another discussion. Anyway, to secure and protect your data, just as you would secure your old furniture and miscellaneous items in a storage shed, you'd typically use a key and a lock. The analogy that we use for the encryption of data is the same. When we encrypt data (again, scramble data), the algorithm (program, process) that scrambles the data requires, of course, the data (the plaintext) and also a key that it uses to scramble the data with.
First, the key is a string of data separate and unique from the plaintext data. Again, the encryption program needs both the key and the data to scramble your message. So, now we know where the data comes from (in our example, the system got our data from the Amazon web page that we filled out).
The key is generally (note the "generally," since we'll talk about variations later) obtained from the user. The encryption system not only gets the data from the webpage that you've sent by hitting Enter, but it will also ask of the system, "What is the user's key that I'll use along with this message to encrypt?"
There are quite a few ways that keys are generated and stored. We'll cover several of the important ones in this book. But for now, just know that when you are entering data into (for example) the Amazon website, you will also have a key that is uniquely yours that is sent along with your message text to the encryption algorithm. Again, the encryption algorithm uses your plaintext along with your unique key to scramble (encrypt) the message so that it is securely sent over the network (the Internet) to the end process (the Amazon ordering site, in our example).
We'll discuss more of the details in subsequent sections and chapters of this book. There are a lot of variations in how encryption works on an Information System, but this general description will get you started with how the process generally works.
Now, let's talk about what happens after the system encrypts your message by using your key. The result is the cipher, discussed in the next section.

THE CIPHER
The result of sending your message and unique key to the encryption algorithm is the cipher. The cipher is literally a disguised way of writing. I don't know if you remember Cracker Jack candy (maybe I'm dating myself). Anyway, Cracker Jack came with a small toy, and when I was young, I would buy it mostly for the small toy found in a small package at the bottom of the box of caramel popcorn and peanuts. While the product is quite tasty, from my recollection, the cool toys were my main attraction. One of the toys that was included was an encryption ring. I'd use it to send notes to my brother that he could read (he also had an encryption ring). We sent notes back and forth so that my sister couldn't read them (not that my sister was a bad person at all; it was just that my brother and I were, well, a bit nerdy and thought we were pretty cool sending scrambled messages back and forth that only we could read).
Fast forward: what my brother and I were actually sending to each other were cipher text messages (that is, disguised messages). The message that we each wanted to send (the plaintext message) required us to use the ring (the key) in a particular way so that we could scramble the original message into a cipher message. Of course, on the other end, my brother needed to use his ring to decipher the message or, in InfoSec terms, he needed to decrypt the message.
By the way, have you thought about the origin of "encryption" or "decryption"? As a little tidbit of information, the root word of encryption and decryption is "crypt." A crypt is an underground vault. It is also a burial place. A crypt is, as such, a place where you store something that you want to keep buried. Interesting, right?
Let's talk about decryption, or moving what you wanted to keep safe in the crypt (encrypted) to a state where you can now read the message.

DECRYPTION
Decryption, as you've probably figured out by now, is converting the cipher text into readable form. My description of my brother and me using a ring to encrypt and decrypt messages that we hid from our sister holds with this definition of decryption. But there's a twist here. My brother and I had identical rings (other than the fact that I decided to paint mine). The ring that he used and the ring that I had were used to both encrypt and decrypt messages. If we're using the same key to both encrypt and decrypt a message, this is known as a symmetric key. Later, we'll discuss what an asymmetric key is, but first, let's discuss what a symmetric key is in greater detail and as it relates to data security.

SYMMETRIC KEY ENCRYPTION


First, let's clarify a few terms. The term "symmetric" essentially means "the same" when you are speaking about the general properties shared between two items. For example, if we were talking about the symmetry (the sameness) of two parts of an item, and these two parts were identical, we would say that a diagram (or graph) containing the two parts is symmetrical (both parts of the graph, the left side and the right side, look the same). If a reflection of an object (a building, perhaps) were perfectly symmetrical, the reflection and the building would be identical. In reality, it would be almost impossible to have a perfectly symmetrical building and reflection, since there are so many variables. Or, in the example of our decoding rings, my brother and I had the same rings, or symmetrical keys (same keys), to encode or decode messages. Keep in mind that symmetrical is equivalent to "the same."
Encryption is generally the scrambling of a message (or a string of data) so that the data can be hidden from another party. The scrambling of data is generally done by the sending party using a key that is applied to the message, with a resulting scrambled message as the output. Again, the scrambled output is commonly known as cipher text. How this applies to encryption is that both sides of a symmetric exchange share the same key.
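As a concrete sketch of symmetric encryption in practice, the snippet below uses the Fernet recipe from the third-party Python `cryptography` package (an assumption: that package must be installed, e.g. via `pip install cryptography`). The point to notice is that one and the same key both encrypts and decrypts, which is exactly what makes the scheme symmetric.

```python
# Symmetric encryption sketch: one shared key encrypts AND decrypts.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the single shared (symmetric) key
f = Fernet(key)

cipher_text = f.encrypt(b"Meet me at the usual place")
print(cipher_text)            # scrambled bytes, safe to transmit

plaintext = f.decrypt(cipher_text)  # the same key reverses the process
print(plaintext)              # b'Meet me at the usual place'
```

Both my brother's ring and mine, in the earlier story, play the role of that one `key` value here.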

ASYMMETRIC KEY ENCRYPTION


Now, this is where things get a bit more complex, on purpose! First, let's define "asymmetric" and then discuss some general features of asymmetric key encryption.
First, anything with an "a" in front of a term actually means "not." So, when people said I was an atypical kid, I really didn't know what they were saying when I was, for example, 8 years old. But what they were saying is that I was not typical. Hmmm... was that a compliment or... I'll leave it as is.
Anyway, asymmetric key encryption essentially means "not symmetrical." If my brother and I were using asymmetrical encryption (and the resulting decryption), that would mean that I'd have a different key (ring) than my brother. Our rings would be different. But while I could encrypt a message that I was sending to my brother (thus hiding the message from my sister), he could decrypt (unencrypt) it using a different key. How can that be?
So that we don't get stuck in the details of talking about physical items and asymmetric keys, let's switch to a bit of theory. Oh yes, I thought I'd get that cringe when I mentioned theory, but let's break down how asymmetric key encryption and decryption generally work. Again, take a special focus on asymmetric encryption in this section. Quite a lot of security (not all, mind you) in the computer and information security world is based on asymmetric encryption and decryption. This is very commonly known as Public Key Encryption or Public Key Cryptography (the "crypto" thing is back!).

PUBLIC KEY ENCRYPTION


As noted in the prior paragraph, we're switching into a bit of theory. But this theory is actually applied in networking security, database security, and other aspects of computer and information security at large. Note that asymmetric encryption and Public Key Encryption are the same thing.
What confused me at first when I heard the term Public Key Encryption was that I knew it meant asymmetric encryption. Asymmetric key encryption meant that the sender and the receiver had different keys. So why was it referred to as Public Key Encryption?
Well, the bottom line is that while we are still speaking of the sender and the receiver having different keys to encrypt or decrypt messages, the emphasis with the term (and the term only) is that the public key is the one of the two keys in asymmetric encryption that can actually be publicized. I can place my public key on Facebook, send it with each email, post it on the bathroom wall, anywhere. It will still keep an encrypted message secure even if everyone had the public key available; the asymmetric message will still be scrambled! To scramble or encrypt a message using an asymmetric algorithm, not only do you need the plaintext message, you also need both a public and a private key (think asymmetric keys) to encrypt and decrypt a message.

So, before we continue with more detail in the explanation of how this works, let's recap a few things. First, we have the clear text (plaintext) message that is encrypted. In asymmetric encryption, the two sides (the sender and receiver) have unique keys. One of the keys is a public key (which you can post on Facebook, or wherever) and the other key, a private key, you'll want to keep secret.
Now, let's add a bit more detail so that we can get down, as my grandpa used to say, to brass tacks, and look at how asymmetric key encryption (Public Key Encryption) is applied in the real world.
First, the public key and the private key are mathematically linked. Sometimes this is referred to as a key-pair. Generally, the public key portion is used to encrypt the message (plaintext) between sender and receiver. Each sender and receiver will also have a private key that they'll keep secret and will use to decipher/decrypt a message. That is, both sender and receiver will use the public key along with their unique individual private keys to encrypt and decrypt messages.
Before we get into more details about this, let's think of an analogy. If I go into Home Depot for paint, let's say that I pick yellow as my base color (you wouldn't normally do that, but for this example, let's say our base color is yellow). So the yellow paint is the plaintext message that we want to send. Now, I know that my receiver has a public key (think of blue), and if I mix it into the yellow, the paint will turn green. But if someone were to intercept the green message, and they knew the publicly available key of blue that my receiver posted on Facebook, they could derive the yellow message.
So what if I not only mixed in the public key (blue paint) but also my private key (let's say red paint), which resulted in a purplish color? Anyone intercepting the purplish paint (that is, if paint colors could be pulled apart) would need to know both the public (available) and private (not available) keys/colors to determine the right mix to get back to the initial plaintext color (yellow, in our case).

On the receiver side, the receiver has the shared key (public key) and their own unique private key. Again, the sender's and receiver's private keys are unique and are never shared anywhere. However, they are mathematically related. So when the receiver gets the encrypted message, they can apply the public key along with their uniquely and mathematically related private key to the message, which will result in the plaintext (yellow) message.
This mathematical relationship, just so you roughly know, is based on what is termed modular arithmetic. I mention this since it may be noted in other information and computer security texts and for those who wish to delve into the mathematics of asymmetric encryption. However, for the CISSP, CISM, and CCNA-Security exams and for general knowledge about information security, you don't need to know the mathematical details of how modular arithmetic allows us to mix two encryption keys (public and unique private key) on one end and to decrypt using the public and an altogether different (but related) private key on the other end.
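For the curious, here is a tiny worked example of that mathematical linkage, using the classic small-number RSA demonstration (the primes 61 and 53 and the exponents below are standard textbook toy values chosen only for illustration; real keys are hundreds of digits long). Notice that encryption and decryption are both just modular exponentiation with mathematically related exponents.

```python
# Toy RSA: public and private keys linked through modular arithmetic.
# These tiny textbook numbers are for illustration only -- real RSA
# uses primes hundreds of digits long.
p, q = 61, 53
n = p * q                # 3233: the public modulus
e = 17                   # public exponent  -> public key is (e, n)
d = 2753                 # private exponent -> private key is (d, n)
# d is chosen so that (e * d) % ((p - 1) * (q - 1)) == 1

message = 65                       # a "message" encoded as a number
cipher = pow(message, e, n)        # encrypt with the PUBLIC key
original = pow(cipher, d, n)       # decrypt with the PRIVATE key

print(cipher)    # 2790 -- the scrambled value
print(original)  # 65   -- the plaintext is recovered
```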

ENCRYPTION STRENGTH
One of the primary issues around the strength of encryption centers on the key length (remember private and public keys from the prior section?). The strength of the encryption (and the difficulty of breaking it) rests, in significant part, on the key length. Let's say I have a combination lock on my bicycle, and let's say that I park my bike and tie it up to a pole right outside the Starbucks where I sometimes get a morning cup of coffee. If my combination lock had only 2 tumblers that each rotated from 0 through 9, it would take someone seconds to figure out the combination. My key space is defined by the number of tumblers (2) and the number of digits on each tumbler (10: 0 through 9). From this small key space, I can calculate that I'd have 10 (total possible digits on the first tumbler) times 10 (total possible digits on the second tumbler) for a maximum of 100 possible combinations. That would take someone a minute or two to break into. And off goes my bike.
Now if I had 5 tumblers, each with 10 possible digits, the total number of combinations goes up (to 100,000) to a point where it would take someone hours and hours, making stealing my bike improbable (assuming they don't have, of course, a bolt cutter).
If this were digital, we know that computers can churn through numbers extremely quickly; that's what they're designed for. So, the larger the key space that we use to encrypt (lock up) our data, the longer it will take a computer to try all combinations. Whether it's a bike thief attempting to steal your bike or a computer attempting to crack an encrypted file, the longer the key length, the more impractical it is for the bike thief or the hacker to steal your stuff.
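To see how quickly key space outruns raw computing speed, here is a small back-of-the-envelope sketch; the guess rate of one billion keys per second is my own illustrative assumption, not a benchmark of any real attacker.

```python
# How long would a brute-force search take, on average, for various key lengths?
# The guess rate below is an illustrative assumption, not a measured figure.
GUESSES_PER_SECOND = 1_000_000_000  # one billion key guesses per second

for bits in (24, 56, 128, 256):
    keyspace = 2 ** bits                             # total possible keys
    avg_seconds = keyspace / 2 / GUESSES_PER_SECOND  # found halfway, on average
    years = avg_seconds / (60 * 60 * 24 * 365)
    print(f"{bits:3d}-bit key: ~{years:.2e} years on average")
```

A 24-bit key falls in a fraction of a second at this rate, while a 128-bit key takes a number of years with a couple dozen digits in it. That asymmetry is the whole point of long keys.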
Keep in mind that encryption is useful for one purpose: keeping information hidden from those who you don't want to see it. The information can be confidential information that a governmental agency, a financial institution, or a private individual wants to keep secret. Data in the form of transaction information (from Amazon purchases) to emails and even entire files can be encrypted and kept private. Encryption ensures privacy and helps prevent misuse, abuse, or tampering (modification) of data.
In addition to the key length, the strength of an encryption system depends upon other factors, including the algorithm used and how that algorithm is implemented. Further, several information and computer security experts have noted that no shortcuts should be built into encryption algorithms to derive the plaintext from cipher text.
Of late, there has been quite a lot of discussion about the possibility that the NSA has influenced the design of various encryption algorithms. There is a suggestion in the information and computer security communities that the NSA has not only allowed but required certain encryption schemes to have a back door whereby security (encryption) is short-circuited and decryption can be readily accomplished by various security agencies. I'm not sure if this is pure conjecture or is based on reality; investigate it on your own, if you are willing to do so. However, the point is that, from a designer's perspective, pure encryption and decryption should never have any shortcuts (nor backdoors).
Also, in terms of being able to decrypt encrypted messages (or files), the designed algorithm should not be vulnerable to brute force attacks, where a hacker essentially throws all possible keys at an encrypted file in order to break in. In the earlier scenario of the bike lock, if the bike lock were our encrypted file with a possible 100 combinations, a brute force attack would be to try each and every possible combination from 0-0 to 9-9 in order to gain access (to the bike or to the data).
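A brute force attack really is just an exhaustive loop. The sketch below cracks the two-tumbler bike lock from the example by trying all 100 combinations in order; the secret combination chosen here is, of course, an arbitrary stand-in.

```python
# Brute force on the 2-tumbler bike lock: try everything from 0-0 to 9-9.
import itertools

SECRET = (4, 7)  # an arbitrary stand-in for the lock's real combination

for attempts, combo in enumerate(itertools.product(range(10), repeat=2), start=1):
    if combo == SECRET:
        print(f"Cracked: {combo[0]}-{combo[1]} after {attempts} attempts")
        break
```

The same loop with a 128-bit key instead of 100 combinations is what the previous sketch showed to be hopeless.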
Also, as you dig into data security a bit deeper (whether studying for the CISSP, CISM, or CCNA-Security, or if you plan on deeper study in the InfoSec field), there is quite a bit of discussion about "Minimal Key Lengths for Symmetric Ciphers." We won't go into that in this book, but suffice it to say that the designers of many encryption algorithms have noted weaknesses in their algorithms based on smaller key sizes. And, as we've discussed in this section, smaller key sizes leave you (and your data, or your bike) susceptible to theft or tampering.


SYMMETRIC KEY ENCRYPTION METHODS


In this section of the chapter, let's briefly discuss a few symmetric key encryption methods that you'll see frequently as you move deeper into this field.

ONE-TIME PAD
The one-time pad symmetric key encryption can be viewed as using a disposable key, thus the "one-time" name. A one-time pad encryption method uses a generated key that matches the length of the information to be encrypted. Each character of the message is shifted by the matching value in the key. This method (assuming the key can be successfully randomly generated) has been proven to be theoretically unbreakable, but there are a few important difficulties associated with the use of a one-time pad.
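Here is a minimal one-time pad sketch. Classic pen-and-paper pads shift letters, as described above; this computer version uses the equivalent XOR operation on bytes and draws its pad from Python's `secrets` module (a cryptographically strong source, though true randomness is its own subject, as we discuss shortly).

```python
import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # key is exactly as long as the message

# Encrypt: combine each message byte with the matching pad byte (XOR).
cipher_text = bytes(m ^ k for m, k in zip(message, pad))

# Decrypt: XOR with the same pad reverses the operation -- but the pad
# must NEVER be reused for another message.
recovered = bytes(c ^ k for c, k in zip(cipher_text, pad))
print(recovered)  # b'ATTACK AT DAWN'
```

Note how the code makes the first difficulty below visible: the pad is as long as the message itself.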
The first difficulty with a one-time pad is in generating the key itself. Since there is no fixed key length (the length varies depending on the length of the data being encrypted), there is a problem with achieving true randomness in the key. Secure encryption relies not only on the length of the key, as noted earlier, but on the uniqueness of the key. For example, if we had long keys for our encryption and decryption, but these keys were not unique, we'd open ourselves up to data theft or other nefarious acts.
This reminds me of a story about one of the automobile companies in the 1980s, which had a key set whereby the number of possible cuts for their physical keys was around 30,000. Considering that they had produced nearly a million automobiles with this particular key set, on average more than 30 cars on the road shared any given key. Could you imagine that sort of duplication in an encryption method that you've used to secure sensitive data? Given the speed of today's computers, you've opened yourself up to risk.
And, as a result of the uniqueness requirement, keep in mind that the longer the data set being encrypted, the more likely it is that a pattern will emerge within the random number series. In other words, duplication can occur in a one-time pad, allowing for what are known as cryptanalytic attacks.

The second major difficulty with one-time pad encryption (and one that is shared with other methods of symmetric encryption) is the way that keys are distributed. The essential question is: once a key is calculated and used to encrypt a file, how do we get the key to another user so that they can decrypt the data? This problem is compounded if we have encrypted a file, sent it across the state or the country, and need to get the key to the receiving end so that the receiving end can decrypt the encrypted message. You do not want to send the key in the same way that the encrypted message was sent. If someone captures the encrypted file on a given network and you also send the key on the same network, they will more than likely capture the key as well, risking your data. Often, we send the key another way. If we are sending an encrypted file over a particular network, for example, we will often design and use another method (or network) to send the key. This is often termed an out-of-band method.
Finally, a one-time pad encryption key will be at least as large as the data it is intended to be used with. That's what I consider an interesting fact about the one-time pad! Even if you are using a one-time pad, for example, to encrypt files locally, storing the keys will at least double the storage requirements for each and every file or document you wish to encrypt.
However, there are ways to get around this by using something called a pseudo-random system. We'll discuss pseudo-random and random numbers in the next section. But suffice it to say for this section that random numbers generally involve the entire possible set of numbers (the whole enchilada of numbers, which is, well, a heck of a lot of numbers) to derive the randomization. Pseudo-random numbers, in contrast, are drawn from a very large set of possible numbers (very large, but still limited in contrast to ALL possible numbers) from which the randomization is done.

RANDOM VS. PSEUDO-RANDOM NUMBERS


The one best thing about computers is that they can work with numbers, and they do it quickly. When we use random or pseudo-random numbers to define a key space for an encryption algorithm, essentially what we're saying is that we're going to use the number space (random or pseudo-random) to define a key. A purely (true) random number is pulled from all possible numbers. Imagine that! A computer will certainly reach its computational limits. So, to provide boundaries around this unlimited true random number set, we'll often use what is termed a pseudo-random number. The definition of "pseudo" is essentially "not real" or "fake." What we're essentially saying when we create a pseudo-random number space is that, given the size of the number space, we have something that approximates pure randomness. Let me put it another way. If you think about dice (maybe one die, to keep our discussion to the point), you have six possibilities in this pseudo-random number space. That's a pretty small set of randomness and probably won't be a good representation of how true randomness is handled. However, we know that there's a 1-in-6 possibility of rolling a particular number on the die. Taking this to another item that we know well, a deck of cards: typically (if no card is missing) we have 52 cards, so our pseudo-random number space is larger. We have a 1-in-52 possibility of selecting a given card. Bigger, but not too solid in terms of using this pseudo-random number space. A small pseudo-random number space is easily hackable by today's hacker tools. However, if we make our pseudo-random number space 100 million numbers or larger, we're getting closer to approximating pure randomness and also narrowing the possibility of some hacker determining what our encryption keys are for a particular file or service that we have encrypted. So, computer scientists develop algorithms that take a seed value into a process that creates a fairly large pseudo-random number space that a computer can readily handle and that will more closely approximate a true random number space.
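The defining property of a pseudo-random generator is that the same seed reproduces the same "random" sequence, which is exactly why a seed must be protected when it is used to derive a key. A quick sketch using Python's standard library:

```python
import random

# Two generators seeded with the same value produce identical "random" output.
gen1 = random.Random(42)
gen2 = random.Random(42)
print([gen1.randint(0, 9) for _ in range(5)])  # some 5-digit sequence
print([gen2.randint(0, 9) for _ in range(5)])  # the exact same sequence

# For key material, draw from the OS entropy source instead (not reproducible):
import secrets
print(secrets.token_hex(16))  # different on every run
```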

STREAM
Whenever I think of a stream cipher, I think of a stream of water flowing in the mountains where I hike in the summertime. The analogy of a mountain stream to a stream of continuously flowing data holds. A stream cipher operates similarly to how a one-time pad functions. A stream cipher, however, uses a smaller key, called a seed, rather than a unique key built from the entire file. The seed is used to generate the pseudo-random numbers with which to perform the one-time pad operation. So the seed provides a boundary, if you will, on the set of numbers used to derive the encryption keystream. Each element of the plaintext in the stream of data is combined with a pseudo-randomly generated element to produce the cipher text related to that plaintext element.
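As a toy sketch of that idea (illustrative only: real stream ciphers such as ChaCha20 use carefully designed keystream generators, never a general-purpose PRNG), here a small seed drives a pseudo-random keystream that is XORed with the plaintext element by element:

```python
import random

def toy_stream_cipher(data: bytes, seed: int) -> bytes:
    """XOR each byte with a keystream generated from a small seed.
    A toy only: never use a general-purpose PRNG for real encryption."""
    keystream = random.Random(seed)
    return bytes(b ^ keystream.randint(0, 255) for b in data)

cipher_text = toy_stream_cipher(b"hello, world", seed=1234)
plaintext = toy_stream_cipher(cipher_text, seed=1234)  # same seed decrypts
print(plaintext)  # b'hello, world'
```

The sketch also makes the reuse risk below easy to see: encrypting a second message with `seed=1234` would produce the identical keystream.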
This form of encryption is quite fast and relatively easy to implement. It is often implemented on networks, for example. However, there is a risk: if the same seed is used a second time, stream encryption becomes vulnerable to a cryptanalytic attack, because the same seed will create the same pseudo-random keystream from which the encryption key is derived. Additionally, by using a pseudo-random set of numbers, as noted earlier, there is a possibility (though somewhat remote, depending on the pseudo-random space) that the encrypted bit stream is not truly random, creating a somewhat inherent weakness in stream encryption techniques. One example of this weakness shows up in WEP (Wired Equivalent Privacy), an early standard, though not used frequently anymore, within the IEEE 802.11 wireless standard. The 802.11 standard is what we use when we connect to a wireless network, for example, at our favorite local coffee shop. The WEP standard was an early privacy (encryption) standard that was easily hackable, and if you're using it now, be sure to change to WPA2 (a more robust protocol). The problem with WEP is (was) that it used a stream cipher, as we discussed earlier, and its per-packet seed (the initialization vector) was only 24 bits. As we discussed, that is not a large enough pseudo-random number space, and given the processing power and speeds of today's computers, WEP can be readily cracked in a few seconds by tools that are commonly found on the Internet.

BLOCK
A block cipher takes in a chunk (block) of plaintext data and encrypts that block using a key. Algorithms such as DES (Data Encryption Standard), 3DES (Triple Data Encryption Standard), and AES (Advanced Encryption Standard), to name a few, fall into this category. Each of these algorithms takes in a block of the plaintext and encrypts that block using a key. The next block of data can then be encrypted, either with the same key or with a modification of that key. The processing is determined by which of these three (and their variants) you decide to use when encrypting files, network traffic, etc.
The block method is generally considered the most convenient secure method. While very secure, it is not as secure as a one-time pad, but it is far easier to implement. With block cipher encryption algorithms, as mentioned before, the main issue is: how do we distribute the keys in the system without someone capturing the keys as well as the data? While this was briefly noted before, let's speak to it. Essentially, whenever you are sharing keys for encrypted files, it is optimal to share these keys in an out-of-band way. That is, if you are sending your encrypted data on a given network, you'll want to send the keys on a totally different network, thus preventing a hacker from capturing both your encrypted data and your keys from the same location, if you will.
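As a sketch of a block cipher in use, the snippet below encrypts one 16-byte block with AES via the third-party `cryptography` package (an assumption: `pip install cryptography`). The hard-coded block is illustrative only; real use also needs a chaining mode and padding for arbitrary-length data, which is exactly the "next block" processing described above.

```python
# AES block cipher sketch (requires: pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)          # a 256-bit AES key
block = b"EXACTLY16BYTES!!"   # AES operates on 16-byte blocks

# ECB mode encrypts a single raw block -- fine for a demo, but real
# applications chain blocks with a mode like CBC or GCM instead.
cipher = Cipher(algorithms.AES(key), modes.ECB())

cipher_text = cipher.encryptor().update(block)
recovered = cipher.decryptor().update(cipher_text)
print(recovered)  # b'EXACTLY16BYTES!!'
```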


CHAPTER 2. INTRODUCTION TO
SYMMETRIC KEY ALGORITHMS
As we noted in the prior chapter, several encryption algorithms have been or are currently in use for symmetric encryption. The earliest recommended encryption algorithm was the Data Encryption Standard (DES), a block encryption algorithm designed to use a 40- or 56-bit key length plus an additional 8 bits for error checking. The DES encryption standard was originally adopted as a FIPS (Federal Information Processing Standards) standard in 1977 and is in use for some limited applications even today. The key length received criticism for being too short, as it is susceptible to hacking. There are also continued questions about the NSA's involvement in its development, since there are reported back-door methods to readily hack into DES-encrypted files.
DES remained the suggested encryption method through the 1990s and even as late as 2003 (and can be found in some limited use today). FIPS, in 1999, stated that 3DES (Triple DES) is preferred as an encryption standard. But prior to the updated FIPS recommendation, DES had already been successfully hacked using brute force methods in a series of competitions designed to demonstrate the ineffectiveness of DES as a secure encryption algorithm.
As a note about FIPS, since you'll continue to see FIPS mentioned in this and any other book on information security: FIPS is a set of standards, including encryption standards, that non-military governmental agencies and their contractors must adhere to. So, if you are doing business with the government (as a contractor or because you have been awarded a grant), you and your organization will need to adhere to the FIPS standards, thus the frequent mention of FIPS in information security texts and documentation.
I mentioned 3DES earlier; let's take some time to describe what it is. 3DES is oftentimes called Triple DES. When it was noted that DES was hackable, a quick and interim solution to improve the strength of DES encryption, before a newer encryption algorithm was developed, was to essentially run DES 3 times (thus the 3DES notation). This is basically a tripling of the DES algorithm: the result of the first DES encryption is fed into the algorithm an additional 2 times, which effectively creates a 168-bit encryption mechanism. This provided a much larger and more secure key. However, the 3DES method has its own difficulties. Because of the tripling of processing required, 3DES involves a much higher computational complexity and uses far greater resources to accomplish the encryption task. Again, I'm using 3DES in the present tense since it is still used on a limited basis. However, after an interim period, a more robust replacement was introduced and is, in large part, in use today.
The Advanced Encryption Standard (AES) was the result of a search for an updated encryption standard to replace both the hackable DES and the clumsy 3DES. The competition was completed in 2001, and AES was later noted as the de facto standard for FIPS (also in 2001). The algorithm is somewhat similar to the DES standard in that it is also a block cipher. However, it uses a larger block size as well as larger keys, resulting in keys that are more resilient to hacking. AES can be used with a 128-, 192-, or 256-bit key length. It is projected by information security researchers that these much longer key lengths will allow a relatively long encryption security lifetime for AES; that is, unless there are some unforeseen leaps and bounds in computing technology, as we have experienced before, or unless cryptanalytic weaknesses are discovered in the AES standard. At present, AES seems to be holding its own and is, by and large, the current standard.

KEY SHARING
Recall that keys unlock encrypted files, so we'll need to distribute the keys in symmetric encryption. Also recall that symmetric encryption uses the same key on both sides, used by the sender and the recipient of the encrypted data. With both sides needing the same key, there is an implied trust that the key was not shared, either intentionally or unintentionally, with a third party by either the sender or receiver. Because of this key-sharing trust issue, information security experts note that key sharing is the most difficult item to handle.
A direct physical transfer of the key is the most secure method for sharing keys, since no third party is involved in transferring the key. As an example, if we were commonly transferring data between 2 servers in our network and were using a symmetric encryption algorithm, the network administrator could physically place the key on his/her flash drive and ensure that it is installed on both servers for encryption and decryption. This will probably work fine in some smaller environments, but it is not a practical solution for larger environments where servers are located miles or even thousands of miles away from each other. Also, what happens if the key value changes?
Another solution that is predominantly used is to have a trusted third party distribute keys. In information security, there are a few variants of how a trusted third party is implemented. We'll discuss a few.
One such trusted system for distributing symmetric keys over a network is to simply use asymmetric public key encryption to distribute the symmetric key. A second solution for distribution of symmetric keys is another variant for key distribution known as the Diffie-Hellman key exchange. In summary, Diffie-Hellman is a way of exchanging cryptographic keys and, while it was developed in 1976, this method for key exchange is still readily used. For example, if you're studying for the CCNA-Security exam or the CISSP, this is a vital part of the VPN setup that you'll need to understand and know. While this is an introductory book on information security, I'll continue to note a few items that you will run into later in your studies.
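To give a flavor of how Diffie-Hellman lets two parties agree on a shared symmetric key without ever transmitting it, here is the classic textbook example with tiny numbers (p = 23 and g = 5 are the standard toy values; real exchanges use numbers thousands of bits long):

```python
# Diffie-Hellman with textbook-sized numbers (illustration only).
p, g = 23, 5          # public values: a small prime modulus and generator

alice_secret = 6      # private values, never sent over the network
bob_secret = 15

A = pow(g, alice_secret, p)   # Alice sends 8 in the clear
B = pow(g, bob_secret, p)     # Bob sends 19 in the clear

# Each side combines the other's public value with its own secret:
alice_key = pow(B, alice_secret, p)
bob_key = pow(A, bob_secret, p)
print(alice_key, bob_key)     # 2 2 -- the same shared key, never transmitted
```

An eavesdropper sees p, g, 8, and 19, but without one of the private exponents cannot feasibly recover the shared key when the numbers are realistically large.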
The third solution for key distribution, which will be examined a bit more in the next section, is the Kerberos system designed at the Massachusetts Institute of Technology (MIT) in the 1980s and 1990s, which is still in use in many production-level products (like Microsoft Server products, for example). We'll look at Kerberos next.

KERBEROS
Kerberos is a name derived from Greek mythology: Kerberos is the three-headed dog that guards the entry to Hades. I suppose the MIT researchers had a bit of a sense of humor. We'll see how the three aspects of Kerberos serve to protect the key exchange in the upcoming paragraphs.


First and foremost, Kerberos is a system designed to enable the use of symmetric key encryption over a network without relying on public key encryption for the delivery of those keys to remote users. Instead, Kerberos uses what are termed tickets to allow servers and nodes to communicate over a largely non-secure network.
Before we get started with the details, let me give you a general analogy of how Kerberos works. We'll describe the details in a moment.
When we go to the movie theatre, we purchase a ticket. When we purchase a ticket, we are authorized for, let's say, a student, adult, or senior citizen ticket (some verification by student ID or your driver's license may occur). Suffice it to say that you are then authorized, by means of the ticket, to enter the theatre. Once you enter the theatre, you'll (typically) hand your ticket to someone who verifies that you purchased a ticket and that the ticket is for a current show (why would it be any different?). Keep in mind that this is a second party checking the ticket that you just purchased from the front window. The movie ticket verifier will then point you in the direction of the theater that is showing the movie you wanted to see (as noted on your ticket).
Now, back to Kerberos. Kerberos servers act as a trusted third-party key generator and distributor used with symmetric key encryption.
First, before we relate our movie ticket buying to the Kerberos process: to use the Kerberos system on our secure servers, we'll need to set up a facility on each of the servers that we want to use Kerberos with. This is termed "Kerberizing" your servers (I didn't make this up).
The general process for using Kerberos is (see the sketch after this list):
1. The user logs into a client machine using a name and password. So far, this is not different from anything we've seen before.
2. The client then takes the password and creates a symmetric cipher key from it. There are a couple of ways to do this, but for our discussion, let's keep this fairly general in terms of the Kerberos process.
3. Next, the client sends a clear text message containing the user ID to what is called the Authentication Server (AS). The AS then checks that the user ID is valid for the Kerberized system.
4. If the AS determines that the client is valid in its AS database, the AS returns 2 messages to the requesting client (remember, the client is requesting access to the system, much like when we buy a movie ticket we are requesting access to see a particular movie). What is returned, again, are 2 messages containing:
   a. A first message: a Ticket Granting Service (TGS) Session Key that is encrypted using the secret key of the client/user.
   b. A second message: a Ticket-Granting-Ticket (TGT) that includes information to reference back to the TGS (like the client ID, network address, and the TGS session key) as well as how long (the validity period) the ticket can be used.
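Here is a heavily simplified sketch of that AS exchange. This is my own illustrative model, not the real protocol: it uses Fernet from the third-party `cryptography` package as the symmetric cipher, derives the client key from the password with a plain hash, and collapses the TGT to a dictionary.

```python
# Simplified model of the Kerberos AS exchange (illustration only).
# Requires: pip install cryptography
import base64, hashlib, json, time
from cryptography.fernet import Fernet

def key_from_password(password: str) -> bytes:
    """Derive a symmetric key from the user's password (step 2)."""
    digest = hashlib.sha256(password.encode()).digest()
    return base64.urlsafe_b64encode(digest)

USER_DB = {"alice": key_from_password("alice-password")}  # the AS database
tgs_key = Fernet.generate_key()   # secret shared between the AS and the TGS

def authentication_server(user_id: str):
    """Steps 3-4: return the two messages for a known user."""
    client_key = USER_DB[user_id]                   # AS looks up the user
    session_key = Fernet.generate_key()             # fresh TGS session key
    msg1 = Fernet(client_key).encrypt(session_key)  # readable only by the user
    tgt = {"client": user_id, "session_key": session_key.decode(),
           "valid_until": time.time() + 8 * 3600}   # the validity period
    msg2 = Fernet(tgs_key).encrypt(json.dumps(tgt).encode())  # the TGT
    return msg1, msg2

# The client decrypts msg1 with the key derived from its own password;
# it cannot read msg2 (the TGT), which only the TGS can open.
msg1, msg2 = authentication_server("alice")
session_key = Fernet(key_from_password("alice-password")).decrypt(msg1)
```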
If we think back to the analogy of buying the movie ticket: if our movie ticket was purchased to see a movie (let's say we're in a movie theater that is showing the oldies and we purchased a ticket for "Gone with the Wind"), the ticket grants us (TGS) the service of seeing the movie. If it's a high-tech ticket, perhaps the ticket also allows use for a duration of time (rather than one movie). Perhaps the ticket also has on it our name as well as the duration that we can use this ticket to see "Gone with the Wind," perhaps over a full day instead of at a particular showing.
Now, once the client has the ticket (TGS session key and TGT), the client can request services. If the client's ticket contains information allowing the client to access the production payroll database, the client can request services from the payroll database, in general terms. So the client will attempt to authorize with the Kerberized payroll server. It's like the movie theatre analogy: when we walk into the theater, someone inside authorizes us for a given movie. The client service authorization is similar; when access is wanted, the access is verified by the combination of the TGS session key and the TGT. This, in effect, is the server asking: "Do you have authorization to use this payroll service?", "How long is your access?", "Who are you?", etc.
This is the general process. There are some more specifics that you'll study further in your CISSP, CISM, or CCNA-Security books. However, keep in mind that there is a carefully planned exchange of keys as well as authorization that goes on with Kerberos. Also, note that the Kerberos protocol (or process, if you will) itself does not specify an encryption algorithm that must be used. Instead, different Kerberos implementations will support differing encryption technologies, including DES, 3DES, AES, and other symmetric key algorithms.
As a recap, remember that Kerberos was designed to assist with privacy and the secure use of services in an organization. Further, Kerberos is designed to be implemented on multiple servers (keep in mind the Kerberized server). A central server in the Kerberized system acts as an authentication agent (the Authentication Server), while the services that a client is interested in accessing are hosted on the various server instances (think about the access to the payroll server that we discussed before). The way that Kerberos is designed, with multiple layers, adds security to the Kerberized system. However, while the security is fairly robust on a Kerberized system, since it involves the AS and other hosted servers, this does increase the complexity of implementation.

CONCLUSION
As we've seen so far, organizations wanting to protect their data have several choices among symmetric encryption algorithms that will help to keep their data safe. But while the many symmetric encryption algorithms available today are assumed to be fairly robust and secure, the primary factor that makes for a stronger encryption key is basically the key length.
In summary, the key length required to keep data secured will depend at least partially on how long a user wants that data to be secure. The general rule of thumb with key lengths is that the longer the time frame for which the data must remain secure, the longer the key length should be. Though it may seem best to choose the longest key length for a file or transmission, the weighing factor is that longer keys take a longer time to encrypt and decrypt. This can certainly have performance implications, especially on a network. Keep in mind that smaller keys have a lower computational cost but an inversely higher risk. So there is a balance that must be struck, especially in network applications, between key size (security) and performance (time to process an encryption/decryption).


CHAPTER 3. MALWARE
The world of malware is teeming with various breeds of software, each with its own behavioral presentation. It is imperative that users develop an understanding of the major categories of malware, and their signature behaviors, in order to devise appropriate strategies of prevention and defense. The following pages endeavor to present the reader with a general survey of malware, describing their presentations as well as current standing techniques for handling instances of malware. These categories of malware are used to describe how the software behaves within a system, or how it propagates amongst users.
Information technology has become an increasingly used tool, with applications developed in nearly every field of society. As the world of information systems has developed to its current level of sophistication, so have the methods of malicious attackers and users who compromise security. Let's look at several categories of malware and the ways they are propagated. These include techniques of viral propagation, including networking worms and Trojan Horses. Techniques for the distribution of malware also include the use of rootkits, bots, and spyware.

ROOTKITS
Generally, there are two primary forms of rootkits that use different methodologies to compromise a computer system: binary-level rootkits and kernel-level rootkits.
Binary-level rootkits operate in ways that modify data at finer levels of system processing; essentially, they tamper with user-level processes. In contrast, kernel-level rootkits work at a more atomic level of the system: within the kernel of the operating system, tampering with system-level processes and system-level calls.
Both rootkit types are a type of Trojan file that lives at the essential levels of a system and is installed in order to grant a malicious user access control, or to otherwise tamper with a system at some time in the future. A rootkit is a feasible means of malicious processing that a user can invoke after he has gained access to a system, since it has reached the kernel level of the system. Once the rootkit is in place on the victim's system, it acts as a backdoor that allows a hacker to gain entry to the system or its root at a later time.
Both general forms of rootkits pose their own serious threats to a system. Unfortunately, there are only a few methods for handling or detecting kernel-level rootkits. Potential methods for binary-level rootkit detection include signature analysis and cryptographic checksums. These methodologies are typically employed by automated solutions, such as software packages built to detect and manage rootkits. Mostly, these automated solutions employ the typical reactive methodologies wherein the software observes the system and identifies already-known patterns of typical rootkit behavior within a system.
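The cryptographic-checksum approach boils down to recording a known-good hash of each protected binary and re-checking it later. A minimal sketch (the file path here is an illustrative placeholder for whatever binaries a real tool would baseline):

```python
import hashlib

def file_checksum(path: str) -> str:
    """Compute a SHA-256 checksum of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the known-good checksum while the system is trusted...
baseline = file_checksum("/usr/bin/ls")   # illustrative path

# ...then re-check later: any change suggests the binary was tampered with.
if file_checksum("/usr/bin/ls") != baseline:
    print("WARNING: binary modified -- possible rootkit tampering")
```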
While kernel-level rootkits are among the newest of this type of malware, it is more than just a lack of experience and exposure that leaves the computer world vulnerable to these kinds of attacks. The very nature of a kernel-level rootkit allows it access to a system's memory management, its file system, and its system calls, allowing a malicious user unmitigated access and control over the modification and employment of kernel functions. There are places where these rootkits' actions regularly lose their invisibility, such as the system call table. In order to detect and handle kernel-level rootkits, an automated system can be arranged such that it saves the state of the system call table at regular intervals and stores these states as a reference that can be used to recover the system later. Upon recognizing irregular system call patterns, the system and user can consequently be made aware of the tampering of a kernel-level rootkit and handle the issue.
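Conceptually, that detection scheme is just periodic snapshot-and-compare. The sketch below models a system call table as a dictionary from call name to handler address; it is entirely illustrative (the addresses are made up, and a real implementation would read the table from kernel memory, not a Python dict):

```python
# Model of system-call-table monitoring (illustration only; a real tool
# would read the live table from kernel memory).
baseline = {"read": 0xFFFF0001, "write": 0xFFFF0002, "open": 0xFFFF0003}

def snapshot_current_table() -> dict:
    """Stand-in for reading the live system call table."""
    # Here, a kernel rootkit has redirected write() to its own handler:
    return {"read": 0xFFFF0001, "write": 0xDEADBEEF, "open": 0xFFFF0003}

current = snapshot_current_table()
for call, addr in baseline.items():
    if current.get(call) != addr:
        print(f"ALERT: syscall '{call}' redirected to {hex(current[call])}")
```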

TROJANS
The intrusion architecture that makes up Trojan
infections generally comes in several forms. The main
forms are:


1. Direct masquerade: A direct masquerade
Trojan is one in which the Trojan masks its file
as an existing system process on the machine. For
example, the Trojan can run as a process with a
name such as "Explorer," imitating the file
management system of Windows. Trojan files
that employ this simple masquerade take to
resembling a process that is perceivably more
legitimate than itself, such as the names of
existing executables.
2. Slip masquerade: Trojan files that employ slip
masquerade techniques form system processes
on a machine that closely resemble a legitimate
process; for instance, one may take the name of a
critical system process and modify it by a mere
character, such as taking "Explorer" and
rewriting its own process name as "iExplorer" (a
simple detection sketch follows this list).
3. Environmental masquerade: The
environmental masquerade technique hides
malicious functionality within an existing
process and relies upon a form of execution from
that process in order to begin its operations;
these can hide in functions as simple as login
operations. While these categories of Trojan
intrusion are general, it is necessary to note that
the very nature of Trojan Horses is that their
methods of concealment are vastly diverse and
their presentations are constantly evolving to
hide within a system in new ways; as such, some
presentations of Trojan malware are likely to
cross the borders of these categories.
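As a small illustration of catching slip masquerades, the Python sketch below compares running process names against a list of known-good names and flags near misses; the process names here are examples only:

import difflib

KNOWN_GOOD = {"explorer.exe", "svchost.exe", "lsass.exe", "winlogon.exe"}

def flag_slip_masquerades(running):
    """Flag process names that are suspiciously close to, but not exactly,
    a known legitimate name (e.g. 'iexplorer.exe' vs 'explorer.exe')."""
    suspects = []
    for name in running:
        if name in KNOWN_GOOD:
            continue
        close = difflib.get_close_matches(name, KNOWN_GOOD, n=1, cutoff=0.85)
        if close:
            suspects.append((name, close[0]))
    return suspects

print(flag_slip_masquerades(["explorer.exe", "iexplorer.exe", "notepad.exe"]))
# -> [('iexplorer.exe', 'explorer.exe')]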
Ultimately, a Trojan attack does not describe the
architecture of the injected malware, or even
necessarily the behavior that ensues upon infection;
"Trojan Horse" simply describes the forms of malware
that employ techniques of insertion and concealment
under the previously described categories. Trojan
infections can be viewed as a conglomerate of two
definite components. These are:
1. Trojan component: In instances of a Trojan
attack, the malware enters the system attached
to an inconspicuous piece of seemingly
legitimate software. This component, best
known as the payload, is the batch of malicious
functionality that uses the package it is attached
to in order to make entry into a computer
invisibly.
2. Dormancy component: After a Trojan malware
package infects a host machine, the dormancy
component is responsible for maintaining the
malware's position in the host machine.
It is important to note that Trojan Horses do not deliver
a specific set of actions; rather, they define the method
by which malware infects a system. The issue that a
Trojan Horse poses is not necessarily how the
mechanism itself affects a user's system, but how it
exploits the vulnerabilities of systems to deliver malicious
functionality. With that in mind, defense against an
intrusion via Trojan Horse is difficult to arrange in a
proactive manner. Commonly, the best defenses a
potential victim may have are a set of best practices to
follow so as to avoid allowing any potential infections
onto their machine. Likewise, Internet security packages
may offer reactive protection against Trojan Horse
attacks by forming libraries of common sites or files
that Trojan-delivered malware has been known to be
distributed from or as.
VIRAL MALWARE
The label for this category of malware is the most
misused in the field: the term "virus" is often applied to
instances of malware that do not fit the constraints of
its definition. Amongst the earlier forms of malware,
viral infections can be the most troublesome to purge
from a system, as they are defined by their ability to
reproduce themselves rapidly within a local machine.
As viral malware has had much time to evolve, these
infectious pieces of software exhibit various levels of
sophistication and employ a great many methodologies
in their propagation. Often, modern viruses do not
simply multiply upon a system, but evolve themselves
periodically across generations as a method of defense
against detection or eradication, meaning that most
viruses increase their effects upon a system at
something of an exponential rate. Like many of their
malware cousins, viral infections have a great many
architectures that determine their function, method of
evolution, propagation, and many other traits. However,
viruses generally share a handful of common
techniques in order to remain hidden or obscure to
detection by the system or user. Four methods that a
virus may employ to prevent detection are as follows:
1. Encryption: Generally the simplest approach
that viral malware takes to hiding itself within a
system; a virus may deploy itself as an
encrypted piece of software with its own paired
decryption routine. However simple, encryption
enables a virus to constantly mutate its
presentation upon a system. The simplicity of
this scheme is both the format's strength and its
weakness: while the encryption consistently
provides the literal data of the virus with a
unique identity, the unchanging decryption
mechanism can also be used by the defending
system to decrypt and uncover all generations of
the virus.
2. Polymorphism: Much like the basic encryption
method described above, polymorphic
presentations of viral infections use an
encrypted body paired with a provided
decryption mechanism. However, this technique
addresses the weakness of simple
encryption-based computer viruses in that each
generation of the virus develops its own
decryption algorithm. This adds a level of
complexity for the defender, as the host now has
several distinct batches in each generation to
address, rather than the single batch that could
be handled uniformly under the same
decryption key.
3. Metamorphism: In response to defense
techniques identifying the virus body to handle
the malware, malicious users devised computer
viruses that would modify their own body
alongside encryption schemes. This enabled the
software to change its methodology in delivering
its payload, and ultimately evade detection
techniques that relied upon identifying specific
behaviors. Several methods that ensure a
computer virus's operational integrity remains
intact whilst undergoing metamorphosis are as
such:
- Register usage exchange: In this technique, the
execution code simply exchanges the registers
used each generation, accessing different
locations in memory than the previous
generation.
- Permutation: As the virus changes generations,
subroutines are reordered, such that
functionality that is not order-critical is
executed in different patterns than in other
generations.
- Useless instructions: Meaningless instructions
are stuffed between virus-critical functionality
in order to confuse the user or system, giving
each generation the illusion of wholly different
execution.
- Substitution: With this more complex
technique, new generations of a virus exchange
sets of instructions for equivalent sets of
instructions. This provides similar anonymity as
the insertion of useless instructions, but
restructures more critical portions of code to
offer more robust safety from detection.
4. Viral Construction Kits: As a great many
viruses are written in assembly language, the
development of viruses (in this paradigm) was
typically laden with complications. However,
techniques and tools were developed to enable
swifter development of unique infectious
software. Viral construction kits were devised to
bypass the time cost and complexity of
engineering new malware to deploy, further
taxing the defensive technique of cleaning up
previously identified malware.
As the techniques of robust viral infection have grown
on computer systems, however, so too have defensive
techniques to counter the rapidly multiplying malware.
The four common techniques to defend against the viral
variety of malware include (1) pattern recognition, (2)
exact and nearly exact identification methods, (3)
malicious code emulation, and (4) adaptive (heuristic)
tools to identify future threats.
As producing unique viruses becomes more difficult
and resources such as viral construction kits are
exhausted, methods that employ pattern recognition
from known viral signatures become more reliable and
form a stronger first line of defense. These
pattern-based scanners operate by interpreting bytes
that are members of known viral signatures, allowing a
system to identify previously handled infections. As
time passes, the databases or libraries responsible for
maintaining known virus signatures expand, leaving
less room for unique infections to enter the host
machine.
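A minimal sketch of such a pattern-based scanner in Python follows; the signature bytes are invented for illustration, whereas a real product would draw on a large, regularly updated signature database:

# Known byte signatures extracted from previously analyzed viruses.
# These sample patterns are made up for illustration.
SIGNATURES = {
    "Demo.A": bytes.fromhex("deadbeef90909090"),
    "Demo.B": b"\xeb\xfe\x00\x01virus",
}

def scan_file(path):
    """Return the names of any known signatures found in a file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]

# matches = scan_file("suspect.bin")
# if matches:
#     print("Known malware detected:", matches)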
The second line of defense uses scanning techniques
similar to basic pattern-recognition antivirus software,
with the added benefit of several more tools to account
for any mutations that the virus may have undergone.
This method, known as nearly exact identification,
employs the use of double strings, cryptographic
checksums, and hash functions in order to swiftly and
accurately sift through and identify mutated viral
bodies.
Antiviral programs may also employ the use of CPU
emulators to which viral infections are essentially
diverted, allowing the malware to make its attacks upon
a faux processing system. This technique does not just
distract the malware; it also enables the antivirus
software to identify the unencrypted form of the virus,
since the malware must decrypt itself in the process of
execution. This grants the system a window in which to
recognize the malware and handle the instance without
making its CPU vulnerable to the attack. This technique
can be cumbersome, however, and loses its practicality
if it is run too long.
Last, heuristic analysis of viral instances uses fragments
of known viral code and performs analyses upon
potential threats. While these tools are intelligent and
can often identify previously unknown viruses, they are
also known to mistake a normal system process for a
viral infection (a false positive).
Viral infections share very similar traits with another
form of self-replicating malware: worms. A worm's
presentation and handling share many traits with viral
infections. However, it is important to note the
difference in their objectives when spreading. The
distinction between worms and viruses is in the
systems in which they replicate and propagate: while
viral infections focus upon a single system, worms
operate with the explicit intent of spreading to other
machines on the network, exploiting specific
weaknesses of a system and its network to spread.
BOTS
While bots are software systems that can be
constructed to serve benign purposes upon a system,
this type of software has also been utilized as the core
architecture of many forms of malware. Distinguished
by their behavior, bots are instances of software that
are placed upon a machine to perform automated tasks
in the stead of a malicious user. Normally, modern bots
infect collections of computers that become part of a
network of machines running a similar instance of the
bot; this web of infected computers becomes a botnet.
Bots that take over systems give the distributor of the
malware control over the machine; however, in great
numbers, bots are not typically used to control a single
machine. Instead, they are used to send out general,
mass instructions to each infected node; these members
of the infected network are otherwise known as
zombies. Botnets can span anywhere from tens of
computers to hundreds of thousands of computers in
their network of control.
Normally, bots exploit computers with network
vulnerabilities, though they may gain access to
machines through innumerable methods. Once upon a
machine or collection of machines, bots can have a large
number of consequences. Bots have commonly been
known to distribute spam, viruses, spyware, and a
multitude of other malicious software to a botnet's
zombies. In addition, botmasters have utilized their
control over infected machines to retrieve personal
information such as credit card numbers and bank
credentials, and even to perform Denial of Service
attacks that compromise the security of servers and
machines, ultimately rendering a system unusable or
unreachable for its intended users.
The circumvention of bots on a system relies upon the
same techniques that handle viral infections on
machines, with added prevention methods that a user
can employ as part of their operating system's tools.
Aside from antiviral software, internet options allow for
levels of operating-system-dictated security standards,
which ensure users keep their traffic within safe
bounds.
SPYWARE
Within this class, there are a number of formats of
malware that are responsible for directly or indirectly
enabling a malicious user to monitor and direct the
activity of a system that has been compromised by
Spyware. The general forms that Spyware may take
are:
1. Cookies/Web Bugs: Cookies offer small spaces
to save state information for individual clients,
enabling websites to store user-specific
information about traffic, usage, and preferences
specific to their site. However, as many sites
often cross systems (such as by using common
ad providers), a potential side effect of this
relationship enables third parties to access the
information stored in the cookies of associated
sites. This presentation of Spyware is not
directly a consequence of malicious coding, but
an exploitable side effect of web architectures.
2. Browser hijackers: Spyware can take the form
of hijacking Internet traffic to the infected
system. The results will vary depending upon
what the malicious user intends to achieve;
potential outcomes include forcing a user to a
specific homepage, or even holding certain
Internet traffic at ransom (also known as
Ransomware). Normally, the hijacking
mechanism takes the form of an extension to the
browser, but it can infect a system through
deeper means, such as modifying system
registries.
3. Keyloggers: Not always present as malicious
software, keyloggers are programs capable of
observing nearly every activity upon a machine.
From monitoring and logging individual
keystrokes to noting all network activity in and
out of a machine, keyloggers are capable of
reporting anything a user (malicious or
otherwise) would like to monitor.
4. Tracks: Used as a label for information stores
that represent a multitude of user activity, tracks
are not particularly malicious on their own. The
stores that contain tracks, such as browsing
history, can be exploited by Spyware, providing
malicious users with a centralized location from
which to data-mine personal information about a
user.
5. Adware: An ambiguous form of Spyware,
Adware posts advertisements in an infected
user's browser, changing the content conveyed
depending upon the traffic on the machine. As a
consequence, Adware gathers together menial
pieces of information to return to the malicious
user.
CONCLUSION
The pages above have presented the broad forms of
malware that exist in the world of information
technology. Each given category defines a set of
strategies and behaviors presented by malicious
software. The discussion has shown that, while these
forms and techniques of malware propagation and
execution can be applied in parallel, their essential
weakness is a predictable, definable makeup. With each
category of malware clearly defined, and rudimentary
defensive techniques posed for each form, a user has
the essential information to prevent, handle, or devise
solutions for any potential instance of infection.
CHAPTER 4. FIREWALLS
The internet, like any untrusted network, can be a scary
place. Every day, new vulnerabilities are discovered and
exploited. When a flaw is discovered, it needs to be
patched. While this is easy to do for one system, it
becomes complicated when many systems are involved,
and it only gets worse when those systems employ
different software and setups. While patching every
host is recommended, it is difficult; it can be done with
host-based security, but this can be impractical at a
large scale. An alternative to this is the firewall.
The first line of defense for most networks is the
firewall. What exactly is a firewall, and how does it
work? The term firewall has been used for things other
than computer security: in cars, the firewall separates
the engine from the passengers, keeping them safe from
the engine's heat; in buildings, the firewall is a barrier
meant to help prevent the spread of fire. In both cases it
keeps out something that is not wanted, namely
dangerous levels of heat. At the simplest level, this is
what a network firewall does as well: it keeps out
attackers by blocking their access to network resources.
In the late 1980s, firewalls began to make their
appearance as advanced packet filters. As attacks grew
in the 1990s, so did the capability and complexity of
firewalls. Today firewalls have advanced from
something that was just for a few large networks to
something that can be installed locally on just about any
PC. Firewalls are now an integral part of network
security.
TYPES
There are three types of firewalls. The first type of
firewall is the packet filter, which has a fairly basic
security method. After packet filters came stateful
filters. Stateful filters are more aware of the network
data and provide a higher level of security than packet
filters. The final type of firewall is the gateway firewall,
also known as a proxy.
PACKET FILTER
Packet filters are aware of the packets in which data is
transferred and of the basic parts of each packet. The
parts a packet filter knows about are the source and
destination IP addresses, the port numbers, and the IP
protocol field. Packet filters work by allowing or
blocking packets based on a set of rules related to the
packet information they understand. If a packet does not
match a rule, then there are two possible default actions
the filter may take.
The default discard policy blocks any packet that is not
allowed by a rule, while the default forward policy
allows any packet not blocked by a rule to pass. The
default discard policy is more secure, but it may hinder
users more until rules to allow specific traffic are added.
Packet filters are simple, which makes them fast, but they
can be difficult to configure correctly and they provide
no authentication for use of the connection.
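The following Python sketch shows how a packet filter's rule matching and default discard policy might look; the rules themselves are illustrative:

import ipaddress

# Each rule matches on (action, source IP prefix, destination port, protocol).
RULES = [
    ("allow", "0.0.0.0/0",  80, "tcp"),   # web traffic in
    ("allow", "10.0.0.0/8", 22, "tcp"),   # ssh from internal hosts only
    ("deny",  "0.0.0.0/0",  23, "tcp"),   # telnet explicitly blocked
]

def filter_packet(src_ip, dst_port, proto):
    src = ipaddress.ip_address(src_ip)
    for action, prefix, port, p in RULES:
        if src in ipaddress.ip_network(prefix) and dst_port == port and proto == p:
            return action
    return "deny"  # default discard policy: drop anything not explicitly allowed

print(filter_packet("10.1.2.3", 22, "tcp"))   # allow
print(filter_packet("8.8.8.8", 22, "tcp"))    # deny (default discard)

Note how the final return implements the default discard policy: anything not matched by a rule is dropped.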
STATEFUL FILTER
The solution to the problems of a packet filter is the next
type of firewall, the stateful filter. A stateful filter knows
more about packets and their contents, along with
higher-level protocols. This advanced knowledge makes
stateful filters more secure and easier to configure than
packet filters. A stateful filter has knowledge of the
higher-level protocol TCP, and it uses the fact that TCP
works via connections to increase security. The filter
can track outgoing TCP connections and allow incoming
data on them, as well as allow incoming TCP
connections based on rules similar to those of a packet
filter.
Some stateful filters add additional security by knowing
about even higher-level protocols on top of TCP, such as
SMTP and FTP. While security goes up in stateful filters
with the more information they have, speed goes down
because more processing of the data is required.
Although they have higher security than packet filters,
stateful filters are not perfect: an attacker could
generate a packet with a header indicating it is part of an
established connection, in hopes it will pass through the
firewall.
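A minimal sketch of the connection-tracking idea, assuming we only record the addresses and ports of outgoing TCP connections:

# A stateful filter remembers outgoing TCP connections so that only
# replies to connections the inside initiated are allowed back in.
established = set()   # {(inside_ip, inside_port, outside_ip, outside_port)}

def outbound(in_ip, in_port, out_ip, out_port):
    """Record a new outgoing connection in the state table."""
    established.add((in_ip, in_port, out_ip, out_port))

def inbound_allowed(out_ip, out_port, in_ip, in_port):
    """Allow an incoming packet only if it belongs to a tracked connection."""
    return (in_ip, in_port, out_ip, out_port) in established

outbound("10.0.0.5", 50000, "93.184.216.34", 443)
print(inbound_allowed("93.184.216.34", 443, "10.0.0.5", 50000))  # True: reply
print(inbound_allowed("203.0.113.9", 443, "10.0.0.5", 50000))    # False: unsolicited

This is also where the weakness above shows itself: a forged packet that claims membership in a tracked connection can slip through, which is one reason real stateful filters also examine TCP sequence numbers and flags.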
GATEWAY
The final type of firewall is the gateway. There are two
types of gateways: the application gateway and the
circuit level gateway. An application level gateway
provides security by using authentication before a user
can send data through. After users are authenticated,
they can only send data using a protocol the gateway
understands. This means that a gateway must know its
supported protocols fully, and it provides top-level
security for each. Another benefit of knowing the
protocol is that it allows for full logging of all the
network activity. The drawbacks of an application level
gateway are the processing of a lot more data and the
multiple connections to be managed between external
and internal machines.
A circuit level gateway has multiple connections like an
application gateway, but it works at a lower level. The
circuit level gateway does not know about the high level
protocols and only manages connections. It is faster
than an application gateway because of its low level, but
it does not provide the same level of security.
There are three types of firewall basing. Bastion host,
host-based, and personal firewalls are all different ways
of deploying a firewall.
1. A bastion host is a system identified by the
firewall administrator as a critical strong point in
the network's security. In general, crucial
application gateways are run on some kind of
bastion host. Each bastion host may run multiple
gateways that support different protocols. These
machines run a hardened operating system
that contains only the basic software to run the
gateways and support the wanted protocols.
Generally, these systems will run with writing to
the disk disabled. This prevents malware from
saving itself, so if a system is infected with
malware it can be restored by a simple restart,
which wipes the RAM and the malware with it.
2. A host-based firewall is a software firewall on an
individual host, commonly a server or
workstation. Usually the software of a host-based
firewall is part of an operating system or a
low-level add-on. Host-based firewalls allow for a
configuration based on the needs of the specific
host, and they offer an extra level of protection
when combined with bastion host firewalls.
3. The third and final basing is the personal
firewall. This is a software firewall made for
personal computers. They are less complex than
bastion or host-based firewalls. The main
method of security for a personal firewall is port
blocking: by blocking ports that no known
application uses, it enhances security. A common
additional feature of personal firewalls is the
monitoring of outgoing traffic for malware. This
feature helps to prevent the spread of malware
in a network if one of the machines becomes
infected.
CONFIGURATIONS
Firewalls are a central point in the network through
which all network traffic passes. Because of this, they are
a convenient point for network management and an
integral point in network security. This allows firewalls
to be configured to perform some advanced network
management, allowing additional features to be easily
added into the firewall.
DEMILITARIZED ZONE (DMZ)
Say there is a network that needs dedicated servers
with access to the internet or another untrusted
network, and those servers may be susceptible to
attack, while at the same time the rest of the network
those servers reside on needs to be well protected.
Then there is a case for a DMZ. The DMZ will allow the
servers to operate normally with the maximum level of
protection, while at the same time the rest of the
network is safely protected by its own firewalls.
The most common network configuration has two
firewalls, an external firewall and an internal firewall.
Between these firewalls is what is known as a DMZ. This
area of the network is usually where services that need
external network access are placed. Generally, the
security policies within the DMZ are set to allow external
incoming connections. This allows services such as
web servers, email, and DNS to function properly with
both the external network and the local network,
without compromising the network protected by the
internal firewall. The purpose of the external firewall is
to protect the DMZ from attack, while the internal
firewall guards against the consequences of a
compromised external firewall, such as rootkits, bots,
or other malware that has been lodged inside the DMZ.
Either or both of these firewalls will protect the
network from an attack. Additionally, multiple firewalls
can be used to protect internal networks from each
other.
VIRTUAL PRIVATE NETWORK (VPN)
Imagine a company with two buildings, separated by
some distance, that have a need to share network
resources. The distance between these networks is
great enough that it is impracticable to run cabling to
directly connect them, but both buildings have access to
an untrusted network. Then someone has the bright
idea to connect the two buildings' networks through the
untrusted network. But what if someone were to snoop
on this traffic? The answer to this scenario is to simply
use a VPN.
Most firewalls already have integrated virtual private
network (VPN) functionality, and most firewall vendors
are now integrating gateway virus scanning, intrusion
detection/prevention, and Web/application security.
VPNs are a secure way to remotely connect into your
network. They act as a tunnel that connects the remote
systems and a secured network. VPNs use encryption to
ensure that the data being moved between the
networks is secured. This is bundled with proper
authentication to form a secured distributed computing
environment.
A VPN's encryption can be the weak point in network
security. Intrusion Detection and Prevention Systems
(IDPS) cannot detect attacks within encrypted traffic;
attacks from across the VPN can be almost as
dangerous as attacks from within the secured network.
If the VPN tunnel is implemented after the IDPS in a
separate box, the IDPS has no way to detect what kind
of traffic is being moved. For this reason it is
recommended to have the firewall and VPN in the same
box; this allows for an implementation where the VPN
traffic is decrypted and checked by the IDPS at the same
time.
DISTRIBUTED FIREWALLS
What is left is the case where a system administrator
has many systems with different firewall requirements.
These systems can be vastly different from one another:
internal email, external email, web servers, multiple
sub-networks, and possibly the need to dynamically add
new systems or networks, each with its own set of
requirements. At a small scale this isn't that big of an
issue. At about ten systems, a single technician could
easily administer each device independently; at one
hundred, that same technician would be heavily worked
but it might be manageable; at a thousand or more, it
becomes a major problem. Sure, more technicians could
be hired, but then there is the issue of what happens if
one system is configured incorrectly. That one system
could then potentially make the entire network
insecure. This is where distributed firewalls come into
play.
Distributed firewalls decentralize the firewall, allowing
LAN attacks to be distributed to many different
firewalls. This allows the attack to be pushed to a
firewall that is nearer to the source, thus cutting the
attack off at an earlier point. It also means that the
firewalls can cooperate with each other to enforce a
higher level of security. While initially this doesn't
sound like a big deal, one only has to look back at a VPN
to see how this could be highly beneficial. Say there is
an attack from within a VPN to a subnet within the
network. A distributed firewall can easily block the
attack at the earliest firewall, thus cutting it off from the
entire network. This not only prevents the attack, but
prevents the attacker from switching to another VPN
within the network or subnet and continuing the attack.
Distributed firewalls have many other advantages, such
as log aggregation and analysis, firewall statistics, and
fine-grained remote monitoring of individual hosts. This
is possible because of the nature of how distributed
firewalls work. While a firewall by itself can do all of
these, it can only do so for traffic that passes through the
firewall. This can be a problem if a network is
connected at many points to untrusted networks. In
general you would want all traffic to pass through one
point, the firewall; however, this would weaken the
firewall to attacks such as DDoS. Because distributed
firewalls communicate with each other, they can pass
data to each other; this allows for centralized
administration and a single database for the firewalls'
logs.
CONCLUSION
Firewalls are an integral part of network security.
Firewalls keep out attackers and unwanted traffic. As a
central point in the network, they can easily have
additional features added to enhance the network,
including advanced monitoring and VPNs. Without
firewalls, larger networks like the internet could not
function. They are the first and often best defense
available.
CHAPTER 5. DENIAL OF SERVICE ATTACKS
In today's society, the Internet is more prevalent than
was once envisioned. The Internet is continually
pressing forward and becoming integral to multiple
facets of our daily personal and business lives. People
rely on the services provided on the Internet for
everything from personal entertainment and banking
to working in an organizational environment. In fact, as
an example, many jobs that were once brick-and-mortar,
physical-presence jobs are now done via telecommuting.
Think of all the distance learning schools that have
virtual faculty. As another example, there are some very
large organizations that report that over 60% of their
technical staff (programmers, database analysts,
networking engineers, etc.) work "virtually." From a
business perspective, even the smaller, more unique
(boutique) organizations as well as the very large ones
require some level of work with the Internet. Further,
governmental agencies have also
implemented many front-facing websites and services
to connect with their constituents.
The Internet age has spawned malicious activity that
has prevented organizations and governments from
offering services via technological means, through
DDoS (Distributed Denial-of-Service) attacks. Attackers
can bring services people rely on to a halt, disrupting
business operations and causing great problems for the
networks that are attacked. Millions of dollars are lost
annually due to DDoS attacks. Further, DDoS attacks are
now more widespread than ever, can come from
anywhere in the world, and are getting harder to trace
due to advanced techniques used by hackers.
In this section, we will discuss the different types of
DDoS attacks and the different methods for DDoS
detection and prevention that are currently being used,
and we will make an assessment of what one can do to
protect oneself against DDoS attacks.
OVERVIEW OF DDOS ATTACKS
A denial of service (DoS) attack occurs when a target
machine is flooded with malicious traffic until its
resources are exhausted, which causes the system to go
offline. Expanding on the DoS concept, a distributed
denial of service (DDoS) attack works much the same
way, except that in this particular instance the attack is
amplified by enlisting other machines and computers in
the attack. Most large-scale DDoS attacks rely
on botnets. DDoS attacks all share four basic elements.
These elements are:
1. The attacker.
2. Compromised hosts, known as handlers, capable
of controlling multiple agents that spawn an
attack.
3. Attack daemons or zombie hosts, responsible for
generating packet streams towards the intended
victim.
4. The victim or target host(s).
The machines (the compromised hosts and attack
daemons) responsible for the attack are usually
external to the victim's network to avoid an efficient
response from the victim, as well as external to the real
attacker's network to avoid liability if the attack is
traced.
Preparing a DDoS attack involves the selection of agents
that have a vulnerability a hacker can exploit to gain
access.
need to have the resources that will enable them to
generate powerful attack streams. This used to be a
manual process but has been made easier by freely
available scanning tools, which can automate this
process. Once the agent machines have been selected,
the hacker installs code on the machines, commonly in
the form of self-propagating worms, which is
undetectable by the compromised machines and allows
them to be controlled by the hacker. The hacker is able
to communicate with the agents to see which agents are
online at any given time. The agents are configured in a
manner that instructs them to communicate with
multiple handlers to create the most powerful attack
possible.
All this communication can be done through common
protocols such as TCP, UDP, or ICMP. Once the hacker
has configured all the agents, the actual attack can take
place. The hacker commands the attack, selecting the
victim, the duration of the attack, as well as the type of
attack and port numbers to be attacked. The hacker can
adjust packet properties, creating a variety of different
packets and making the attack harder to detect. Creating
a DDoS attack not only exploits the victim's network but
also exploits the machines involved in the attack, which
have no knowledge they are being used maliciously.
DDoS attacks are most powerful when the hacker has
exploited a sufficient number of compromised hosts to
send the greatest amount of useless packets to the
victim at the same time.
DDOS ATTACK TYPES
There are many different types of DDoS attacks, but
they all share a common goal: to attempt to prevent the
legitimate use of a service. A distributed
denial-of-service attack differs from a DoS attack in that
it deploys its attack from any number of different
computers, or bots, that can be widely distributed. DDoS
attacks can be classified into two categories: direct
attacks and reflecting attacks. Direct attacks send a
large number of attack packets directly towards a
victim with spoofed source addresses so the response
goes elsewhere. Reflecting attacks use innocent routers
and servers as reflectors: an attacker sends packets that
require responses to the reflectors, with the inscribed
source address of the packets set to the victim's address.
Here is a quick overview of eight of the most common
attack types:
- UDP Flood: A UDP flood attacks random ports on the
victim or target machine, flooding the ports with
packets that cause the machine to check for
applications listening on those ports and report back
with an ICMP packet.
- SYN Flood: A SYN flood attack sends repeated spoofed
requests from a variety of sources to a target server.
The server responds to the requests with SYN-ACK
packets to continue the TCP connection, but instead of
the handshake being completed, the connection is
allowed to time out. With a strong enough attack, the
victim's resources are eventually exhausted and the
server will go offline.
- Ping of Death: Ping of death is a denial of service
attack that manipulates the IP protocol by sending
packets larger than the maximum byte allowance. The
payload is divided across multiple IP fragments, and
once they are reassembled they create a packet larger
than 65,535 bytes, the maximum allowable size, causing
victim servers to reboot or crash.
- Reflected Attack: A reflected attack is one where an
attacker creates forged packets that are sent out to as
many computers as possible. When these computers
receive the packets they reply, but because the source
address was spoofed, the replies actually route to the
target. All of the computers will attempt to communicate
at once, and this will cause the site to be bogged down
with requests until the server's resources are exhausted.
- Peer-to-Peer Attacks: Peer-to-peer attacks exploit
peer-to-peer servers to route traffic to the target
website. People using a file-sharing site are instead sent
to the target website, until the website is overwhelmed
with traffic and sent offline.
- Nuke: Corrupt and fragmented ICMP packets are sent
via a modified ping utility to the target until the target
machine goes offline. This attack focuses on
compromising computer networks and is an old denial
of service attack.
- Application Level Attacks: Application level attacks
target system applications that have more
vulnerabilities. Rather than attempting to overwhelm
the entire server, an attacker will focus the attack on
one or more applications.
- Multi-Vector Attacks: Multi-vector attacks are the
most complex form of DDoS attack. Instead of utilizing a
single method, a combination of tools and strategies is
used to overwhelm the target and take it offline.
Multi-vector attacks can target applications on the
target server as well as flood the target with a large
volume of malicious traffic. These types of DDoS attacks
are the most difficult to mitigate because the attack
comes in different forms and targets different resources
simultaneously.
DEALING WITH THE DDOS PROBLEM
There is no easy solution to dealing with DDoS attacks.
Currently there are many different strategies for
dealing with these attacks. There are three lines of
defense against a DDoS attack:
- Attack Detection and Filtering - during the attack
- Attack Prevention and Preemption - before the attack
- Attack Source Trace-back and Identification - during and after the attack
The best solution to dealing with an attack would
combine all three of these defense mechanisms.
DDOS DETECTION METHODS
DDoS attack detection is a very complicated problem.
Because of the many different types of DDoS attacks and
the rising sophistication of attack methods, there has
yet to be a single method perfected for attack detection.
There are currently many different methods for DDoS
attack detection. We will discuss some of the common
methods used to detect attacks, and look at a method
that may improve on the current methods being used.
Here is an overview of commonly used attack detection
methods:
1. Soft Computing: Soft computing is a common
method used for attack detection. Soft computing
is a general term for describing a set of
optimization and processing techniques that are
tolerant of imprecision and uncertainty. This
method uses neural networks, genetic
algorithms, and radial basis functions because of
their ability to classify intelligently and
automatically.
2. Knowledge Base: Knowledge base detection
methods check incoming network traffic and
events against predefined rules or patterns of
attack. General representations of known attacks
are formulated to identify actual occurrences of
attacks.
3. Statistical: Using statistics gathered from a
network is another common way of detecting
DDoS attacks. Statistical properties of attack
patterns can be exploited for detection. A
statistical model for normal network traffic is
determined, and then tests are applied against
the model to see whether new network traffic
conforms to it. Traffic that does not conform to
the model is classified as an anomaly and a
potential attack (a minimal sketch of this idea
follows this list).
4. Machine Learning: Machine learning
techniques use preventive and deterrent
controls to remove system vulnerabilities on
target machines. Adaptation techniques are used
to launch protocol anomaly detection and
provide corrective intrusion responses.
Enforcing dynamic security policies tailored for
protecting network resources can be especially
useful for detecting DDoS attacks. There are
many automatic tools available and being
developed that attempt to learn specific patterns
of network attacks to protect the target systems
resources.
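Here is a minimal Python sketch of the statistical approach mentioned above; the baseline packet rates and the three-standard-deviation threshold are illustrative choices, not a standard:

import statistics

def detect_anomaly(baseline_rates, current_rate, threshold=3.0):
    """Flag traffic whose packet rate deviates from the normal model
    by more than `threshold` standard deviations."""
    mean = statistics.mean(baseline_rates)
    stdev = statistics.stdev(baseline_rates)
    if stdev == 0:
        return current_rate != mean
    return abs(current_rate - mean) / stdev > threshold

normal = [950, 1020, 980, 1010, 995, 1005]   # packets/sec under normal load
print(detect_anomaly(normal, 1000))    # False: conforms to the model
print(detect_anomaly(normal, 25000))   # True: classified as an anomaly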
DDOS ATTACK PREVENTION METHODS
Many attack prevention methods have been developed
to protect and keep servers up and running. These
systems monitor incoming and outgoing traffic between
servers and are deployed as a barrier between the
Internet and a server. The attack prevention methods
exist in firewalls, intrusion detection systems, and
security enhanced routers. Some of the fundamental
technologies used to prevent attacks include traffic
analysis, access control, packet filtering, redundancy,
and address blocking.
An intrusion detection system logs incoming network
traffic and is able to create and analyze traffic statistics
to provide valuable information about traffic and usage.
This can be used to identify peak usage times and create
traffic profiles to be compared against a baseline to
identify abnormal patterns, which could potentially be
DoS attacks.
Firewalls are able to inspect and filter unwanted
packets. They can allow or deny packets according to
their protocols, ports, IP addresses, payloads, and
connection states. Routers can also employ defenses
similar to those of firewalls, but they are able to keep
malicious activity further away from a targeted network.
We will provide an overview and briefly discuss
commonly used attack prevention techniques that are
currently deployed.
INGRESS FILTERING
Ingress filtering drops traffic entering a network if the
packet's source IP address does not match the domain
prefix from which it should originate. An ISP is able to
reduce spoofing and can locate the source of an attack if
it has implemented ingress filtering. Ingress filtering is
not able to prevent all illegitimate uses, though: a user
can still forge their host's source address with another
address that has a permitted domain prefix.
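A short Python sketch of the ingress check, using documentation address ranges as a stand-in for a real customer prefix:

import ipaddress

# The prefix the ISP assigned to this customer; packets arriving on the
# customer's link must carry a source address inside this prefix.
CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")

def ingress_permit(src_ip):
    """Drop (False) any packet whose source address is outside the
    customer's assigned prefix (a likely spoofed address)."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

print(ingress_permit("203.0.113.77"))  # True: legitimate source
print(ingress_permit("198.51.100.9"))  # False: spoofed, dropped at the edge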
EGRESS FILTERING
Egress filtering filters outbound traffic. It makes sure
that only hosts that have been assigned an address
within the domain's IP address space are allowed to
create outbound traffic. This type of filtering is able to
protect other Internet domains from an attack by an
unknown user of a system. However, the domain's
precious network resources are used to enforce this.
ROUTE BASED DISTRIBUTED PACKET FILTERING
This type of filtering works on the principle that there is
only a finite set of IP addresses from which legitimate
web traffic can be generated. Flows from spoofed IP
addresses are blocked from reaching their destination.
HISTORY BASED IP FILTERING
A database of IP addresses is built, and incoming traffic
is accepted only from addresses on this list. A user may
be able to invade the system by tricking the server into
including their IP address in the database.
SECURE OVERLAY SERVICES
Secure overlay services are able to provide exceptional
protection at the cost of slowing down the client's
system. Hash routing is used, and user traffic is
authenticated via a SOAP (secure overlay access point).
Traffic is routed through a small number of nodes called
servlets to the victim. But because of the cost of slowing
down a client, this method is not recommended for
public servers.
LOAD BALANCING
ISPs can increase the bandwidth provided to critical
connections in order to prevent them from being
affected by an increase in traffic (legitimate or not).
Implementation can be costly and complex. However,
normal performance can improve in a multiple-server
architecture because of this load balancing.
HONEYPOT
An attacker is led to attack a honeypot instead of the
actual system. Because of the way honeypots are
implemented, an attacker believes they are attacking
the actual system. If the signature of an attack is
detectable, records of the attacker can be traced and
stored, including the software and type of attack used.
DDOS TRACE BACK AND IDENTIFICATION
DDoS trace back is a response that takes place after a
victim has been attacked. The IP address of the attacker
sending the packets is traced without using the actual
source information from the packets. Routers are set to
record information about seen packets and their
destinations. This method is not very effective, as the
actual origin of packets cannot always be traced back
due to NAT (network address translation) and
firewalls. This method is also completely ineffective
against reflector-type attacks. The one benefit of DDoS
trace back is that it can be helpful for law enforcement
in tracing the source of DDoS attacks that have taken
place.
IMPACTS OF DDOS ATTACKS
DDoS attacks continue to grow in frequency and impact.
While we mostly hear about massive DDoS attacks in
the news, there are a growing number of smaller DDoS
attacks aimed at poorly defended websites. These
smaller attacks are well planned and can cause lasting
damage. More and more companies are starting to
invest in some sort of DDoS protection, but most are
still relying on firewalls and other traditional solutions.
The number of successful attacks drops drastically
when some sort of DDoS mitigation service is used.
Relying on firewalls and routers to keep your network
safe is no longer a sufficient method of defense.
One of the major impacts of a DDoS attack is the
financial repercussion it can have on a company. When
a company's website is attacked, the downtime of the
site, along with customer complaints and mitigation
costs, can be huge.
The costs associated with DDoS attacks can be
substantial for a company. Companies need to be aware
of the risks that DDoS attacks pose to the business.
CONCLUSION
DDoS attacks come in many forms with varying levels of
complexity, which are always evolving. The current
solutions for detection and prevention that we have
discussed have proved inadequate in protecting
systems from the ever-growing threat of DDoS attacks.
There is more research needed in this area, which will
have to be ongoing to keep up with evolution of DDoS
attack types. Currently the best way to protect a system
from DDoS attacks is to use a comprehensive solution,
which includes detection, prevention, and DDoS trace
back.
CHAPTER 6. CRYPTOGRAPHIC TOOLS
Cryptography involves the enciphering and deciphering
of information so that the information is then only
understandable by the intended receiver. Typically the
information is to be sent for communication between
persons or devices. Cryptography is used in providing
accountability, fairness, accuracy, and confidentiality,
which is applied in a number of ways including:
prevention against fraud in electronic commerce,
validity assurance of financial transactions, showing
proof of identity, and protection of anonymity. The field
of cryptography is very deep and demands much
consideration. Many areas of mathematics come
together to form cryptography, including number
theory, complexity theory, information theory,
probability theory, abstract algebra, and formal
analysis.
Cryptography is a constant game of cat and mouse
between those working to protect data for its owners
and those trying to take data for themselves. The nature
of the problem lies in that it is an uphill battle where the
good guys have to provide an impenetrable defense
while the bad guys only need to find a single flaw to
break the entire system. Making the battle even more
difficult is that the defenders are fighting with one hand
tied behind their back, appeasing users who don't want
their convenience hampered by something they don't
want to have to deal with. These factors add
together to produce a multi-billion dollar industry that
is constantly pushing the boundaries of technology and
mathematics.
Cryptography has been an essential part of warfare
since the days of Julius Caesar and possibly even earlier.
Until the advent of the business class electronic
computer, cryptography was almost exclusively utilized
by governments for military use in issuing orders
among commanding figures. The Caesar Cipher, German
Enigma Machine, and other coding schemes (such as the
use of the Navajo Native American language in WWII as
a radio cipher) were used for relaying military tactics
over dispersed squadrons/forces. At their time, these
ciphers were highly effective. The Caesar Cipher was a
simple substitution cipher that shifted letters down the
alphabet. Perhaps a reason the cipher was strong for the
time was that those who may have intercepted
encrypted messages were illiterate or believed the
message was written in a foreign language. In time, with
increasing numbers of literate and educated individuals,
the cipher inevitably could not maintain its secrecy. In
the 1500s, Blaise de Vigenère, a French cryptographer
and diplomat, developed a polyalphabetic cryptosystem
(as opposed to the Caesar Cipher's mono-alphabetic
cryptosystem), which was much more difficult to break.
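The Caesar Cipher is simple enough to show in a few lines of Python:

def caesar(text, shift):
    """Shift each letter `shift` places down the alphabet, as in the
    classic Caesar Cipher; decryption is the same operation with -shift."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

cipher = caesar("ATTACK AT DAWN", 3)
print(cipher)              # DWWDFN DW GDZQ
print(caesar(cipher, -3))  # ATTACK AT DAWN

Because every letter is shifted by the same amount, letter frequencies survive encryption, which is exactly what frequency analysis exploits and what Vigenère's polyalphabetic scheme was designed to resist.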
Another example of cryptography in warfare is the
development and use of the German Enigma Machine
during World War II. This electromechanical device
used a series of rotating cylinders and reconfigurable
keyboard mappings in predefined configurations to
generate a rolling cipher to protect all German military
transmissions. Every time a letter was typed, the
cylinders would rotate, allowing for different letters to
be encrypted as the same one, making frequency
analysis completely useless, unlike in the past. However,
the device's strength also corresponded to its greatest
weakness.
No letter of the alphabet could map to itself after being
encrypted. It was this fundamental design decision that
allowed British mathematician Alan Turing to
spearhead work on the bombe machine. This
electromechanical device allowed Allied code breakers
to determine the day's Enigma configuration using logic.
The machine would be programmed with a probable
translation of an encrypted message and make
assumptions about the configuration. When a logical
fallacy was detected (e.g. one letter on the plug board
needing to be mapped to two different letters at once),
all deductions relating to that assumption were known
to be false. Since the development of the bombe
machine, cryptographic algorithms have only become
more and more complex and more heavily scrutinized
for such possible weaknesses or flaws.
As digital computing advances, security needs to be
developed and researched at an increasingly rapid pace.
This is a recurring theme and is still applicable today.
Since systems built today will be used for years ahead,
they must be able to stand firm in the midst of what the
future brings: the evolution of attackers, computational
power, and incentives to undermine a widespread
system. With the spread of interconnected devices
across the globe, malicious parties have even more
targets and opportunities to probe systems' security
and exploit vulnerabilities. This has led to widespread
use of cryptographic technologies online.
It is the rise of network traffic and online transactions
that has created a need for securing channel
communication. Shopping, banking, and personal
communications are all protected with modern
encryption standards. With each of these activities
being tantalizing to potential thieves, strong protection
is a must. With the ever increasing need for security,
digital encryption standards only improved over the
years leading to the creation of standards such as the
Data Encryption Standard (DES) and the Advanced
Encryption Standard (AES), hash functions like Message
Digest (MD5) and the Secure Hash Algorithm (SHA), and
the public key cryptography work of Diffie-Hellman and
RSA. All of these technologies are (or were) used in one
way or another to form the backbone of the modern
Web.
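To see what a hash function gives you in practice, here is a short Python example using the standard hashlib module:

import hashlib

message = b"transfer $100 to account 42"

# Two digests of the same message under different hash functions.
print(hashlib.md5(message).hexdigest())     # 128-bit digest; MD5 is now
                                            # considered broken for security use
print(hashlib.sha256(message).hexdigest())  # 256-bit digest; current practice

# Any change to the input, however small, yields an unrelated digest:
print(hashlib.sha256(b"transfer $900 to account 42").hexdigest())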
The main issue seen in the advancements of these
technologies is the slow rate at which old, insecure
technologies are phased out. Despite the
widespread adoption of the Secure Sockets Layer (SSL)
3.0 and Transport Layer Security (TLS) 1.0, in 2006,
many web servers still supported the outdated SSL 2.0.
In addition to the deprecated protocol, a majority of
servers also still allowed connections to be encrypted
with DES-64 or worse, not requiring at least 3-DES. The
good news about cryptography is that we already have
the algorithms and protocols we need to secure our
systems. The bad news is that that was the easy part is
that implementing the protocols successfully requires
considerable expertise. When providing a server for
customers to connect to, it is the responsibility of the
person who runs the server to make sure that the
connection is as secure as possible. This de facto
responsibility is due to the fact that most users have no
idea what algorithms and protocols their machine is
using, they just want it to work and be safe.
In terms of providing security for the future, there have
been many inquiries into and explorations of the
security of AES, but none so far have shown any
significant flaws. This is encouraging, showing that
security researchers are staying ahead of their
competition to make practical encryption as difficult to
defeat as possible.
All modern encryption is based on the concept of
"practical" security as opposed to "perfect" security.
Perfect security is something that can never be
defeated, no matter how long you try to break it.
Practical security relies on the concept that some
encryption algorithm can be so difficult to defeat that
attacking it is not practical. This can be due to the raw
computing power necessary, the fact that the
information would no longer be relevant by the time it
was retrieved, or even because it would take longer
than a person's lifespan with available resources. It's
the restriction on time and resources that provides a
barrier to the attackers. While
these restrictions hamper attackers' advances, the
natural progress of technology plays into
their hands with improvements in processing power
(smaller-scale fabrication processes that now reach
down into the 16nm range) and also in availability (e.g.
cloud computing clusters and services with massive
parallel computing power that can be rented by
anyone). The ease of access to these newer technologies
starts to tip the scales in favor of the attacker who can
now call on massive amounts of computing power
cheaply and on demand.
The most difficult aspect of practical security that needs
to be considered is that it will be used by everyday
people on an everyday basis. What this means is that it
must be fast enough that it does not impede a user's
actions with time-consuming processes, while still being
robust enough to stand up against the constant barrage
of those who want to break it. This balance of easy to
use and hard to break has led to the concept of a
one-way trapdoor function. These are used in the
Integer Factorization Problem (which the RSA
cryptosystem has as its fundamental problem), the
Discrete Logarithm Problem in Finite Fields, and the
Elliptic Curve Discrete Logarithm Problem. The
trapdoor doesn't make the problem easier for attackers:
they would still need to tackle the problem the usual
way, which would take an unfeasible
amount of time. In modern research and use, Elliptic
Curve Cryptography (ECC) has taken center stage for
online security. For years, RSA has dominated the online
landscape as the preferred method of key exchange
services used by Transport Layer Security and the
Secure Sockets Layer, and is now being replaced with
ECC in order to future proof systems. While RSA has not
yet been broken, it has been used for a long time
(technologically speaking) and many fear that it will
soon not be secure enough. With recent discoveries of
how far governments are willing to go when it comes to
electronic security and surveillance, many online
content providers have begun to see the necessity of
proactive protection in order to protect their users and
save face by preventing any breaches from occurring.
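A toy RSA example makes the trapdoor idea concrete. The primes below are tiny, chosen purely for readability (this is the standard textbook example); real keys use primes that are thousands of bits long, and the modular inverse via pow requires Python 3.8 or later:

# Toy RSA: multiplying p and q is easy; recovering them from n alone is
# the hard direction an attacker faces. Knowing (p, q) is the trapdoor.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120, computable only with the trapdoor (p, q)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent, 2753

m = 65                         # a message encoded as a number < n
c = pow(m, e, n)               # encryption: c = m^e mod n
print(c)                       # 2790
print(pow(c, d, n))            # decryption recovers 65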
The main problem with online security, in some ways,
can be traced back to the fact that the web was never
built with security at its core. Once researchers,
designers, and developers started to realize how
widespread and varied the use of the web had become,
unforeseen uses of the technology all but required the
use of secure communications to be practical. In order
to make systems secure, cryptography needs to be
incorporated from a systems conception to installation
Cryptography is not simply an additional component
that can be tacked onto the system as an afterthought.
As development of web technologies picked up in pace,
security concerns began to be built in to the heart of
protocols being designed and implemented, falling more
in line with Schneiders philosophy. Unfortunately, the
nature of the web itself lends to insecurity as it is just an
interconnecting and conglomeration of different
protocols, some of which were not meant to be used in
networked environments. This eclectic nature of
networks allows attackers the ability to compose their
attacks across these different protocols. Composition is
a key issue for protocol security. There are many layers
of security throughout an interworking system, much
the same way as software architectural layers, and they
require careful scrutiny, watching for the not-so-obvious
holes that attackers may find across the different
dimensions of the system. While individual protocols
may be secure, the ways in which they interact may be
vulnerable to interception, injection, or other
vulnerabilities.
Aside from encryption, using secure protocols also
allows for servers to authenticate themselves through
certificates to the end user. Due to the sheer scope that
the internet now encompasses and the ease with which
someone can set up their own server presence on the
web, the new problem of "spoofing" or electronic
impersonation emerges. This new threat has led to the
use of previously developed encryption techniques to
be developed into means of authenticating oneself
online. Such methods of authentication have become
core parts of protocols, such as SSL, that allow for users
to identify servers using asymmetric key cryptography
such as RSA and while simultaneously allowing users to
authenticate themselves using easily memorable
passwords as opposed to their own set of public and
private keys. Through the use of these authentication
mechanisms, phishing attacks and related malware
issues could be reduced, saving end users frustration
113

and saving companies time and money. With the


average malware breach costing a company around $6.6
million, the costs of encouraging widespread
implementation of secure web connections seems like it
would be able to pay for itself.

CONCLUSION
Through this examination of security over the ages, it is
apparent that cryptography has become a field of
ever-expanding research, whether in the pursuit of
strengthening it or of breaking it. With technology
and connected devices doing nothing but proliferating
through our everyday lives, keeping all of our data and
communications secure becomes more and more of a
pressing issue. Throughout the world, new ideas and
inventions are being created and implemented for
people to use for entertainment (communicating with
loved ones or sharing pictures), for making our day-to-day
lives more convenient (sending money, electronic locks
on our houses), or even for making them safer (connected
smoke alarms, collision detection systems). In all of
these areas, there will always be someone who wants to
get their hands on this information for one reason or
another, whether it is to steal from us, impersonate us,
or just to watch the world burn. Yet, also in these areas,
there are those who defend us, who believe that we
have the right to safe and secure lives, and who keep
these dark forces at bay through ever-evolving
mathematical and technological developments. In
conclusion, cryptography will be a concern for a long
time to come.


CHAPTER 7. WIRELESS SECURITY


In the first part, we discuss security and encryption: the
principles and purpose of securing a system and
ensuring its secrecy, before discussing the difference
between symmetric (single-key) and asymmetric
(public-key) encryption, ending with block and stream
ciphers and some examples of encryption protocols.
In the second part, we review some wireless network
technology architectures, after discussing common
attacks on wireless networks (man-in-the-middle,
eavesdropping, DoS, and impersonation) and how to
detect such attacks by monitoring the client, the access
point, or the global traffic.
We follow by discussing best practices and policies
which will help minimize security risks. Attackers are
everywhere today and every technology has its
weaknesses. It is up to administrators to do everything
in their power to secure their systems.
Then, we present how to implement a strong security
architecture, using a VPN, a user authentication
framework, and a shared-key mechanism to ensure
protection.
To conclude, we perform an attack on an unsecured
wireless network. With this attack, we show how easy it
is for someone to access data on an unsecured wireless
network, and hence the need for strong security
mechanisms.

OVERVIEW
Computer security represents a very important part of
computer science today, especially network security.
Almost everything today is linked to one computer
network or another, whether the internet or a private
network, and computers are used for almost every
modern job. Ensuring the security of such practices has
become a priority for users and companies, as the data
exchanged through networks has become more and
more confidential, and the possible threats against
networks have grown.
A large amount of effort has been put into research on
network security, in order to prevent data loss, identity
theft, and every problem caused by a lack of security.
In this section, we will focus on wireless networks:
security issues, their impact, and the protection
mechanisms applicable to this broad branch of
networking technology. We will describe the encryption
methods used today, the usual attack methods used on
wireless networks, and how to detect them. Then, we
will review the architecture and weaknesses of popular
wireless technologies currently in use, before reviewing
protection solutions and defining best practices for
wireless networks. Finally, we will study a strong
wireless architecture, and perform a simple network
attack on an unsecured network to demonstrate the
need for security and encryption.

SECURITY AND ENCRYPTION


In this part, we will present basic concepts of security
and encryption, such as symmetric/asymmetric
encryption, different methods of encipherment, and
some examples of encryption algorithms. The main
purpose of security is to ensure the confidentiality of a
system. This purpose is based on 5 main principles:
1. Authentication: verification of the identity of
each party
2. Confidentiality: only the person for whom the
message is meant can read it
3. Integrity: ensure that the data transferred has
not been altered in any way
4. Non-repudiation: ensure that no party can deny
having sent a message
5. Reliability and availability: ensure the quality
of service for the system

SYMMETRIC AND ASYMMETRIC ENCRYPTION


Encryption is the conversion of data into a ciphertext so
that it is not readable. There are two main ways of
encrypting data: symmetric encryption, using a single
key for both parties, and asymmetric encryption, a more
complex system using a pair of keys for each party: one
for encrypting, the other for decrypting. Both methods
are described below.

SYMMETRIC ENCRYPTION
As a recap from a prior section where we discussed
symmetric encryption, symmetric encryption uses a
single key for both encrypting and decrypting data. The
scheme is very simple: the text is enciphered using an
algorithm and the key, and deciphered using the
reverse algorithm and the same key. Thus, both parties
share the same key.
This encryption system is the most widely used today.
To be effective, we need to use a sound encryption
algorithm (such as AES) and a key of sufficient length,
thus preventing it from being cracked. The most
common attacks on such encryption are cryptanalysis
attacks, which exploit weaknesses in the algorithm or
its implementation, and brute-force attacks against
keys that are too short.
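
As a minimal sketch of symmetric encryption in
practice, the following Java example (class and variable
names are hypothetical) uses the JDK's built-in
javax.crypto API with AES in GCM mode; note that both
parties are assumed to already share the key.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AesDemo {
    public static void main(String[] args) throws Exception {
        // Both parties must already share this key via a secure channel.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];                  // fresh IV per message
        new SecureRandom().nextBytes(iv);

        // Encipher with the algorithm and the key...
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = enc.doFinal("secret message".getBytes(StandardCharsets.UTF_8));

        // ...and decipher with the reverse operation and the same key.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(dec.doFinal(ct), StandardCharsets.UTF_8));
    }
}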

ASYMMETRIC ENCRYPTION
As we described in a prior section, asymmetric
encryption (aka Public Key Encryption) relies on the
sender and the receiver having different keys to encrypt
and decrypt messages. The emphasis of the term (and
the term only) is that the public key is one of the two
keys in asymmetric encryption that can actually be
publicized. I can place my public key on any public
forum, send it with each email, or post it anywhere, and
it will still keep an encrypted message secure even if
everyone has the public key available. Again, to encrypt
and later decrypt a message with an asymmetric
algorithm, you need not only the plaintext message but
also the key pair: the public key to encrypt and the
matching private key to decrypt a message.
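
The asymmetry is easy to see in code. The following
minimal Java sketch (names hypothetical) generates an
RSA key pair, encrypts with the public key, and decrypts
with the private key:

import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class RsaDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();   // public + private key

        // Anyone holding the public key may encrypt...
        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ct = enc.doFinal("secret message".getBytes(StandardCharsets.UTF_8));

        // ...but only the holder of the private key can decrypt.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        System.out.println(new String(dec.doFinal(ct), StandardCharsets.UTF_8));
    }
}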

BLOCK & STREAM CIPHERS


There are two methods for enciphering data, the block
cipher and the stream cipher, presented below.

BLOCK CIPHER
The block cipher, as its name suggests, takes as input a
block of data/plaintext to produce a block of cipher
data. Thus, all the data to be enciphered is cut down
into blocks, in order to produce multiple blocks of
enciphered data.
Block ciphers are usually more susceptible to noise
during transmission, resulting in a loss of all the data
contained in the affected block. However, they can
provide integrity and authentication protection.

STREAM CIPHER
A stream cipher differs from a block cipher in that it
takes the plaintext as input bit by bit. It has two main
components: a key stream generator, the main unit in
the stream cipher, and the mixing function, usually an
XOR function. To start, we need an Initialization Vector
(IV), which seeds the key stream generator used to
produce the enciphered data.
Stream ciphers are faster than block ciphers, but they
are more difficult to implement correctly and more
subject to user error, resulting in a lack of security.
Finally, block ciphers are better when the amount of
data to encrypt is known, such as files, whereas stream
ciphers work best for continuous streams of data, for
example on a network.
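
To illustrate the structure just described, here is a toy
Java sketch of a stream cipher: a keystream generator
seeded with an IV, mixed into the plaintext byte by byte
with XOR. The generator here is deliberately simplistic
(a linear congruential generator) and is not
cryptographically secure; it only shows the mechanics.

public class XorStreamSketch {
    private long state;                            // keystream generator state
    XorStreamSketch(long iv) { this.state = iv; }  // seeded by the IV

    byte nextKeystreamByte() {
        state = state * 6364136223846793005L + 1442695040888963407L;
        return (byte) (state >>> 56);
    }

    public static void main(String[] args) {
        byte[] data = "stream ciphers work bit by bit".getBytes();
        long iv = 0x12345678L;                 // shared Initialization Vector

        // Encrypt: XOR each plaintext byte with the next keystream byte.
        XorStreamSketch enc = new XorStreamSketch(iv);
        for (int i = 0; i < data.length; i++)
            data[i] ^= enc.nextKeystreamByte();

        // Decrypt: the identical keystream XORed again restores the text.
        XorStreamSketch dec = new XorStreamSketch(iv);
        for (int i = 0; i < data.length; i++)
            data[i] ^= dec.nextKeystreamByte();
        System.out.println(new String(data));
    }
}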

WIRELESS NETWORK, GLOBAL ARCHITECTURE AND WEAKNESSES
The next section presents the global architecture of the
most widely used wireless technologies, and their
weaknesses.

COMMON ATTACKS ON WIRELESS NETWORKS

Let's start by presenting the most common attacks
against a wireless network, and how they are
performed.

SURVEILLANCE - EAVESDROPPING
On unencrypted networks, the attacker can read the
data stream sent to and from the network. On
encrypted networks, such as those using WEP, the
attacker can use cracking software in order to break
the key.

DOS ATTACK
A DoS attack can be performed on network layers 1 and
2. The layer 1 attack consists of emitting a strong signal,
stronger than the one attacked, in order to increase the
noise on the channel used and prevent any user from
accessing the legitimate network. The layer 2 attack
consists of flooding the network and the attached
clients with malicious packets.

IMPERSONATION
After surveillance, an attacker is able to identify the
MAC addresses of the users of the network. Even if the
network is encrypted (using WEP, for example), a stolen
MAC address can be used for different purposes, such as
accessing a network that only allows a certain range of
MAC addresses.

MAN IN THE MIDDLE


This attack is made in two stages. First, the attacker has
to take down the legitimate Access Point (AP), using a
DoS attack for example, and then set up a bogus AP with
the same SSID as the first so that the victim connects to
it. The attacker can then read and/or modify every
packet before sending it on to the legitimate AP.

DETECTING AN ATTACK
There are several ways to detect an attack on a wireless
network. They are presented below.
1. Monitor the access points: Monitor every packet
sent by every known AP on the network. If a new
AP appears, one goes down, or any change occurs
which is not supposed to happen in the network,
an attack could be occurring.
2. Monitor the clients: It is possible to monitor the
clients, with mechanisms such as black-listing,
detecting potentially harmful packets from a
client (such as floods), detecting re-use of the
same IV for generating a WEP key (see the WEP
explanation below), and checking the IEEE
802.11 header of the client for changes (if its
MAC/IP address has been stolen).
3. Monitor the global traffic: Monitoring the global
traffic can be used to detect flood attacks using
de-authentication, de-association, authentication,
association, and bogus authentication frames.
Failures to authenticate and associate to the
network can also be monitored.

WLAN 802.11 FAMILY ARCHITECTURE


802.11-based networks are very common and are used
in public areas such as coffee shops and hotels, and in
private areas like offices, homes, and industrial
warehouses. Security issues arise because these LAN
networks broadcast on radio frequencies (RF).

Global architecture
The 802.11 standard is structured as follows:

Standard    Data Rate (Mbps)    Frequency (GHz)
802.11      1-2                 2.4
802.11a     6-54                5
802.11b     1-11                2.4
802.11g     20+                 2.4

AUTHENTICATION
WLAN authentication consists of:
1. Service Set IDs (SSIDs): Service Set IDs
distinguish one network from another. The client
must have the correct SSID in order to connect to
the network.
2. Open Authentication: Open authentication is null
authentication, and any device will be authorized to
connect to the network. It is used in areas where
devices cannot run complex authentication
algorithms or need to connect quickly. The device
only needs to know the correct SSID in order to
connect.
3. Shared Key Authentication: In shared key
authentication, a client can connect to a WLAN that
uses the Wired Equivalent Privacy (WEP) protocol.
It works in the following steps (a minimal code
sketch of this exchange appears after this list):
a. The client sends an authentication request
to the access point.
b. The access point sends an authentication
response with a challenge text.
c. The client uses a WEP key to encrypt the
challenge text and sends an authentication
request.
d. If the access point can decrypt the
authentication request and retrieve the challenge
text, then an authentication response is sent that
grants the client access.
4. MAC Address Authentication: Although it is not
part of the 802.11 standard, it is supported by
vendors like Cisco. The procedure compares the
client's physical MAC address to a local list of
allowed devices.
5. WEP Encryption: WEP encryption is employed
with the goals of confidentiality, access control,
and data integrity. It can use either 64-bit or 128-
bit encryption.
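
Below is a minimal Java sketch of the shared-key
challenge/response exchange described in step 3. Note
the hedge: real WEP uses the RC4 stream cipher keyed
with an IV prepended to the WEP key; AES is
substituted here purely to keep the example short and
self-contained.

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class SharedKeyAuthSketch {
    public static void main(String[] args) throws Exception {
        // Pre-shared by the access point and the client (hypothetical value).
        byte[] sharedKey = "0123456789abcdef".getBytes();

        // (b) The access point issues a random challenge text.
        byte[] challenge = new byte[16];
        new SecureRandom().nextBytes(challenge);

        // (c) The client encrypts the challenge with the shared key.
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(sharedKey, "AES"));
        byte[] response = c.doFinal(challenge);

        // (d) The AP decrypts the response; a match grants access.
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(sharedKey, "AES"));
        boolean ok = Arrays.equals(c.doFinal(response), challenge);
        System.out.println(ok ? "access granted" : "access denied");
    }
}

As the weaknesses below show, an eavesdropper who
captures both the challenge and the response gains
material for attacking the key, which is one reason this
scheme is considered insecure.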
Weaknesses and Common attacks
Each one of the authentication methods described above
has its weaknesses:
- SSIDs are stored in plain text and can be found
using network sniffing.
- Open authentication has no security measures.
- In Shared Key Authentication, the challenge text
is exchanged over the wireless network and is
prone to a man-in-the-middle attack. Packet
sniffers can capture the challenge text and the
response.
- MAC Address Authentication can be tricked when
an attacker spoofs an address using a protocol
analyzer.
- WEP encryption is prone to key attacks, secret
key problems, confidentiality attacks, integrity
attacks, and authentication attacks.

MITIGATION MECHANISMS
It is recommended to use encryption wherever possible,
at minimum WEP (though stronger protocols such as
WPA2, discussed later, are preferred), to compensate for
the lack of security offered by SSIDs and open
authentication.

WPAN 802.15 - BLUETOOTH, ZIGBEE & NFC


The following section presents the 802.15 wireless
technologies. These technologies use short-range
connectivity, which can be Low Rate (LR), with low data
transfer speeds but long battery life, or High Rate (HR),
with high data transfer speeds. The most common of
them are Bluetooth, ZigBee, and Near Field
Communication (NFC).

GLOBAL ARCHITECTURE
WPAN can be divided into two global architectures:
low-rate and high-rate. Low-rate architectures use low
radio frequencies, whereas high-rate architectures use
high radio frequencies.

LOW RATE ARCHITECTURE


In this kind of topology, there are two types of devices,
the Full Function Device (FFD) and the Reduced
Function Device (RFD), and two types of network
topology: the star network and the peer-to-peer
topology.
- Star Topology: In a star network topology,
composed of a PAN coordinator and several FFD
and RFD devices, every communication goes
through the PAN coordinator.
- Peer-to-Peer Topology: A peer-to-peer network
topology works as an ad-hoc, self-organizing, and
self-healing network. Any device can communicate
with any other, even if the network is organized
around a PAN coordinator.

HIGH RATE ARCHITECTURE


An HR-WPAN network is called a piconet, and network
nodes are known as Devices (DEVs). One node assumes
the role of Piconet Coordinator (PNC), providing a
synchronization service between the nodes. The
network is ad-hoc, which allows nodes to leave and
enter the network very easily. There are 3 types of
piconet topologies. These include:
1. Independent Piconet: An independent piconet
is formed with a network coordinator and
several network nodes.
2. Parent Piconet: The parent piconet controls the
functionality of one or more piconets.
3. Dependent Piconet: The dependent piconet is
divided into child (or neighbor) piconets. Child
piconets are created by parents to extend the
network coverage and/or provide additional
resources to the parent.

WEAKNESSES, COMMON ATTACKS AND MITIGATION


MECHANISMS

Weaknesses and common attacks depend on the
technology considered. We can find the common
attacks (DoS, eavesdropping, impersonation, etc.) on
almost every technology, but the way to perform each
and to defend against it changes for every one. We will
present threats and mitigation mechanisms for some
very common technologies.

BLUETOOTH
Bluetooth enables two or more users to exchange data
over a short range. Before exchanging data, a device has
to be discovered by another, and a pairing between the
devices has to be made. Then, one device will act as the
master and the others as slaves. All communication is
managed by the master. Before being paired, devices
exchange a link-key that allows them to recognize each
other afterward. Bluetooth is susceptible to several
attacks:
- Bluejacking: sending unwanted data to a
Bluetooth device, using for example the BlueChat
protocol (a chat protocol).
- BlueSnarf: allowing the attacker to access phone
data without authentication.
- BlueBump: a weakness in key management
allowing devices which are no longer granted
authorization to access the phone anyway if they
are still paired.
- BlueSmack: a DoS attack, very similar to the Ping
of Death attack.
- BlueDump: resetting the link-key storage, thus
allowing eavesdropping during the new key
exchange.
Some countermeasures can be used to prevent such
attacks. First, when defining the PIN code between two
devices, do not do it in a public area, as it can be
eavesdropped during the pairing process. Do not
re-enter the PIN code every time a pairing is wanted;
reuse the existing pairing. Also, turn your device to
non-discoverable mode every time your transmission is
finished, thus preventing it from being discovered and
potentially attacked.
Finally, never pair with someone whose identity you do
not know for sure.

ZIGBEE
ZigBee is a Low Rate WPAN, low cost and with very low
power consumption, using two-way communication.
ZigBee is vulnerable to several attacks:
1. Jamming, a sort of DoS attack. As ZigBee systems
often use only one channel, it is easy to send a
lot of information on this channel in order to
disable the device.
2. Collision attacks, especially using
acknowledgment frames, which are very effective
and difficult to detect.
3. Route disruption, sending false association
requests to the coordinator, running it out of
capacity for any legitimate associations. A
compromised coordinator can also attract targets
and perform attacks against them once they are
associated.
4. It is also possible to send a packet to a
non-existent address. No verification is made, and
the packet will be sent again and again as no
acknowledgment is received.
It is possible to protect against such attacks. For
acknowledgment frames, a solution would be to check
whether an acknowledgment frame is actually needed,
to prevent the use of out-of-scope addresses. Also, key
management is centered on the trust center; thus, for a
large network, keys are carried across the entire
network and this can be dangerous. To mitigate this
problem, a hierarchical or distributed key management
scheme should be considered.

NEAR FIELD COMMUNICATION


NFC is a wireless technology using very short-distance
communication, often when two devices are very close
(a few centimeters). The shortness of the
communication range gives a certain security aspect to
the communication. However, since data are not
encrypted, certain attacks can be performed:
- Eavesdropping attacks, using an antenna.
- Simple DoS attacks, making noise on the
communication channel used, preventing the user
from receiving any message.
- Data insertion, where an attacker sends a response
before the legitimate responder can.
To protect against such attacks, establishing an
encrypted channel is the best mechanism, as it can
prevent both eavesdropping and data insertion.

WMAN - 802.16
Specification 802.16 consists of wireless broadband
communication standards for Metropolitan Area
Networks (MANs). It is designed to cover a larger area
than WLAN networks and uses tall antennas to make
this possible.


GLOBAL ARCHITECTURE
The security architecture is laid out as follows:
- Security Associations: Security associations are
sets of security information used to establish
communication between the base station and the
subscriber stations.
- Certificate Profile: An X.509 digital certificate
used for identifying the subscriber stations.
- Privacy Key Management (PKM): Responsible for
the authorization, periodic reauthorization, and
reception or renewal of keys for subscriber
stations.
Communication between base stations and subscriber
stations takes place in three steps:
a. Subscriber Authorization
b. Exchange of Key Material
c. Encryption of Data Stream and Key Management

WEAKNESSES AND COMMON ATTACKS


WMAN faces some threats that are common to wireless
networks:
- Properly positioned radio receivers can intercept
messages sent over the wireless channel.
- Properly positioned radio transmitters can write
messages to a wireless channel.
- An attacker can use interference and distance to
his or her advantage by forging communication
between two parties, reordering and selectively
forwarding frames (man in the middle).
Other physical-level attacks include:
- Water torture attack: The attacker drains the
receiver's battery by sending a series of frames.
- The attacker jams a radio spectrum and denies
service to the network.

MITIGATION MECHANISMS
In order to prevent the interception or writing of
messages by unauthorized parties, confidentiality
mechanisms and data authenticity mechanisms need to
be in place. To prevent an attacker from forging
communication between two parties, replayed frames
need to be detected.
Jamming attacks can be prevented by increasing the
power of signals using high-gain transmission antennas
or by increasing the bandwidth using spreading
techniques. Currently, the police are responsible for
preventing physical attacks on networks.

WIRELESS WIDE AREA NETWORK (WWAN) - UMTS-3G SECURITY REVIEW


WWAN is a very broad branch of wireless networking; it
includes networks used by mobile phones, such as GSM,
UMTS-3G, 4G, and Cellular Digital Packet Data (CDPD).
Due to the broad range of protocols available, and the
differences in how security is handled in each of them,
we choose to study one particular protocol, UMTS. UMTS
is one of the bases for the 3G network, widely used
today.

GLOBAL ARCHITECTURE
The 3G protocol is based on its predecessor, GSM, but is
nonetheless very different.
The user's phone first communicates with the Node-B,
which is the base station equipment. Each Node-B then
communicates with its Radio Network Controller (RNC).
The RNC manages multiple Node-Bs, allocating
resources and capacity for data calls.
The core network is directly inspired by GSM. There are
two signaling and transport domains available, Circuit
Switched (CS) and Packet Switched (PS), both
communicating with several databases. The databases
are usually regrouped into the Home Service Subsystem
(HSS) and the CAMEL Service Environment (CSE).
The CS domain contains the Serving MSC (MSC) and the
Gateway MSC (GMSC), connecting the Node-Bs and the
RNCs to the rest of the core. The VLR is the database
containing every user identity. For the PS domain, there
are the Serving GPRS Support Node (SGSN) and the
Gateway GPRS Support Node (GGSN).

SECURITY CONCEPTS IN THE UMTS-3G PROTOCOL
3G has multiple features to ensure strong security
across the protocol, all while maintaining compatibility
with the GSM network.

SIM BASED AUTHENTICATION


This security principle is inspired by its counterpart in
GSM. 3G has an authentication system using a
symmetric key shared between the SIM card and the
authentication center. This key depends only on the
operator issuing the SIM card, limiting the impact of a
compromised device: if a device is hacked, only the
operator issuing the card might be impacted, leaving all
other users safe.

MUTUAL ENTITY AUTHENTICATION


UMTS provides entity authentication for both the user
and the network, ensuring security both for the
subscriber of the network and for the network itself
against illegitimate users.

DATA INTEGRITY
Data integrity is ensured using two main features,
integrity algorithm agreement and integrity key
agreement, allowing data integrity to be checked
between the sending and receiving entities.

USER DATA CONFIDENTIALITY


Confidentiality of user data is achieved by agreeing on
the algorithm and key to use between sender and
receiver, allowing all data sent and received to be
encrypted.

USER IDENTITY CONFIDENTIALITY


To provide confidentiality for the user identity, a
temporary identity is associated with the user every
time he or she enters a certain zone. Once the user
leaves the given zone, the identity is given to another
user, preventing anyone from knowing exactly which
temporary identity belongs to which user.

NETWORK AND APPLICATION LAYER SECURITY


UMTS uses some network and application layer security.
First, an IP-based protocol is used to section the
network into security domains. Also, SS7 (Signaling
System 7, the telephony signaling protocol suite) is used
for communication, and operators may add more
security features.

WEAKNESSES
When reallocating the Temporary Mobile Subscriber
Identity (TMSI), it may happen that the Serving Network
(SN) or VLR fails to associate the TMSI with the
International Mobile Subscriber Identity (IMSI). In such
a case, the IMSI is requested directly from the user. This
may allow an attacker to pose as an SN and request the
permanent user identity. Also, an IMSI requested this
way is transmitted unencrypted on the radio path,
allowing eavesdropping.
A firewall protects the network against external attacks,
but not against a compromised mobile device with
access to the network.
Data are transmitted from the mobile device to a
gateway before being transmitted to the server. The
data are secured between the mobile and the gateway,
but not between the gateway and the server, which may
lead to insider attacks.

PROPOSED PROTECTIONS
To avoid the use of the IMSI directly on the network, it
is possible to use two TMSIs instead of one. Thus, if the
first TMSI is compromised or fails to give an IMSI, the
second one is used. If both fail, the user is not attached
to the network.
It is also possible to enable the use of an end-to-end
VPN for data transmission. Thus, the data travels
encrypted directly from the user end to the server end,
without the need to be decrypted and eventually
re-encrypted on the network.

POLICIES AND BEST PRACTICES


It is necessary to have best practices and usage policies
in place in order to minimize the threats described in
this section. Following are steps that can be taken to
ensure that administrators have done everything in
their power to prevent unauthorized actions.
- Change the manufacturer's SSID to a secure SSID:
Changing the default SSID from "tsunami" or
"Linksys", for example, will make it harder for
attackers to access the wireless access point
(WAP). The new SSID should not contain any
identifiable information about the company and
should be a lengthy collection of random
characters including letters, numbers, and
symbols.
- Encrypt passwords: The SSID password and WEP
key stored in the registry file should be encrypted
to thwart perpetrators.
- Enable all security features: Since embedded
security features are disabled by default, it is
beneficial to deploy all security options available.
Strong authorization and encryption should be in
place. For businesses, this means that either
802.11i or a VPN should be used. Application
encryption such as SSH or SSL should be used
along with WEP to minimize the threat of network
sniffing.
- Disable the Dynamic Host Configuration Protocol
(DHCP): DHCP provides an IP address to anyone,
regardless of whether they are authorized or not.
Static IP addresses should be used instead.
- Employ a closed network: The SSID should be
invisible on network lists and should have to be
typed in by users wanting to connect.
- Control/dismantle rogue access points: Rogue
access points are installed without the knowledge
of the administrator. Surveys of the site should be
conducted routinely to track APs. APs should be
kept away from interfering electronics such as
elevators and microwaves. They should be placed
where physical access is strictly controlled.

NETWORK MONITORING
The network must constantly be monitored for unusual
happenings. A Network-based Intrusion Detection
System (NIDS) should be in place along with antivirus
software. All threat detection and prevention systems
should be constantly updated, and logs should be
analyzed.

BACKUP DATA
Regardless of how many safety measures are in place, it
is always important to have data backed up.

EFFICIENT SECURITY ARCHITECTURE


In this section, we describe what some current efficient
security architectures could look like. First, we will
discuss an open network with internal VPN
architecture, followed by a user-authentication
architecture, finishing with a shared-key mechanism
architecture.

OPEN NETWORK WITH VPN


In this situation, we assume that all data on the network
is untrusted, and neither authentication nor encryption
is performed before anyone accesses the network (i.e.,
we consider that anyone can access the network). Inside
this network is our LAN, with sensitive information.
This architecture has some advantages: it is very simple
and cheap, and the security principle is very clear: no
traffic is trusted.
Considering such an architecture, there are three main
ways to protect our network. The first is using VLAN
tagging to isolate traffic (VLAN tagging allows every
packet to be tagged with an ID in order to identify the
VLAN it belongs to), but this requires that the entire
network support VLAN tagging in order to be effective.
It is also possible to completely isolate the LAN,
dropping every packet from the network trying to
access it, using a firewall or a router. Finally, it is
possible to use a separate internet connection alongside
the network. This is the simplest and most secure
solution, but also very expensive.

This type of architecture is very general, but also
widespread. Using these basic principles should prevent
major threats, but specific mitigation mechanisms
depending on the particular architecture remain
necessary in order to provide truly efficient protection.

USING USER AUTHENTICATION - WPA2-ENTERPRISE


One of the most recent systems, WPA2-Enterprise
allows the network to authenticate every client using a
unique username and password, via either LDAP (a
protocol used to look up information on a server) or
Active Directory (a special-purpose directory database).
WPA2 offers strong security today, even if its
implementation is complex.

USING SHARED-KEY MECHANISM - WPA2-PSK


As discussed in the best practices, using a shared-key
mechanism can prevent unwanted traffic from coming
onto the network. The idea is to give every authorized
client a key (preferably WPA2-PSK, which is the most
secure; avoid WEP, which is not secure). Then, only
allow traffic which uses the keys, and drop any other
traffic.
The disadvantage of such a method is the difficult key
management, and the problem caused by a lost or
stolen key, enabling outsider attacks.
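
For reference, the WPA2 pre-shared key itself is derived
from the passphrase and the SSID using PBKDF2 with
HMAC-SHA1 at 4096 iterations, producing a 256-bit key.
The following minimal Java sketch (the passphrase and
SSID values are hypothetical) shows the derivation
using the JDK's SecretKeyFactory:

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Wpa2PskSketch {
    public static void main(String[] args) throws Exception {
        String passphrase = "correct horse battery staple"; // hypothetical
        String ssid = "ExampleNet";                          // hypothetical

        // 802.11i: PSK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096, 256 bits)
        PBEKeySpec spec = new PBEKeySpec(
                passphrase.toCharArray(), ssid.getBytes("UTF-8"), 4096, 256);
        byte[] psk = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec).getEncoded();

        StringBuilder hex = new StringBuilder();
        for (byte b : psk) hex.append(String.format("%02x", b));
        System.out.println("PSK = " + hex);
    }
}

This is also why a long, random passphrase matters: the
derivation is deliberately slow, but a short passphrase
can still be brute-forced offline from a captured
handshake.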

AN ATTACK PERFORMED: EAVESDROPPING


In order to demonstrate the danger of unencrypted LAN
networks and insecure website connections (e.g.,
without HTTPS), we performed an eavesdropping
attack on a Wi-Fi network. We wanted to show that it is
possible for an attacker to steal a login and password
from someone using unencrypted Wi-Fi.

ATTACK ENVIRONMENT
Our attack was very simple. Our victim (V) was surfing
the Internet using unencrypted public Wi-Fi and wanted
to create a discussion forum to discuss network security
matters.
Our attacker (A) was monitoring all the Wi-Fi traffic in
the area using Wireshark and was able to see V's traffic.
As the Wi-Fi was unencrypted and the website
unsecured, A was able to see all of V's internet traffic.

ATTACK RESULT
The attack allowed us to discover the password and email used to create the forum. As both the Wi-Fi and
website connection were unencrypted, we were able to
148

read the packet containing the FORM data used to send


the forum creation request.

It is easy to eavesdrop data from a Wi-Fi connection


when Wi-Fi is unencrypted, such as most public Wi-Fi
available in restaurant and other places. Especially, this
attack shows the importance of using a secured
connection on internet, and using an encrypted
connection every time valuable data are transmitted on
the network.

CONCLUSION
In this section, we have looked at various wireless
technologies, from smaller wireless local area networks
to extensive wireless wide area networks. Each
technology has its own weaknesses, and administrators
need to know the ins and outs of the technology used in
order to thwart attackers and ensure network security.
Laxness in security can lead to devastating
consequences for individuals and business entities,
since almost all tasks performed today are over the
internet.


Merely relying on the minimal default security provided
by manufacturers is not enough, and administrators
need to go above and beyond to properly mitigate risk.


CHAPTER 8. OPERATING SYSTEM SECURITY
The field of operating system security is complex and
requires constant renewal. We detail the recommended
procedures for system administrators in securing
endpoints when they are added to the network and in
the future, as well as tools to help in this process, with
the goal being to reduce the amount of time any given
end user system remains unprotected. We then discuss
the methods of building an operating system with
security in mind. A system can be used to enhance
kernel survivability in the event of a compromised
application attack. We conclude with remarks on this
approach, as it pertains to security and future viability.
The practice of information security, though commonly
thought to only deal with preventing access to sensitive
data, actually encompasses many other fields of
computer science and software engineering. Similarly, it
is not only about protecting key informational assets
such as databases, servers, and websites; it is also
concerned with securing any entry point into the
organization. User systems, the personal computers and
other devices (often referred to as endpoints), play a
critical part in the security of an organization's
information because they are ubiquitous, heavily used,
and are not under the complete control of the
administrators at all times.
Indeed, securing endpoints can be very difficult, as their
position as end user systems means administrators
must carefully balance the needs of the organization to
have secure data against the needs of employees to do
their jobs efficiently. A high-security policy may simply
be to prevent all network access to end user systems,
but this makes it nearly impossible for those users to
accomplish their jobs in a timely manner in today's
vastly interconnected workplace. Conversely, while
allowing any endpoint unmitigated access to any
resource may give all employees extremely simple
access to needed resources, it also exposes all resources
to any person using any computer, a critical
vulnerability. Operating system security is about
providing both qualities in the necessary amounts for
the organization.
Operating system security can be approached from
several perspectives: from that of a systems
administrator using commercial or open-source
operating systems to create functional end-user
distributions, and from that of an operating system
developer/vendor, creating a product that is secure out
of the box. While the systems administrator must ask
the question, "How can we configure the operating
system and its applications to be secure for the
organization's needs?", the operating system developer
asks, "How can we build our operating system to ensure
that it is secure from exploits regardless of how it is
used?" We will discuss the specific issues and
techniques used by both.

OPERATING SYSTEM SECURITY FOR SYSTEMS ADMINISTRATORS
The process that systems administrators must
undertake to ensure that all endpoints are secure can be
separated into two major periods: ensuring security
when configuring new machines to be added to the
organization, and maintaining security of existing
systems over time.
General procedures for ensuring new systems are
secure before delivering to the user:
1. Install the operating system and patch it. This
is the most vulnerable period for an endpoint in
this procedure; the installation disk or image
must be fixed in time, and generally only
includes the most recent major service release, if
any patches at all. The patching process can take
several hours, during which the machine must be
connected to the internal update server if it
exists or to the internet so it can download the
patches from the vendor.
2. Configure the operating system to maximize
security. This consists of several steps:
a. Removing any bundled software which is
not needed by the user to perform their
responsibilities, which reduces the
possibilities of unused software
presenting a vulnerability that should not
exist;
b. Setting up any user accounts needed, as
well as assigning appropriate privileges,
in order to prevent users from having
access to resources they shouldn't; and
c. Setting up other access controls not
covered by user account settings, to
maintain the principle of least privilege.
3. Install additional security software. While
most operating systems ship with some form of
firewall or antimalware suite, these products are
generally intended for personal use and are
either not sophisticated enough or not licensed
for use in a large, possibly commercial,
organization. They may not be updated as
frequently or tuned to counter the right threats,
and they may not be able to be configured and
managed separately. Replacing these with
enterprise-grade products as soon as possible
aids in integrating the system into the greater
network.
4. Test the final build. It is advised that systems
administrators test the build with modern
security tests to ensure that it has been
configured appropriately. It is far easier to fix a


hole on one machine than on every machine in
the organization, and testing ahead of
deployment prevents the chance that end user
data is on the machine if it is compromised.

Additionally, it is highly important to plan any
installation procedure, in order to help prevent issues
or inconsistencies in machine configuration.
Similar procedures are also recommended for
maintaining endpoint security over time. These
processes must be performed regularly by systems
administrators, and the specific techniques must be
kept up to date in order to ensure maximum security:
1. Keep the operating system up to date. Create
policies to ensure the operating system
automatically installs patches, usually after
review by the systems administrator. Vendors
produce patches in order to fix vulnerabilities
discovered by internal security teams or by
hackers, and leaving endpoints unpatched
increases the risk that security may be
compromised.
2. Backup data regularly. User data and system
configuration can be lost for many reasons, and
minimizing time during which a user is not able
to do their job is critical to information security.
To that end, regular backups of user data should
be taken and stored securely, generally in
multiple places to increase the chance that data
will survive even severe disasters.
3. Continue to test systems for vulnerabilities.
Even though the OS vendor may produce patches
to fix vulnerabilities they find, these patches are
by no means comprehensive. In many cases,
security testing suites may be able to detect
vulnerabilities before vendors can, and in some
cases can lead to reporting those vulnerabilities.
These tests can also reveal vulnerabilities related
to configurations, which patches may not be able
to fix.
4. Monitor system logs. Suspicious or even
malicious behavior can reveal itself
inadvertently through logged activity. Regularly
collecting and analyzing logs from endpoints can
reveal ongoing or attempted efforts to gain
access to the network. This can be an important
last line of defense for administrators.
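
Relating to the last point, even a very simple log scan
can surface brute-force attempts. The following Java
sketch (the log path and line format are assumptions
based on common sshd logs; adjust for your
environment) counts failed SSH logins per source
address:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class LogScanSketch {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> failures = new HashMap<>();
        // Hypothetical path; Debian-family systems log sshd here.
        for (String line : Files.readAllLines(Paths.get("/var/log/auth.log"))) {
            if (!line.contains("Failed password")) continue;
            // sshd lines read "... Failed password for <user> from <ip> ..."
            String[] tok = line.split(" +");
            for (int i = 0; i < tok.length - 1; i++)
                if (tok[i].equals("from"))
                    failures.merge(tok[i + 1], 1, Integer::sum);
        }
        // Flag sources with many failures as candidates for blocking.
        failures.forEach((ip, n) -> {
            if (n >= 10) System.out.println(ip + ": " + n + " failed logins");
        });
    }
}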

These steps are general and can be applied to any
operating system in use in an organizational setting, but
they do not provide a comprehensive overview of
specific steps to take. Many guides exist for securing
specific operating systems; "Security Highlights of
Windows 7", "Mac OS X Security Configuration", and
"Guide to Secure Configuration of Red Hat Enterprise
Linux 5" are examples of product-specific guides to
taking advantage of specific security settings and
provisions present in those operating systems. It may
also be a good practice to create and maintain
organizational guides with specific information for
colleagues or future administrators.
In addition, there exist several tools and techniques to
help automate this process. Imaging software, such as
DeployStudio or Symantec Ghost, enables the
administrator to set up one machine and then capture
its image and store it. Clients can boot over the network
and download these images. When the system boots for
the first time, it will be fully patched and secured. Using
Sysprep, these images can even be made
hardware-agnostic, allowing for large-scale rollouts
across organizations. Centralized configuration such as
group policy enables administrators to change the
settings of every endpoint at once by requiring them to
check with the Active Directory server at boot or login.
Similarly, roaming profiles can centralize user accounts
themselves, allowing end users to have all their
documents and personalization settings on any machine
they log into, while simultaneously ensuring user
permissions are in sync throughout the organization.
In general, it is important to remember that the security
of an organization's end-user computer systems is an
ongoing process that should itself be kept up to date.
Ensuring that all systems are protected according to the
most recent practices minimizes the likelihood of users
compromising data security from the inside, or of
outside attackers doing the same.


OPERATING SYSTEM DEVELOPMENT


Building an operating system has its own concerns with
regards to security. The operating system is often the
most complex piece of software running on a computer,
and the only software with components that have full
permissions and access to hardware. While there are
many sub-divisions of operating system security that
must be paid attention to in order to produce a secure
system, security of the kernel is perhaps the most
critical, because an exploited kernel can exploit any
application running on the system trivially.
The importance of kernel security is a result of the
kernel's need to run with full access to all system
resources. While most kernels implement a ring model
of security which grants the least amount of privilege
required, the kernel runs in ring 0, which is fully
privileged. The result is that, even though an exploited
application may not be able to do what an attacker
wishes, if that application can exploit the underlying
kernel with which it must interact, the attacker has
gained full access.
This interaction takes place in the form of system calls,
which applications invoke in order to perform many
functions such as accessing I/O devices, talking to other
processes, or modifying kernel memory. Due to its
open-source and community-based nature, the Linux
kernel has been the focus of much security research in
the computer science field (although it is unknown just
how much security research is performed at Microsoft
on the NT kernel). Researchers and developers have
faced many challenges due to the original design of
UNIX (Dennis Ritchie has been quoted as admitting that
UNIX was not developed with security in mind) as well
as the POSIX standard, which Linux, though it does not
comply fully, maintains ties with in order to increase
compatibility with other UNIX operating systems.
As it turns out, a focus on system calls enables a method
of enhancing kernel survivability; that is, the ability of
the kernel to function normally in the event of an
attack.
There are common methods of exploiting a kernel.
These exploits include (1) the buffer overflow, (2) the
integer overflow, and (3) the format string
vulnerability; all involve the use of system calls by an
already compromised application on the system to
exploit and take over its kernel. The problem with
conventional fault-tolerance routines is that these
processes involve replaying the operations that led to
the fault after isolating the actual hazard. In the case of
a security attack, even if the fault is isolated, the
processes cause the attack to be repeated.
Instead, the authors of this approach propose a
three-part system designed to roll back the kernel
without replaying the attack. In order to detect that an
attack on the kernel is being performed, they allocate a
number of bits for every word in kernel memory to
serve as write-protect bits. Memory is then configured
such that each buffer where data is to be written is
surrounded by write-protected memory words. If a
write operation attempts to exceed these bounds, the
operation will try to write to protected kernel memory,
and an exception will be raised.
To isolate this attack, the handler for the raised
exception will be able to use kernel checkpoints
(described below) to bring the kernel back to a known
good state. Rather than just replaying the actions,
however, the offending process will be terminated, and
another process will be retrieved from the scheduler
and assigned to run in its place. This prevents the attack
from being replayed, while minimizing the amount of
time the kernel spends in recovery mode; since there
are other processes waiting and the offending process
won't be allowed to execute any more (it is considered
compromised), scheduling some other process and
continuing as normal is feasible.
To recover from the attack, a system of checkpoints will
need to be in place. These checkpoints are created just
as a system call is invoked, before that call is handled
and results returned. The contents of the checkpoints
involve the state of the kernel as it relates to the process
making the system call, as with each call there exists a
chance the process will attempt to use the call for
malicious purposes. When the exception is thrown,
these checkpoints can be used by the handler to restore
kernel state and to identify the compromised process,
so that it may be terminated.


For future work, the authors discuss the fact that,
though the checkpoints were successful in preventing
the mentioned exploits from bringing the system down,
the sophistication of the survivability modifications
must be greatly enhanced, to cover a wider variety of
situations. Specifically, the authors mention that a
similar scheme could be applied to I/O operations. The
authors also mention applying these modifications to
kernels other than Linux in the future.

CONCLUSION
This research represents an infinitesimal fraction of the
greater field of operating system security. Securing an
operating system is a difficult process; it requires the
cooperation of several parties and a constant routine of
maintenance. While operating system vendors are
responsible for many aspects of the security of their
products, it is impossible for them to guarantee
security. Systems administrators must play an equally
large part in securing end user systems, by establishing
policies both technical and organizational in nature.
Even the end users themselves must be careful not to
breach security, even inadvertently. Once an attacker
has a path into an organization, they can defeat many
security policies and procedures and compromise the
security of information. Though new techniques are
constantly being developed at all levels, it will be years
or even decades before these ideas are ready to be
shipped with mainstream software, if they ever get
there. Only through proven techniques and careful
application can an organization ensure their
information is kept confidential, uncorrupted, and
available to those who need it.


CHAPTER 9. DATABASE SECURITY


This section outlines the various security mechanisms
that are available to protect databases, and explores
them in detail. New threats emerge in the security field
every day, and as a database administrator or security
analyst for a company, it is up to you to delve into these
technologies and ensure that the data is getting the
appropriate security. Specifically, we will look at seven
methods with which to secure data. We will discuss a
technical overview of each security mechanism and, in
some cases, provide visual examples of code and
diagrams. Each of these methods has corresponding
threats, solutions, advantages, and disadvantages
associated with it. It is important to know what tools
are available when dealing with threats, as well as how
to assess risk. Planning for an attack and minimizing
the loss plays a vital role in security. We will also
discuss the basics of disaster recovery, and the tools
available to ensure that a company's information is not
vulnerable to malicious behavior from both users and
hackers.
Also, we will discuss in detail some of the vulnerabilities
that affect database security and how those
vulnerabilities are dealt with in today's society.
Databases are generally designed to promote open and
flexible access to data. Companies use databases to
dynamically create their websites, and with ever-growing
information access, it is crucial for businesses to have
strict database access controls or specialized client
software to view data. However, the design traits that
make databases so appealing can also be their greatest
vulnerabilities. Various new information procedures
and technologies are created daily, and with every new
product and procedure, vulnerabilities arise. It is
important for a company maintaining a database to be
prepared for any threats to its systems, and to have
mechanisms in place to mitigate the impact of these
threats, in addition to having a disaster recovery system
in place. The topic of database security is very broad,
and entails such things as moral and ethical issues
imposed by society, and legal issues such as how to
protect stored information from loss, unauthorized
access, theft, or destruction.

An interesting and relevant situation that affects many
companies and organizations that use databases is:
should users in a multiuser database system be allowed
to grant database privileges to other users, and if they
do, what happens to these privileges if the first user
quits or has their own privileges revoked? A smaller
database would make the issue trivial, as each user
could be traced and revoked as connected to the source.
But how difficult does this process become on a large
multiuser database system with hundreds or thousands
of users?
One of the first ideas might be to implement a password
scheme, and allow the creator of the database to
selectively distribute the password to the users that
need specific data privileges. Unfortunately, this method
can be compromised by brute-force password guessing,
and in the event of revoking any specific employee's or
member's rights, the password would have to be
changed and redistributed to the list of permitted users.
This is impractical, and a different approach is
necessary.


The solution to the authorization problem comes in the
form of two tables in the database, SYSAUTH and
SYSCOLAUTH. SYSAUTH contains the UserID, table
name, and type (whether a table is a relation or a view),
along with columns for each grantable privilege such as
read and insert. One specific column worth noting is the
GRANTOPT column, which shows whether a user is
allowed to grant the privilege to others. Each table that
has users performing actions has up to two rows in
SYSAUTH: one for grantable privileges and one for
non-grantable privileges.
In the Update column of SYSAUTH, the value may be
ALL, NONE, or SOME to indicate which privileges may
be updated. SYSCOLAUTH indicates which columns may
be updated when the field is SOME. By issuing a grant
command, a new row is inserted into SYSAUTH that
records the privileges that have been transferred.
The REVOKE command is used in order to remove
privileges that have been previously granted. Here, the
two tables SYSAUTH and SYSCOLAUTH become
insufficient for remembering who has granted each
privilege to a user. The immediate and obvious problem
occurs when a user that grants a privilege loses the
privilege themselves: the receiver of the privilege must
also be revoked. A further problem arises if that
receiving user has been granted the privilege by a
different user; the system must know whether to retain
or remove the privilege.
For the sake of brevity, the solution comes in the form
of a revocation algorithm that checks whether privileges
have been granted by other sources before they are
removed. Another part of the solution involves
assigning labels to the grants in order to distinguish
and find them on demand. An authorization module is
used to check whether a user has a particular access
authorization before a command is issued.
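
In SQL terms, the grant-with-option and chained
revocation described above look like the following
minimal JDBC sketch (the connection details, table, and
user names are hypothetical; REVOKE ... CASCADE
semantics follow standard SQL):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PrivilegeSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/appdb", "dba", "secret");
             Statement stmt = conn.createStatement()) {

            // Grant SELECT and allow alice to pass it on (the GRANTOPT idea).
            stmt.execute("GRANT SELECT ON accounts TO alice WITH GRANT OPTION");

            // Revoke later; CASCADE also removes grants alice made herself,
            // which is exactly the chained-revocation problem discussed above.
            stmt.execute("REVOKE SELECT ON accounts FROM alice CASCADE");
        }
    }
}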
Managing database permissions and access rights
across users is an important issue in major or
large-scale database ventures. As employees or users
often leave the registered area of database usage, access
rights must be revoked for security purposes. This is a
major issue in database security. Users can be granted
database access privileges that exceed the requirements
of their job function, and these privileges may be
abused for malicious purposes. This is a problem of
larger-scale operations, where the systems
administrators do not have the time to define and
update granular access privilege control mechanisms
for each user; as a result, all users or large groups of
users are granted generic default access privileges that
far exceed specific job requirements. There are other
problems that extend to smaller databases as well,
however, such as encryption issues.

DATABASE VIEWS
In addition to the method of granting and revoking user
privileges, there is another layer of security for
database users called views. These views are merely
queries that are assigned to specific users or groups of
users. When a user in one of these restricted groups
queries a database, the view dynamically creates a
virtual table. The view represents a subset of the data
contained in the base table, restricting access and
limiting the degree of exposure the user gets to the data
depending on their level of access. Rather than creating
sub-tables in the database, these query-based views
provide a level of security and utility to a database. They
can be used to simplify multiple tables into a single
virtual table, or to hide the complexity of a database
from users. From a security standpoint, an
administrator could easily prevent delicate information
from being viewed or changed, while allowing the
intended use of the tables.
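
As a minimal sketch (the connection details, object, and
role names are hypothetical), a view that hides sensitive
columns, and a grant restricted to that view, could be
created through JDBC as follows:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ViewSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/appdb", "dba", "secret");
             Statement stmt = conn.createStatement()) {

            // Expose only the non-sensitive columns of the base table.
            stmt.execute("CREATE VIEW customer_directory AS " +
                         "SELECT name, city FROM customers");

            // Support staff query the view; the base table (holding card
            // numbers, for instance) stays out of their reach.
            stmt.execute("GRANT SELECT ON customer_directory TO support_role");
        }
    }
}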

DATABASE ENCRYPTION
No matter what size a database may be, there is a strong
possibility that important information is being stored,
such as credit card information, user credentials for a
sensitive system (online banking), or mass listings of
personal contact information. Placing data into a
database does not make the data safe; it only gives it
space for storage.
Database encryption is a necessity to shield data from
prying eyes that could intercept data transactions
through various spying and intruding techniques. The
encryption is applied at the Database Management
System (DBMS), and can vary between the different
types of DBMS. One well-known type of database
encryption is called transparent data encryption (TDE)
and is used by the Oracle DBMS.
Transparent data encryption enforces data-at-rest
encryption in the database layer. It functions as a
key-based access control system using a single master
key that protects all encrypted columns. Table keys are
encrypted using the master key and are stored in a
dictionary table within the database. The master key is
kept in a secure location within an external security
module that only the security administrator may access.
In transparent data encryption, data is taken from the
involved tables as clear text and goes through the
external security module containing the master key,
which encrypts the data into ciphertext; read directly,
this yields meaningless symbols and securely masks the
data. The decryption process mirrors the encryption
process, again utilizing the external security module
and the protected master key.
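
As a hedged illustration (the connection details and
table names are hypothetical, and this assumes the
Oracle wallet, the external security module holding the
master key, is already open), declaring an encrypted
column under Oracle TDE looks roughly like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TdeSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/orcl", "sec_admin", "secret");
             Statement stmt = conn.createStatement()) {

            // The ENCRYPT clause asks TDE to encrypt this column at rest;
            // the column's table key is itself encrypted under the master key.
            stmt.execute("CREATE TABLE customers (" +
                         "  id   NUMBER PRIMARY KEY, " +
                         "  card VARCHAR2(19) ENCRYPT USING 'AES256')");
        }
    }
}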
Another method of database encryption focuses on the multi-user database. This method encrypts the entire database and does not provide the encryption/decryption key to the service provider. This means that the user must download the tables from the database and then decrypt them on their own before being able to utilize the results; the database therefore needs to function in its encrypted form.
The user queries the encrypted database, the query is executed server-side, and the encrypted results are returned for the user to decipher. The database owner enjoys the safety of having all of their data encrypted while letting the user client assume the responsibility of decrypting the query responses.
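
A minimal client-side sketch of this model follows, assuming a hypothetical records table whose enc_payload column holds Base64-encoded AES ciphertext. The connection URL, schema, and hard-coded key are illustrative only; a real deployment would use proper key management and an authenticated cipher mode such as GCM.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class ClientSideDecrypt {
    public static void main(String[] args) throws Exception {
        // The AES key never leaves the client; the provider stores only
        // ciphertext. Key handling is simplified here for brevity.
        SecretKeySpec key = new SecretKeySpec(
                "0123456789abcdef".getBytes("UTF-8"), "AES");
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); // demo only; prefer GCM
        cipher.init(Cipher.DECRYPT_MODE, key);

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://provider.example.com/vault", "tenant", "pw");
             Statement stmt = conn.createStatement();
             // The server executes the query but can return only ciphertext.
             ResultSet rs = stmt.executeQuery(
                 "SELECT enc_payload FROM records WHERE record_id = 42")) {
            while (rs.next()) {
                byte[] ct = Base64.getDecoder().decode(rs.getString("enc_payload"));
                System.out.println(new String(cipher.doFinal(ct), "UTF-8"));
            }
        }
    }
}
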
A systems administrator might also implement whole disk encryption on vital workstations and servers to help prevent information from being easily readable in the event the physical device is stolen, whether from a server room or as a company laptop while traveling. This type of encryption only protects the data while the device is powered off; once powered on, the device prompts the user for a decryption key or password. Once the password is validated, all of the data on the machine becomes available. This method of encryption is highly effective at mitigating the data exposure that follows physical theft or loss. As with most security measures, there can be drawbacks in terms of speed and accessibility: it can take a little longer to write to a disk that has whole disk encryption implemented, and if the password is lost, the encrypted data is lost with it.
Database encryption is necessary in the current technological age, where vital information must be protected from malicious users and thieves. Database encryption is one form of defense against intrusions, a prominent example of intrusion being SQL injection.

SQL INJECTION
The topic of SQL injection is known to many individuals who specialize in either web programming or security. SQL injection is an exploit that can read sensitive data from a database, modify database data (insert/update/delete), and execute administrative operations on the database, including shutting down the Database Management System (DBMS). The attacks are performed through user input fields within a web application and can affect any entry field whose input is passed into a database.
An SQL injection attack targets a system that extracts information from input boxes and places it into queries. This sort of attack can be used to trick the system and gain access without having proper credentials. The problem occurs when software is not designed to sanitize the input, for example by removing or escaping special symbols (such as ' or =). SQL injection can be prevented by sanitizing input and by parameterizing queries.
Parameterizing queries is a good coding practice that programmers should naturally adopt when dealing with database input and queries. Parameterizing queries requires developers to define all SQL code first and pass in each parameter to the query at a later point, allowing the database to distinguish between code and data regardless of what user input is supplied.

An example of a parameterized query in Java would look something like this:

String custname = request.getParameter("customerName");
// This should REALLY be validated too:
// perform input validation to detect attacks.
String query = "SELECT account_balance FROM user_data WHERE user_name = ?";
PreparedStatement pstmt = connection.prepareStatement(query);
pstmt.setString(1, custname);
ResultSet results = pstmt.executeQuery();

(Source: OWASP)
As seen above, the customer name is bound as a parameter rather than concatenated into a dynamically constructed query. This successfully protects the system from an attempted SQL injection attack.

Another option for averting SQL injection is the use of stored procedures. Stored procedures are very similar to parameterized queries in terms of effectiveness, and they eliminate the use of dynamically constructed queries. Stored procedures involve the developer pre-preparing SQL statements before passing in parameters. The procedure is stored within the database and executed by the DBMS, which provides security because user input cannot tamper with the procedure's SQL.
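
As a brief sketch, calling a stored procedure through JDBC might look like the following. The procedure name get_account_balance, the connection details, and the schema are hypothetical; the procedure itself would be defined once in the database by a developer or DBA.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class StoredProcCall {
    public static void main(String[] args) throws Exception {
        String userSuppliedName = "alice"; // imagine this came from a web form
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/bank", "app_user", "secret");
             // get_account_balance is a hypothetical procedure defined in the
             // database; callers can only supply parameter values, never SQL.
             CallableStatement cs =
                     conn.prepareCall("{call get_account_balance(?, ?)}")) {
            cs.setString(1, userSuppliedName); // bound as data, not as code
            cs.registerOutParameter(2, Types.DECIMAL);
            cs.execute();
            System.out.println("Balance: " + cs.getBigDecimal(2));
        }
    }
}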

PHYSICAL DATABASE THREATS


While it is important to identify and neutralize virtual threats against the database system, one must not forget that the database is physically stored on a server that can be prone to damage, whether through intentional malicious behavior or natural damage from the elements. This section addresses the physical threats to database security, as well as countermeasures and prevention tactics that can help prevent or mitigate physical server damage.
RAID, or Redundant Array of Independent Disks, is a data management technique that utilizes multiple disks to house data so that if one disk fails, the data is not lost beyond recovery. RAID is commonly implemented in five configurations, each offering unique benefits. These RAID types are:
1. RAID 0: RAID 0 provides data striping (a process that spreads out blocks of each file across multiple disks) and does not contain data redundancy, improving performance but offering no fault tolerance.
2. RAID 1: RAID 1 provides disk mirroring, keeping a complete duplicate of the data on a second disk.
3. RAID 3: RAID 3 is similar to RAID 0 but reserves a dedicated disk for error correction (parity) data, providing good performance and a moderate level of fault tolerance.
4. RAID 5: RAID 5 provides data striping at the block level and distributes error correction (parity) information across the disks, giving excellent performance and good fault tolerance.
5. RAID 10: RAID 10 is a combination of RAID 0 and RAID 1 (mirrored sets that are then striped).
The RAID system is a good way to protect information in the event of electrical damage to one or more disk drives, or in the event of data corruption (through either virus activity or physical injury). RAID offers trade-offs between security and efficiency, as data access times may grow longer depending on how widely the data is spread across the disks.
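
The fault tolerance of the parity-based levels (RAID 3 and RAID 5) rests on a simple property: the byte-wise XOR of the data blocks yields a parity block from which any single lost block can be rebuilt. The toy sketch below illustrates the idea with tiny in-memory blocks; real RAID controllers do the same thing at the disk-block level.

public class ParityDemo {
    public static void main(String[] args) {
        // Three data blocks striped across three disks (toy 4-byte blocks).
        byte[] d0 = {10, 20, 30, 40};
        byte[] d1 = {1, 2, 3, 4};
        byte[] d2 = {7, 7, 7, 7};

        // The parity block is the byte-wise XOR of the data blocks.
        byte[] parity = xor(xor(d0, d1), d2);

        // If the disk holding d1 fails, its block is recoverable
        // from the surviving blocks plus parity.
        byte[] rebuilt = xor(xor(d0, d2), parity);
        System.out.println(java.util.Arrays.equals(rebuilt, d1)); // prints true
    }

    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }
}
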
As more general ways of protecting information, there are two techniques that can preserve original data in the event of failure. Data backup is a standard procedure of periodically copying the contents of a database and storing the copy on an offline form of storage media that can be accessed and restored from if the original data is lost. Data journaling involves maintaining a log file (journal) of changes that have been made to a database to allow for recovery in the event of failure. Journaling can provide automatic replay of previously committed updates from the journal file and can identify processes that were active, and possibly damaged, when the system failed.
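
A minimal write-ahead journaling sketch is shown below, using a simple tab-separated log format of my own invention. A production DBMS journal is far more elaborate (checksums, fsync, checkpointing), but the core discipline is the same: record the change before applying it.

import java.io.FileWriter;
import java.io.IOException;
import java.time.Instant;

public class Journal {
    private final FileWriter log;

    public Journal(String path) throws IOException {
        this.log = new FileWriter(path, true); // append-only journal file
    }

    // Record the change BEFORE applying it to the database, so a crash
    // mid-update can be replayed or rolled back from the journal.
    public void record(String txId, String operation) throws IOException {
        log.write(Instant.now() + "\t" + txId + "\t" + operation + "\n");
        log.flush(); // push the entry out before the data write proceeds
                     // (a real DBMS would also force it to stable storage)
    }

    public static void main(String[] args) throws IOException {
        Journal j = new Journal("db.journal");
        j.record("tx42", "UPDATE accounts SET balance = balance - 100 WHERE id = 7");
    }
}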

DATABASE USERS
Even with all of these security and recovery mechanisms in place, it is important that the users of these systems get the proper training needed to perform their duties in a secure manner. Even at the database administrator level, the admin can paradoxically pose the greatest risk to information security. For example, malicious software picked up through poor browsing habits could compromise the login information of a database administrator. Security training, which teaches industry best practices, is common among companies of varying sizes. Every company that uses technology is at risk of threats from the outside, and having an untrained staff can put a company's databases and other systems at risk. Security awareness training in the workplace covers many topics, including social engineering, email best practices, secure password generation, and a number of others. Employees are the weak link in your network security, and introducing them to security topics will help reinforce a business's first line of defense.

DATABASE SECURITY: CONCLUDING THOUGHTS


It is important for a database administrator at any scale to know what it is they are protecting, and why they are protecting it. Implementing security can be costly and, depending on the methods used, can reduce the efficiency of the systems in question. It takes a user time to type in a password, and it takes a system time to encrypt data; depending on the volume of data being encrypted, it could take a long time. Is it worth the extra time to ensure the data is secure? In some cases it is worth accepting some risk, because threats to your database are inevitable. The implementation of security mechanisms must match the value of the data. It is also important to know and understand the enemy: knowing what threats are present in the security field, and what solutions are available to a business to counter them, can make all the difference. In most cases technology isn't the issue; it is the users. They are part of the solution and a major cause of the problems. Proper security training and disaster recovery techniques can help ensure that malicious behavior will not cause long-term damage to a company's integrity.

CHAPTER 10. COMPUTER AUDITING


Security auditing is important within businesses to ensure that sensitive information stored within their systems is kept safe. This should be an important consideration for any organization, especially given the huge developments in computers and related technologies. Security auditing is the process of recording, analyzing, and reproducing any or all security-related events.
To help ensure that organizations have practices in place to keep their computer systems secure, there are regulations such as the Sarbanes-Oxley Act, as well as compliance auditing. Compliance is also necessary to create good relationships with other businesses and with customers, because they know that a minimum standard is met.
Auditing aims to keep people on the inside and outside from gaining access to data that is outside of their normal viewing capabilities. There are several ways that this can be tracked, some of which include checking login attempts, tracking normal use times and durations, and alerting security personnel when someone wanders outside of their prescribed area of work. (Anderson, 1980)

THREATS
A threat is the possibility that an unauthorized user could access a system's information and/or change it. If an unauthorized user tries to access a system, an attack has occurred, and if they are successful, the system has been penetrated. The possibility that information could be leaked due to an unknown problem in the system is a risk; similarly, a vulnerability is a known problem with the system.
There is always the threat that a computer system may be attacked. Computer auditing helps to prevent this by attempting to detect suspicious or unwanted behaviors. These behaviors can come from two types of users: external and internal. An external user is a person who is not affiliated with an organization, or an employee without computer privileges, who tries to access computer resources. An internal user is someone who has at least some computer privileges and tries to misuse computer resources.
The biggest threat to an organization's computer system comes from the inside. This is a problem because most computer security systems are not able to detect misuse from internal users, since the system sees the user as legitimate whether or not that is true. There are three types of internal users: the masquerader, the misfeasor, and the clandestine user. The masquerader is one who logs in as another user and pretends to be someone else. A misfeasor is a legitimate user who misuses the privileges given to them. And a clandestine user is one who gains access to supervisory privileges and is able to escape detection by the system.
To detect a user who is trying to penetrate the system, several measures can be used to identify unwanted activity. External users may be detected through unsuccessful attempts to log in to the system. Among internal users, masqueraders may be found by comparing usage patterns against what is normal: rules are created for what is considered normal, and any user who breaks them is flagged. Lastly, a clandestine user may be found by monitoring any changes made to the auditing system or to privileges. Tracking computer activity by monitoring CPU, disk, and memory usage may also be useful.

COMPUTER USE
Monitoring computer use is the foundation of a security audit. Anderson establishes the basic unit of computer use as a session or job, which is defined by four parameters: the user identifiers, the program and data files that are accessed, the time a job or session is initiated, and its duration. In some cases, it is more useful to monitor files, devices, or programs; the parameters used there are the user identifiers and programs that access a particular resource, the records read and written, and whether a program is being executed or merely read. These audit logs allow for a statistical analysis of which user accounts were accessing which system resources, as well as when and for how long. Once a baseline for normal use is established, the audit trail enables the detection of anomalies that could indicate a potential intrusion.
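
As a small illustration, Anderson's four session parameters map naturally onto a simple record type. This sketch is my own framing of those parameters, not code from any particular audit system.

import java.time.Duration;
import java.time.Instant;
import java.util.List;

// One audit record per session or job, following Anderson's four parameters.
public class SessionAuditRecord {
    final String userId;            // who
    final List<String> filesUsed;   // which programs and data files
    final Instant startTime;        // when the session began
    final Duration duration;        // how long it ran

    SessionAuditRecord(String userId, List<String> filesUsed,
                       Instant startTime, Duration duration) {
        this.userId = userId;
        this.filesUsed = filesUsed;
        this.startTime = startTime;
        this.duration = duration;
    }

    public static void main(String[] args) {
        SessionAuditRecord rec = new SessionAuditRecord(
                "rmcfarland", List.of("payroll.db", "report.exe"),
                Instant.now(), Duration.ofMinutes(42));
        System.out.println(rec.userId + " used " + rec.filesUsed);
    }
}
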
A more modern standard for defining and monitoring basic computer use is explored in the 1995 NIST Handbook. As stated in that publication, audit data can be broken into two categories: event logs and keystroke monitoring. Keystroke monitoring is a powerful tool in limited applications, such as determining what an intruder typed into a system after the fact, or determining when a legitimate user is misusing the system. Audit trails established by event logs are sorted into three categories: system-level trails, which usually record items like logons, timestamps, and accessed resources or applications; application-level trails, which record items such as data files accessed, records modified or read, or printing; and user trails, which record items such as direct user commands, accessed files and resources, and logon or authentication attempts.

SURVEILLANCE
Collecting audit data to monitor computer use is merely the first step in Anderson's proposed form of surveillance; the data analysis is what makes the audit data useful. A pattern of normal behavior can be established by compiling audit data on individual users or on user sets. Anderson suggested that determining baseline behavior is more useful when gathered from user sets, on the presumption that sessions or jobs referring to the same file sets can be considered to belong to the same population and to share similar statistical properties. With normal use established, individual user audit trails can be analyzed for deviation from that norm. Deviations can either be determined by predetermined criteria of abnormal use (i.e., arbitrary deviations), or they can be discovered when data for a given parameter falls too far outside the average value for that parameter, as measured by the mean and standard deviation.
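
A minimal sketch of that statistical test follows, assuming a history of some per-session metric; the metric, sample data, and threshold are illustrative assumptions.

public class AnomalyCheck {
    // Flags a session metric (e.g., login hour, CPU seconds) that falls more
    // than `threshold` standard deviations from the group's historical mean.
    static boolean isAnomalous(double[] history, double observed, double threshold) {
        double mean = 0;
        for (double v : history) mean += v;
        mean /= history.length;

        double variance = 0;
        for (double v : history) variance += (v - mean) * (v - mean);
        double stdDev = Math.sqrt(variance / history.length);

        return Math.abs(observed - mean) > threshold * stdDev;
    }

    public static void main(String[] args) {
        double[] loginHours = {9, 10, 9, 8, 9, 10, 9}; // typical 9 a.m. logins
        System.out.println(isAnomalous(loginHours, 3, 2.0)); // 3 a.m. login -> true
    }
}
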
Once the idea of using audit data to detect abnormal and possibly dangerous behavior became an accepted security tool, people began to look for better ways to implement such analysis. Early audit data analysis was cumbersome due to the large amount of raw data and the lack of automated systems to sift through the information, and it could not detect problems in real time. Much of the analysis had to be done by the system security officer, whose job was to flip through huge printouts of audit trails looking for problem areas. The quantities of data were too big for a human to process effectively, especially since most of the data was collected for accounting purposes rather than security. This problem spawned a great deal of research into what would come to be called intrusion detection systems (IDS), which continuously monitor data to detect problems in real time.

SYSTEM DESIGN
Essentially, there are two main parts to a computer auditing system: a security surveillance subsystem and a trace subsystem. The surveillance subsystem looks for unwanted or abnormal behaviors and returns reports based on what is found. The trace subsystem allows one to search a user's activity within a given timeframe. These functionalities continue to be present in intrusion detection systems.
Computer auditing techniques generally fall under two categories: intrusion detection systems (IDS) and intrusion prevention systems (IPS). An intrusion detection system is designed to detect unusual activity and/or suspicious packets on the network. An intrusion prevention system, on the other hand, is meant to both detect and block unusual activities. This may be done by combining an IDS with a firewall, whose job is to stop data that could be harmful to the system.
While there is a great deal of variety in how various intrusion detection systems work, some elements of design are common to most of them. The process of intrusion detection begins with the collection of audit data, which is stored prior to processing. The processing step is the major detection step, when the IDS actively analyzes the audit data using whatever detection strategy is implemented in the system; in systems with multiple detection strategies, there are independent processing elements for each. The configuration data is how the IDS is controlled, determining items such as what data is analyzed and what kind of response is made to detected problems. The reference data is where the system holds items such as normal-use profiles and intrusion signatures, and is what the processing consults to determine whether something is a problem. The active/processing data is the data currently being processed at any given time. The alarm is the part of the system that reports the results of the processing stage to the relevant parties.

INITIAL DEVELOPMENT
Implementation of the idea of using audit trails as a security tool began in the early 1980s. In 1983, Dr. Denning and SRI International worked on a project that analyzed audit data from government computers to create user activity profiles, which subsequently led both to SRI's development of the first functional IDS, called IDES, and to the publication of Dr. Denning's "An Intrusion Detection Model," which became a basis for intrusion detection.
IDES (Intrusion-Detection Expert System) was designed with the goal of providing real-time monitoring and analysis to detect potential problems immediately, using pre-established rules for automatically suspicious behavior as well as abnormal behavior detection. The IDES model is composed of the following components:
1. Audit records: The types of information and their positions in the audit record must be known in advance so that the information is processed properly by the intrusion detection mechanism.
2. Profiles: Profiles are used to characterize expected normal behavior on a computer system. Typical types of information in these profiles are login activity and file access.
3. Anomaly records: Alarms that are created whenever observed behavior does not match the profiles.
4. Activity rules: Programs that describe what action should be taken when an alarm is triggered.
A significant weakness of the early intrusion detection systems was that, as observed in the Haystack project, "there [were] very few known recorded instances of system penetration; most scenarios used for testing have been generated by software developers based on their own understanding of system weaknesses," making the accuracy and effectiveness of early systems difficult to determine.

COMMERCIAL EXPANSION
Marking the beginning of the commercial expansion period was the commercial release of Stalker in 1989, an incarnation of the military IDS Haystack. Haystack was designed to reduce enormous quantities of generally obscure audit data to short summaries of interpreted information for further investigation; unlike IDES, it did not provide real-time analysis. Several other systems, such as the Computer Misuse Detection System (CMDS) and Automated Security Incident Measurement (ASIM), were released in the early 1990s, and the growing number of viable systems led to the commercial success of the intrusion detection market in the late 1990s, which in turn led to rapid growth in the field.
This period of growth developed concurrently with research into new systems and methods of data analysis. One such idea was the application of early artificial intelligence to learn sequential patterns of user behavior that appear erratic to standard statistical analysis, and thereby to develop more accurate rules about suspicious behavior. Another route of investigation was the use of neural networks, which offer more flexibility in creating user profiles than the rules-based expert model used by IDES or the standard statistical analysis method. Model-based reasoning, i.e., using models of predicted intrusive behavior to predict which pieces of audit data are relevant to which suspicious actions, was another avenue being explored in the early 1990s.

MODERN TECHNIQUES
Intrusion detection systems are now a staple of
computer security, and have a wide variety of
classifications and detection techniques. There are two
general ways of classifying IDS: by location (host-based
vs. network-based) or by general detection philosophy
(signature-based vs. anomaly-based). Within these
categories are subspecies of IDS made distinct by their
intrusion detection paradigms.
One of the broadest ways of categorizing modern intrusion detection systems is by their location: an IDS is either host-based, in which the system monitors a single computer (the host); network-based, in which the system monitors network traffic within a network; or composite, in which both network-based and host-based monitoring are used. Host-based systems have the advantage of providing more personalized protection and more information on the effects of an attack, but they require more storage, are not easily ported between computers, and must wait until a computer is affected to detect an attack. Network-based systems have the advantages of early detection, space efficiency, and security, because they have only one instance within the network and can prevent intrusions before they reach the computers within it.
Signature-based detection is the current commercial staple for IDS. These systems use a library of known malicious traits, i.e., signatures, which are compared against the packets, programs, or files that a computer or network is accessing. Poston notes that the advantages of such systems include "the relative ease of creation, low false positive ratio, [and] absolute identification," plus the monetary rewards for creators from users needing to buy constant library updates. Disadvantages include the amount of memory they use, the need for constant updating, and vulnerability to undiscovered malware. The four primary signature-based techniques are pattern matching, which looks for specific sequences; stateful pattern matching, which uses the same fixed sequences but allows them to be broken up between packets; protocol-decode-based analysis, which checks a list of protocol functions for violations; and contextual signatures, which use data from attempted attacks to determine the nature of the attacker.
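
To make the pattern-matching variant concrete, here is a toy sketch of a signature scan over a raw payload. The two signatures are invented examples; production systems such as Snort match far richer rules that carry protocol and port context.

import java.util.List;

public class SignatureScanner {
    // Toy signature library: byte patterns known to appear in malicious payloads.
    static final List<byte[]> SIGNATURES = List.of(
            "cmd.exe /c".getBytes(),
            "' OR '1'='1".getBytes());

    // Returns true if any fixed signature sequence occurs in the packet bytes.
    static boolean matches(byte[] packet) {
        for (byte[] sig : SIGNATURES) {
            for (int i = 0; i + sig.length <= packet.length; i++) {
                int j = 0;
                while (j < sig.length && packet[i + j] == sig[j]) j++;
                if (j == sig.length) return true; // fixed sequence found
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("GET /?q=' OR '1'='1".getBytes())); // true
    }
}
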
Anomaly-based detection, while predominantly used only in research, has many advantages and a wide variety of implementations. An anomaly-based IDS uses a profile of what is normal for a system or computer and flags items that don't fit within that profile, giving the system greater flexibility, a smaller memory footprint, and better responses to new threats. Unfortunately, building accurate profiles, false positives, and a lack of certainty are problems for these systems. The modern anomaly-based detection techniques Poston discusses can be summarized as follows:

1. Cluster Analysis: This uses predetermined attributes to group data based on similarities of those attributes.
2. Stepping Stone Analysis: This assumes hackers are using a chain of computers, and analyzes the time delays between requests and replies.
3. Statistical Analysis: Suspicious actions are each given a weight; as a computer operates, the total weight of suspicion is accumulated and compared to a threshold (a minimal sketch of this technique follows this list).
4. Non-Stationary Models: This is an expansion of statistical analysis that uses assumptions about the probability of an action occurring, based on the time elapsed since the last time the action was performed.
5. Heuristic-Based Analysis: This uses algorithms to determine whether a set of actions indicates that the system is currently being attacked.
6. Neural Networks: This uses a web linking normal events to other likely events, and flags sequences that don't fit within that web.
7. Entropy-Based Analysis: This analyzes how uniform a data set (actions, packets, etc.) is, recalculating with each new item and flagging items that are not uniform.
8. Genetic Algorithm/Immune-Based Analysis: This technique mimics a biological immune system, using sets of positive and negative detectors to determine abnormal use of the system.
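
As promised above, here is a minimal sketch of the weighted statistical approach. The action names, weights, and threshold are illustrative assumptions that a deployed system would tune from real audit data.

import java.util.Map;

public class SuspicionScore {
    // Hypothetical weights per action type.
    static final Map<String, Integer> WEIGHTS = Map.of(
            "failed_login", 5,
            "privilege_change", 20,
            "audit_log_edit", 40);

    static final int ALARM_THRESHOLD = 50;

    public static void main(String[] args) {
        String[] observed = {"failed_login", "failed_login", "audit_log_edit",
                             "privilege_change"};
        int total = 0;
        for (String action : observed) {
            total += WEIGHTS.getOrDefault(action, 0);
        }
        // 5 + 5 + 40 + 20 = 70 exceeds the threshold, so an alarm is raised.
        System.out.println(total >= ALARM_THRESHOLD ? "ALARM" : "ok");
    }
}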

CONCLUSION
As we have seen, there are many different ways that a system can be compromised by unauthorized users. There are also many techniques that have been developed to prevent this, as well as regulations in place to set a minimum standard for organizations to adhere to. Progress in the field continues to propose new solutions that address known weaknesses in current approaches, and to develop new tools to implement those solutions.

SUMMARY
As noted in the introduction, this book was written in a broad manner as a primer. This brief guide was meant to introduce InfoSec topics and (hopefully) will be the doorway to future studies in the CISSP, CISM, or CCNA-Security realms. As a primer to InfoSec, this eBook covers many (not all) of the CISSP, CISM, and CCNA-Security aspects and also discusses many Information Security (InfoSec) topics in general. Following this, you should be well versed and prepared to dive into more detailed InfoSec work.
If you are considering certification, my recommendation is to get the certification vendor's suggested book (for example, if you plan on taking the CCNA-Security test, get the Cisco book). Also, consider purchasing a test engine with frequent updates. While you can get copies of tests for various certs online for free, many contain errors. You don't need to study errors!

Good luck to you. And, if by chance I've made an error, or if a particular topic needs clarification or expansion, please let me know. My plan, especially since this is an eBook, is to provide continuous improvements to this work.
Thank you, Ron

REFERENCES
Anderson, J. P. (1980). Computer security threat monitoring and surveillance.

Andress, J. (2011). Operating System Security. The Basics of Information Security (pp. 131-145). Waltham: Syngress.

Ayushi. (2010). A Symmetric Key Cryptographic Algorithm. International Journal of Computer Applications, 1(15). Retrieved November 10, 2013, from http://www.ijcaonline.org/journal/number15/pxc387502.pdf

Bhuyan, M. H., Bhattacharyya, D. K., & Kalita, J. K. (2013). Detecting distributed denial of service attacks: Methods, tools and future directions. The Computer Journal, 56(12), 31-52.

Biermann, E., Cloete, E., & Venter, L. M. (2001). A comparison of intrusion detection systems. Computers & Security, 20(8), 676-683.

Blaze, M., Diffie, W., et al. (1995). Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security.

Chang, R. (2002, Oct). Defending against flooding-based DDoS attacks. IEEE Communications Magazine.

Douligeris, C., & Mitrokotsa, A. (2003). DDoS attacks and defense mechanisms: Classification and state-of-the-art. Computer Networks, 44(5), 643-666.

Fulvio, R. (2007). Kerberos Protocol Tutorial. Retrieved December 2013, from http://www.kerberos.org/software/tutorial.html

Griffiths, P., & Wade, B. (1976). An Authorization Mechanism for a Relational Database System. ACM Transactions on Database Systems (TODS), 1(3), 242-255.

Group, Lark. (2003, 17 Oct). Secure SQL Server: Encryption and SQL injection attacks. TechRepublic. Retrieved from http://www.techrepublic.com/article/secure-sql-server-encryption-and-sql-injection-attacks/

Guttman, B., & Roback, E. A. (1995). An introduction to computer security: the NIST handbook. DIANE Publishing.

Harrison, O., & Waldron, J. (2008). Practical Symmetric Key Cryptography on Modern Graphics Hardware. In USENIX Security Symposium.

Innella, P. (2001). The evolution of intrusion detection systems. SecurityFocus.

Jiang, X., & Solihin, Y. (2011). Architectural Framework for Supporting Operating System

Levine, J. (2004). A methodology to characterize kernel level rootkit exploits that overwrite the system call table. Published manuscript, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia. Retrieved from http://users.ece.gatech.edu/owen/Research/Conference Publications/levine_secon04.pdf

Liu, J., Wang, X., Jiao, D., & Wang, C. (2012). Research and design of security audit system for compliance. International Symposium on Information Technology in Medicine and Education, 2, 905-909.

Lunt, T. F. (1988). Automated audit trail analysis and intrusion detection: A survey. In Proceedings of the 11th National Computer Security Conference.

Lunt, T. F. (1993). A survey of intrusion detection techniques. Computers & Security, 12(4), 405-418.

Loscocco, P., & Smalley, S. (2001). Integrating Flexible Support for Security Policies into the Linux Operating System. USENIX ATC, 2001, 1-62. Retrieved December 2, 2013, from the USENIX database.

Lupescu, G. (2013). Accelerating Encryption Using Commodity Hardware. Retrieved from https://koala.cs.pub.ro/redmine/attachements/download/1775/Prezentare_Lupec_Grigore_R1.pdf

Mahajan, D., & Sachdeva, M. (2013). DDoS attack prevention and mitigation techniques. International Journal of Computer Applications, 67(19).

Malayeri, A., & Abdollahi, J. (2009). Modern Symmetric Cryptography methodologies and its applications. CoRR, abs/0912.1092. Retrieved November 2013, from http://arxiv.org/pdf/0912.1092.pdf

Maniscalchi, J. (2012). The Benefits of Full Disk Encryption. DIGITALTHREAT. Retrieved from http://www.digitalthreat.net/2012/01/the-benefits-of-full-disk-encryption/

Mirkovic, J., Prier, G., & Reiher, P. (2002). Attacking DDoS at the source. Network Protocols, 312-321. doi:10.1109/ICNP.2002.1181418

Natarajan, R. (2010, Aug 10). RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams. The Geek Stuff. Retrieved from http://www.thegeekstuff.com/2010/08/raid-levels-tutorial/

Neustar. (2013). 2012 annual DDoS attack and impact survey. Retrieved from http://www.neustar.biz/enterprise/resources/DDoS-protection/2012-DDoS-attacks-report

Neves, S. (2009). Cryptography in GPUs. (Master's thesis). Retrieved from http://eden.dei.uc.pt/~sneves/pubs/

Oracle Database Advanced Security Administrator's Guide. (2012). Oracle. Retrieved from http://docs.oracle.com/cd/B19306_01/network.102/b14268/asotrans.htm

Poston III, H. E. (2012). A Brief Taxonomy of Intrusion Detection Strategies. In Aerospace and Electronics Conference (NAECON), 2012 IEEE National (pp. 255-263). IEEE.

Provos, N. (2003). Improving Host Security with System Call Policies. USENIX Security Symposium, 12, 1-30. Retrieved December 2, 2013, from the USENIX database.

Saroiu, S. (2004). Measurement and analysis of spyware in a university environment. (Master's thesis, University of Washington). Retrieved from http://static.usenix.org/events/nsdi0/tech/full_papers/saroiu/saroiu_html/

Shulman, M. (2006). Top Ten Database Security Threats. Imperva, Inc.

Smaha, S. E. (1988, December). Haystack: An intrusion detection system. In Aerospace Computer Security Applications Conference, 1988, Fourth (pp. 37-44). IEEE.

Stallings, W., & Brown, L. (2012). Computer Security: Principles and Practice. New Jersey: Pearson.

Szor, P. (2013). The art of computer virus research and defense. In P. Szor (Ed.), The process of computer virus analysis. Retrieved from http://computervirus.uw.hu/ch15lev1sec4.html

Tariq, U., Malik, Y., Abdulrazak, B., & Hong, M. (2011). Collaborative peer-to-peer defense mechanism for DDoS attacks. Science Direct: Procedia Computer Science, 5, 157-164.

Teng, H. S., Chen, K., & Lu, S. C. (1990, May). Security audit trail analysis using inductively generated predictive rules. In Artificial Intelligence Applications, 1990, Sixth Conference on (pp. 24-29). IEEE.

Thimbleby, H. (1999). A framework for modeling Trojans and computer virus infection. Computer Journal, 41, 444-458. Retrieved from http://www-users.cs.york.ac.uk/~pcairns/papers/ModellingTrojans.pdf

Todd, R. (2013, Feb 28). 12 types of DDoS attacks used by hackers. Retrieved from http://blog.rivalhost.com/12-types-of-DDoS-attacks-used-by-hackers/

Trappe, W. (n.d.). Symmetric Cryptography: DES and RC4. Unpublished lecture slides, Electrical and Computer Engineering Department, Rutgers University, New Brunswick, New Jersey.

United States Department of Commerce. (2001). Advanced Encryption Standard (AES) (FIPS 197). Retrieved November 2013, from http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf

Gu, Q., & Liu, P. (n.d.). Denial of service attacks. Unpublished manuscript, School of Information Sciences and Technology, Pennsylvania State University, University Park, PA. Retrieved from http://s2.ist.psu.edu/paper/DDoS-Chap-Gu-June-07.pdf

United States Department of Commerce. (1999). Data Encryption Standard (DES) (FIPS 46-3). Retrieved November 2013, from http://csrc.nist.gov/publications/fips/fips46-3/fips46-3.pdf

Vaccaro, H. S., & Liepins, G. E. (1989). Detection of anomalous computer session activity. IEEE Symposium on Security and Privacy, 280-289.

Wong, W. (2004). Analysis and detection of metamorphic computer viruses. (Master's thesis, San Jose State University). Retrieved from http://www.cs.sjsu.edu/faculty/stamp/students/Report.pdf

Wood, C. (2011, July). Chaos-Based Symmetric Key Cryptosystems. Paper presented at Rochester Institute of Technology Symposium, Rochester, New York.

Yang, J., & Goodman, J. (2007, December). Symmetric Key Cryptography on Modern Graphics Hardware. AsiaCrypt 2007. Lecture conducted at International Association for Cryptologic Research, Kuching, Sarawak, Malaysia.

Yuan, J., & Mills, K. (2005). Monitoring the macroscopic effect of DDoS flooding attacks. IEEE Transactions on Dependable and Secure Computing, 2(4).

Zhang, X., Li, C., & Zheng, W. (2004). Intrusion prevention system design. The Fourth International Conference on Computer and Information Technology, 386-390.

Guide to the Secure Configuration of Red Hat Enterprise Linux 5. (n.d.). National Security Agency. Retrieved December 2, 2013.
ABOUT THE AUTHOR

Ron McFarland is a dedicated writer. He has written for academic and technical journals, has authored several Information Technology books for students of Information Technology, Computer Security, and Computer Forensics, and has served as a technical editor for several technical book publishers.
Most importantly, because of Ron's love of the written word, he has written hundreds of poems (several of which are published in this book) and several short stories.
His blogs (technical and fiction/poetry writings) are:

Poetry and brief prose: http://www.cowboyhaiku.com
Fiction writing (short stories, etc.): http://www.rottonronnie.com
A contemporary discussion of the Spirit: http://www.buddahcat.com
Information Technology, Information Security, and Digital Forensics articles, eBooks, etc.: http://www.wrinkledbrain.com

Please stop by, read, and comment. He would love to hear from you.

OTHER BOOKS BY RON MCFARLAND


TECHNICAL BOOKS

How to Start a Computer Forensics Business: A Small Business Success Guide: http://amzn.to/2gZEjwR

Information Security Basics: Fundamental Reading for InfoSec Including the CISSP, CISM, CCNA-Security Certification Exams: http://amzn.to/2gTCcMk

Introduction to Software Development: A Prelude to Creating Applications: http://amzn.to/2glZSbg

Personal Information Security: An Introduction for the Individual and Small Business Owner: http://amzn.to/2gA63Ls

An Introduction to Using BitCoin and other eCoin Options: The Revolution of Electronic Currencies: http://amzn.to/2gTCNNV

How to Create a Makerspace: A Real-World Case-Study: http://amzn.to/2iiKmxG

FICTION, NON-FICTION AND POETRY BOOKS


Happy, I am: Lessons from a Near Death Experience: http://amzn.to/2gNmhyK

Love and Silence: Selected poems of Ron McFarland: http://amzn.to/2gzUnse

214

One Last Thing...


If you enjoyed this book or found it useful, I'd be very grateful if you'd post a short review on Amazon. Your support really does make a difference, and I read all the reviews personally so I can get your feedback and make this book even better.
If you'd like to leave a review, all you need to do is click the review link on this book's page on Amazon: http://amzn.to/2h5R5hl
Thanks again for your support!
