
ANALYSIS OF INCIDENT RESPONSE TACTICS

FOR THE EXTREME INSECURE WEBSITE


Abstract
Incident response is a complex subject in network security. The steps taken to protect and recover vital information
are often obscured within the broader body of security essentials. This publication will detail the steps taken in an
incident response procedure, as well as discuss the importance of data protection and identity management,
particularly when information is stolen via a cyber-attack. The guidelines provided will simplify the intricacy of
network security and highlight the importance of a solid incident response plan, with the aim of giving readers a
clearer understanding of the subject.
Keywords: incident response, data protection, network security, identity management

Introduction
The world is a growing, changing place. Technology now rules the landscape, connecting everyone on the planet to
one another in a matter of seconds. We, as the human race, have come a long way from the Dark Ages. We have
evolved, and with every evolution comes its pros and cons. As technology evolves, the leaps forward in science and
life become larger and larger, but with these leaps come heavy risks. Network security has never been in a higher
demand, and with this demand we must prepare for the best and worst of this newfound area of information
technology. In this report we will discuss the exact steps of a response to a breach in security, at times using the
website Extreme Insecure that was used in a lab for our class. With this report we will outline our discoveries and
unveil the intricacies of incident response tactics, as well as dissect an incident response, and discuss the importance
of access control and data protection. This paper is research in progress, as we will continue to study the subject and
complete more hands-on lab activities throughout the class. At the conclusion of this semester we will present our
final findings on the subject.

Incident Response
Incident response is an organized approach to addressing and managing the aftermath of a security breach or attack
(also known as an incident). The goal is to handle the situation in a way that limits damage and reduces recovery
time and costs (Rouse, n.d.). According to the National Institute of Standards and Technology's Computer Security
Incident Handling Guide, a computer security incident is a violation or imminent threat of violation of computer
security policies, acceptable use policies, or standard security practices. This section of the
paper will detail why incident response is a critical part of every company's security practices.
No company wants to realize that it isn't prepared for a security breach when an incident occurs. These
incidents can place a tremendous amount of pressure on the security staff, and without a structured incident response
plan a company has to make spontaneous decisions that can actually make the breach worse, costing the
company both man-hours and sensitive data. The Dell SecureWorks incident response plan states that there are
three main sections that a security team must move through in order to discover, contain, and recover from a security
incident. Having a comprehensive incident response plan can significantly reduce the impact of any security breach.

To create this incident response plan, companies can review different examples online, different publications by
Computer Security Incident Response (CSIR), or bring in outside consultants.

Outside Consultants
Many security managers are hesitant to hire an outside consultant, feeling that they are the ones who know their
network best; however, hiring an outside consultant can provide the security team with a new, unbiased report.
Security professionals also don't always have the technical expertise that a consultant could bring to the
network. The security consulting firm Silva Consultants notes that tasks such as designing security for a new
location or preparing a request for proposal for a new video surveillance system are often best done with the help of
a consultant who is an expert in these areas. A consultant may also be able to have more of an influence over senior
management.
Sometimes an outside consultant can often have more credibility with your senior management team and
can often sell ideas that the security manager alone cannot. Rightly or wrongly, a consultant and security
manager can present the same idea, but senior management will accept it when presented by the
consultant - while they would reject it if it was presented by the security manager (Silva Consultants,
2014).
Many security consultant firms can also perform penetration testing for your network that would detail
vulnerabilities in your network.
Penetration Testing
The main point of penetration testing is to attempt to identify specific risks in your network that, once addressed, can
greatly improve the security of your network. During the testing process, the security consultant firm will perform
multiple tests to the system to identify the weaknesses, as well as actually penetrating the system to determine the
overall risk to the business if that vulnerability were to be exploited. After completing the testing, the security firm
will deliver a meticulous report. The report will summarize the security risks found during the testing including the
impact of the issue and the risk for the business. For every security risk covered in the report, a comprehensive
explanation of mitigating actions and recommendations are supplied for review by the security team. The security
team will provide any technical recommendations and attempt to identify any root causes of any risks/issues they
discover (SECforce, 2013). A security firm can choose to use one of two penetration techniques -- black box
penetration test or white box penetration test, and can perform a web application, external, or internal penetration
test.
Black Box Penetration Test
In black box penetration tests, the security firm is given no network information, and it is as though an uninformed
attacker is attempting to penetrate the network. This is beneficial as it simulates an actual outside attack; however,
some areas of your network may remain untested because the testers do not have a complete network diagram (SECforce,
2014).
White Box Penetration Test
In white box penetration tests, the security firm uses knowledge of the network to perform comprehensive tests. The
security firm is usually given network diagrams, source code for applications, and infrastructure details. The main
benefit of a white box test is to provide as much information as possible to the tester so that the firm can elaborate
additional tests based on the results. White box penetration testing has some obvious benefits over black box testing.
White box is much more thorough, the testing time is much longer, and it can ensure that the entire network is
tested. The main disadvantage is that this does not simulate a realistic attack, as the tester is not in the same position
as an uninformed outsider attacking the network (SECforce, 2013).
Web Application Penetration Test

Most web application penetration tests are based on the Open Web Application Security Project (OWASP)
methodology. This means that areas such as software infrastructure/design weaknesses, XSS attacks, SQL injection,
password hacking, and vulnerabilities affecting the database and privacy are tested (HackLabs, 2011). Unlike black
and white box testing, application testing is not used to provide a detailed security evaluation; instead it focuses on
highlighting the areas of higher risk and identifying vulnerabilities that are discovered during the test. It serves as a
cost-effective way to identify vulnerabilities in a selected application, usually those that attackers are more likely to
exploit, or that would cause the greatest damage if attacked (VSR, 2013).
External Penetration Test
External penetration tests perform tests on all nodes accessible from the Internet. First a list of accessible systems
and, when known, their respective services, is created in order to obtain as much information about all Internet
accessible nodes as possible. With this information vulnerability scanners are employed to identify risks. These
automated scans can identify known and common vulnerabilities, but they are not as skilled as a security
professional at detecting complex risks, vulnerabilities specific to the system being tested, or validating any findings
already reported. After these automated scans, manual testing and verification is started. Once these vulnerabilities
are reported, the security team attempts to exploit them. This is done not only to verify the vulnerabilities exist, but
also to determine the level of damage that could be caused if an attacker were to exploit the vulnerability
(Praetorian, 2014).
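The initial enumeration step described above can be illustrated with a minimal TCP connect scan. This is only a sketch of the concept, not a substitute for a real vulnerability scanner, and it should only be run against hosts you are authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; open ports accept the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real external test would follow this enumeration with service fingerprinting, automated vulnerability scanning, and the manual verification described above.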
Internal Penetration Test
An internal penetration test is not as risky as an external penetration test as it simply examines internal IT systems
for any vulnerabilities that could be exploited to affect the confidentiality, integrity, or availability of the company's
network. The testers scan the internal network and its ports, perform automated probing as well as manual vulnerability
testing, check the strength of user passwords, and test various services. This allows an organization to see what services
could be affected if an attacker had either internal access or credentials that equated to internal access. The internal
network is vulnerable to external intruders if they have already breached the perimeter defenses, such as firewalls, as
well as insiders attempting to gain access to areas they do not currently have permission to access or damage
sensitive information the company may have on both employees and customers (HackLabs, 2011).

Computer Security Incident Response Teams (CSIRTs)


Outside consultants are very important to computer security response. However, it is also crucial to have a Computer
Security Incident Response Team (CSIRT). The European Union Agency for Network and Information Security
defines a CSIRT as a team that responds to computer security incidents by providing all necessary services to solve
the problem(s) or to support the resolution of them. The Gramm-Leach-Bliley Act of 1999 requires all financial
institutions to have customer privacy policies and an information security program. CSIRTs were born because of
this law. The easiest way to ensure security is to have an on-site team that deals with these issues. These teams
became a necessity for companies due to the increase in cyber attacks and the recent reliance on internet-based
systems. CSIRTs are the first line of defense against computer security incidents.
Forming a CSIRT
There are many reasons that a company would choose to establish a CSIRT. The most common reason is an increase
in the number of computer security incidents being reported. Many companies do not establish a CSIRT until they
have problems. Another motivator for forming a CSIRT is an increase in computer security incidents in similar
companies. When breaches like the Target breach of 2013 happen, other companies of a similar nature (in this case,
retailers) tend to be more proactive about security. The most effective motivator comes without a breach. Many
companies are becoming more forward-thinking when it comes to security. These companies have come to the
realization that systems and network administrators alone cannot protect their assets (CERT, 2014).

Once there is a motivator for forming a CSIRT, the company actually has to recruit staff for the team. It is
imperative to select the best qualified employees for the job. Though IT professionals will make up the bulk of the
team, there can also be members of other departments. Working from a top-down perspective, the first member of
the team will be the CSIRT team leader. The team leader should have knowledge of both IT security and risk
management. The next person on the team is called the Incident Lead. This person coordinates the response to an
incident. Depending on the size of the incident, there can be more than one incident lead. The Incident Lead should
have knowledge of IT security as well as the specific area in which the incident occurred (e.g., servers, firewalls,
etc.). The last cog in the CSIRT machine is the CSIRT Support Members. This is where the team will vary from
company to company. The support members can come from many departments. There is usually at least one IT staff
member; but there could also be members from management, legal, and/or public relations (Tracy, 2011).
What does a CSIRT do?
Even though all of the CSIRT members have individual responsibilities, they have even more functions as part of the
CSIRT. All members of the team are assigned parts of the security procedure to review. The best way to improve a
system or procedure is to have a new set of eyes inspect it. The CSIRT members may see things that were
overlooked by the authors of the original plan. As with any other emergency, companies must perform drills and
audits for IT security. The CSIRT also acts as the central hub for communication whenever an incident occurs. They
are the people that get the calls when something goes wrong (Tracy, 2011).
Benefits of a CSIRT
One of the most important functions of a CSIRT is in the name: response. CSIRTs are designed to be reactive; they
are the first responders to security incidents. A well-established CSIRT is crucial for an effective response to a
security incident. Having a CSIRT allows for a rapid and focused response: each staff member on the
team has a specific role when it comes to combating the incident. Instead of the incident causing panic and
irreparable damage, the team can quickly identify the breach and react to it accordingly.
One of the most underappreciated aspects of a CSIRT is its proactive nature. After the team deals with an incident, it
establishes a precedent. Just like a human immune system, CSIRTs establish a procedure for dealing with attacks of
a certain nature so they are ready if it happens again. CSIRTs are always working to improve the security of their
systems. If there is no incident occurring, then they perform vulnerability assessments and develop security policies.
Even though they are primarily thought of as response teams, CSIRTs are always working to keep their systems safe
(Zajicek, 2004).
Process versus Technology
There are many important technologies in IT security (firewalls, anti-virus, etc.). However, when it comes to
incident response, the process is more important. The process involves the formation of a plan of action. The plan
has to integrate existing processes with the organizational requirements of the system. The main objective of the
plan is to strengthen the client's management of computer security events. The team is not always on site, so the
client has to be able to react to computer security events. The process also establishes several processes for the
organization to follow. These processes come in three stages: notification and communication; collaboration and
coordination; and analysis and response. The notification and communication stage establishes the procedures for
contacting the CSIRT. The collaboration stage involves the CSIRT contacting other organizations and coming up
with the most appropriate solution. The analysis and response stage is how the team actually deals with the security
incident. If the process is followed properly, then a response plan is created.

Benefits of an Effective Plan


Having an effective plan that is frequently reviewed and updated can ensure that your security team knows what to
do once an attack is detected, thereby limiting the damage caused (Bailey et al., 2013).

Improve Your Team's Decision Making


Establishing a hierarchy within your security team, particularly establishing which team members would have rights
to make decisions during the incident, can allow the team to quickly respond to the attack at the appropriate scale
based on the estimated value of damage the attack could cause (Bailey et al., 2013).
Improve Your Internal Communication
Although the incident will mostly be handled by the security team, team members must be able to communicate
effectively not only with each other, but also with both security and senior management. Having contact information
for everyone necessary, as well as easily accessible documentation that both types of management can
understand, ensures that greater efficiency can be achieved during the incident (Bailey et al., 2013).
Improve Your External Communication
Third parties can be very useful during a security incident. These third parties can include law-enforcement agencies
that can be used to retrieve any information or finances lost in the attack, as well as breach-remediation and
forensics experts. If you do not have a set list of third parties to contact during an incident, precious time can be
wasted, allowing more damage to occur. It is also very important that you know when contracts expire with
third parties, otherwise attempting to find another company or re-establish a connection with the same company can
be very costly (Bailey et al., 2013).
Limiting Damage Caused
Ensuring that minor events don't escalate into major damage is one of the biggest things that an incident response
plan should cover. This can include small measures such as ensuring that all anti-virus programs have the latest virus
definitions, quarantining any threats that don't meet specific threat levels in order to safely log what kinds of threats
your network is experiencing, etc. (Bailey et al., 2013).

What Makes a Strong Incident Response Plan


To develop a strong incident response plan, the security team must have proper preparation; know how to identify,
contain, and eliminate the threat; and recover any lost data or restore the affected systems to full working order. It
is also important that your team knows what information to document, and how best to document it for easy review. To
understand a little more about the actual creation of an incident response plan, our team reviewed the SANS Incident
Handler's Handbook by Patrick Kral.

Phase 1 - Preparation
Making sure that your security team is properly prepared for a security incident is key. Not only should you have a
plan in place, but you should also update it with the most recent information, hold periodic training to
ensure that the team is able to complete the necessary steps, and obtain new hardware or software as the old
ones become outdated (Kral, 2012). The team should have the latest updated contact information for every member
of the security team, everyone in security and senior management, and anyone that could possibly detect a security
incident so that the team can get as much initial information as quickly as possible (Grance & Kent, 2004). Your
team must have a documentation procedure in place so that the team can review past incidents, the documentation
for the present incident can be presented to the proper authorities if the incident is found to be criminal, and so that
your team can better protect themselves against any similar attacks in the future (California Institute of Technology,
n.d.).

Phase 2 - Identification and Containment


The security team should document all information as it is discovered. For instance, where the incident occurred,
who reported the incident, how it was discovered, etc. These answers are important to determine as it can help the

team find out what areas of the network and/or what hardware systems have been compromised and therefore what
the scope of the incident could be (Kral, 2012). Determining which systems have been compromised can tell the team
which logs to review in order to find the attackers, and can also enable the team to move towards containing the
threat. Any IP addresses discovered to be associated with the attack should immediately be blacklisted (Grance & Kent, 2004). If
the systems can be quarantined, the team should do so at the first possible opportunity. Copies of the affected
systems should be made so that the security team can analyze them, as well as any law-enforcement if criminal
charges are going to be filed. If the affected systems can be taken offline, then do so and move onto eliminating the
threat, if not then remove all malware and harden the system from any future attacks until the system can be
reimaged (Kral, 2012).
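The documentation questions above (where the incident occurred, who reported it, how it was discovered, which systems were affected) can be captured in a simple structured record. The field names below are illustrative only, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Fields mirror the questions above: where, who reported, how discovered."""
    location: str
    reported_by: str
    discovery_method: str
    compromised_systems: list = field(default_factory=list)
    blacklisted_ips: set = field(default_factory=set)
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Keeping records like this makes it straightforward to hand consistent documentation to reviewers or law enforcement later.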

Phase 3 - Elimination and Recovery


To start eliminating the threat, team members should reimage the system, harden it with various
countermeasures to reduce future risk, and remove all malware at the earliest opportunity. When possible, systems
should be put back into production to reduce the financial damage. Any tools used to monitor the network should be
updated to include alerts for the latest attack and should be put into action as soon as possible. All copies of evidence
of the attack and subsequent damage should be handed over to law enforcement (Kral, 2012). If it is suspected that a
password sniffer was used, users should be required to change their passwords. To guarantee the quickest response
possible for future incidents, proper documentation of the incident is essential (Grance & Kent, 2004).

Identity and Access Management


Access control is one of the most important pieces of the security puzzle. It is made up of three solid components:
identification and authentication, accountability, and authorization (Exploring Access Control for Security +
Certification, 2014). Each of these three essential services is crucial to the performance of network security.
When anyone can successfully run a command, it's because there are no permissions set, or because permission for those
commands is granted to the world entity, defined as every user besides the owner and
specific groups. We need to make it impossible to run those commands unless specifically allowed. This can be done
by defining one or more specific groups to which particular permissions can be granted based on their role in the
organization. Groups such as admin and user are commonly used. Give these new groups the
appropriate commands, and remove all but the simplest/least powerful commands from the world
entity.
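The group-based scheme just described can be sketched as a simple mapping from groups to permitted commands. The group and command names below are illustrative only:

```python
# Hypothetical group-to-command mapping; names are illustrative, not from the site.
GROUP_COMMANDS = {
    "admin": {"view", "edit", "delete", "configure"},
    "user": {"view", "edit"},
    "world": {"view"},  # only the least powerful command stays world-accessible
}

def can_run(group, command):
    """A command runs only if it was explicitly granted to the caller's group."""
    return command in GROUP_COMMANDS.get(group, set())
```

An unknown group falls through to an empty set, so the default is denial, which matches the goal of making commands impossible to run unless specifically allowed.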

Permissions
Permissions can't be utilized without the entities to which they're associated. As of right now, the only specific user of
our site is the owner, with everyone else in the world entity. In order to manage multiple users' identities and their
permissions, a user database is going to be necessary. This will allow our site to differentiate between users by
assigning them IDs and passwords stored in their individual user file in the user database. When a user accesses the
site, if the appropriate permissions are to be applied, the system needs to know which group he/she falls under. The
user provides such authentication by supplying the ID and password to be checked against the user file. If they match,
the user is authenticated and thus granted the particular set of permissions associated with his or her group. Of course, the
user database is also subject to vulnerability. A strong set of both access control and user authentication policies are
imperative to data integrity in the user database, without which the assigned IDs and permissions are purposeless.
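The ID-and-password check described above can be sketched as follows; storing a salted hash rather than the plaintext password is one way to protect the user database itself. This is a minimal illustration, not the site's actual scheme:

```python
import hashlib, hmac, os

def make_user_record(user_id, password, group):
    """Create a user-file entry holding a salted hash, never the plaintext password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"id": user_id, "salt": salt, "hash": digest, "group": group}

def authenticate(record, password):
    """Recompute the hash from the supplied password and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 100_000)
    return hmac.compare_digest(digest, record["hash"])
```

On a successful check, the system would then apply the permissions associated with `record["group"]`.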

Firewalls
The use of a network-based application firewall is helpful in controlling access to the applications and services
themselves on the server. An application firewall is one that controls input, output, and access to applications and/or
services. It's generally placed at the application layer of the protocol stack, and thus can inspect the contents of
traffic and potentially block content like malware, software logic exploits, and the like. Its location in the

application layer means it can also, unlike stateful network firewalls, manage network traffic to specific applications
or services (Web Application Firewall, 2014).

Access and Identity


As additional users are given access to the website, a plan to manage those users must be developed. Access to the
website must be managed carefully to ensure protection of resources, and that they are not used in ways that are
unauthorized. Identity and access management is defined as the people, processes, and systems that are used to
manage access to enterprise resources by assuring that the identity of an entity is verified and then granting the
correct level of access based on this assured identity (Stallings & Brown, 2015, p. 191). Two aspects of identity
and access management are identity provisioning and the federated identity management scheme. Identity
provisioning involves identifying users and assigning a level of access to each user. As users separate from the
organization, identity de-provisioning is conducted to remove access from a user's profile.

Methods of Access Management


There are several different ways an access management plan can be implemented. For our purposes, it was
important to determine how we wanted to manage control of user access. Methods of access management include
role-based access control (RBAC), discretionary access control (DAC), and mandatory access control (MAC). Each
has a different approach and a different focus, and each was reviewed to determine which would be most
appropriate.

Role-Based Access Control


This method of access control uses a collection of groups to allow users to access only the things they need access
to. Each group has a predefined set of access rights, and users are added to groups according to their access needs,
rather than according to their identity. This approach allows security administrators to create groups based on job
functions, instead of each individual, and add and remove users to and from groups without having to set security
privileges for individual users. Role-based access control requires the arrangement of the following entities into a
hierarchical system:
User: The user is the base entity that is the building block for establishing user access. Each user is assigned an ID
and password that enables them to access the computer system.
Role: Business rules are reviewed and a role is created for each authorized function or job needed to be performed.
A description is written for each role, defining the access and functions of said role. Users are added to the roles that
give them the access they need.
Permission: Attributes given to each role are considered permissions. Access rights can be added or removed
independently for each user or each role; removing access rights reduces the permissions granted.
Session: When applying a set of access rights to a user, a connection is made between the user entity and the set of
access rights, or role entity. This mapping is considered a session that is run between the user and role entities.
Each of the above entities must interact with one another according to the hierarchical design established by the
security administrator. With the above considerations, role-based access management appears to be a solid option for
managing access control on our website. However, there are still two other methods to consider.
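The user, role, permission, and session entities described above can be sketched in a few classes. This is a minimal illustration of the hierarchy, with hypothetical role and permission names:

```python
class Role:
    """A role bundles the permissions for one job function."""
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)

class User:
    """Users are assigned to roles rather than given individual privileges."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.roles = set()

class Session:
    """The mapping between a user and an active role, per the session entity above."""
    def __init__(self, user, role):
        if role not in user.roles:
            raise PermissionError("user is not assigned this role")
        self.user, self.role = user, role

    def is_permitted(self, permission):
        return permission in self.role.permissions
```

Because rights live on the role, an administrator changes a group's access once instead of editing every individual user.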

Discretionary Access Control

This method of access management is based on the identity of each user, and gives certain users the ability to grant
access to other users once their identity is authenticated. This approach is typically used together with an access
matrix, and enables each subject, or user, to be cross-referenced with organizational resources. This allows security
administrators to clearly see the level of access each subject has to each resource. In addition to an access matrix, an
access control model may be created that assumes a set of subjects, a set of objects, and a set of rules that govern
the access of subjects to objects (Stallings & Brown, 2015, page 120). With an access control model, access to each
resource is more clearly defined by arranging each resource into one of the following groups:
Processes: A process is an instance of a program, or a task. Processes can be created, started, halted, finished, or
exited, and the ability to manipulate each must be carefully controlled.
Devices: Computers and servers can have multiple drives, and each drive must be broken down into devices to allow
access rights to be assigned to each.
Memory locations or regions: Physical memory can be divided and subdivided into regions with varying security
levels and restrictions for each. With an access control model, subjects can be granted permissions to read, write,
and own memory locations.
A discretionary access control model is shown below (Blake's Table) that identifies access levels of subjects to each
carefully defined object.
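An access matrix of this kind can be sketched as a nested mapping in which rows are subjects, columns are objects, and each cell holds the rights that subject has on that object. The subject and object names below are illustrative only:

```python
# Hypothetical access matrix: subject -> object -> set of rights.
ACCESS_MATRIX = {
    "blake": {"process_1": {"read", "write", "own"}, "drive_d": {"read"}},
    "guest": {"process_1": {"read"}},
}

def check_access(subject, obj, right):
    """A right is granted only if it appears in the matrix cell for (subject, object)."""
    return right in ACCESS_MATRIX.get(subject, {}).get(obj, set())
```

As with the role-based sketch, any subject or object missing from the matrix defaults to no access.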

Software Security and Trusted Systems (Buffer Overflow)


Description and Common Errors
Data can be corrupted through many different hacking attacks, but a very common one is an attack known as buffer
overflow. Programmers are entrusted with creating secure programs that both function properly and are free from
errors. An attempted buffer overflow is, more often than not, the result of faulty programming in application
code. Buffer overflow, also referred to as buffer overrun or buffer overwrite, is an error in which an input too
large for its allotted storage is allowed to be entered into the program, overflowing the buffer and possibly overwriting
other information that had previously been stored. Buffer overflow is a serious threat and can ultimately lead to
system failure.
The possible errors that a buffer overflow attack can cause are data corruption in the program, a transfer of control
privileges in the program, illegal memory access, total program failure, and system failure. In order to protect a
program and reveal any potential buffer overflow errors, the programmer must figure out what a hacker needs in order
to exploit them. The hacker needs both to identify the error, using externally sourced data that can
trigger the event, and a keen understanding of how the program being attacked stores its buffer in memory.
Luckily, through identifying all the possible causes of buffer overflow many countermeasures can be put in place
that prevent this issue.
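Buffer overflows arise in languages like C that do not bounds-check writes, but the missing check itself can be illustrated in a short sketch. The class below models a fixed-size buffer that explicitly performs the length validation a vulnerable program would omit; it is conceptual only:

```python
class FixedBuffer:
    """Illustrative fixed-size buffer that rejects oversized input --
    the bounds check whose absence makes a C buffer overflowable."""
    def __init__(self, size):
        self.size = size
        self.data = bytearray(size)

    def write(self, payload: bytes):
        # In unchecked C, an oversized payload would silently overwrite
        # adjacent memory; here the write is refused instead.
        if len(payload) > self.size:
            raise ValueError("input exceeds buffer capacity")
        self.data[:len(payload)] = payload
```

The countermeasures discussed below aim to supply this kind of check automatically, either when the program is compiled or while it runs.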

Countermeasures and Prevention Practices


Since there are a large number of buffer overflow attacks, several defense techniques have been created; they fall
into two categories: compile-time defenses and run-time defenses. Compile-time defenses are protection measures
built into the program that boost the security of the program's code to resist potential attacks.
Run-time defenses are countermeasures that help identify an attack in progress and remove the threat. Although
buffer overflow threats are known and the protection techniques are available, implementing them is a difficult
process, as many buffer overflow attacks happen in system-level software, making fixes hard to roll out as new
threats arise.

Compile-Time Defenses
Several popular compile-time defenses include coding in a high-level language that does not allow the program to
compile with buffer overflows, coding standards that implement a certain level of safety, and adding code that is
able to detect any threat or error on the stack frame. Choosing a higher-level programming language, when
applicable, is an extremely reliable option because it simply does not allow the program to compile when there are
buffer overflow errors. The only downside is that it comes at a price of higher resource use, because of the
extra protection code that runs checks for buffer overflow errors.
However, using safe coding techniques alongside using a high level programming language can both increase your
level of protection and allow for a fast and efficient running program. A common practice of checking for buffer
overflow errors is a detailed and thorough auditing process that the programmer uses to check the code for common
coding errors that can be overlooked if not careful. Lastly, a not as common but highly recommended compile-time
defense is to use a compiler that checks for range capacity on variable in the code that are common places for buffer
overflow errors to occur. This would ensure that none of the functions in the program allocate user input that is too
large to fit on the available space in the stack frame. However, just like high level programming languages this
comes at a cost to performance.

Run-Time Defenses
Although compile-time defenses are supposed to eliminate errors before the code ever runs, there are still cases where errors are not caught and must be dealt with immediately. Run-time defenses deal less with the coding of the program and more with the management of the virtual memory in which the program places its variables. Several run-time defenses include executable address space protection, address space randomization, and the placement of guard pages in memory.
Executable address space protection blocks the execution of code on the stack, with support from virtual memory pages flagged as non-executable. This prevents buffer overflow attacks from succeeding because injected code on the stack cannot run, and it is often considered one of the most secure protections against buffer overflow attacks. Address space randomization disguises the location of the targeted buffer, a crucial piece of information that a hacker needs in order to perform an attack. This technique, along with guard pages (guard regions placed between critical points in memory), is best used in addition to executable address space protection. Each has its own benefits and weaknesses, but together they complement each other to provide the maximum amount of security against buffer overflow threats, although some performance is sacrificed.

Mandatory Access Control
Mandatory access control is based on a comparison of security labels. Each subject entity within a security system is assigned a security clearance based on the functions it will need to perform. Likewise, each system resource, device, and file is assigned a security label based on the information it contains. The system then compares the subject's security clearance with the resource's security label and grants the appropriate access. This kind of access control is called mandatory because an entity that has clearance to access a resource may not, just by its own volition, enable another entity to access that resource (Shirey, R. 104). Mandatory access control is based on the satisfaction of the following two properties:
No read up: A subject may read only those resources whose security label is equal to or lower than the subject's security clearance.
No write down: A subject may write only to those resources whose security label is equal to or greater than the subject's security clearance.
When both of these security properties are satisfied the subject is allowed access to the resource; otherwise access is restricted.
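The two properties can be sketched as a simple label comparison. This is a minimal illustration, assuming a numeric ordering of hypothetical clearance levels; the names are not from the paper:

```python
# Toy mandatory access control check over ordered security levels.
# Level names and their ordering are illustrative assumptions.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance: str, resource_label: str) -> bool:
    # "No read up": read only resources at or below the subject's clearance.
    return LEVELS[resource_label] <= LEVELS[subject_clearance]

def can_write(subject_clearance: str, resource_label: str) -> bool:
    # "No write down": write only to resources at or above the clearance.
    return LEVELS[resource_label] >= LEVELS[subject_clearance]
```

A subject cleared for "secret" can read "confidential" data but cannot read "top_secret" data, and can write to "top_secret" resources but not to "public" ones.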

Access Management Chosen Method


Each of the above methods of controlling access has its own benefits and limitations within various security environments. It is necessary to have a good understanding of the organizational needs, the information contained within, and the ways in which the information will need to be used before implementing the appropriate access control method.
After taking these three methods into consideration, we feel it is most appropriate to utilize the role-based access control model. Given the large number of users accessing our website, it would be difficult to assign security privileges to each individual; therefore, discretionary access control is not the best option. Assigning security labels to our one or two system resources, and a security clearance to each subject, would not be a very effective approach either; therefore, mandatory access control is not appropriate.
The role-based access control model for our website will work as follows. As users are created, each will be assigned a user ID and password, which will help our system authenticate the user and verify that they are authorized. At that point the user will be added to one of our security groups in accordance with the subject's access needs, which will apply the appropriate security privileges to the user profile. If the user needs to be deleted, the ID will be removed from the security group, and access will then be restricted.
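The group-membership scheme above can be sketched in a few lines. The role and permission names are hypothetical, not taken from the website's actual configuration:

```python
# Minimal role-based access control: permissions attach to roles, and users
# gain or lose access by joining or leaving a security group.
ROLE_PERMISSIONS = {
    "customer": {"view_products", "place_order"},
    "admin": {"view_products", "place_order", "edit_products", "manage_users"},
}

user_roles = {}  # user ID -> assigned security group

def add_user(user_id, role):
    user_roles[user_id] = role

def remove_user(user_id):
    user_roles.pop(user_id, None)  # removing the ID revokes all access

def is_allowed(user_id, permission):
    role = user_roles.get(user_id)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())
```

Deleting a user is a single group-membership change rather than a per-privilege cleanup, which is the main appeal of the role-based model at scale.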

Identification and Authentication


Identification and authentication does exactly what it says: it identifies users and tells the system who each one is. This is crucial because a legitimate, authenticated user must meet a certain set of requirements in order to pass through security and into the system (Exploring Access Control for Security + Certification, 2014). It would be a hacker's dream to know those credentials, because he or she would then be able to access everything the system has to offer, which usually includes important information.

Authorization
The second part of the access control puzzle is authorization. Access management is handled through various steps
of authentication. Typically, organizations will develop an access management model that involves each of the
following steps:
1. Users will first encounter an authentication function that will verify the user's identity.
2. If the user's identity is verified, they will encounter an access control function.
3. The access control function will cross reference the authorization database to determine what level of access has
been provisioned in the user profile.
4. At this point the appropriate access is granted to the user.
5. The appropriate database and organizational resource access are now made available to the user.

The authorization step itself has three parts: read, write, and execute. The read permission governs access to the contents of files as well as the listing of the directories those files belong to. This organization gives the system an easy way to retrieve information. A hacker who made it through this step would be able to disorganize, copy, or even destroy any information he chooses.

Accountability
Lastly, the third and final step of the access control process is none other than accountability. To examine accountability in depth, we must break the process into the parts that make up the whole. Accountability traces the actions of a user and verifies that the user performing those actions is in fact the original user (Exploring Access Control for Security + Certification, 2014). In simpler terms, if a hacker were to copy the exact procedure of a legitimate user, the system could still tell it is not the original user, because the real user might, for example, perform actions using personalized shortcuts. Most systems that use this type of fail-safe allow at least three attempts before locking the user out for a limited time. Managing access control is important, and tricky, but thankfully there are several ways to manage it so that the system and the vital information it contains stay safe.

Access Control Matrix


One method of managing access control is to make use of an access control matrix. The matrix allows the security administrator to reference each user according to their identity and cross-check both the files that user is able to access and the level of access (Own, Read, Write) that user has been given.
Another method of access control is conducted using an authorization table. Once users are verified to have access, an authorization table allows the security administrator to determine the individual permissions the user has regarding each file. These and other methods can be used together to ensure identity and access management within an organization.
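An access control matrix maps naturally onto a nested lookup table. The following sketch uses hypothetical user and file names to show the cross-check described above:

```python
# Access control matrix as nested dictionaries: rows are users, columns are
# files, and each cell holds that user's rights for that file.
MATRIX = {
    "alice": {"report.txt": {"own", "read", "write"}, "log.txt": {"read"}},
    "bob":   {"report.txt": {"read"}},
}

def has_access(user: str, file: str, right: str) -> bool:
    # Missing users or files simply have no rights.
    return right in MATRIX.get(user, {}).get(file, set())
```

The same structure, transposed so each user's row becomes a standalone record, is essentially the authorization table mentioned above.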

Data Protection
The concept of data protection is imperative to any kind of business or technical operation of a website, network, or system. Data protection is most simply described as computer security focused only on the stored data itself, rather than also including software, hardware, and communications (Stallings & Brown, 2015, pg. 140). Data protection, then, is the preservation of the Confidentiality, Integrity, and Availability of data.

Relevance to Incident Response


Data protection is an imperative complement to incident response, just as incident response is imperative to data protection as a whole. Without proper data security, attacks and incidents would multiply, requiring a cumbersome number of incident response cycles every day. Without incident response, data protection would never evolve, and would inevitably fall dangerously behind in the arms race between cybersecurity and cybercrime.

Extreme Insecure Vulnerabilities - PHP


The lab concerning the Extreme Insecure website uncovered vulnerabilities in PHP scripts, among others. The PHP scripts were easily manipulated via code injection: shell injection exploits interpretation errors to run commands through misinterpreted user input (Shirey, 2012). There are many different forms of code injection similar to the shell injection used in the lab, some of which are reviewed later. For now, however, let us focus on the problems at hand.

PHP Exploit General Concept

The PHP scripts in question are exploited through the text field located on Products.htm, which can be used to issue commands where plain user data would normally be expected as input. This is possible because the input can be interpreted with functional or syntactic meaning. If the PHP script does not constrain user input, then input given in the form of a meaningful string, such as a command beginning with or containing operative punctuation, will be interpreted as such. Illicitly inputting code in this way is a form of code injection (Golem Technologies, N.D.).

Shell Injection
One type of injection used in the Extreme Insecure lab was specific to UNIX shell features, known as shell injection. Shell features are operators that link commands in a number of different ways; they are represented by punctuation such as $, &, and |. A shell injection uses these features to cause a text-entry form to misinterpret user input as a shell command (Golem Technologies, N.D.). This is accomplished by appending a shell feature to the input, followed by an illicit command, so that the result of the input becomes the output of the appended command, printed just after the output of the original input. For example, if entering www.plaintext.com/plaintext.txt in a URL field led to a page containing the string plaintext, then adding a shell feature, as in www.plaintext.com;ls, would return plaintext followed by a list of the files in that directory, as requested by the ls command. The ; symbol runs the appended command immediately after the preceding parameter or command, which explains why the listing appears just below the web page's string.
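The effect of the ; feature can be shown without touching a real shell. This sketch only builds the command strings (the file name is hypothetical); Python's shlex.quote stands in for proper escaping:

```python
import shlex

# How a ';' shell feature changes a command's meaning. Nothing is executed;
# we only compare the strings a naive script would hand to the shell.
user_input = "plaintext.txt; ls"

# Naive interpolation: the string now contains TWO commands.
unsafe_cmd = "cat " + user_input

# Quoting the input makes the ';' literal text instead of a command separator.
safe_cmd = "cat " + shlex.quote(user_input)

print(unsafe_cmd)  # cat plaintext.txt; ls
print(safe_cmd)    # cat 'plaintext.txt; ls'
```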

Command Injection Prevention Techniques


escapeshellcmd()
The primary vulnerability behind these PHP exploits is the misinterpretation of simple text field input as code. There are multiple ways to prevent or mitigate such exploitation of user input. To amend the shell injection-specific vulnerabilities, PHP functions such as escapeshellcmd() can be used (Achour et al., 2014). This function takes input data and escapes any characters in the string, such as the semicolon, ampersand, and pipe symbol, that could be used to execute shell commands through data input. It does this by adding a backslash before each such symbol, encoding it so that it no longer carries special or functional meaning for code injection.
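The backslash-encoding idea can be mimicked in a few lines. This is a rough Python analogue, not PHP's actual implementation, and the metacharacter set below is a simplified assumption rather than PHP's exact list:

```python
import re

# Rough analogue of PHP's escapeshellcmd(): prefix each shell metacharacter
# with a backslash so it loses its special meaning.
SHELL_METACHARS = r'([;&|$`<>()!*?~#\[\]{}\\"\'])'

def escape_shell_cmd(s: str) -> str:
    return re.sub(SHELL_METACHARS, r'\\\1', s)

print(escape_shell_cmd("file.txt; ls"))  # file.txt\; ls
```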

Prepared Statements
Another method is the use of an API that supports parameterized queries, such as prepared statements. Prepared statements differ from normal data queries in that a normal query sends the query and its data in one statement, which allows the (in this case malicious) data to modify the query itself in an attack such as SQL injection (Boronczyk, 2011). Prepared statements separate the sending of the query from the sending of its data, so the data cannot be used to modify the query. They do this by replacing the data in the query with a placeholder, then defining that placeholder in a separate statement, thereby thwarting any attempt to use the data to modify the query.
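The same query/data separation that PDO prepared statements provide can be sketched with Python's sqlite3 module. The table and values are hypothetical:

```python
import sqlite3

# Parameterized query: the '?' placeholder is part of the query; the data
# travels separately and is bound as plain data, so it cannot rewrite the SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

malicious = "x' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # [] -- the whole malicious string matched nothing
```

Had the malicious string been concatenated into the query text instead, the OR '1'='1 clause would have matched every row.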

Blacklisting
Input validation such as blacklisting restricts the use of the special characters that trigger this modification of simple data input. A blacklist is a list of particular characters that aren't allowed in a given input; all other characters are allowed. Input encoding can also be used against similar interpretation-based vulnerabilities. Examples include strip_tags() to remove HTML tags, and escape functions such as escapeshellcmd() or mysql_real_escape_string() for SQL injection protection (Boronczyk, 2011).

Whitelisting

If input validation is your concern and your text fields take relatively simple input, you can use a whitelist to allow only certain characters. For example, suppose you have a profile creation or management page that asks for name, age, gender, and so on. A whitelist permitting only alphanumeric characters (a-z, 0-9) ensures that no punctuation carrying functional meaning, whether for SQL or shell injection techniques, can be entered into those text fields.
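An alphanumeric whitelist check of this kind is a one-line regular expression. A minimal sketch:

```python
import re

# Whitelist: every character of the value must be alphanumeric;
# anything else (punctuation, spaces, shell features) is rejected.
ALPHANUMERIC = re.compile(r"[A-Za-z0-9]+")

def whitelist_ok(value: str) -> bool:
    # fullmatch requires the whole string to fit the pattern.
    return ALPHANUMERIC.fullmatch(value) is not None
```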

Network-Layer Firewall
As important as securing the particular mechanisms of the webpage, it is also important to secure general network access to the server with a network-layer firewall and/or other network security features. Essentially, these control the network traffic, or data packet flow, to and from the server. If traffic to a web server isn't managed, unbridled access can have devastating consequences. You can specify the firewall's exclusions by changing its configuration to best suit your preferences on what is allowed to pass and what isn't. Network firewalls can be stateful or stateless, depending on whether you prefer more context in your firewall policies or faster processing.

Stateful and Stateless Firewalls
Stateful firewalls hold context about active sessions and use it to speed packet processing. This context can be described by several properties, such as the source/destination IP address, UDP/TCP ports, and the current stage of the connection (session initiation, handshaking, etc.) (Gattine, 2014). If a packet matches these components of a current connection, held in the firewall's state table, it is allowed; otherwise it is treated as a new connection with its own rule set. Stateless firewalls require less memory and are therefore more efficient at processing. They may also be required for stateless network protocols, where a stateful firewall could not be used (Gattine, 2014).
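The state-table lookup at the heart of a stateful firewall reduces to a set membership test on the connection's identifying tuple. A toy sketch, with illustrative addresses and ports:

```python
# Toy state table keyed by the connection 5-tuple: packets matching an
# active session take the fast path; others fall through to the rule set
# that governs new connections.
state_table = set()

def record_session(src_ip, src_port, dst_ip, dst_port, proto):
    state_table.add((src_ip, src_port, dst_ip, dst_port, proto))

def fast_path_allowed(src_ip, src_port, dst_ip, dst_port, proto):
    return (src_ip, src_port, dst_ip, dst_port, proto) in state_table

record_session("10.0.0.5", 50000, "93.184.216.34", 80, "TCP")
```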

Symmetric Encryption
When implementing data protection methods in a system or file, it is important to think not only about protecting assets against an intrusion, but also about protecting the data once it is in the hands of the hacker. Using symmetric encryption to further protect the data would prevent the hacker from being able to view the contents of a file even if he managed to retrieve it. Symmetric encryption uses a random secret key, shared by the creator and anyone else who is supposed to receive the file, together with an encryption algorithm to encrypt the file. The only way a hacker who retrieved the file could view it would be by knowing the secret key, or by running large numbers of tests in the hope of discovering the key and decrypting the file (Stallings & Brown, 2015).
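The shared-key idea can be illustrated with a toy repeating-key XOR cipher. This is only a demonstration of the concept: XOR like this is not secure, and a real system would use a vetted algorithm such as AES.

```python
from itertools import cycle

# Toy symmetric cipher: the SAME key both encrypts and decrypts, because
# XORing twice with the key cancels out. NOT secure; illustration only.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"                    # known only to sender and receiver
ciphertext = xor_cipher(b"payroll records", key)
recovered = xor_cipher(ciphertext, key)   # applying the key again decrypts
print(recovered)  # b'payroll records'
```

An attacker holding only the ciphertext must either learn the key or test keys exhaustively, exactly the situation described above.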

SQL Injection
This attack occurs when someone enters a fragment of SQL code as a value in a URL or form field. It is potentially extremely harmful because an attacker can delete an entire database by delivering a drop database command via SQL injection (Boronczyk, 2011). PDO prepared statements can be used to prevent a SQL injection attack from harming a database. They allow the SQL statement to be separated from the data, so there is no place to inject a SQL statement; the space has already been filled by the predefined line of code (Shirey, 2012).

XSS (Cross Site Scripting)


Cross site scripting is the injection of code into PHP script output that causes a victim's browser to execute it. This error is largely caused by faulty PHP coding that fails to validate user data and thereby allows harmful input. For example, where some code accepts user input, malicious code can be inserted in place of the intended input. The best and most effective way of preventing cross site scripting attacks is to validate user input and escape the resulting output. Validating the data simply means implementing code in a PHP program that checks whether the program is receiving correct data to run properly (Shirey, 2012). Escaping the data is the process of further securing it before it is processed and sent to the end user. This prevents invalid access attempts on the file because the output is now safe for the end user.
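Output escaping turns markup characters into inert entities so injected script renders as text instead of executing. In this sketch, Python's html.escape plays the role a function like PHP's htmlspecialchars() would; the payload is a hypothetical cookie-stealing attempt:

```python
import html

# Escape output before it reaches the browser: '<', '>', '&' and quotes
# become entities, so the <script> tag is displayed, not executed.
payload = "<script>alert('stolen cookie')</script>"
escaped = html.escape(payload)
print(escaped)
```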

Source Code Revelation


Source code revelation is the release of private information in the event of a failure of the Apache web server. PHP scripts that normally cannot be viewed from the client side become visible as a result of the failure (Shirey, 2012), and these scripts can contain sensitive information. Directory structure is extremely important in application design: sensitive files should be kept in a directory that is not accessible to the public. Implementing this in the PHP source layout prevents the code from becoming publicly viewable plain text in the event of a server error.

XSS Reflection
In a reflected XSS attack, the attacker uses script injection to send compromised user-submitted data to a site. If that content remains unchecked as others view it, they will execute the script, assuming it is trusted data, with access to any service, program, or function on the network. Highly trafficked sites that display user-submitted content to a wide user base can be exploited, affecting thousands of users in an alarmingly short amount of time if input is not properly checked (Stallings & Brown, 2015, pg. 367). A strong input validation policy is the best defense against this type of attack.

Regular Expressions
When validating user input to prevent injection exploits, the idea is to ensure that user-submitted data are consistent with the kind of data you want for that form field, while rejecting input that could carry additional meaning or function. You can achieve this by comparing user-submitted data either with wanted input or with bad or unwanted input. These comparisons are done with regular expressions (Stallings & Brown, 2015, pg. 369): lines in your script that express valid input as character sequence patterns that are either allowed or denied. Regular expressions can treat characters literally, where the characters must match exactly, or with special meanings that allow alternative character sets, classes of characters, and repeated characters.
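Both flavors, literal matching and special meanings, appear in even a small validation table. The field names and patterns below are hypothetical:

```python
import re

# Per-field validation patterns: \d{1,3} and [A-Z]{2} use special meanings
# (character classes with repetition), while M|F alternates two literals.
PATTERNS = {
    "age": re.compile(r"\d{1,3}"),
    "state": re.compile(r"[A-Z]{2}"),
    "gender": re.compile(r"M|F"),
}

def field_valid(field: str, value: str) -> bool:
    # fullmatch: the entire value must fit the pattern, not just a prefix.
    return PATTERNS[field].fullmatch(value) is not None
```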

Input Fuzzing
Testing by hand for the plethora of nonstandard input types that can be exploited takes far too long, leaving your input validation compromised in the meantime. A technique called input fuzzing instead uses software to randomly generate data as program input (Stallings & Brown, 2015, pg. 370). The configurability of such a tool allows many different input types to be tried automatically, at a rate far beyond manual human input. The result is many more reports of vulnerable input types, so that a system administrator can highlight them and devise a way to deal with them.
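A fuzzer in miniature: generate random strings, feed them to an input handler, and collect the ones it fails on. The handler here is a deliberately fragile stand-in for a real form processor, not code from the lab:

```python
import random
import string

def handle_input(value: str) -> str:
    # Simulated unhandled case: the handler never escapes ';'.
    if ";" in value:
        raise ValueError("unescaped shell metacharacter")
    return value.strip()

def fuzz(handler, trials=1000, seed=0):
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    alphabet = string.ascii_letters + string.digits + ";|&$"
    failures = []
    for _ in range(trials):
        candidate = "".join(
            rng.choice(alphabet) for _ in range(rng.randint(1, 12)))
        try:
            handler(candidate)
        except Exception:
            failures.append(candidate)
    return failures

crashers = fuzz(handle_input)
print(f"{len(crashers)} of 1000 random inputs crashed the handler")
```

Every collected failure points at an input class the real validation code would need to handle.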

Remote File Inclusion


Remote file inclusion is exactly what it sounds like: the inclusion of remote files in a PHP application. This file injection is dangerous because the file may contain code that does not belong in the application it was added to. The attack is usually performed silently; when someone accesses the website with the added code file, the program executes and runs the malicious code (Shirey, 2012). To prevent this, two flags in the PHP initialization, allow_url_fopen and allow_url_include, need to be switched from their defaults to secure file access and inclusion.

Remote File Inclusion Prevention Techniques

The PHP configuration directive allow_url_fopen controls access to URL objects through the URL-aware fopen wrappers, by default encompassing the HTTP and FTP protocols. The directive allow_url_include controls whether someone can include a file via PHP input. If your allow_url_include is set to 0 and your access control is otherwise secure, you can leave allow_url_fopen set to 1. Otherwise, if your website doesn't need the features of fopen, it is generally best to set it to 0. As of PHP 5.2.0, allow_url_include is set to 0 by default, so any sites using earlier versions of PHP need to update or seek further configuration (Achour, 1997).
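Both directives can be locked down directly in the server's php.ini. A hardened configuration sketch, assuming the site needs neither remote fopen wrappers nor remote includes:

```ini
; php.ini -- disable remote file access and inclusion (most restrictive)
allow_url_fopen = Off
allow_url_include = Off
```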

Lab Procedure
Over the course of this semester, Team Pending worked on several different lab procedures. These procedures
enumerated all of the topics that we have discussed in this document. In summary, we experimented with piping
Unix commands to attack a vulnerable website, as well as cross-site scripting with PHP scripts. This section will
describe, in detail, what these labs were and how they pertain to this literature review.

Part A
To present an example of the difficulties and intricacy of network security, we will describe how we, as a group, were able to access virtual directories, move files around, download and install plug-ins, open new files, and manipulate a web page. To the average reader this seems like a benign activity; in reality, hackers rely on normal users being unaware of the minute details. To an experienced, technologically savvy person, it is not very difficult to take advantage of these oversights.
In our lab assignment, we were told to download and unzip certain files in order to access a certain IP address and reach a secret page that could only be seen by hacking. We had to download files and unzip them in Unix in order to access a special Apache server. With the right commands, we successfully downloaded the special tools and files that would enable us to access the web page. To get to this point, however, we had to move certain files into the specific directories they were intended for, which required administrative rights. Using the file we called extremeinsecure, which was opened on the server to give us the path to the hidden address, we uploaded it to the Apache server and began installing the zip files. Once we were able to unzip and open the files using administrative commands at the command prompt, we hacked the website using the commands that were given to us, together with the downloaded files, to access the page. In addition to the command that was explicitly given to us, we were encouraged to try other Unix commands to see how they would behave in the website environment. This incident is an example of hacking, and it is something that can happen to anyone, at any time, in any place. It illustrates why network security is so important and so vital to discuss. As technology changes, it becomes more and more dangerous, and so do we. We must be able to identify threats and counter them to protect our most vital information.

Part B
In another part of the lab, we conducted an XSS attack in a virtual machine setting. Many steps were required, the first of course being to make sure the proper files were put in their correct directories. After that, we had to make the proper file changes, grant certain commands administrative rights, and use the nano text editor at the command prompt to alter the files we had just downloaded. Our main goal was to set a cookie on a page and then steal it. We took the two files we downloaded and used nano to change certain parts of each. For the first file, malURL.htm, we changed the process by which the user reaches the link, making it run two scripts, setgetcookie and stealcookie.php, when the page asked for a username. We then moved on to the second part, editing the second file, redirectpage.htm, to literally redirect the user to a page claiming that the cookie has been stolen. While the user is navigated to this page, the cookie is sent to our log.txt file. To put this in perspective, this type of attack could happen on any website that has accounts. Say, for example, someone were able to edit the pages of email websites or Facebook: one could then steal email accounts and passwords, and the user wouldn't even notice. These types of attacks are what make information security so scary, as well as so important and fun to be a part of. Being able to protect information that is precious to you, by being aware of potential threats and how they work, is something all people should be educated about.

Conclusion
Technology will forever continue to grow, and as it grows, so must we. With every leap in technological innovation comes the risk of the newest technology falling into the wrong hands or hindering everyday life as we know it. With network security and a planned-out incident response procedure, we are able to prepare for the future and protect the information most important to us. This publication has demonstrated the importance of data protection, as well as the step-by-step plan of an incident response used in the case of a real-life hack. Identity protection has also been revealed to be more important and more complex than we could have predicted prior to writing this report. We can never be too safe when it comes to the things we care about most, and hopefully, with a deeper understanding of network security, we will not have to worry about being safe anymore.

References
Grance, T., & Kent, K. (2004). Computer security incident handling guide recommendations of the National
Institute of Standards and Technology. Gaithersburg, Md.: U.S. Dept. of Commerce, Technology Administration,
National Institute of Standards and Technology
California Department of Technology. (n.d.). Incident Response Plan Example. Retrieved October 15, 2014, from
http://www.cio.ca.gov/ois/government/library/documents/incident_response_plan_example.doc
TechNet, Microsoft. (2014, January 1). Responding to IT Security Incidents. Retrieved October 17, 2014.
Shirey, D. (2012, October 15). PHP Master | Top 10 PHP Security Vulnerabilities. Retrieved October 20, 2014.
Boronczyk, T. (2011, September 2). PHP Master | Migrate from the MySQL Extension to PDO. Retrieved October
20, 2014.
Achour, M., Betz, F., Dovgal, A., Lopes, N., Magnusson, H., Richter, G., ... Vrana, J. (2014, October 17). PHP
Manual. Retrieved October 20, 2014, from http://php.net/manual/en/index.php
Stallings, W., & Brown, L. (2015). Access Control. In Computer security: Principles and practice (Third ed.). Upper
Saddle River: Pearson Education.
Web Application Firewall. (2006, September 7). Retrieved October 20, 2014, from
https://www.owasp.org/index.php/Web_app_firewall
Gattine, K. (2014, September 18). Types of firewalls: An introduction to firewalls. Retrieved October 20, 2014.
Shell Injection and Command Injection. (N.D.). Retrieved October 20, 2014, from
https://www.golemtechnologies.com/articles/shell-injection
Silva Consultants. (2014). Security Tips from Silva Consultants. Retrieved November 12, 2014, from
http://silvaconsultants.com/joomla1/index.php/why-would-an-experienced-security-manager-hire-a-securityconsultant.htm
SECforce. (2013). Penetration Testing. Retrieved November 16, 2014, from http://www.secforce.com/penetrationtesting/penetration-testing.php
HackLabs. (2011). Web Application Security Testing - HackLabs. Retrieved November 17, 2014, from
http://www.hacklabs.com/web-app-penetration-testing/
VSR. (2013). Web Application Penetration Testing. Retrieved November 15, 2014, from
http://www.vsecurity.com/services/apt
Praetorian. (2014, January 1). Network Security Testing. Retrieved November 15, 2014, from
http://www.praetorian.com/network-security/external-penetration-testing
Bailey, T., Brandley, J., & Kaplan, J. (2013, December 1). How good is your cyberincident-response plan? Retrieved
November 23, 2014, from
http://www.mckinsey.com/insights/business_technology/how_good_is_your_cyberincident_response_plan

Kral, P. (2012, January 1). The Incident Handler's Handbook. Retrieved November 17, 2014, from
http://www.sans.org/reading-room/whitepapers/incident/incident-handlers-handbook-33901
Create a CSIRT. (n.d). Retrieved November 23, 2014 from
http://www.cert.org/incident-management/products-services/creating-a-csirt.cfm?
European Union Agency for Network and Information Security. (n.d.). Retrieved November 23, 2014, from
https://www.enisa.europa.eu/activities/cert/support/guide2/introduction/what-is-csirt
Zajicek, M. (2004, February 20). Creating and Managing CSIRTS. Retrieved November 23, 2014, from
https://www.apricot.net/apricot2004/doc/cd_content/23rd%20February%202004/06%20-%20MTF%20%20Creating%20and%20Managing%20CSIRTS%20-%20Mark%20Zajicek/Creating-and-Managing-CSIRTsnotes.pdf
