Web Application
Security
Iberic Web Application Security Conference
IBWAS 2009
Madrid, Spain, December 10-11, 2009
Revised Selected Papers
Volume Editors
Carlos Serrão
ISCTE-IUL Lisbon University Institute
OWASP Portugal Ed. ISCTE
Lisboa, Portugal
E-mail: carlos.serrao@iscte.pt
Vicente Aguilera Díaz
Internet Security Auditors
OWASP Spain
Barcelona, Spain
E-mail: vicente.aguilera@owasp.org
Fabio Cerullo
OWASP Ireland
OWASP Global Education Committee
Rathborne Village, Ashtown, Dublin, Ireland
E-mail: fcerullo@owasp.org
ISSN 1865-0929
ISBN-10 3-642-16119-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-16119-3 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
springer.com
© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper 06/3180
Preface
IBWAS 2009, the Iberic Conference on Web Applications Security, was the first
international conference organized jointly by the OWASP Portuguese and Spanish
chapters to bring together the international Web application security academic and
industry communities to present and discuss the major aspects of Web application security.
There is currently a change in the information systems development paradigm. The
emergence of Web 2.0 technologies led to the extensive deployment and use of Web-
based applications and Web services as a way to develop new and flexible information
systems. Such systems are easy to develop, deploy and maintain and they demonstrate
impressive features for users, resulting in their current wide use. The “social” features
of these technologies create the necessary “massification” effects that make millions
of users share their own personal information and content over large web-based inter-
active platforms. Corporations, businesses and governments all over the world are also
developing and deploying more and more applications to interact with their busi-
nesses, customers, suppliers and citizens to enable stronger and tighter relations with
all of them. Moreover, legacy non-Web systems are being ported to this new intrinsi-
cally connected environment.
IBWAS 2009 brought together application security experts, researchers, educators
and practitioners from industry, academia and international communities such as
OWASP, in order to discuss open problems and new solutions in application security.
In the context of this track, academic researchers were able to combine interesting
results with the experience of practitioners and software engineers.
The conference, held at the Escuela Universitaria de Ingeniería Técnica de
Telecomunicación of the Universidad Politécnica de Madrid (EUITT/UPM), was organized
for the very first time and represented a step forward in the OWASP mission and
organization. During the two days of the conference, more than 50 attendees enjoyed
different types of sessions, organized around different topics. Two renowned keynote
speakers, diverse invited speakers and several accepted communications were pre-
sented and discussed at the conference. During these two days, the conference agenda
was divided into two major tracks, industry and research sessions, organized
around the following topics:
• Secure application development
• Security of service-oriented architectures
• Threat modelling of Web applications
• Cloud computing security
• Web application vulnerabilities and analysis
• Countermeasures for Web application vulnerabilities
• Secure coding techniques
On the final day of the conference, a panel discussion was held around a specific
topic: “Web Application Security: What Should Governments do in 2010.” From this
discussion panel a set of conclusions were reached and some specific recommenda-
tions were produced:
1. Challenge governments to work with organizations such as OWASP to in-
crease the transparency of Web application security, particularly with respect
to financial, health and all other systems where data privacy and confidential-
ity requirements are fundamental.
2. OWASP will seek participation with governments around the globe to de-
velop recommendations for the incorporation of specific application security
requirements and the development of suitable certification frameworks within
the government software acquisition processes.
3. Offer OWASP assistance to clarify and modernize computer security laws,
allowing the government, citizens and organizations to make informed deci-
sions about security.
4. Ask governments to encourage companies to adopt application security stan-
dards that, where followed, will help protect us all from security breaches,
which might expose confidential information, enable fraudulent transactions
and incur legal liability.
5. Offer to work with local and national governments to establish application
security dashboards providing visibility into spending and support for appli-
cation security.
Although organized together by the OWASP Portugal and Spain chapters, IBWAS
2009 was a truly international event and welcomed Web application security experts
from all over the world, supported by the OWASP open and distributed community.
We, as organizers of the IBWAS 2009 conference, would like to thank the different
authors who submitted their quality papers to the conference, and the members of the
Programme Committee for their efforts in reviewing the multiple contributions that we
received. We would also like to thank the amazing keynote and panel speakers for
their collaboration in making IBWAS 2009 a success.
Finally, we would like to thank the EUITT/UPM for hosting the event and for all their
support.
Programme Committee
Chairs Aguilera Díaz V., Internet Security Auditors, OWASP Spain, Spain
Cerullo F., OWASP Ireland, Ireland
Serrão C., ISCTE-IUL Instituto Universitário de Lisboa,
OWASP Portugal, Portugal
Abstracts
The OWASP Logging Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Marc Chisinevski
Papers
A Semantic Web Approach to Share Alerts among Security Information
Management Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Jorge E. López de Vergara, Víctor A. Villagrá, Pilar Holgado,
Elena de Frutos, and Iván Sanz
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
The OWASP Logging Project
Marc Chisinevski
Digiplug, France
marc.chisinevski@gmail.com
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, p. 1, 2010.
© Springer-Verlag Berlin Heidelberg 2010
SQL Injection - How Far Does the Rabbit Hole Go?
Justin Clarke
SQL Injection has been around for over ten years, and yet to this day it is still not
truly understood by many security professionals and developers. With the recent mass
attacks against sites across the world, and well-publicised data breaches with SQL
Injection as a component, it has again come under the spotlight; however, many
consider it to be only a data access issue, or parameterized queries to be a panacea.
This talk explores the deeper, darker areas of SQL Injection, hybrid attacks,
SQL Injection worms, and exploiting database functionality, and looks at what we can
expect in the future.
OWASP O2 Platform - Open Platform for Automating
Application Security Knowledge and Workflows
Dinis Cruz
In this talk Dinis Cruz will show the OWASP O2 Platform, an open
source toolkit specifically designed for developers and security consultants to
perform quick, effective and thorough 'source-code-driven' application
security reviews. The OWASP O2 Platform (http://www.owasp.org/index.php/
OWASP_O2_Platform) consumes results from the scanning engines from Ounce
Labs, Microsoft's CAT.NET tool, FindBugs, CodeCrawler and AppScan DE, and
also provides limited support for Fortify and OWASP WebScarab dumps. In the
past, there has been a very healthy skepticism about the ability of source code
analysis engines to find common vulnerabilities in real-world applications.
This presentation will show that with some creative and powerful tools, it IS pos-
sible to use O2 to discover those issues. This presentation will also show O2's
advanced support for Struts and Spring MVC.
The Business of Rogueware
Luis Corrons
The underground cybercrime economy has grown significantly in size and complexity
over the past couple of years due to a variety of factors, including the rise of
social media tools, the global economic slowdown, and an increase in the total
number of Internet users. For the past three years, PandaLabs has monitored the ever-evolving
cybercrime economy to discover its tactics, tools, participants, motivations and vic-
tims to understand the full extent of criminal activities and ultimately bring an end to
the offenses. In October of 2008, PandaLabs published findings from a comprehen-
sive study on the rogueware economy, which concluded that the cybercriminals be-
hind fake antivirus software applications were generating upwards of $15 million per
month. In July of 2009, it released a follow-on study that proved monthly earnings
had more than doubled to approximately $34 million through rogueware attacks dis-
tributed via Facebook, MySpace, Twitter, Digg and targeted Blackhat SEO. This ses-
sion will reveal the latest results from PandaLabs’ ongoing study of the cybercrime
economy by illustrating the latest malware strategies used by criminals, examining the
changes in their attack strategies over time. The goal of this presentation is to raise the
awareness of this growing underground economy.
Microsoft Infosec Team: Security Tools Roadmap
Simon Roses
The Microsoft IT’s Information Security (InfoSec) group is responsible for informa-
tion security risk management at Microsoft. We concentrate on the data protection of
Microsoft assets, business and enterprise. Our mission is to enable secure and reliable
business for Microsoft and its customers. We are an experienced group of IT profes-
sionals including architects, developers, program managers and managers.
This talk will present different technologies developed by InfoSec to protect
Microsoft and released for free, such as CAT.NET, SPIDER, SDR, TAM and SRE, and how
they fit into the SDL (Security Development Lifecycle).
Empirical Software Security Assurance
Dave Harper
By now everyone knows that security must be built in to software; it cannot be bolted
on. For more than a decade, scientists, visionaries, and pundits have put forth a multi-
tude of techniques and methodologies for building secure software, but there has been
little to recommend one approach over another or to define the boundary between
ideas that merely look good on paper and ideas that actually get results. The alche-
mists and wizards have put on a good show, but it's time to look at the real empirical
evidence.
This talk examines software security assurance as it is practiced today. We will
discuss popular methodologies and then, based on in-depth interviews with leading
enterprises such as Adobe, EMC, Google, Microsoft, QUALCOMM, Wells Fargo,
and Depository Trust Clearing Corporation (DTCC), we present a set of benchmarks
for developing and growing an enterprise-wide software security initiative, including
but not limited to integration into the software development lifecycle (SDLC). While
all initiatives are unique, we find that the leaders share a tremendous amount of com-
mon ground and wrestle with many of the same problems. Their lessons can be ap-
plied in order to build a new effort from scratch or to expand the reach of existing
security capabilities.
Assessing and Exploiting Web Applications with the
Open-Source Samurai Web Testing Framework
Raul Siles
Taddong, Spain
raul@raulsiles.com
Authentication: Choosing a Method That Fits
Miguel Almeida
Over the last five years, we in the security field have witnessed an increase
in the number of attacks on (web) application users' credentials, and in the refinement
and sophistication these attacks have been gaining. There are currently several
methods and mechanisms to strengthen the authentication process for web
applications, improving not only user authentication but also transaction
authentication. As an example, one can think of adding one-time password
tokens, or digital certificates, EMV cards, or even SMS one-time codes. However,
none of these methods comes for free, nor do they provide perfect security. Also, one
must consider usability penalties, mobility constraints, and, of course, the direct costs
of the gadgets. Moreover, there's evidence that not all kinds of attacks can be stopped
by even the most sophisticated of these methods. So, where do we stand? What
should we choose? What kind of gadgets should we use for our business critical app,
how much will they increase the costs and reduce the risk, and, last but not least,
what kinds of attacks will we be unable to stop anyway? This presentation will focus on ways
to figure out how to evaluate the pros and cons of adding these improvements, given
the current threats.
Cloud Computing: Benefits, Risks and Recommendations
for Information Security
Daniele Catteddu
ENISA, Greece
Daniele.Catteddu@enisa.europa.eu
The presentation “Cloud Computing: Benefits, Risks and Recommendations for
Information Security” will cover some of the most relevant information security
implications of cloud computing from the technical, policy and legal perspectives.
Information security benefits and top risks will be outlined and, most importantly,
concrete recommendations on how to address the risks and maximise the benefits for
users will be given.
OWASP TOP 10 2009
Fabio E. Cerullo
The primary aim of the OWASP Top 10 is to educate developers, designers, archi-
tects and organizations about the consequences of the most important web application
security weaknesses. The Top 10 provides basic methods to protect against these
high-risk problem areas, and provides guidance on where to go from here.
The Top 10 project is referenced by many standards, books, tools, and organiza-
tions, including MITRE, PCI DSS, DISA, FTC, and many more. The OWASP Top 10
was initially released in 2003, with updates in 2004, 2007, and this 2010 release.
We encourage you to use the Top 10 to get your organization started
with application security.
Developers can learn from the mistakes of other organizations. Executives can start
thinking about how to manage the risk that software applications create in their
enterprise.
This significant update presents a more concise, risk focused list of the Top 10
Most Critical Web Application Security Risks. The OWASP Top 10 has always been
about risk, but this update makes this much clearer than previous editions, and
provides additional information on how to assess these risks for your applications.
For each top 10 item, this release discusses the general likelihood and consequence
factors that are used to categorize the typical severity of the risk, and then presents
guidance on how to verify whether you have problems in this area, how to avoid
them, some example flaws in that area, and pointers to links with more information.
Deploying Secure Web Applications with OWASP
Resources
Fabio E. Cerullo
Secure applications do not just happen – they are the result of an organization
deciding that it will produce secure applications. OWASP does not wish to force a
particular approach or require an organization to comply with laws that
do not affect it, as every organization is different.
However, for a secure application, the following at a minimum are required:
• Organizational management which champions security
• Written information security policy properly derived from national standards
• A development methodology with adequate security checkpoints and
activities
• Secure release and configuration management
Many of the tools, documentation and controls developed by OWASP are influ-
enced by requirements in international standards and control frameworks such as
COBIT and ISO.
Furthermore, OWASP resources can be used by any type of organization ranging
from universities to financial institutions in order to develop, test and deploy secure
web applications. This presentation will introduce you to some of the most successful
projects such as:
- OWASP Enterprise Security API which can be used to mitigate most com-
mon flaws in web applications;
- OWASP ASVS which is intended as a standard on how to verify the security
of web applications;
- OWASP Top 10 which helps to educate developers, designers, architects and
organizations about the consequences of the most important web application
security weaknesses;
- OWASP Development Guide which shows how to architect and build a secure
application;
- OWASP Code Review Guide which shows how to verify the security of an
application's source code;
- OWASP Testing Guide which shows how to verify the security of your running
application.
Finally, as OWASP believes education is a key component in building secure ap-
plications, some of the initiatives being carried out by the OWASP Global Education
Committee are going to be highlighted.
Threat Risk Modelling
Martin Knobloch
How secure must an application be? To take the appropriate measures we have to
identify the risks first and think about the measures later. Threat risk modelling is an
essential process for secure web application development. It allows organizations to
determine the correct controls and to produce effective countermeasures within
budget. This presentation is about how to do threat risk modelling: what is needed
to start and where to go from there!
Protection of Applications at the Enterprise in the Real
World: From Audits to Controls
Javier Fernández-Sanguino
A Semantic Web Approach to Share Alerts among
Security Information Management Systems
1 Introduction
Security is an important issue for Internet Service Providers (ISP). They have to keep
their systems safe from external attacks to maintain the service levels they provide to
customers. Security threats are identified at routers, firewalls, intrusion detection
systems, etc., generating several alerts in different formats. To deal with all these inci-
dents, ISPs usually have a Security Information Management System (SIMS) [1],
which collects the event data from their network devices to manage and correlate the
information about any incident. A SIMS is useful to detect intrusions at a global level,
centralizing the alarms from several security devices.
A step forward in this type of systems would be the distribution of alerts among
SIMS from different ISPs and different vendors for an early response to network inci-
dents. Thus, mechanisms to communicate security notifications and actions have to be
developed. These mechanisms will enable collaboration among SIMS to share information
about incoming attacks. For this, it is important to homogenise the information the
SIMS are going to share. A data model has to be defined to address several problems
associated with representing intrusion detection alert data: alert information is
inherently heterogeneous (some alerts are defined with very little information while
others provide much more), and intrusion detection environments are different (the
same attack can be described with different information). Current solutions provide a common
XML format to represent alerts, named IDMEF (Intrusion Detection Message Ex-
change Format) [2]. Although this format is intended to exchange messages, it is not a
good solution in a collaborative SIMS scenario, as each SIMS would flood the other
SIMS with such messages. It would be better that a SIMS asks other SIMS about cer-
tain alerts, and later infers what is its situation based on that information. However,
IDMEF has not been defined to query for an alert set.
A way to solve this is to use ontologies [3], which have been precisely defined to
share knowledge. Ontologies have been previously proposed to formally describe and
detect complex network attacks [4, 5, 6]. In this paper we propose to define an ontol-
ogy based on IDMEF, where the alerts are represented as instances of Alert classes in
that ontology. The use of an ontology language also improves the information defini-
tion, as restrictions can be specified beyond data-types (for instance, cardinality).
With this ontology, each SIMS can store a knowledge base of alerts, and share it us-
ing semantic web interfaces. Then, other SIMS can ask about alerts by querying such
knowledge bases through semantic web interfaces. As a result, a SIMS would be able
to share its knowledge with SIMS in other domains. This knowledge would include
policies, incidents, updates, etc. In a first phase, sharing has been
constrained to alert incidents.
The rest of the paper is structured as follows. The next section presents the
architecture of collaborative SIMS based on knowledge sharing. Then, the IDMEF
ontology is explained, showing the process followed in its definition, as well as how to query it.
After this, an implementation of the system that receives IDMEF alerts and stores
them in a knowledge base is described. Results obtained in the different modules are
also provided. Finally, some conclusions and future work lines are given.
[Fig. 1. Architecture for alert sharing between SIMS: an instance generator in SIMS1
stores IDMEF alert instances in alert knowledge base 1, while a query generator in
SIMS2 issues SPARQL queries through a semantic web interface.]
3 IDMEF Ontology
The IDMEF format provides a common language to generate alerts about suspicious
events, which lets several systems collaborate in the detection of attacks, or in the
treatment of the stored alerts. Although IDMEF has some advantages (integration of
several sources, use of a well-supported format), it also has drawbacks (heterogeneous
data sources lead to several alerts for the same attack that do not contain the same
information).
To solve the identified problems, we have defined an alert ontology based on the
IDMEF structure. In this process it is worth remarking that IDMEF has been defined
following a model of classes and properties, which makes the ontology definition
easier, with a more or less direct mapping. The ontology has been defined using OWL [11],
leveraging the advantages of the semantic web (distribution, querying, inferencing,
etc.), and also the results of [12]. Several class restrictions have been defined (cardi-
nality, data types) by analyzing the IDMEF definition contained in [2].
The following conventions have been taken to define the IDMEF ontology:
• Class names start with a capital letter and are the same as the IDMEF class names.
• Property names start with a lower-case letter and have the format
domain_propertyName, where domain is the name of the class to which the property
belongs, and propertyName is the name of the property.
The following rules have also been taken:
• Each class in an IDMEF message maps to a class in the IDMEF ontology.
• Each attribute of an IDMEF class is mapped to a data-type property in the corre-
sponding ontology class.
• Classes contained in another class are in general mapped to object-type
properties. An exception to this are aggregated classes that contain text, which have
been mapped to data-type properties.
• A subclass of an IDMEF class is also represented as a subclass in the ontology,
inheriting all the properties of its parent class.
• When an IDMEF attribute cannot contain several values, it is mapped to a
functional property.
30 J.E. López de Vergara et al.
• When an IDMEF attribute can only have some specific values, the ontology defines
them as the allowed values.
• Numeric attributes are represented as numeric data-types properties, dates are repre-
sented as datetime data-type properties, and the rest as string data-type properties.
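As a hypothetical illustration of these conventions (the namespace URI, instance identifiers and literal values below are assumptions, not the ontology's actual names), an alert instance could be written in Turtle as:

```turtle
# Sketch only: namespace and identifiers are invented for illustration.
@prefix idmef: <http://www.example.org/idmef#> .

# Class names Alert and Target match the IDMEF class names;
# properties follow the domain_propertyName convention.
idmef:alert-0001
    a idmef:Alert ;
    idmef:alert_messageid "abc123456789" ;
    idmef:alert_target    idmef:target-0001 .

idmef:target-0001
    a idmef:Target .
```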
Following the rules above, the ontology has been defined. Fig. 2 shows a represen-
tation of the Alert class, its child classes (OverflowAlert, ToolAlert and Correlation-
Alert), and other referred classes (Classification, AdditionalData, Target, Source,
Assessment, CreateTime, AnalyzerTime, DetectTime, Analyzer). This figure has been
generated using the Protégé [13] ontology editor. The boxes represent the classes and
the arcs represent inheritance (in black, labelled isa) or aggregation (in blue, labelled
with the property names) relationships. A UML (Unified Modelling Language) repre-
sentation could also be provided, using the UML profile for OWL [14].
Our definition enables a mapping from IDMEF messages to IDMEF ontology in-
stances. In this way, the information contained in each IDMEF message is translated
to an instance of Alert, with instances of Target, Source, etc., as this information is
contained in each message. The ontology includes other additional classes, so any
IDMEF message can be represented in the ontology.
With respect to a plain XML IDMEF message, the ontology provides several ad-
vantages. For instance, the information can be restricted as defined in the IDMEF
definition [2]. Moreover, query languages such as SPARQL can be used to query all
the information contained in the knowledge base, and it is not limited to the scope of a
concrete XML document, which would be the case of IDMEF messages.
To query the knowledge base, SPARQL has been chosen, given that it has recently
been recommended by the W3C as the RDF/RDFS and OWL query language [9].
Using this language, a query can be defined as follows:
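A minimal SPARQL sketch matching the description that follows (the idmef namespace URI is a placeholder, and the property names are assumptions derived from the paper's naming convention):

```sparql
# Sketch only: namespace URI and the address property name are assumptions.
PREFIX idmef: <http://www.example.org/idmef#>
PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?alert ?id ?target_address
WHERE {
  ?alert rdf:type idmef:Alert ;
         idmef:alert_messageid ?id ;
         idmef:alert_target    ?target .
  # alert_target refers to an instance carrying an address value
  ?target idmef:address_address ?target_address .
}
```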
The query starts with PREFIX clauses, to define the namespaces to be used to
identify the queried classes and properties. After this, the variables alert, id and tar-
get_address that meet a set of conditions are requested: alert variable is of type Alert,
which has the properties alert_messageid and alert_target. Then, alert_target prop-
erty refers to an instance with an address value, identified with the variable
target_address.
4 Implementation
The architecture proposed in section 2 has been implemented. Apart from the compo-
nents provided by existing semantic web implementations (mainly Joseki server), we
have implemented the module that stores the IDMEF alerts in the knowledge base
(instance generator), as well as the module that queries alerts of an external
knowledge base (query generator). The subsections below present these implementations;
some results are provided later in section 5.
A module has been developed to map the IDMEF messages to ontology instances.
This module has been developed in Java, taking advantage of the libraries that this
language provides for parsing XML documents and ontologies. Fig. 3 shows the steps
that have to be performed to generate and save instances in the knowledge base:
− Create models.
− Read and write models.
− Load models in memory.
− Query a model: look for information inside the model.
− Operations on models: union, intersection, difference.
Models can be stored in many ways, including OWL files, as well as representations
of the ontology on a relational database. In this last case, there are several storing possi-
bilities, depending on the library used to represent the ontology on the database. Pre-
cisely, SDB is a Jena library specifically designed to provide storage in SQL databases,
both proprietary and open source. This storage can be done through the SDB API.
The knowledge base, where the alerts are stored, can be queried through a semantic
web interface by other SIMS. For this, another module has been developed, which
performs SPARQL queries against a Joseki server through HTTP. This server accesses
the knowledge base and obtains the results of the query. These results are then
returned to the query module.
To connect the query module to Joseki, it is necessary to use the ARQ library [15],
which is a query engine for Jena. The query module can execute any SPARQL query.
For the most common queries, we have implemented a program that builds the query
from a series of parameters. For instance:
• All alerts depending on the time:
− Alerts in the last week.
− Alerts in the current day.
− Alerts in a day.
− Alerts in an interval of time.
• Alerts queried using other parameters:
− Source IP address.
− Target IP address.
− Source port.
− Target port.
− Alert type.
− Target of the attack.
− Source of the attack.
− Tools of the attack.
− Overflow Alert.
− Analyzer.
• Assessments of the attacks: impact, actions, etc.
5 Results
The implemented modules, presented above, have been tested to evaluate their
performance. All the results have been obtained on a computer equipped with an Intel
Core 2 Duo E8500 processor at 3.16 GHz with 6 MB L2 cache and 2 GB of RAM. Previous
tests with older computers provided worse results.
To evaluate the generation of instances, IDMEF messages available in [2] have been
used. Table 1 shows the times measured in milliseconds.
These times are measured after the database is created and the ontology model is
represented on the database. If the database and the model have to be created, there
are two possibilities:
• Use of JDBC (Java Database Connectivity), with a time of around 1.900 s.
• Use of SDB library, with a time of around 1.125 s, faster than the previous case.
Both the JDBC and SDB libraries facilitate the connection to databases containing
ontologies from Java applications, independently of the operating system. These libraries
are also compatible with different databases. In addition, SDB is a Jena component
designed specifically to support SPARQL queries and it provides storage in both
proprietary and open source SQL databases.
Once the database has been created, there are three alternatives to insert the in-
stances into the ontology database: JDBC, SDB and SPARQL/Update [16]. With re-
spect to the last alternative, SPARQL/Update is an extension to SPARQL that lets a
programmer define insert clauses, whereas JDBC and SDB insert data
into the ontology by creating ontology data structures in memory that are later stored.
In our experiments, the best measurements are obtained when
SPARQL/Update is used to insert the instances: insertion takes approximately 60% of the
time required when the SDB library is used, and 50% of the time required with plain JDBC.
The Assessment message is an exception, because it contains charac-
ters that cannot be used in the SPARQL/Update sentence. In this case, the SDB li-
brary should be used instead.
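A minimal sketch of how such a SPARQL/Update sentence could be assembled follows. All prefix and property names are hypothetical; note that characters that are illegal in the sentence, as in the Assessment case, would break this approach.

```python
def insert_alert(alert_id, source_ip):
    """Build a SPARQL/Update INSERT DATA sentence for a new alert instance.
    The idmef: prefix and property names are illustrative assumptions."""
    return ('PREFIX idmef: <http://example.org/idmef#>\n'
            'INSERT DATA {\n'
            '  idmef:%s a idmef:Alert ;\n'
            '    idmef:sourceAddress "%s" .\n'
            '}' % (alert_id, source_ip))

update = insert_alert("alert42", "192.0.2.10")
```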
Some measurements have also been taken of the time it takes to perform a specific
query from the query module, through the Joseki server, to a test knowledge base with
112 alerts. Simplified versions of the queries used in the experiment
are shown below (they also included other variables about other
alert properties):
• Alerts depending on a time interval:
where time1 and time2 are replaced appropriately to query for a specific period of time.
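The time-interval query might resemble the following sketch, where idmef:detectTime is an assumed property name rather than the paper's actual ontology term:

```python
def alerts_between(time1, time2):
    """SPARQL query for alerts whose detection time lies in [time1, time2].
    Property and prefix names are assumptions for illustration."""
    return ('PREFIX idmef: <http://example.org/idmef#>\n'
            'PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n'
            'SELECT ?alert WHERE {\n'
            '  ?alert idmef:detectTime ?t .\n'
            '  FILTER (?t >= "%s"^^xsd:dateTime && ?t <= "%s"^^xsd:dateTime)\n'
            '}' % (time1, time2))
```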
Table 4. Knowledge base query times depending on the target IP of the alerts
As shown, the time needed to retrieve the results depends on the number of alerts that
match the query, but not on the query itself. Further tests have to be performed with
larger knowledge bases.
A Semantic Web Approach to Share Alerts among SIMS 37
6 Conclusions
This work has assessed the applicability of semantic web technologies in security
information management systems, providing a way to semantically share information
among different security domains. For this, an ontology based on IDMEF has been
defined, which can hold all the information of any IDMEF message. To test this on-
tology, we have also defined and implemented a semantic collaborative SIMS archi-
tecture, where each SIMS stores its IDMEF alerts in a knowledge base and can query
other SIMS knowledge bases using a SPARQL interface.
The tests performed to store alerts showed the times needed to save them, which can
be acceptable for a prototype but not for a production system that receives tens of
alerts per second. Thus, several approaches have been taken to improve these times. On
the one hand, the Jena SDB library has been used to optimize the storage of the ontology
in a database. On the other hand, the use of SPARQL/Update has been proposed to
limit the saving time to the information contained in each alert. Another improve-
ment has been parsing alerts continuously, to avoid launching a Java process
each time an IDMEF message arrives at the instance generator. In this way, we could
halve the storage time with respect to the initial approach.
With respect to the query modules, we have performed preliminary tests with good re-
sults. We will run further tests, modifying the size of the knowledge base to
check how the system performs with larger data sets. It is also important to note that
the instances of old alerts are periodically deleted from the knowledge base, which
prevents its size from growing ad infinitum.
As further future work, we will study how to perform inference with the information
contained in the knowledge bases.
Acknowledgements. This work has been done in the framework of the collaboration
with Telefónica I+D in the project SEGUR@ (reference CENIT-2007 2004,
https://www.cenitsegura.es), funded by the CDTI, Spanish Ministry of Science and
Innovation, under the CENIT program.
References
1. Dubie, D.: Users shoring up net security with SIM. Network World (September 30, 2001)
2. Debar, H., Curry, D., Feinstein, B.: The Intrusion Detection Message Exchange Format
(IDMEF). IETF Request for Comments 4765 (March 2007)
3. Gruber, T.R.: A Translation Approach to Portable Ontology Specifications. Knowledge
Acquisition 5(2), 199–220 (1993)
4. Undercoffer, J., Joshi, A., Pinkston, A.: Modeling computer attacks: an ontology for intru-
sion detection. In: Vigna, G., Krügel, C., Jonsson, E. (eds.) RAID 2003. LNCS, vol. 2820,
pp. 113–135. Springer, Heidelberg (2003)
5. Geneiatakis, D., Lambrinoudakis, C.: An ontology description for SIP security flaws.
Computer Communications 30(6), 1367–1374 (2007)
6. Dritsas, S., Dritsou, V., Tsoumas, B., Constantopoulos, P., Gritzalis, D.: OntoSPIT: SPIT
management through ontologies. Computer Communications 32(1), 203–212 (2009)
7. Joseki – A SPARQL Server for Jena, http://www.joseki.org/
8. Jena – A Semantic Web Framework for Java, http://jena.sourceforge.net/
9. Prud’hommeaux, E., Seaborne, A.: SPARQL Query Language for RDF. W3C Recommen-
dation (January 15, 2008)
10. SDB - A SPARQL Database for Jena, http://jena.sourceforge.net/SDB/
11. McGuinness, D.L., van Harmelen, F.: OWL Web Ontology Language Overview. W3C
Recommendation (February 10, 2004)
12. López de Vergara, J.E., Vázquez, E., Martin, A., Dubus, S., Lepareux, M.N.: Use of on-
tologies for the definition of alerts and policies in a network security platform. Journal of
Networks 4(8), 720–733 (2009)
13. Gennari, J.H., Musen, M.A., Fergerson, R.W., Grosso, W.E., Crubézy, M., Eriksson, H.,
Noy, N.F., Tu, S.W.: The evolution of Protégé: an environment for knowledge-based sys-
tems development. Int. J. Hum.-Comput. Stud. 58(1), 89–123 (2003)
14. Object Management Group: Ontology Definition Metamodel Version 1.0. OMG document
number formal/2009-05-01 (May 2009)
15. ARQ - A SPARQL Processor for Jena, http://jena.sourceforge.net/ARQ/
16. Seaborne, A., Manjunath, G., Bizer, C., Breslin, J., Das, S., Davis, I., Harris, S., Idehen,
K., Corby, O., Kjernsmo, K., Nowack, B.: SPARQL Update, A language for updating
RDF graphs. W3C Member Submission (July 15, 2008)
WASAT – A New Web Authorization Security Analysis Tool
1 Introduction
Nowadays, web applications handle more and more sensitive information. As a conse-
quence, web applications are an attractive target for attackers, who are able to perform
attacks with devastating consequences. Therefore, the proper protection of these
systems is very important, and it becomes necessary for site administrators to as-
sess the security of their web applications.
In addition, these days most network-capable devices, including simple con-
sumer electronics such as printers and photo frames, have an embedded web interface
for easy configuration [1]. These web interfaces can also suffer a large variety of
attacks, and therefore they should also be protected [1].
This paper presents a tool for assessing the security of different web authentication
schemes.
Usually, some web application areas have restricted access. Authentication makes it
possible to verify the identity of the person accessing the web application.
Our tool is able to analyse the security of web applications using two HTTP au-
thentication schemes, namely Basic Authentication and Form-Based Authentication.
Basic Authentication is a challenge-response mechanism that is used by a
server to challenge a client and by a client to provide authentication information. In
this scheme the user agent authenticates itself by providing a user-ID and a password
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 39–49, 2010.
© Springer-Verlag Berlin Heidelberg 2010
40 C. Torrano-Gimenez, A. Perez-Villegas, and G. Alvarez
when accessing a protected space. The server will authorize the request only if it
can validate the user-ID and password for the protection space corresponding to the
URI of the request.
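For illustration, this is how a client encodes the Basic Authentication credentials; the user-ID/password pair below is the classic example from RFC 7617.

```python
import base64

def basic_auth_header(user_id, password):
    """Return the HTTP Authorization header value for Basic Authentication:
    'Basic ' followed by base64("user-ID:password")."""
    token = base64.b64encode(("%s:%s" % (user_id, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")

header = basic_auth_header("Aladdin", "open sesame")
# header == "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
```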
Form-Based Authentication is the most widely used authentication scheme.
When the client accesses a protected service or resource, the user is required to fill in
a form with a username and a password. These credentials are submitted to the
web server, where they are validated against the database containing the usernames
and passwords of all users registered in the web application. Access will
only be allowed if the credentials are present in the database.
Further information about these HTTP authentication schemes is presented in
Sect. 2.
WASAT can be applied against any web application that has an authentication
mechanism. The tool can mount dictionary and brute-force attacks of varying com-
plexity against the target web site. User and password files can be configured to be
used as the search space. Variations on the passwords can be generated using a simple
special syntax in the password file, which makes it possible to perform exhaustive searches.
Low-signature attacks can also be developed with this tool, in order to avoid detec-
tion. Several strategies can be used to generate low-signature attacks, such as distributing
the requests of a user over several time periods.
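One such strategy, spacing the attempts for a given user over time, can be sketched as follows; the function and parameter names are illustrative, not WASAT's actual interface.

```python
def low_signature_offsets(n_attempts, inter_request_time):
    """Return the time offset (in seconds) of each login attempt for one
    username, so that consecutive attempts are separated by at least
    inter_request_time seconds and evade time-based detection thresholds."""
    return [i * inter_request_time for i in range(n_attempts)]

# Four attempts, one every 30 s: offsets 0, 30, 60 and 90 seconds.
offsets = low_signature_offsets(4, 30)
```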
The number of threads used by the application can be configured by the user in or-
der to improve the speed of the program. A list of proxies can also be specified by the
user in order to make the requests anonymous.
The configuration session data can be stored in a file and opened later, making it eas-
ier to initialize a new session. Moreover, the process can be paused and continued later.
WASAT also has a useful and complete help file for users.
The rest of the paper is organized as follows. Section 2 reviews different authenti-
cation schemes. Section 3 describes several mechanisms that can be used by web servers
to detect brute-force attacks. Section 4 discusses related work. Section 5 explains the
features and the behavior of WASAT. Section 6 presents future
work and, finally, Sect. 7 draws the conclusions of this work.
This is the most common authentication scheme, used in web servers with thousands
and even millions of users. It consists of a database table storing the user-
names and passwords of all users. When the protected service or resource is to be
accessed, the user fills in a form with the corresponding username and password.
These credentials are submitted to the web server, where they are validated against
the database. If the username and password exist in the database, access is granted;
otherwise, the user is rejected.
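Server-side, the validation described above reduces to a lookup in the users table. A minimal sketch follows (in-memory SQLite, plaintext passwords for brevity; real deployments store salted hashes):

```python
import sqlite3

# Hypothetical users table; production systems must store salted
# password hashes rather than plaintext values.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT PRIMARY KEY, password TEXT)")
db.execute("INSERT INTO users VALUES (?, ?)", ("alice", "s3cret"))

def credentials_valid(username, password):
    """Grant access only if the submitted pair exists in the users table."""
    row = db.execute(
        "SELECT 1 FROM users WHERE username = ? AND password = ?",
        (username, password)).fetchone()
    return row is not None
```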
The HTML web form includes at least two input text fields: the username and the
password. Many other fields, usually included in the form as hidden
fields, may additionally be present. Moreover, the authentication process may require the
presence of certain cookies and HTTP headers, such as “Referer:” and “User-Agent:”. When the
user is successfully logged in, a token or a cookie may be issued to the
user, or a memory space may be assigned in the server, which will identify the user in
future requests without asking again for validation.
4 Related Work
There are several popular tools similar to our application, such as Crowbar [3], Brutus
[4], Caecus [5], THC-Hydra [6] and WebSlayer [7].
All these tools have been tested and several of their features have been considered.
The importance of some of these features was explained in Sect. 3.
The considered features are the following:
• Multi-Threading. It refers to the ability to establish different connections with the
server concurrently and speed up the process.
• Proxy Connection. Using proxies makes it possible to establish anonymous connec-
tions to the server.
• Password Generation. Automatic password generation allows the user to build many
password combinations without writing a huge wordlist.
• Inter-Request Time. It refers to the minimum time interval between attempts with
the same username.
• Restore Sessions. The use of sessions lets the user restore previously aborted sessions.
• Multi-Platform. It means the tool can run on any platform, i.e., the application is
not platform-dependent.
Proxy Connection and Inter-Request Time make it possible to avoid IP-based and
time-based anti-brute-force mechanisms, respectively.
In Table 1, these tools are compared against WASAT according to the selected
features.
An experimental comparison regarding the time required for brute-force attacks has
not been included in this paper, as it depends on the bandwidth and the server load.
5 Application Description
WASAT offers the possibility to specify the configuration of the target web applica-
tion and the desired authentication method to be used. The program preferences can
also be configured by the user. After specifying the configuration, the analysis can
start; it can also be paused or stopped. The configuration parameters of every session
can be saved in a file, and a configuration file can be loaded as well.
The current version of WASAT can be downloaded from http://www.iec.csic.es/wasat.
A snapshot of the main window of WASAT is presented in Fig. 1.
Target Definition. The target web application is defined by the URL and the port.
The URL should refer to the login page of the web application. Usually, this
parameter corresponds to the string in the “action” attribute of the HTML <FORM> tag.
It is important to note that the URL should be correct and complete for the analysis to
be done properly. The port number is used to establish the HTTP connection. This
information can usually be gathered from the form definition. Its default value is 80.
Next, the type of authentication used to protect the page is selected: either Basic or
Form-Based authentication can be chosen. By selecting the “Start/Continue from
position” option, the analysis will continue from a previously interrupted session,
exactly from the point where it was paused. By checking the “Stop if succeeded” option,
the analysis will stop after the first correct username/password pair is guessed. Oth-
erwise, the program will continue running until all possible usernames and passwords
have been tried.
Basic Authentication. If Basic authentication was selected in the “Target” tab, the
error code should be chosen in this tab. There are two possible values for the HTTP
Error Code: “200 OK” or “302 Object Moved”.
Request Settings
These are parameters regarding the request settings:
• Method. This is the HTTP method used in the form submission. The default value
of this parameter is “GET”.
• User ID. This parameter refers to the name of the input text element corresponding
to the username used in the form.
• Password ID. This parameter corresponds to the name of the input text element
corresponding to the password used in the form.
• Arguments. This parameter is optional. All other input arguments used by the form
should be written here. They are usually the hidden fields in the form. The submit
button name and value should be included too. It is important that every argument
(except the first one) is preceded by the “&” sign. Note that this text should be
URL-encoded; thus, for example, blanks or spaces are not allowed and must be
replaced by a “+” sign.
• Referer. This parameter is optional. The “Referer” header should be written here in
case the login page requires it.
• User Agent. This parameter is optional. The user can enter the “User-Agent”
header if the login page requires it.
• Cookie. This parameter is optional. The “Cookie” header can be established in this
parameter in case the login page needs it.
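The encoding rules for the “Arguments” parameter match standard application/x-www-form-urlencoded encoding, which can be reproduced as follows (the field names below are made up):

```python
from urllib.parse import urlencode

# '&'-separated key=value pairs, with spaces encoded as '+', exactly as
# required by the Arguments field. The field names are hypothetical.
args = urlencode({"hidden_token": "abc 123", "submit": "Log In"})
# args == "hidden_token=abc+123&submit=Log+In"
```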
HTML Response
Some parameters concerning the HTML response should also be filled in. It is necessary
to distinguish between the error page returned after an unsuccessful login attempt (the creden-
tials are wrong and the request failed) and the welcome page returned after a successful at-
tempt (the credentials are correct and the request succeeded). WASAT provides two
stop methods to differentiate these pages.
The first method uses words that appear only in the error page or in the welcome
page to distinguish them. The second method is based on the lengths of the pages.
Firstly, the user should choose the method: “Search for string” or “Content-Length
comparison”.
The “search for string” method checks for the presence of a word or sentence
that appears only in the welcome page, or for the absence of a word or sentence
that appears only in the error page. This option needs to retrieve the whole page to
search for the given string. In this case, the parameters are the following:
• Succeed. It is any sentence that appears only in the page reached after a valid
username/password pair has been guessed. This parameter is optional, since in
many cases it is not known in advance.
• Failure. It should contain any sentence that appears in the error login page (and
never in the correct page) after an invalid username/password pair has been
checked. This parameter is mandatory.
The “content-length comparison” method checks the lengths of the error and welcome
pages. This method does not require retrieving the whole page, only the headers,
and is therefore much faster. If this option is chosen, the parameters are the following:
• Succeed. This is an optional parameter. It refers to the length in bytes of the wel-
come page.
• Failure. It is mandatory. It is the length in bytes of the error page.
• Variation. This parameter is optional. It can be supplied in order to
accommodate small variations due to banners or other changing elements in web
pages, which may affect the total length of the page.
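Both stop methods amount to a simple test on the server response, sketched below; the parameter names are illustrative, not WASAT's actual interface.

```python
def login_failed(body, length, failure_string=None,
                 failure_length=None, variation=0):
    """Return True if the response corresponds to the error page, using
    either the 'search for string' method (failure_string given) or the
    'content-length comparison' method (failure_length given)."""
    if failure_string is not None:
        return failure_string in body
    return abs(length - failure_length) <= variation
```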
A snapshot of the Configuration window is shown in Fig. 2.
Wordlists. In this tab the wordlist files and the processing instructions are defined.
Wordlist files
The program reads a list of usernames from a file and, for each username, tries to log in
using every password defined in the password list file. In order to generate low-
signature attacks, the application can also read a list of passwords from a file and, for
each password, try to log in using every username defined in the usernames list file.
Processing Instructions
There are also some processing instructions that can be used to be more specific about
the use of the password file:
• Do not process passwords with spaces. If this option is checked, passwords con-
taining spaces are ignored.
• Process all passwords as lowercase. In this case, all passwords in the password file
will be converted into lowercase before being used against the target web site.
• Minimum password length. If this option is selected, passwords containing fewer
than the given number of characters are ignored.
• Maximum password length. In this case, passwords containing more than the given
number of characters are not checked.
• Reverse search. If this option is selected WASAT reads the passwords and for
every password, tries to log in with every username. If this option is not selected,
WASAT reads the usernames, and for every username tries to log in with every
password.
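The processing instructions above can be sketched as a simple filter over the password wordlist (a sketch of the described options, not WASAT's actual code):

```python
def filter_passwords(passwords, skip_spaces=False, to_lowercase=False,
                     min_len=None, max_len=None):
    """Apply the documented processing options to a password wordlist."""
    result = []
    for pw in passwords:
        if skip_spaces and " " in pw:
            continue  # "Do not process passwords with spaces"
        if to_lowercase:
            pw = pw.lower()  # "Process all passwords as lowercase"
        if min_len is not None and len(pw) < min_len:
            continue  # "Minimum password length"
        if max_len is not None and len(pw) > max_len:
            continue  # "Maximum password length"
        result.append(pw)
    return result
```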
All the information entered through the four tabs can be saved in a definition file.
Opening this file simplifies the task of initializing the program for a new brute force
session.
Syntax for Password File. The program provides a special syntax to be used in the
password file, which makes it possible to generate variations of the passwords. Using this
syntax, more than one request per username and password can be generated. This
makes the search space bigger, and thus the tool is more effective and the security
analysis more precise.
Comments in the password file are preceded by “#”. Blank lines are ig-
nored. There are several keywords that can be used to modify the passwords:
• $USER: it tries the username as a password.
• $REV: it tries the reversed username as a password.
• $BLANK: it tries the blank password.
• $Dn: it tries all combinations of the digits 0 to 9 of length n. Example: $D2 will try 00, 01,
…, 10, 11, …, 98, 99.
• $Ln: it tries all combinations of the lowercase letters 'a' to 'z' of length n. Example: $L6 will try
aaaaaa, aaaaab, …, zzzzzy, zzzzzz.
WASAT- A New Web Authorization Security Analysis Tool 47
• $Un: it tries all combinations of the uppercase letters 'A' to 'Z' of length n. Example: $U4 will try
AAAA, AAAB, …, ZZZY, ZZZZ.
• $Wn: it tries all combinations of digits and letters (uppercase and lowercase) of length n.
Example: $W5 will try 00000, 00001, …, AAAAA, AAAAB, …,
ZZZZZ, aaaaa, aaaab, …, zzzzy, zzzzz.
The above keywords can be used in any position or even alone. The only limitation is
that several keywords cannot be used in the same password definition.
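The $Dn, $Ln and $Un keywords enumerate every string of length n over the corresponding alphabet; a sketch of the expansion ($USER, $REV, $BLANK and $Wn are omitted for brevity):

```python
import string
from itertools import product

ALPHABETS = {"D": string.digits,           # $Dn: digits 0-9
             "L": string.ascii_lowercase,  # $Ln: letters a-z
             "U": string.ascii_uppercase}  # $Un: letters A-Z

def expand_keyword(keyword):
    """Expand a '$Xn' keyword into every string of length n over its
    alphabet, in order. E.g. '$D2' yields '00', '01', ..., '99'."""
    kind, n = keyword[1], int(keyword[2:])
    return ["".join(t) for t in product(ALPHABETS[kind], repeat=n)]
```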
Proxies. A list of proxies can be defined when needed. The option “Use Proxy” in the
“General” tab should be checked to use the list. Specifying a list of proxies makes the
requests anonymous. The following information is needed for every defined proxy:
• Host. It refers to the proxy server IP address or host name.
• Port. It is the proxy server port number.
If authentication is needed to use the proxy, then the option “Authentication re-
quired” should be checked and the following parameters entered:
• Username. It is a valid username.
• Password. It is a valid password.
Logging. In this tab the settings about the log file can be established.
• Log File. The user can check the option “Log results to file” if the results are to be
logged to a file, whose path and name must be specified too. When the
option “Log activity report” is checked, general operations performed by the program, such as
opening or closing files and initializing or terminating, will be logged to the file.
5.3 Commands
Definition File. The button “New” starts a new analysis session. All the information
entered in the configuration frame can be saved in a definition file by clicking
“Save”. Clicking the “Open” button loads a definition file into the
configuration. Opening this file simplifies the task of initializing the
program for a new brute-force session.
Analysis Execution. Clicking the “Start” button starts the analysis, using the
parameters established in the configuration and the preferences. The analysis can be
paused and later resumed, or completely stopped.
6 Future Work
These days, many web applications use captchas [8] in order to determine
whether the user is a human or a machine. The use of captchas has become a very
popular mechanism for web applications to prevent brute-force attacks. To our knowl-
edge, none of the existing authentication security tools implements a means to bypass
this barrier.
As future work, we are working to include an anti-captcha mechanism in WASAT,
using artificial intelligence techniques. This feature will let the application bypass the
captcha barrier and permit the assessment of a wider range of web applications.
7 Conclusions
An intuitive and complete Web Authorization Security Analysis Tool has been
presented in this paper. This application is designed for the security assessment of
different web-related authentication schemes, namely Basic Authentication and
Form-Based Authentication. The configuration of the analysis process against the
target web application and the program preferences can be specified by the user.
The application is platform independent, and presents several advantages compared
with other popular existing tools, while having hardly any of their drawbacks. First,
WASAT has features that make the authentication assessment easier for the user, such as
automatic password generation, wordlist variations, restoring of aborted sessions, and a
complete and user-friendly help. Second, WASAT has features that evade time-based
and IP-based anti-brute-force mechanisms on the server side, such as the mounting of
low-signature attacks and proxy connections. Third, the use of multithreading improves the
efficiency drastically, making it possible to perform multiple authentication attempts
simultaneously.
Acknowledgements
We would like to thank the Ministerio de Industria, Turismo y Comercio, project
SEGUR@ (CENIT2007-2010), project HESPERIA (CENIT2006-2009), the Ministe-
rio de Ciencia e Innovacion, project CUCO (MTM2008-02194), and the Spanish
National Research Council (CSIC), programme JAE/I3P.
References
1. Bojinov, H., Bursztein, E., Lovett, E., Boneh, D.: Embedded Management Interfaces:
Emerging Massive Insecurity. In: Black Hat Technical Security Conference, Las Vegas,
NV, USA (2009)
2. Berners-Lee, T., Fielding, R., Frystyk, H.: Hypertext Transfer Protocol – HTTP/1.0. (1996),
http://www.ietf.org/rfc/rfc1945.txt
3. Crowbar: Generic Web Brute Force Tool (2006),
http://www.sensepost.com/research/crowbar/
4. Hoobie: Brutus (2001), http://www.hoobie.net/index.html
5. Sentinel: Caecus. OCR Form Bruteforcer (2003),
http://sentinel.securibox.net/Caecus.php
6. Hauser, V.: THC-Hydra (2008), http://freeworld.thc.org/thc-hydra/
7. Edge-Security: WebSlayer (2008),
http://www.edge-security.com/webslayer.php
8. Carnegie Mellon University: CAPTCHA: Telling Humans and Computers Apart Automati-
cally (2009), http://www.captcha.net/
Connection String Parameter Pollution Attacks
Abstract. In 2007, the classification of the ten most critical vulnerabilities for
the security of a system established that code injection attacks were the second
most common type of attack, behind XSS attacks. Currently, code injection attacks are
placed first in this ranking. In fact, the most critical attacks are those that combine
XSS techniques to access systems with code injection techniques to access the
information. The potential damage associated with this type of threat, the total
absence of prior work, and the fact that the solution to mitigate this vulnerabil-
ity must be implemented by system administrators and database vendors
justify an in-depth analysis to estimate all the possible ways of implementing
this attack technique.
1 Introduction
SQL injection attacks are probably the best-known attacks related to a web applica-
tion through its database architecture. A great deal of research on this kind
of vulnerability concludes that establishing the correct filtering levels on the
inputs of the system is the development team's task for preventing such attacks from
succeeding.
In the case of the attack presented in this article, the responsibility rests not
only on the developers, but also on the system administrator and the database ven-
dor. This is an injection attack that affects web applications, but rather than focusing on
their implementation, it focuses on the connections established between the applica-
tion and the database.
According to OWASP [1], in 2007 the classification of the ten most critical vulner-
abilities for the security of a system established that code injection attacks were the
second most common type of attack, behind XSS attacks. In 2010, code injection attacks
occupy the first position in this ranking. Currently, the most widely used and most critical
attacks are those that combine XSS techniques to access systems with code injection
techniques to access the information. This is the case for the so-called connection
string parameter pollution attacks. The potential criticality of this type of vulnerability
and the total absence of prior work justify an in-depth analysis to estimate all vectors
of implementation of this attack technique.
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 51–62, 2010.
© Springer-Verlag Berlin Heidelberg 2010
52 C. Alonso et al.
The paper is structured in three main sections. The first is this short intro-
duction, where the most significant aspects of connection strings and the existing
mechanisms for implementing web application authentication are briefly
introduced. Section 2 proposes a comprehensive study of this new at-
tack technique, with an extensive collection of test cases. Finally, the article con-
cludes by briefly summarizing the lessons learned from this work.
Connection strings [2] are used to connect applications to database engines. The syn-
tax used in these strings depends on the database engine to be connected to and on
the provider or driver used by the programmer to establish the connection.
One way or another, the programmer must specify the server to connect to, the da-
tabase name, the credentials to use, and the connection configuration parameters, such
as the timeout, alternate databases, communication protocol or encryption options.
The following example shows a common connection string used to connect to a
Microsoft SQL Server database:
“Data Source=Server,Port; Network Library=DBMSSOCN;
Initial Catalog=DataBase; User ID=Username;
Password=pwd;”
As can be seen, a connection string is a collection of parameters separated by
semicolons (;), each containing a key/value pair. The attributes used in the example
correspond to those used in the “.NET Framework Data Provider for SQL Server”,
which is chosen by programmers when they use the “SqlConnection” class in their
.NET applications. Obviously, it is possible to connect to SQL Server using different
providers, such as:
- “.NET Framework Data Provider for OLE DB” (OleDbConnection)
- “.NET Framework Data Provider for ODBC” (OdbcConnection)
- “SQL Native Client 9.0 OLE DB provider”
The most common and recommended way to connect SQL Server
and .NET applications is to use the default Framework provider, where the connec-
tion string syntax is the same for the different versions of SQL Server (7, 2000, 2005
and 2008). This is the one chosen in this article to illustrate the examples.
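The key/value structure described above can be parsed with a few lines, as in the following simplified sketch (real providers also handle quoting and escaping):

```python
def parse_connection_string(conn_str):
    """Split a connection string into a dict of its key/value pairs
    (semicolon-separated 'key=value' attributes)."""
    params = {}
    for part in conn_str.split(";"):
        part = part.strip()
        if not part:
            continue
        key, sep, value = part.partition("=")
        if sep:
            params[key.strip()] = value.strip()
    return params

params = parse_connection_string(
    "Data Source=Server,Port; Network Library=DBMSSOCN; "
    "Initial Catalog=DataBase; User ID=Username; Password=pwd;")
```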
There are two ways to define an authentication system for a web application: creating
a custom credential system, or delegating authentication to the database engine.
In most cases, the application developer chooses to use only one user to con-
nect to the database. This user represents the web application inside the database
engine. Using this connection, the web application queries a custom
users table where the user credentials are managed.
As only one user can access all the content of the database, it is impossible to imple-
ment a granular permission system over the different objects in the database, or to
trace the actions of each user; these tasks are delegated to the web application itself. If an
attacker is able to take advantage of any vulnerability of the application to access the
database, it will be completely exposed. This architecture is the one used by CMS
systems such as Joomla or Mambo, among others widely used on the Internet.
The target of any attacker is to extract the rows of the users table in order to access the
users' credentials.
The alternative consists in delegating the authentication process, so that the connection
string is used to check the user credentials, leaving all the responsibility to the data-
base engine. This system allows applications to delegate the credential management
system to the database engine.
This alternative must be used in all applications that manage the
database engine itself, since it is necessary to connect to the system with users who
have special permissions or roles in order to perform administration tasks.
Aware of the possibility of performing this kind of injection [3] in connection strings, Microsoft included the ConnectionStringBuilder classes [4] in .NET Framework 2.0. They allow secure connection strings to be created through the base class (DbConnectionStringBuilder) or through the provider-specific classes (SqlConnectionStringBuilder, OleDbConnectionStringBuilder, etc.). These classes only accept key/value pairs, and injection attempts are neutralized by escaping them.
Using these classes whenever a connection string is built dynamically prevents the injections. However, they are not used by all developers nor, of course, by all applications.
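As an illustration of the escaping these builder classes perform, the following Python sketch (the function name is ours; the real classes live in .NET's System.Data.Common namespace) mimics the key/value quoting behavior:

```python
def build_connection_string(params):
    """Build a connection string from key/value pairs, quoting any value
    that contains characters usable for injection, a rough sketch of
    what .NET's DbConnectionStringBuilder does."""
    parts = []
    for key, value in params.items():
        value = str(value)
        if any(c in value for c in ";'\" "):
            # Quote the value and double embedded quotes so they
            # cannot terminate the literal early.
            value = "'" + value.replace("'", "''") + "'"
        parts.append(key + "=" + value)
    return ";".join(parts)

# A malicious password can no longer smuggle extra parameters in:
print(build_connection_string({
    "Data Source": "SQL2005",
    "User ID": "app",
    "Password": "p; Integrated Security=yes",
}))
# Data Source=SQL2005;User ID=app;Password='p; Integrated Security=yes'
```

The essential point is that a value containing a separator is confined to its own key, so it can no longer introduce new parameters.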
Parameter pollution techniques are used to override parameter values. They are well known in the HTTP environment [5] but are applicable to other environments too. In this case, parameter pollution techniques can be applied to the parameters of the connection string, allowing several attacks.
In order to explain these attacks, this article uses as an example a web application served by a Microsoft Internet Information Services web server running on Microsoft Windows Server, where a user [User_Value] and a password [Password_Value] are required. These values are inserted into a connection string for a Microsoft SQL Server database, as shown in this example:
Data source = SQL2005; initial catalog = db1;
integrated security=no;
user id=+’User_Value’+; Password=+’Password_Value’+;
As can be seen, the application is making use of Microsoft SQL Server users to access the database engine. Taking this information into account, an attacker can perform a Connection String Parameter Pollution attack. The idea of this attack is to add to the connection string a parameter that already exists in it. The component used in .NET applications sets each parameter to the last value found in the connection string. This means that if a connection string has two Data Source parameters, the one used is the last one. Knowing this behavior, and in this environment, the following CSPP attacks can be performed.
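The last-value-wins behavior can be illustrated with a small parser sketch (Python used only for illustration; the behavior itself belongs to the .NET connection components):

```python
def parse_connection_string(cs):
    """Parse "key=value;..." pairs; a repeated key keeps the LAST value,
    mirroring the observed behavior of the .NET connection components."""
    params = {}
    for fragment in cs.split(";"):
        if "=" in fragment:
            key, _, value = fragment.partition("=")
            params[key.strip().lower()] = value.strip()
    return params

# The attacker submits "; Data Source = Rogue_Server" inside the user field:
cs = ("Data source = SQL2005; initial catalog = db1; integrated security = no; "
      "user id = ; Data Source = Rogue_Server; Password = x")
print(parse_connection_string(cs)["data source"])  # prints: Rogue_Server
```

The first Data Source value, SQL2005, is silently replaced by the injected one, which is the core of every CSPP variant described below.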
As can be seen in Fig. 5, when the port is listening, as in the current example, the error message obtained shows that no Microsoft SQL Server is listening on it, but a TCP connection was established.
In the second case, a TCP connection could not be completed and the error message is different. Using these error messages, a complete TCP scan can be performed against a server. Of course, this technique can also be used to discover internal servers within the DMZ in which the web application is running.
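The distinction behind the two error messages is simply whether the TCP handshake completed. A minimal Python sketch of the same check (illustrative only; in the real attack the probe is the polluted Data Source value and the oracle is the web application's error page):

```python
import socket

def tcp_state(host, port, timeout=2.0):
    """Classify a port the way the polluted connection attempt does:
    'open' if the TCP handshake completes (even when no SQL Server
    answers on it, as in Fig. 5), 'closed' if the connection is
    refused or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError:
        return "closed"
```

Repeating this probe over a port range, through the injected Data Source value, yields the TCP scan described above.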
2.3.3.1 Example 3: SQL Server Web Data Administrator. This tool is a project, originally developed by Microsoft, which was later released as an open project. Today it is still possible to download the last version that Microsoft released, in 2004, from Microsoft servers [13], but the latest one, released in 2007, is hosted on the Codeplex web site [14]. The version hosted on Codeplex is secure against this type of attack because it uses ConnectionStringBuilder to construct the connection string dynamically.
The version published on the Microsoft web site is vulnerable to CSPP attacks. As can be seen in the following screenshots, it is possible to get access to the system using this type of attack.
An attacker can log into the database engine and hence into the Web application to manage the whole system. As can be seen in Fig. 9, this is due to the fact that all the users and the network services have access to the server.
Fig. 10 shows how the Data Source parameter has been injected with the localhost value after the User ID parameter. Data Source is also the first parameter of the connection string. In this example the two values are different; however, the one taken into consideration is the last one, that is, the injected one.
The same happens with the Integrated Security parameter, which initially appears with the value NO; the value that counts, however, is the one injected through the password parameter, YES. The result is total access to the server with the system account under which the web application runs, as can be seen in Fig. 11.
2.3.3.3 Example 5: ASP.NET Enterprise Manager. The same attack also works on the latest public version of ASP.NET Enterprise Manager, so, as can be seen in the following login form, an attacker can perform the CSPP injection to get access to the web application.
As a result, access can be obtained, as can be seen in the following screenshot.
3 Conclusions
All these examples show the importance of filtering any user input in web applications. Moreover, these examples are clear proof of the importance of maintaining software. Microsoft released ConnectionStringBuilder in order to avoid these kinds of attacks, but not all projects were updated to use these new and secure components.
These techniques also apply to other databases, such as Oracle databases, which allow administrators to set up integrated security to the database. Besides, in Oracle connection strings it is possible to change the way a user gets connected by forcing the use of a sysdba session.
MySQL databases do not allow administrators to configure an integrated security authentication process. However, it is still possible to inject code and manipulate connection strings to try to connect to internal servers that were used by developers and not published on the Internet.
In order to avoid these attacks, the semicolon must be filtered, all parameters must be sanitized, and the firewall should be hardened to filter not only inbound connections but also outbound connections from internal servers sending NTLM authentication traffic over the Internet. Database administrators should also apply a hardening process to the database engine, restricting access to only the necessary users through a least-privilege policy.
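A minimal parameter sanitization sketch (Python for illustration; the function name and policy are ours) that enforces the semicolon filtering recommended above by rejecting values able to break out of their position in a connection string:

```python
def sanitize_cs_parameter(value):
    """Reject values that could inject or override connection string
    parameters; return the value unchanged only if it is safe."""
    forbidden = set(";='\"")
    if any(c in value for c in forbidden):
        raise ValueError("illegal character in connection string parameter")
    return value

sanitize_cs_parameter("alice")  # accepted unchanged
# sanitize_cs_parameter("x; Integrated Security=yes") raises ValueError
```

Rejecting is the conservative option; escaping through a builder class, as discussed earlier, is the permissive alternative.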
References
1. The Open Web Application Security Project, http://www.owasp.org
2. ConnectionStrings.com, http://www.connectionstrings.com
3. Ryan, W.: Using the Sql Connection String Builder to guard against Connection String Injection Attacks, http://msmvps.com/blogs/williamryan/archive/2006/01/15/81115.aspx
4. Connection String Builder (ADO.NET), http://msdn.microsoft.com/en-us/library/ms254947.aspx
5. Carettoni, L., di Paola, S.: HTTP Parameter Pollution, http://www.owasp.org/images/b/ba/AppsecEU09_CarettoniDiPaola_v0.8.pdf
6. Cain, http://www.oxid.it/cain.html
7. ASP.NET Enterprise Manager in SourceForge, http://sourceforge.net/projects/asp-ent-man/
8. ASP.NET Enterprise Manager in MyOpenSource, http://www.myopensource.org/internet/asp.net+enterprise+manager/download-review
9. PHPMyAdmin, http://www.phpmyadmin.net/
10. myLittleAdmin, http://www.mylittleadmin.com
11. myLittleBackup, http://www.mylittlebackup.com
12. myLittleTools, http://www.mylittletools.net
13. Microsoft SQL Server Web Data Administrator, http://www.microsoft.com/downloads/details.aspx?FamilyID=c039a798-c57a-419e-acbc-2a332cb7f959&displaylang=en
14. Microsoft SQL Server Web Data Administrator Codeplex project, http://www.codeplex.com/SqlWebAdmin
Web Applications Security Assessment in the Portuguese
World Wide Web Panorama
1 Introduction
One of the current computing trends is the distribution of information systems, in particular over the Internet. Critical systems are constantly deployed on the World Wide Web, where crucial and confidential information crosses the information highway or is stored in an insecure, remotely located database.
Most of these critical systems are used on a daily basis, and there is an inherent sense of security around each of these web applications that may not correspond to their real security status and real needs. Andrey Petukhov and Dmitry Kozlov [1] refer to a survey which states that 60% of vulnerabilities actually affect web applications, further emphasizing the concern about the relation between web applications and classified information. The objective of this paper is to focus on the Portuguese web application security panorama, which will be divided into two major areas: government online public services and online banking web applications. Although these two main areas differ from each other, they have a common front-end to communicate
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 63–73, 2010.
© Springer-Verlag Berlin Heidelberg 2010
64 N. Teodoro and C. Serrão
with people - web applications - which will allow the entire testing process and subse-
quent methodologies to be the same, or very similar.
These assessments are mostly motivated by the perceived lack of investment in security and the everyday growth of new attacks on the web, including attacks against critical web applications [2]. The aim of this paper is to check for vulnerabilities/exploits and to produce a report for each of the tested web applications in order to communicate the testing methods, the vulnerabilities found and the corrective measures that need to be taken to mitigate those flaws and ultimately benefit the end-user.
This paper proposes the use of a web application security analysis methodology to determine the applications' security level against some of the most frequently identified security threats, based on best practices in the web application security field. This work will make use of freely available security assessment frameworks and automated tools to conduct these security evaluation tests, taking advantage of the large set of tools and documents produced by the Open Web Application Security Project (OWASP) and other similar initiatives.
The Portuguese government public services web applications were mostly created after the launch of the Simplex [3] program in 2006. Simplex is a strategic priority of the Portuguese Government, launched with the ambitious objective of decentralizing most of the public services, reducing the gap between citizens and the public administration, reinforcing the idea of a great investment in the technology sector and making the public sector more and more efficient. As a result, most public services offered by the government are now supported by information systems available over the World Wide Web. From a citizen's point of view, political interests generate the preconceived idea that these programs often bypass the usual and recommended processes for planning their components and introducing them in the market, choosing deployment speed to the detriment of quality.
The financial sector, in particular banks, has always been the target of an enormous amount of effort by attackers, either rival entities or individual attackers, aiming to compromise clients' assets as well as the banks' credibility.
This work is conducted in the context of an MSc project, which will carry out the necessary security assessments and present the results and conclusions at the end. In the context of this paper it is impossible to present any results; therefore, it mostly focuses on the selection of web applications, the identification of methodologies and the results-processing mechanism.
The first step of this work is an analysis of the different methodologies for conducting web application security assessments. A full web application security assessment would involve inspecting not only the web application itself but also documentation, manuals and other relevant documents, as well as the whole network structure in which the web applications are integrated.
Thus, web application security assessment can be based on several techniques, such as manual inspection and reviews, threat modeling, code review and penetration testing. Since access to these web applications' code, life cycle and other back-end material is not available, the focus has to be directed to penetration testing [4][5]. Penetration testing allows the evaluation of the security mechanisms of these web applications by simulating an attack and capturing the way the web applications react to it. The entire process, whose flow is described in Fig. 1, involves an active analysis of the applications for any weaknesses, such as technical flaws or software vulnerabilities, resulting in reports where the test data are documented.
At this stage, the most common threats to web application security are identified. The OWASP Top 10 [6] is extremely important here, as it provides the necessary guidelines to identify the most exploited vulnerabilities in web applications. The most recent ten most common vulnerabilities identified by OWASP are the following:
1. Cross Site Scripting (XSS)
2. Injection Flaws
3. Malicious File Execution
The list of public administration service portals can, and most probably will, be extended as the work presented in this paper progresses. Many critical actions can be performed through the web applications described in this section, and more services are available inside these portals, so the main entry point will be one of the portals described here. However, during testing and further investigation, different portals or web applications may appear and, if sufficiently relevant, they will be included in the results of this assessment.
Authentication Testing
• Credentials transport over an encrypted channel
• Testing for user enumeration
• Default or guessable (dictionary) user account
• Testing For Brute Force
Authorization Testing
• Testing for path traversal
• Testing for bypassing authorization schema
• Testing for Privilege Escalation
• Buffer Overflows
• User Specified Object Allocation
• User Input as a Loop Counter
• Writing User Provided Data to Disk
• Failure to Release Resources
• Storing too Much Data in Session
AJAX Testing
• AJAX Vulnerabilities
• Testing For AJAX
Authentication
• Brute Force
• Insufficient Authentication
• Weak Password Recovery Validation
Authorization
• Credential/Session Prediction
• Insufficient Authorization
• Insufficient Session Expiration
• Session Fixation
Client-side Attacks
• Content Spoofing
• Cross-site Scripting
Command Execution
• Buffer Overflow
• Format String Attack
• LDAP Injection
• OS Commanding
• SQL Injection
• SSI Injection
• XPath Injection
Information Disclosure
• Directory Indexing
• Information Leakage
• Path Traversal
• Predictable Resource Location
Logical Attacks
• Abuse of Functionality
• Denial of Service
• Insufficient Anti-automation
• Insufficient Process Validation
Although there may be some overlap between the two methodologies, this will help cover web application threats more efficiently, since the references for the penetration tests come from two major organizations that have focused their efforts on this common purpose.
As a final stage, the results of the tests will be collected, including information on how the vulnerabilities can be exploited, what the exploitation risks are and what the vulnerability impact on the web application is; the data from each web application's tests will then be processed and conclusions drawn. Any security issues that are found will be presented to the system owner together with an assessment of their impact and a proposal for mitigation or a technical solution.
As suggested by Andres Andreu [9], the final document should contain data important for the target entity, which should become aware of issues such as:
• The typical modus operandi of attackers
• The techniques and tools attackers rely on to conduct these attacks
• Which exploits attackers will use
• Which data the web application exposes
In order to better analyze and demonstrate the results of these tests to the stakeholders, if needed, the document will be structured with the following sections:
• Executive Summary – a high-level view of the tests, presenting statistics and the target's overall standing with respect to attack susceptibility.
• Risk Matrix – quantifies all the discovered and verified vulnerabilities, categorizes all the issues discovered, identifies all resources potentially affected, and provides all relevant details of the findings together with relevant references, suggestions and recommendations.
• Best Practices (whenever possible) – provides coding or architecture standards.
• Final Summary – a summary of the entire effort and of the overall state of the target of the penetration tests.
The work presented here is not only bounded by technical constraints (as presented in the previous section); it also has to deal with legal considerations, which can be a major obstacle to the success of this work. The following section highlights these issues.
3 Legal Constraints
Besides the normal technical details that need to be handled, one of the major problems/challenges identified within this work is related to legal aspects/constraints. Most of the work described in this paper is bounded by legislation. In particular, penetration testing, when not properly authorized by the tested entity, can have harmful legal consequences.
One stage of this work will be to ask the target entities for permission to perform these tests, which, of course, may or may not be granted. Another issue regards the results: some entities may accept that these tests are performed, mostly because it is in their own interest, but demand that the results remain protected from external viewers.
From one perspective, it is still somewhat a question whether this permission has to be asked in the scope of this project. Although these tests can in some cases present a threat to the web application itself, and consequently to the entity holding it, the intention is not to perform any criminal or malicious act against it, and the tests will rely only on actions that any external user can perform.
Nonetheless, authorizations will be requested and measures will be taken in order to minimize possible legal and functional problems for the targets when performing these tests. These measures can be summarized as:
• Getting the target entity to establish and agree with us, the testers, on clear time frames for the pen testing exercise;
• Getting the target entity to clearly agree that we are not liable for anything going wrong that may have been triggered by our actions;
• Finding out whether the target entity has any non-disclosure agreements that have to be signed prior to the pen tests;
• Getting from the target entity the relevant contacts for any unexpected situation.
As a last resort, if permission is denied, the project scope can be adapted, not invalidating the whole project but changing targets to more receptive ones.
4 Conclusions
The work presented in this paper defines the methodologies, techniques and tools that will be used to conduct the Portuguese web application security assessment. These assessments should be considered of the highest importance by the entities that develop and distribute those web applications, mostly because the applications serve the purpose of performing highly sensitive operations.
A set of Portuguese public services and financial banking services was chosen and a methodology was drawn up, defining testing phases, processes and tools that could identify the most common vulnerabilities in web applications, bounded by the recommendations and best practices advocated by international organizations such as OWASP.
As an end result, it will be clearly identified, for each web application, whether it has security flaws. Reports will be produced clearly explaining which tests were performed and how, which vulnerabilities were identified, and the solutions or workarounds, if found, for mitigating the problems. Information will also be provided on how severe those flaws are and which implications they have, or could have, for the entity holding the web application.
Although full security assessments should also be based on documentation and code review, which can reveal hidden security issues, these penetration tests should provide a very close view of the web applications' security.
This work can also serve as a guideline for extending penetration tests to other web applications, which can be very important and interesting from a business point of view, especially because these tools, methodologies and frameworks are freely available. Penetration testing can provide a huge service to these two sectors, since the Portuguese Government and the banks obviously rely on their reputation and service availability to maintain a certain level of trust with clients, which many times justifies investments in the security area.
In particular, these assessments will allow these entities to answer questions they probably ask themselves every day: "What is our level of exposure?", "Can our critical applications be compromised?" and "What risks are we running by operating on the Internet?".
References
1. Petukhov, A., Kozlov, D.: Detecting Security Vulnerabilities in Web Applications Using
Dynamic Analysis with Penetration Testing, Computing Systems Lab, Department of Com-
puter Science, Moscow State University (2008)
2. Holz, T., Marechal, S., Raynal, F.: New Threats and Attacks on the World Wide Web. IEEE
Computer Society, Los Alamitos (2006)
3. Simplex Program, http://www.simplex.pt
4. Budiarto, R., Ramadass, S., Samsudin, A., Noor, S.: Development of Penetration Testing
Model for Increasing Network Security. IEEE Press, Los Alamitos (2004)
5. Arkin, B., Stender, S., McGraw, G.: Software Penetration Testing. IEEE Press, Los Alamitos (2005)
6. van der Stock, A., et al.: OWASP Top 10 the ten most critical web application security vul-
nerabilities. In: OWASP (2007)
7. Agarwwal, A., et al.: OWASP Testing Guide v3.0. In: OWASP (2008)
8. Auger, R., et al.: Web Application Security Consortium: Threat Classification. WASC Press
(2004)
9. Andreu, A.: Pen Testing for Web Applications. Wiley Publishing, Indianapolis (2006)
Building Web Application Firewalls in High
Availability Environments
Abstract. The number of Web applications and Web services increases every day due to the ongoing migration to this type of environment. In these scenarios it is very common to find all types of vulnerabilities affecting web applications, and traditional methods of protection at the network and transport levels are not enough to mitigate them. Moreover, there are also situations where the availability of the information systems is vital for proper functioning. To protect our systems from these threats, we need a component acting on layer 7 of the OSI model that understands the HTTP protocol, allows us to analyze HTTPS traffic, and is easily scalable. To address these problems, this paper presents the design and deployment of an open source web application firewall, ModSecurity, emphasizing the use of the positive security model and deployment in high-availability environments.
1 Introduction
Due to the large number of threats in web applications, it is essential to protect our
information systems. In that context, it is vitally important to follow a design process
with security measures that ensure the integrity, confidentiality and availability of
these resources.
Generally, most information systems have network-level protections sophisticated enough to block malicious attacks in the first layers of the TCP/IP model; meanwhile, the exploitation of vulnerabilities in the application layer increases, and the existing measures, such as firewalls or intrusion detection systems at the network or transport layer, are not sufficient. Security in Web applications and Web services is a big problem due to the lack of measures to protect systems from these threats.
It is important to note that introducing an application firewall into our network topology increases the points of failure and can reduce the SLA, which is so important in Web environments. Therefore, techniques must be implemented to ensure high availability for business continuity.
The solution is to implement an application firewall that is scalable and responsive to the issues we have raised. To develop the project we will use open source solutions, because they offer low cost and great flexibility to configure and meet the requirements. The open source alternative chosen was ModSecurity [1] because it offers a
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 75–82, 2010.
© Springer-Verlag Berlin Heidelberg 2010
76 J.G. Lara and À.P. Gracia
few advantages: countless security features, stability, reliability, good documentation, and it is free. We will perform the configuration using this free open source software, under the GNU GPLv2 license [2], in combination with Apache, ModProfiler [3] and the OpenBSD [4] operating system.
2 ModSecurity
ModSecurity operates as an Apache module, intercepting HTTP traffic and performing a comparison process for each request. If a request is classified as an attack, ModSecurity follows the actions specified in the configuration. The main function of this solution is filtering requests, analyzing the content of HTTP requests in both incoming and outgoing traffic. One advantage of ModSecurity over a NIDS is the ability to filter HTTPS traffic. In a scenario with a network IDS filtering requests, if the traffic were encrypted using SSL/TLS, the IDS could not parse the requests, so attacks would go undetected. In this case, the use of SSL/TLS, which in most cases protects us, would be an advantage for the attacker, helping to hide their actions. ModSecurity, however, working embedded in the web server, processes the data once it has been decrypted: first, mod_ssl decrypts the request and, once it is in plain text, ModSecurity can analyze it correctly.
The life cycle of a request passes through a series of steps, with the goal of optimizing the search for anomalies and blocking the attack as soon as possible. This increases performance, because if we are sure that the request is malicious in phase 1, there is no need to analyze it in the remaining phases.
The process comprises five phases. The first phase is the analysis of the HTTP headers (REQUEST_HEADERS); then filtering is done on the body of the HTTP request (REQUEST_BODY phase), the step in which the highest number of attacks is detected. Next, the RESPONSE_HEADERS and RESPONSE_BODY phases are performed; both analyze the responses to requests in order to prevent information leaks. Finally, the LOGGING phase is processed; it is responsible for logging the complete request and is very useful for future forensic analysis in case of intrusion or other scenarios.
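As an illustration of how rules bind to these phases, a minimal ModSecurity 2.x sketch (the rule IDs, patterns and messages are hypothetical examples, not the configuration used in this work):

```apache
# Phase 1: inspect request headers as early as possible
SecRule REQUEST_HEADERS:User-Agent "@contains sqlmap" \
    "id:100001,phase:1,deny,status:403,msg:'Scanner detected'"

# Phase 2: inspect the request body, where most attacks are detected
SecRule ARGS "@rx (?i)union\s+select" \
    "id:100002,phase:2,deny,status:403,msg:'SQL injection attempt'"

# Phase 4: inspect the response body to prevent information leaks
SecRule RESPONSE_BODY "@contains ODBC Error" \
    "id:100003,phase:4,deny,status:500,msg:'Error message leak blocked'"
```

Inspecting RESPONSE_BODY additionally requires the SecResponseBodyAccess On directive.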
3 Security Models
There are two security models for classifying requests, and both can coexist: the negative model can be used to generalize and the positive model to particularize. In the negative model, everything is allowed by default except what is explicitly prohibited, while in the positive model, everything that is not expressly permitted is forbidden. The IDS/IPS systems used in Web applications, and specifically ModSecurity, can operate in both modes.
In the negative security model, the system requires a black list of rules with the goal of blocking malicious requests. When a request arrives, a search process starts in the database, which contains all known attacks; if a match is found, the request is blocked. Some of these systems work in conjunction with scoring rules, giving a score to each request and blocking those that exceed a certain threshold.
In the positive security model, a template of the Web application can be created, specifying in detail the operations allowed in the application. Everything outside this template is blocked. The format of every parameter must be carefully specified, so that if an attacker makes changes by sending unauthorized values, access is blocked.
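A positive-model template for a single page might be sketched as follows (the page and parameter names are hypothetical; in practice such a template would be derived from the application itself):

```apache
<Location /login.php>
    # Deny by default in phase 2; only the expected parameters,
    # with strict formats, are allowed through.
    SecDefaultAction "phase:2,deny,status:403,log"
    SecRule ARGS_NAMES "!@rx ^(user|password)$" "id:200001,msg:'Unexpected parameter'"
    SecRule ARGS:user "!@rx ^[A-Za-z0-9_]{1,32}$" "id:200002,msg:'Invalid user format'"
    SecRule ARGS:password "!@rx ^.{1,64}$" "id:200003,msg:'Invalid password length'"
</Location>
```

Any extra parameter, or any value outside the declared format, falls outside the template and is rejected.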
The positive model is more appropriate than the negative model and provides more safety in critical environments, as it helps protect systems from unknown attacks or 0-day exploits. These 0-day exploits are programs or scripts that exploit vulnerabilities for which there is not yet a patch or corrective solution. The great potential of this approach, however, requires a greater effort to create a scenario of this type.
One tool that tries to facilitate the construction of rules with a white-list approach is REMO [5] (Rule Editor for ModSecurity), which offers a graphical interface that makes the process of writing positive-model rules easier, but it does not support automation.
To help configure an application firewall following the guidelines of the positive model, there is a tool called ModProfiler [3], which analyzes the traffic passing through it, observes what is valid and what is not, and can define the types and maximum sizes of the parameters. This system operates under the premise of denying everything that is not known to be valid. By default, web applications normally allow any HTTP method and any number and type of parameters, although in most cases they work with a smaller set.
Following this model and establishing the correct configuration gives us several advantages:
• Preventing attacks that attempt to exploit HTTP methods other than those permitted, which otherwise could be used by default.
• Disabling exploits that make use of encodings not known or not permitted by the application.
• Preventing information leaks from files that are hosted on the server but are not part of the application and were forgotten in the root directory of the web server.
• Preventing the use of enabled debug modes, which provide much useful information to a potential attacker, and blocking any operation outside of what is considered valid within the web application.
Using this approach, we can specify, for each web application, the files and interfaces that will be used, and, for each of them, the number of parameters, their types and size limits, and other constraints such as the encodings or HTTP methods allowed.
We will need three network cards to use the pfsync functionality, which keeps the states of all active communications in high availability, mitigating the loss of connections if the Master server fails by recovering all states on the Slave.
The operation of the CARP protocol is very simple: it acts as a virtual interface with a corresponding virtual IP and MAC address, i.e., the operating system creates this interface for managing data, with its respective counterparts on the other nodes. With the pfsync functionality we can share the states of pf in real time with all nodes.
To configure CARP we will use the carp0 interface for the external virtual interface and carp1 for the internal one. For the slave computer the configuration will be similar, but we will need to set the "advskew" value to 100 as a weight value.
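On OpenBSD this can be sketched in the interface configuration files (the addresses and password below are hypothetical placeholders):

```
# /etc/hostname.carp0 on the Master (external virtual interface)
inet 192.0.2.10 255.255.255.0 192.0.2.255 vhid 1 pass carppass advskew 0

# /etc/hostname.carp0 on the Slave: same vhid, weight advskew 100
inet 192.0.2.10 255.255.255.0 192.0.2.255 vhid 1 pass carppass advskew 100
```

The node advertising with the lowest advskew becomes Master; if it fails, the Slave takes over the shared virtual IP.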
We can start the "packet filter" network firewall from the command line, or enable it in the file "/etc/rc.conf" so that it starts automatically every time the system boots.
root@master:~# pfctl -d
root@master:~# pfctl -e
root@master:~# cat /etc/rc.conf | grep "pf=YES"
pf=YES
Once our firewall is up, we need to configure the synchronization network interface on each computer; in our case there are two computers, so we will use a crossover cable. Our dedicated synchronization interface will be the physical interface vic2, which will be specified in the pfsync interface configuration, pointing to the address of the other computer.
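The pfsync interface itself can then be sketched as follows (the peer address is a hypothetical placeholder):

```
# /etc/hostname.pfsync0 on each node
up syncdev vic2 syncpeer 10.0.0.2
```

With this in place, pf state insertions and deletions are replicated over the crossover link, so the Slave can take over active connections.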
6 Performance Charts
The following chart shows the time it took to serve 1, 100 and 400 requests on the web server without protection, the results of testing through an intermediate computer without applying any filtering, and the requests made with the ModSecurity filters enabled. The tests were performed on a Gigabit LAN, and the size of the served web page is 20,000 bytes.
The setup used in the performance testing is detailed below:
The Apache web server has been configured in reverse proxy mode, using the ProxyPass and ProxyPassReverse directives:
<Location />
  <IfModule security2_module>
    Include /path/www.example.com/modsecurity2.conf
  </IfModule>
  <IfModule mod_proxy.c>
    ProxyRequests off
    ProxyPass http://www.example.com:80/
    ProxyPassReverse http://www.example.com:80/
  </IfModule>
</Location>
[Chart: response time in seconds (0 to 8) versus number of hits (1, 100, 400) for the unprotected web server, the reverse proxy without filtering, and the reverse proxy with ModSecurity enabled.]
7 Conclusions
Security in Web applications and Web services requires more than just a layer 3 firewall. The number of attacks in these environments has increased so dramatically that we need a firewall on layer 7, one that understands the HTTP protocol and is able to protect us against these threats.
The web application firewall described in this article meets the expectations and solves the problems presented; among other features, it is able to analyze SSL/TLS traffic and to operate in both modes: black list and white list.
The design and implementation were developed for a high-availability environment, where it is very important to keep the service always available and to avoid denial of service to legitimate users.
Security is very important throughout the whole software development life cycle, and also in the network filtering systems at each layer.
References
1. ModSecurity Open Source Web Application Firewall,
http://www.modsecurity.org
2. GNU GPLv2 License, http://www.gnu.org/licenses/gpl-2.0.html
3. ModProfiler, http://www.modsecurity.org/projects/modprofiler/
4. OpenBSD Operating System, http://www.openbsd.org
5. OWASP, http://www.owasp.org/
6. CARP and pfsync guide, http://www.kernel-panic.it