
Towards a Measuring Framework for Security Properties of Software

Riccardo Scandariato, Bart De Win, and Wouter Joosen
DistriNet, Katholieke Universiteit Leuven, Belgium
Abstract
Among the different quality attributes of software artifacts, security
has lately gained a lot of interest. However, both qualitative and quantitative methodologies to assess security are still missing. This is possibly due
to the lack of knowledge about which properties must be considered when
it comes to evaluating security. The above-mentioned gap is even larger
when one considers key software development phases such as architectural and detailed design. This position paper highlights the fundamental
questions that need to be answered in order to bridge the gap and proposes
an initial approach.

1 Setting the Scene

The severe damage suffered by IT infrastructures in the recent past because of
security vulnerabilities in software products (e.g., see the Slammer worm affecting
SQL servers) has convinced both business and technical people to invest money
and effort in security. On the business side, according to IDC, global spending
on support services for security software will grow by 49% over the next four
years. On the technical side, it is increasingly understood that
security has to be managed early on, throughout the software development lifecycle. To this end, several security-aware process methodologies are emerging, such as
the OWASP Comprehensive Lightweight Application Security Process and the
Microsoft Security Development Lifecycle.
The question that has not yet been answered, though, is: are we spending
such money and effort effectively? Answering this question without figures at
hand is difficult. Indeed, security deserves means of assessment comparable
to those employed for other quality attributes. For instance, maintainability is a
fairly well understood quality attribute that can be quantitatively estimated by
measuring properties such as size and complexity. If one now considers security
again, literature lacks a clear definition of which properties can be considered in
order to assess software security. Therefore, to fill this gap, the first point on
the research agenda should be the elicitation of such properties. Notice that this
paper focuses mainly on product properties, rather than on processes and resources.
In fact, in the latter case significant work already exists [9].
An obvious approach is to distinguish between two types of properties. Some
properties will lend themselves to quantitative analysis, while others will demand a more qualitative approach. Indeed, the security of software-based
systems must be weighed against the human environment the
system is deployed in. For example, less secure yet hassle-free authentication
mechanisms are purposely used in home-banking web applications in order to
make the services more successful. In such cases, security properties must be assessed with qualitative trade-off analysis techniques [1]. Furthermore, security
is about threats, which depend on human factors. For instance, consider the
enforcement of the least-privilege principle in order to reduce the risk of exploits in
code executed with high-level permissions. In order to assess the effectiveness
of such a property, user behavior has to be considered, and hence the property is
hard to capture in figures. For the above reasons, quantitative assessment by
means of software metrics is not sufficient on its own to grasp all facets of security. Nonetheless, the value of a quantitative measuring framework can hardly
be disputed. Indeed, it could be used to drive business decisions on more solid
ground and to round off the role of security in the software development process.
Hence, quantitatively assessable properties of security represent the main focus
of this paper.
On a different axis of the categorization, a distinction can be made between
component and engineering security properties. In the first case, the properties
of a component in isolation are considered. For instance, the encryption strength
of a tunneling component is measurable by the type of encryption algorithm and
the length of the keys it adopts. Measuring such properties is useful both to drive
the selection of off-the-shelf security components and to assess the level of
component compliance with a security contract. On the other hand, engineering
properties refer to the software product as a whole, such as the size of the
trust domain. Due to space limitations, this paper focuses on the latter type.
Nonetheless, the authors consider both equally important.
Back to our maintainability example, it can be observed that, in general,
quantitative properties of a quality attribute can be measured at different levels
of abstraction. For instance, complexity can be measured at code level (McCabe
cyclomatic complexity), at design level (coupling between objects, [3]), and at
architectural level (coupling between components, [6]). There is no particular
reason why security properties should not exhibit a similar behavior. As a naive
example, consider the size of the attack surface of an application. This security
property can be measured at the architectural level as the number of points
of access (user-wise). At a lower level, the same property could be measured
as the number of design classes that process user input. Finally, at code level,
the coverage of input validation routines could be considered. However, most of
the literature focuses on the low end of the spectrum, i.e., on metrics to assess
security posture after deployment. Examples of such traditional metrics are
the number of invalid login attempts, the number of detected viruses, and the
patch installation rate [2].
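To make the different abstraction levels more concrete, the following minimal Python sketch shows how an architectural-level attack surface count and its design-level refinement (entry points lacking input validation) could be harvested from a hand-written model. The Component and AccessPoint records are assumptions of this illustration, not part of any existing tool or of the original metric definitions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AccessPoint:
    """A user-facing entry point of the system (e.g., a web form or an API call)."""
    name: str
    validates_input: bool = False

@dataclass
class Component:
    """A coarse-grained architectural component and its entry points."""
    name: str
    access_points: List[AccessPoint] = field(default_factory=list)

def attack_surface_size(components: List[Component]) -> int:
    """Architectural-level metric: number of user-wise points of access."""
    return sum(len(c.access_points) for c in components)

def unvalidated_entry_points(components: List[Component]) -> List[str]:
    """Design/code-level refinement: entry points lacking an input validation routine."""
    return [f"{c.name}.{ap.name}"
            for c in components
            for ap in c.access_points
            if not ap.validates_input]

if __name__ == "__main__":
    model = [
        Component("WebFrontend", [AccessPoint("login", True), AccessPoint("search")]),
        Component("AdminConsole", [AccessPoint("configure")]),
    ]
    print("attack surface size:", attack_surface_size(model))        # 3
    print("unvalidated entry points:", unvalidated_entry_points(model))
```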

The authors are not questioning the value of the metrics mentioned above.
However, the latter are biased toward system security engineering (in contrast to
application security engineering) and consider software entities as black boxes,
i.e., they capture defects of software after deployment. In this sense, they are
operational and reactive. On the contrary, the authors are primarily interested
in metrics that can be used to assess the level of security of software artifacts
proactively, i.e., before deployment, and especially during design. For instance,
such metrics could play a lead role in defining acceptance criteria of software
artifacts during early stages of the development process. More importantly,
metrics should constitute an analysis tool to identify criticalities early on, with
a remarkable impact on costs. This is of particular importance if one considers
that about 50% of software defects are actually design errors. For the above
reasons, this paper focuses on security properties that can be assessed during
the architecture/design phases, which appear to have been largely neglected in
past work.

2 Software Metrics for Security

The work presented here is undoubtedly at an early stage. Nonetheless, the
authors have tried to give shape to the ideas presented in the previous section in
more practical terms. As outlined before, the most urgent item on the research
agenda is the definition of security properties of software that (1) are quantitative in nature with regard to assessment, (2) allow proactive estimation of
software security, especially during the architecture/design phases, and (3) can
be measured at different levels of abstraction.
As a first step towards the definition of a structured framework of properties, the authors started eliciting them from the list of security principles and
practices. A (partial) list of these principles is available in [8, 5]. In related
work, a similar approach was used to elicit process properties out of Security
Maturity Model standards like SSE-CMM and ISO 17799 [2]. In particular,
in [8] the proposed principles are mapped to the process phases
to which they apply. It is clear from that work that no principle can be localized at a
single stage. Therefore, principles and practices are a helpful starting point
for eliciting properties that can be measured at different levels of abstraction. The
following sections analyze security principles that are relevant to the purpose of
unearthing security properties and propose suitable metrics to measure them.

2.1 Keep It Small and Simple

Simple mechanisms tend to have fewer exploitable flaws and require less maintenance. Furthermore, because configuration management issues are simplified,
updating or replacing a simple mechanism becomes a less intensive process.
Properties that can be used to estimate the enforcement of this principle are as
follows.
- Size
- Complexity
- Size of the attack surface
2.1.1 Metrics

Size and complexity can be measured with standard software metrics at both
design and code level, e.g., as described in [6, 3]. Possible means of measuring
the size of the attack surface at several levels of abstraction have already been
discussed in the previous section.
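As an illustration of how complexity could be harvested automatically at the code level, the sketch below computes a rough McCabe-style complexity per function from Python source using only the standard ast module. The set of branching constructs counted here is a simplification chosen for this example, not a normative definition.

```python
import ast

# Constructs treated as decision points; a boolean operator chain counts once.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    """Approximate McCabe complexity per function: 1 + number of branching constructs."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            results[node.name] = 1 + branches
    return results

if __name__ == "__main__":
    sample = """
def authenticate(user, password):
    if user is None:
        return False
    for attempt in range(3):
        if check(user, password):
            return True
    return False
"""
    print(cyclomatic_complexity(sample))  # {'authenticate': 4}
```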

2.2 Separation of Concerns

By obeying this principle while designing systems, features can be optimized
independently of one another, so that the failure of one does not cause others to fail
too. In general, separation of concerns makes it easier to understand, design,
and manage complex interdependent systems. For this principle, the following
property can be investigated.
- Degree of security concern separation
2.2.1 Metrics

The degree of security concern separation can be measured by means of metrics
that have been defined in the aspect-oriented development research community. Among those metrics, of particular interest are the concern diffusion over
modules and the concern diffusion over operations as in [4].
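A minimal sketch of how concern diffusion over modules could be computed is given below, assuming a hand-made mapping from modules to the concerns they realize; the metrics in [4] are defined on richer design artifacts, so this is only an approximation for illustration.

```python
from typing import Dict, Set

def concern_diffusion(concern: str, module_concerns: Dict[str, Set[str]]) -> int:
    """Number of modules whose design touches the given concern
    (a simplified reading of concern diffusion over modules)."""
    return sum(1 for concerns in module_concerns.values() if concern in concerns)

if __name__ == "__main__":
    # Hypothetical design model: module name -> concerns it realizes or references.
    design = {
        "OrderService":   {"business", "authorization", "auditing"},
        "PaymentGateway": {"business", "encryption"},
        "AuditLogger":    {"auditing"},
        "SessionManager": {"authentication", "auditing"},
    }
    print("diffusion of 'auditing':", concern_diffusion("auditing", design))  # 3
```

A widely scattered security concern (high diffusion) suggests poor separation and, per this principle, a design that is harder to verify and evolve.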

2.3 Implement Layered Security

Security designs should consider a layered approach in order to address a specific
threat or to reduce vulnerability. For example, the use of a packet-filtering router
in conjunction with an application gateway and an intrusion detection system
increases the work-factor an attacker must expend to successfully exploit the
system. A single property has been defined to measure this principle.
- Lines of defense
2.3.1 Metrics

Possible means to evaluate the existence of a layered security design are: (1)
the number of data validation checks per information flow, and (2) the number
of authentication/authorization checks per usage scenario.
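The sketch below illustrates how the lines-of-defense property could be computed from a hypothetical mapping of usage scenarios to the security checks crossed on their path; both the mapping and the threshold are assumptions of this example.

```python
from typing import Dict, List

def lines_of_defense(scenario_checks: Dict[str, List[str]]) -> Dict[str, int]:
    """Number of independent security checks crossed by each usage scenario."""
    return {scenario: len(checks) for scenario, checks in scenario_checks.items()}

def weakest_scenarios(scenario_checks: Dict[str, List[str]], threshold: int = 2) -> List[str]:
    """Scenarios protected by fewer layers than the required minimum."""
    depth = lines_of_defense(scenario_checks)
    return [s for s, n in depth.items() if n < threshold]

if __name__ == "__main__":
    # Hypothetical mapping of usage scenarios to the checks on their path.
    scenarios = {
        "transfer_money": ["packet_filter", "authentication", "authorization", "input_validation"],
        "view_balance":   ["packet_filter", "authentication"],
        "public_faq":     ["packet_filter"],
    }
    print(lines_of_defense(scenarios))
    print("below threshold:", weakest_scenarios(scenarios))  # ['public_faq']
```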

2.4 Identify and Minimize Criticalities

By critical modules we mean all entities (data or methods)
that are vulnerable to tampering attacks. A module can be rated as critical
based on several criteria, as follows: it is security related, it is located in an
untrusted environment, the module is an important asset to the owner of the
software, and the module is foundational in the design and, hence, can be a
target for Denial of Service (DoS) attacks. In addition, the number of fully trusted
modules, i.e., components intentionally exempted from security scrutiny, must
be kept as low as possible. Accordingly, the relevant property to be measured
is:
- Number of critical modules
2.4.1 Metrics

Identification methods could take UML diagrams as input. For example, deployment diagrams specify location-related information about an application,
and such information can be used to point out trust relationships, untrusted
deployment environments, and possible bottlenecks (DoS). In order to identify
modules that are important (asset-wise), risk analysis techniques must be used.
Further metrics of interest are the number of entities to be trusted, which
has to be minimized, and the afferent coupling of components [7], which can
be used to identify foundational (hence DoS-sensitive) modules.
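As a sketch of the afferent-coupling part of this analysis, the following code derives, from a hypothetical component dependency graph, how many modules depend on each component and flags heavily depended-upon (hence DoS-sensitive) candidates; the graph and the threshold are illustrative assumptions.

```python
from collections import Counter
from typing import Dict, List

def afferent_coupling(dependencies: Dict[str, List[str]]) -> Counter:
    """Afferent coupling: for each module, how many other modules depend on it."""
    ca = Counter({m: 0 for m in dependencies})
    for source, targets in dependencies.items():
        for target in set(targets):
            if target != source:
                ca[target] += 1
    return ca

def dos_sensitive(dependencies: Dict[str, List[str]], threshold: int = 2) -> List[str]:
    """Foundational modules: heavily depended upon, hence candidate DoS targets."""
    ca = afferent_coupling(dependencies)
    return [m for m, n in ca.items() if n >= threshold]

if __name__ == "__main__":
    # Hypothetical component dependency graph extracted from a design model.
    graph = {
        "WebFrontend":  ["AuthService", "OrderService"],
        "OrderService": ["AuthService", "Database"],
        "AuthService":  ["Database"],
        "Database":     [],
    }
    print(afferent_coupling(graph))
    print("DoS-sensitive:", dos_sensitive(graph))  # ['AuthService', 'Database']
```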

2.5 Some Individual Must Be Accountable

Software systems should keep a record of activities to ensure that system resources
are functioning properly and that they are not used in unauthorized ways. The
audit trails can be used both for monitoring resources and as evidence in
case of violations of the security policy. In this case, it is interesting to assess
the following property.
- Degree of accountability
2.5.1 Metrics

A naive approach to measuring the above property is to count
the number of non-audited operations (at the architectural level) or methods (at the
design level) with respect to the total number of operations/methods. These
metrics can be cross-checked with those in Section 2.4, since identified critical
modules must be audited more carefully, due to their sensitive nature.
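A minimal sketch of both measurements is shown below: the degree of accountability as the fraction of audited operations, and the cross-check with Section 2.4 as the set of critical operations left unaudited. The operation sets are illustrative assumptions.

```python
from typing import Set

def degree_of_accountability(operations: Set[str], audited: Set[str]) -> float:
    """Fraction of operations (or design-level methods) that write to the audit trail."""
    if not operations:
        return 1.0
    return len(operations & audited) / len(operations)

def unaudited_critical(critical_ops: Set[str], audited: Set[str]) -> Set[str]:
    """Cross-check with Section 2.4: critical operations that are not audited."""
    return critical_ops - audited

if __name__ == "__main__":
    ops      = {"login", "transfer", "view_balance", "change_limit"}
    audited  = {"login", "transfer"}
    critical = {"transfer", "change_limit"}
    print("accountability:", degree_of_accountability(ops, audited))     # 0.5
    print("unaudited critical:", unaudited_critical(critical, audited))  # {'change_limit'}
```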

3 Where We Go From Here

This position paper tries to start off the discussion on more advanced metric
suites that are able to cover the software development phases thoroughly. In
particular, the authors focused on metrics that apply to both the architecture
and design stages, since a major gap exists in that area. They also proposed
a framework methodology to elicit relevant security properties to be measured,
namely by analyzing well-known security principles and practices. Finally, an
illustration of the approach was presented and an initial set of properties listed,
along with the associated metrics.
There is no doubt that much work is yet to be done by the research community, and the remainder of this section tries to highlight some of the
priorities.
- Several dimensions can be identified to classify the security properties. For instance, the properties in Sections 2.1 and 2.2 deal with complexity and, hence, apply to the software as a whole. In contrast, the properties in Sections 2.3 and 2.4 refer to security-specific software. Finally, the properties in Section 2.5 reflect the impact of organizational/policy constraints. These dimensions need to be refined further.
- More properties must be elicited in a systematic and exhaustive way, e.g., by analyzing several sources of information, such as the Common Criteria, the ISO 17799 standard, and the SANS Policy Project.
- Guidelines are needed in order to understand how the different metrics must be correlated and interpreted.
- The cost of measures must be low. To this aim, we acknowledge the importance of harvesting measures automatically. Some of the above-mentioned metrics require a high level of expertise and a high degree of manual work (e.g., consider the metrics for criticalities in Section 2.4). Automation could be facilitated by suitable annotations in design (and possibly requirements) documents.
- Methodologies to identify metrics of interest, such as the Goal Question Metric (GQM) approach by Basili, must be considered and possibly adapted to security. For instance, security patterns employed during the design phase could carry information about suitable metrics to be monitored.
- Qualitative and quantitative approaches must be reconciled within a holistic framework.
We expect to open the discussion on both the proposed approach and the
above priorities during the workshop.

Acknowledgments
This work is part of the SoBeNet project (Software Security for Network Applications), an SBO project of the Flemish government (see http://sobenet.cs.kuleuven.be).

References
[1] S. Butler. Security attribute evaluation method: a cost-benefit approach. In
International Conference on Software Engineering (ICSE), Orlando, USA,
May 2002.
[2] D. Chapin and S. Akridge. How can security be measured? Information
Systems Control Journal, 2, 2005.
[3] S. Chidamber and C. Kemerer. A metrics suite for object oriented design.
IEEE Transactions on Software Engineering, 20(6):476–493, June 1994.
[4] A. Garcia, C. Sant'Anna, E. Figueiredo, U. Kulesza, C. Lucena, and A. von
Staa. Modularizing design patterns with aspects: a quantitative study. In International Conference on Aspect-Oriented Software Development, Chicago,
USA, March 2005.
[5] M. Graff and K. van Wyk. Secure coding: principles and practices. O'Reilly,
2003.
[6] M. Lindvall, R. Tesoriero, and P. Costa. An empirically-based process for
software architecture evaluation. Empirical Software Engineering, 8(1):83–108,
March 2003.
[7] R. Reissing. Towards a model for object-oriented design measurement. In
Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE), Budapest, Hungary, June 2001.
[8] G. Stoneburner, C. Hayden, and A. Feringa. Engineering principles for
information technology security. NIST Special Publication 800-27, Revision
A, June 2004.
[9] M. Swanson, N. Bartol, J. Sabato, J. Hash, and L. Graffo. Security metrics
guide for information technology systems. NIST Special Publication 800-55,
July 2003.
