Properties of Software
Riccardo Scandariato, Bart De Win, and Wouter Joosen
DistriNet, Katholieke Universiteit Leuven
Belgium
Abstract
Among the different quality attributes of software artifacts, security
has lately gained a lot of interest. However, both qualitative and quantitative methodologies to assess security are still missing. This is possibly due
to the lack of knowledge about which properties must be considered when
it comes to evaluating security. The gap is even larger
when one considers key software development phases such as architectural and detailed design. This position paper highlights the fundamental
questions that must be answered in order to bridge that gap and proposes
an initial approach.
This paper focuses mainly on product properties, rather than on processes and resources;
for the latter, significant work already exists [9].
An obvious approach is to distinguish between two types of properties. Some
properties lend themselves to quantitative analysis, while others demand a more qualitative approach. Indeed, the security of software-based
systems must be weighed against the human environment the
system is deployed in. For example, less secure yet hassle-free authentication
mechanisms are purposely used in home-banking web applications in order to
make the services more successful. In such cases, security properties must be assessed with qualitative trade-off analysis techniques [1]. Furthermore, security
is about threats, which depend on human factors. For instance, consider the
enforcement of the least-privilege principle in order to reduce the risk of exploits in
code executed with high-level permissions. To assess the effectiveness
of such a property, user behavior must be considered, and the property is therefore
hard to capture in figures. For these reasons, quantitative assessment by
means of software metrics is not sufficient on its own to grasp all facets of security. Nonetheless, the value of a quantitative measuring framework is hard to
dispute: it could be used to ground business decisions more solidly
and to round off the role of security in the software development process.
Hence, quantitatively assessable security properties are the main focus
of this paper.
On a different axis of the categorization, a distinction can be made between
component and engineering security properties. In the first case, the properties
of a component in isolation are considered. For instance, the encryption strength
of a tunneling component is measurable by the type of encryption algorithm and
the length of the keys it adopts. Measuring such properties is useful both to drive
the selection of off-the-shelf security components and to assess a component's
compliance with a security contract. On the other hand, engineering
properties refer to the software product as a whole, such as the size of the
trust domain. Due to space limitations, this paper focuses on the latter type.
Nonetheless, the authors consider both types equally important.
Back to our maintainability example, it can be observed that, in general,
quantitative properties of a quality attribute can be measured at different levels
of abstraction. For instance, complexity can be measured at code level (McCabe
cyclomatic complexity), at design level (coupling between objects, [3]), and at
architectural level (coupling between components, [6]). There is no particular
reason why security properties should not exhibit a similar behavior. As a naive
example, consider the size of the attack surface of an application. This security
property can be measured at the architectural level as the number of points
of access (user-wise). At a lower level, the same property could be measured
as the number of design classes that process user input. Finally, at code level,
the coverage of input validation routines could be considered. However, most of
the literature focuses on the low end of the spectrum, i.e., on metrics that assess
security posture after deployment. Examples of such traditional metrics are
the number of invalid login attempts, the number of detected viruses, and the
patch installation rate [2].
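As a toy illustration of the design-level measure of the attack surface mentioned above, one could count the design classes that process user input. The class model and the `processes_user_input` flag below are assumptions made purely for illustration, not part of any standard notation:

```python
from dataclasses import dataclass

# Hypothetical design-level model: each design class is annotated with
# whether it processes user-supplied input. The attack-surface size is
# then simply the count of such classes.
@dataclass
class DesignClass:
    name: str
    processes_user_input: bool = False

def attack_surface_size(classes) -> int:
    """Count the design classes exposed to user input."""
    return sum(c.processes_user_input for c in classes)

model = [
    DesignClass("LoginForm", processes_user_input=True),
    DesignClass("UploadHandler", processes_user_input=True),
    DesignClass("AuditLogger"),
]
```

In practice the flag would be derived from the design model itself (e.g., from associations with boundary classes), but the counting step stays the same.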
The authors are not questioning the value of the metrics mentioned above.
However, the latter are biased toward system security engineering (in contrast to
application security engineering) and consider software entities as black boxes,
i.e., they capture defects of software after deployment. In this sense, they are
operational and reactive. On the contrary, the authors are primarily interested
in metrics that can be used to assess the level of security of software artifacts
proactively, i.e., before deployment, and especially during design. For instance,
such metrics could play a lead role in defining acceptance criteria of software
artifacts during the early stages of the development process. More importantly,
metrics should constitute an analysis tool to identify critical issues early on, with
a significant impact on costs. This is of particular importance if one considers
that about 50% of software defects are actually design errors. For these
reasons, this paper focuses on security properties that can be assessed during
the architecture and design phases, which appear to have been largely neglected in
past work.
2.1
Simple mechanisms tend to have fewer exploitable flaws and require less maintenance. Furthermore, because configuration management issues are simplified,
updating or replacing a simple mechanism becomes a less intensive process.
Properties that can be used to estimate the enforcement of this principle are as
follows.
- Size
- Complexity
- Size of the attack surface
2.1.1
Metrics
Size and complexity can be measured with standard software metrics at both
design and code level, e.g., as described in [6, 3]. Possible means of measuring
the size of the attack surface at several levels of abstraction have already been
discussed in the previous section.
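To make the code-level end of these measures concrete, here is a minimal sketch of a McCabe-style cyclomatic complexity count for Python source, approximated as one plus the number of branching nodes in the abstract syntax tree. The chosen node set is an assumption; production tools count additional constructs:

```python
import ast

# Decision-point node types counted toward complexity. This set is a
# simplification (e.g., `match` statements and comprehension guards
# are ignored here).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

snippet = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
```

The same counting idea transfers to the design level by counting decision points in, e.g., activity diagrams rather than in source code.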
2.2
Separation of Concerns
Metrics
2.3
Metrics
Possible means to evaluate the existence of a layered security design are: (1)
the number of data validation checks per information flow, and (2) the number
of authentication/authorization checks per usage scenario.
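Metric (1) above can be sketched as follows, assuming each information flow is modeled as an ordered list of processing steps and that the validating steps are known; both representations are assumptions made for illustration:

```python
# Each information flow is modeled as an ordered list of step names, and a
# set records which steps perform data validation. Both structures are
# illustrative assumptions, not part of the paper's framework.
def validation_checks_per_flow(flows: dict, validating_steps: set) -> dict:
    """For each flow, count how many of its steps perform validation."""
    return {
        flow: sum(step in validating_steps for step in steps)
        for flow, steps in flows.items()
    }

flows = {
    "upload": ["receive", "sanitize", "scan", "store"],
    "login": ["receive", "authenticate"],
}
validating = {"sanitize", "scan"}
```

Metric (2) would follow the same shape, with usage scenarios in place of flows and authentication/authorization checks in place of validation steps.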
2.4
By critical modules we mean all entities (data or methods)
that are vulnerable to tampering attacks. A module can be rated as critical
based on several criteria: it is security related; it is located in an
untrusted environment; it is an important asset to the owner of the
software; or it is foundational in the design and, hence, can be a
target for denial-of-service attacks. Additionally, the number of fully trusted
modules, i.e., components intentionally not undergoing security scrutiny, must
be kept as low as possible. Accordingly, the relevant property to be measured
is:
Number of critical modules
2.4.1
Metrics
Identification methods could take UML diagrams as input. For example, deployment diagrams specify location-related information about an application,
and such information can be used to point out trust relationships, untrusted
deployment environments, and possible bottlenecks (DoS). In order to identify
modules that are important asset-wise, risk analysis techniques must be used.
Further metrics of interest are the number of entities to be trusted, which
must be minimized, and the afferent coupling of components [7], which can
be used to identify foundational (hence DoS-sensitive) modules.
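The afferent-coupling measure mentioned above can be sketched as follows, assuming the design's dependencies are available as (client, supplier) pairs; this representation is chosen here for illustration:

```python
from collections import Counter

def afferent_coupling(dependencies) -> Counter:
    """Ca(m): number of distinct modules that depend on module m.
    Modules with high Ca are foundational and thus DoS-sensitive."""
    incoming = Counter()
    # Deduplicate pairs so repeated edges count a client only once.
    for client, supplier in set(dependencies):
        incoming[supplier] += 1
    return incoming

deps = [("ui", "auth"), ("api", "auth"), ("auth", "crypto"), ("api", "auth")]
```

Sorting the resulting counts in descending order would yield a shortlist of candidate foundational modules for closer security scrutiny.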
2.5
Metrics
This position paper tries to start off the discussion on more advanced metrics
suites that are able to cover the software development phases thoroughly. In
particular, the authors focused on metrics that apply to both the architecture
and design stages, since a major gap exists in that area. They also proposed
a framework methodology to elicit relevant security properties to be measured,
namely by analyzing well-known security principles and practices. Finally, an
illustration of the approach was presented and an initial set of properties listed,
along with the associated metrics.
There is no doubt that much work is yet to be done by the research community and the remaining part of this section tries to highlight some of the
priorities.
- Several dimensions can be identified to classify the security properties. For
  instance, the properties in Sections 2.1 and 2.2 deal with complexity and,
  hence, apply to the software as a whole. On the contrary, the properties in
  Sections 2.3 and 2.4 refer to security-specific software. Finally, the properties in Section 2.5 reflect the impact of organizational/policy constraints.
  These dimensions need to be refined further.
- More properties must be elicited in a systematic and exhaustive way, e.g.,
  by analyzing several sources of information, such as the Common Criteria, the
  ISO 17799 standard, and the SANS Policy Project.
- Guidelines are needed in order to understand how the different metrics
  must be correlated and interpreted.
- The cost of measures must be low. To this end, we acknowledge the importance of harvesting measures automatically. Some of the above-mentioned
  metrics require a high level of expertise and a high degree of manual work
  (e.g., consider the metrics for criticality in Section 2.4). Automation could be facilitated by suitable annotations in design (and possibly
  requirements) documents.
- Methodologies to identify metrics of interest, such as the Goal Question Metric
  (GQM) approach by Basili, must be considered and possibly adapted to security.
  For instance, security patterns employed during the design phase could
  carry information about suitable metrics to be monitored.
- Qualitative and quantitative approaches must be reconciled in a holistic
  approach.
We expect to open the discussion on both the proposed approach and the
above priorities during the workshop.
Acknowledgments
This work is part of the SoBeNet project (Software Security for Network Applications), an SBO project of the Flemish government (see http://sobenet.cs.kuleuven.be).
References
[1] S. Butler. Security attribute evaluation method: a cost-benefit approach. In
International Conference on Software Engineering (ICSE), Orlando, USA,
May 2002.
[2] D. Chapin and S. Akridge. How can security be measured? Information
Systems Control Journal, 2, 2005.
[3] S. Chidamber and C. Kemerer. A metrics suite for object oriented design.
IEEE Transactions on Software Engineering, 20(6):476-493, June 1994.
[4] A. Garcia, C. Sant'Anna, E. Figueiredo, U. Kulesza, C. Lucena, and A. von
Staa. Modularizing design patterns with aspects: a quantitative study. In International Conference on Aspect-Oriented Software Development, Chicago,
USA, March 2005.
[5] M. Graff and K. van Wyk. Secure coding: principles and practices. O'Reilly,
2003.
[6] M. Lindvall, R. Tesoriero, and P. Costa. An empirically-based process for
software architecture evaluation. Empirical Software Engineering, 8(1):83-108, March 2003.
[7] R. Reissing. Towards a model for object-oriented design measurement. In
Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE), Budapest, Hungary, June 2001.
[8] G. Stoneburner, C. Hayden, and A. Feringa. Engineering principles for
information technology security. NIST Special Publication 800-27, Revision
A, June 2004.
[9] M. Swanson, N. Bartol, J. Sabato, J. Hash, and L. Graffo. Security metrics
guide for information technology systems. NIST Special Publication 800-55,
July 2003.