
Towards a Security Metrics Taxonomy for the Information and Communication Technology Industry

Reijo Savola
VTT Technical Research Centre of Finland, Oulu, Finland
Reijo.Savola@vtt.fi
Abstract: To obtain evidence of the security of different products or organizations, systematic approaches to measuring security are needed. We introduce a high-abstraction-level taxonomy to support the development of feasible security metrics, along with a survey of emerging security metrics from the academic, governmental and industrial perspectives. With our taxonomy, we strive to bridge the gap between information security management and the security engineering of ICT products and services. We believe that if common metrics approaches between different security disciplines can be found, this will advance our holistic understanding and capabilities, both in security management and in engineering. Our taxonomy is based on comparing earlier taxonomy approaches and analyzing the types of security metrics. Based on the survey, a discussion of future research directions is given in order to prompt advances in the field.

Keywords: security metrics, taxonomy, information security

I. INTRODUCTION

Our information society is marked by the rapid expansion of the Internet and the convergence of information and communication technologies. Strong convergence between devices and networks is seen especially in mobile communications. At the same time, today's complex and highly connected technical and business environments trigger myriad trust and information security concerns. How secure is a software product or a telecommunication network, or their fusion? And how secure does it need to be in order to be secure enough? We seek answers that provide evidence of how effective security solutions have been in reducing risk, or of what kind of reduction in risk we can expect from further security solutions. Even though appropriate security solutions can be found, their resulting security strength often remains unknown. If appropriate security metrics could be found to offer a quantitative and objective basis for security assurance, it would be easier to make business and engineering decisions concerning information security. Security measurement within R&D organizations should move from ad hoc practices to a more systematic process, because business demands are changing ever faster.

The field of defining security metrics systematically is young. The problem behind the immaturity of security metrics is that the current practice of information security is still a highly diverse field, and holistic, widely accepted approaches are still missing. Information security management and business management, on the one hand, and software and network security engineering, on the other, have been handled as separate areas. Common metrics approaches can be used to bridge the gaps between them. In order to make advances in the field of measuring, assessing or assuring security, the current state of the art should be investigated. The main contribution of this research is a proposal for a security metrics taxonomy for the ICT product industry, based on extensive literature research and an analysis of security metrics characteristics.

The rest of this paper is organized as follows. Section II discusses the characteristics and earlier classifications of security metrics, Section III surveys earlier work on security metrics taxonomies, Section IV presents the taxonomy proposal, Section V discusses the subject and suggests future work, and, finally, Section VI gives conclusions.

II. CHARACTERISTICS OF SECURITY METRICS

It is helpful to note the difference between metrics and measurements. Measurements provide single-point-in-time views of specific, discrete factors, while metrics are derived from comparing two or more measurements, taken over time, with a predetermined baseline [12]. Furthermore, according to Alger [1], measurements are generated by counting, whereas metrics are generated from analysis (a minimal illustration of this distinction is sketched below). The WISSSR (Workshop on Information Security System, Scoring and Ranking) of 2001 [11] provided a key venue and starting point for researchers interested in security metrics. The workshop suggested that the expression IS* (Information Security*) be used as a synonym for metric, measure, score, rating, rank or assessment result, and defined the term IS* as "a value, selected from a partially ordered set by some assessment procedure, that represents an IS-related quality of some object of concern. It provides, or is used to create, a description, prediction, or comparison, with some degree of confidence." According to Jelen [12], a good metric is Specific, Measurable, Attainable, Repeatable and Time-dependent (SMART). Payne [21] remarks that truly useful security metrics indicate the degree to which security goals, such as data confidentiality, are being met.
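The measurement/metric distinction cited from [12] and [1] can be sketched minimally as follows. This is an illustration only, not part of any cited standard; the metric name, the measurement values and the baseline are hypothetical.

```python
# Illustrative sketch: measurements are single-point-in-time counts,
# while a metric is derived by analyzing measurements taken over time
# against a predetermined baseline [12]. All values here are invented.

from statistics import mean

def patch_latency_metric(measurements_days: list[float], baseline_days: float) -> float:
    """Ratio of observed mean patch latency to an agreed baseline.

    Values above 1.0 indicate the organization is slower than its
    baseline; values below 1.0 indicate improvement.
    """
    return mean(measurements_days) / baseline_days

# Raw measurements: days to patch, one value per observed incident (counted).
observed = [12.0, 9.5, 14.0, 8.0]

# The metric is generated from analysis of those counts against a baseline.
print(patch_latency_metric(observed, baseline_days=10.0))  # -> 1.0875
```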

A. Value and Use of Security Metrics

It is a widely accepted management principle that an activity cannot be managed well if it cannot be measured. Overall, metrics provide four fundamental benefits: to characterize, to evaluate, to predict and to improve. Security metrics and measurements can be used for decision support, especially in assessment and prediction. Examples of using security metrics for assessment include:

- Risk management activities in order to mitigate security risks,
- Comparison of different security controls or solutions,
- Obtaining information about the security posture of an organization, a process or a product,
- Security assurance of a product, an organization or a process,
- Security testing (functional, red team and penetration testing) of a system,
- Certification and evaluation (e.g. based on the Common Criteria) of a product or an organization,
- Intrusion detection in a system, and
- Other reactive security solutions, such as antivirus software.

When using metrics for prediction, mathematical models and algorithms are applied to the collection of measured data (e.g. regression analysis) to predict the future security behavior of an organization, a process or a product (a minimal sketch of this predictive use follows below). Furthermore, we believe that if common metrics approaches between different security disciplines (e.g. information, software and network security) can be found, this will advance our holistic understanding and capabilities in security management and engineering.
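The predictive use described above can be sketched with a simple regression model fitted to past measurements. This is a hedged illustration, not a method proposed by the paper; the monthly incident counts are invented, and a real metrics program would choose its model and horizon with more care.

```python
# Hypothetical sketch of prediction via regression analysis: fit a
# linear trend to measured monthly incident counts and extrapolate.

from statistics import linear_regression

months = [1, 2, 3, 4, 5, 6]
incidents = [14, 12, 11, 9, 9, 7]   # measured incidents per month (invented)

slope, intercept = linear_regression(months, incidents)

# Predict the incident count three months ahead (month 9).
predicted = slope * 9 + intercept
print(f"trend: {slope:.2f} incidents/month, forecast for month 9: {predicted:.1f}")
```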

B. Security Metrics and Risk Management

Security metrics are used for decision support, and very often these decisions are actually risk management decisions aiming at mitigating, canceling or neglecting security risks. Consequently, many metrics that might be useful for different purposes in technical, operational or organizational information security management will be associated with risk analysis, directly or indirectly.

C. Properties of Security Metrics

The properties of security metrics can be investigated based on the following classification:

- Quantitative vs. qualitative metrics. In general, quantitative metrics are more desirable than qualitative ones, but it is challenging to find quantitative metrics that depict information security phenomena.
- Objective vs. subjective metrics. The goal of security metrics development is to find metrics that are as objective as possible. In reality, as security involves many human behavioral aspects, many metrics tend to be highly subjective.
- Direct vs. indirect metrics. According to the ISO/IEC 9126 standard [17], a direct measure is a measurement of an attribute that does not depend on a measure of any other attribute, whereas an indirect measure is derived from measures of one or more other attributes.
- Static vs. dynamic metrics. Dynamic metrics involve time; static ones do not. The time perspective is important in security metrics, as the information security threat picture is constantly changing.
- Absolute vs. relative metrics. Absolute metrics do not depend on other metrics, whereas relative ones do.

III. RELATED WORK ON SECURITY METRICS TAXONOMIES

The WISSSR workshop [11] did not propose any specific security metrics taxonomy. Instead, the workshop was intuitively organized into three tracks: technical, operational and organizational. According to [11], technical metrics are used to describe, and hence compare, technical objects, e.g. algorithms, specifications, architectures, alternative designs, products and as-implemented systems. Operational metrics are used to describe, and hence manage the risks to, operational environments, including as-used systems and operating practices. Finally, organizational metrics are used to describe, and to track the effectiveness of, organizational programs and processes. In general, there seemed to be an intuitive understanding among the workshop participants that these three tracks would provide a useful basis around which to organize a taxonomy of security metrics [24].

Vaughn et al. [31] propose a taxonomy for information assurance metrics consisting of two distinct categories of security metrics: (i) organizational security metrics and (ii) metrics for a Technical Target of Assessment (TTOA). The first category aims at providing information about the information assurance (security assurance) status of the organization. Its subcategories include metrics for information assurance program development, commitment of personnel and support of resources towards security, operational readiness for security incidents, and security effectiveness. The second category (TTOA) is intended to measure the security capabilities of a technical system or product. The authors further divide these metrics into two subcategories: metrics for measuring a TTOA's strengths and metrics for measuring a TTOA's weaknesses. As Seddigh et al. [24] conclude, this taxonomy is a valuable contribution to the field, but further work is required to refine it and make it applicable to an IT organization.

The U.S. National Institute of Standards and Technology (NIST) presents its security metrics taxonomy in [26] and [27]. The taxonomy is comprehensive, presenting three categories (management, technical and operational) and 17 subcategories, each accompanied by examples. The NIST taxonomy has been written from the point of view of an organization: its technical metrics category assesses the level of technical security controls in the organization rather than the technical security level of specific products, as the TTOA category does in Vaughn et al.'s taxonomy.

Seddigh et al. introduce an information assurance metrics taxonomy for IT network assessment in [24]. Their taxonomy divides the metrics space into three categories, security, Quality of Service (QoS) and availability, based on their novel definition of information assurance.

Under each of these three categories they consider technical, organizational and operational metrics. According to them, technical security metrics include subcategories for product rating, incident statistics and security testing. Organizational security metrics include metrics for information assurance program development and resources, and operational security metrics include metrics for technical readiness, effectiveness and susceptibility. In the QoS category, Seddigh et al. only propose technical metrics, covering product capabilities, network capabilities and QoS tests. The same applies to the availability metrics category, with technical metrics for redundancy and availability testing. Later work related to Seddigh et al. is presented in El-Hassan et al.'s experimental research [9].

The Institute for Information Infrastructure Protection (I3P) [13] is also carrying out work on creating a taxonomy for security metrics from the process control systems perspective. Stoddard et al. [25] propose an initial security metrics taxonomy sketch for process control systems based on the WISSSR workshop taxonomy and the ISO/IEC 17799 [19] and ANSI/ISA-TR99.00.01-2004 [2] standards. This study still requires further work to make it better support information security management in process control systems.

IV. PROPOSED SECURITY METRICS TAXONOMY

In the following, we propose a high-level information security metrics taxonomy that incorporates metrics for both organizational information security management and product development. The proposed taxonomy is based on an evaluation of the earlier proposed taxonomies, emphasizing the needs of a typical company producing information and communication technology (ICT) products.

A. Why and What Kind of Taxonomy?

We believe that being able to express a high-level taxonomy of security metrics will help the actual process of developing feasible composite metrics, even for complex situations. Taxonomies are frequently used for the classification of objects into a hierarchical structure, commonly displaying parent-child relationships. A hierarchical taxonomy is a tree structure of classifications for a given set of objects; at the top of this structure is a root node that applies to all objects. Taxonomies are a tool for developing an organized structure for the management of complex phenomena. We argue that a well-defined taxonomy can be used to enhance the composition of feasible security metrics all the way from business management down to the lowest level of technical detail. Security and trust metrics can be obtained at different levels within an organization or a technical system. Detailed metrics can be aggregated and rolled up to progressively higher levels (a minimal sketch of such a roll-up follows below). As Yee [32] states, a multi-faceted or multi-dimensional security measure is needed. This measure can be composed of metrics found at different applicable levels of the metrics taxonomy, collected according to an interest profile of the metrics user. The actual definition of composite metrics and profiling is not within the scope of this study.
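The roll-up idea mentioned above can be sketched as a tree of metric nodes whose values are aggregated toward the root. This is our illustration under stated assumptions: the node names loosely follow Figure 1, but the scores and the plain-average aggregation scheme are invented; a real composite metric would need a justified weighting.

```python
# Minimal sketch of hierarchical metric roll-up (assumed example):
# a taxonomy node aggregates the values of its child metrics upward.

from dataclasses import dataclass, field

@dataclass
class MetricNode:
    name: str
    value: float | None = None          # leaf metrics carry a measured value
    children: list["MetricNode"] = field(default_factory=list)

    def rollup(self) -> float:
        """Aggregate child values upward; a plain average is assumed here."""
        if not self.children:
            return self.value if self.value is not None else 0.0
        return sum(child.rollup() for child in self.children) / len(self.children)

root = MetricNode("Business-level security metrics", children=[
    MetricNode("Security metrics for ISM", children=[
        MetricNode("Operational security metrics", value=0.7),
        MetricNode("Management security metrics", value=0.9),
    ]),
    MetricNode("SDT metrics for products, systems and services", value=0.6),
])

print(f"rolled-up score: {root.rollup():.2f}")  # -> 0.70
```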

B. Business-Level Security Metrics

The highest category (root node) of our taxonomy is the security metrics for business management (Fig. 1). Business goals steer security and trust management work, and, accordingly, security and trust metrics should be defined in such a way that they are aligned with the business goals of a company or a collaborating value net of businesses. One way of establishing an overall metrics process is to begin with the business goals and demonstrate the alignment of lower-level security management objectives within that context. Note that in the case of another kind of organization, e.g. a government organization, business goals can be replaced by the major goals that are specific to that organization (e.g. defined by legislation). For example, Basili's [4] Goal/Question/Metric (GQM) approach can be used to establish a metrics process (or program) beginning with the business goals. In GQM, the criteria for business success are identified as questions and, finally, the key criteria are broken down into measures that answer those questions (a hypothetical GQM decomposition is sketched after the list below). Note that, regardless of the methodology used, developing business-relevant metrics requires commitment from business management. The business management metrics can be divided into:

- Security metrics for cost-benefit analysis, containing economic measures such as ROI (Return on Investment),
- Trust metrics for business collaboration: trust is very important in today's complex multi-organization value net environment, and trust metrics are quite a novel field of investigation,
- Security metrics for business-level risk analysis,
- Security metrics for information security management (ISM), and
- Security, dependability and trust (SDT) metrics for ICT products, systems and services.
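As referenced in the list above, a GQM decomposition refines a goal into questions and each question into answering metrics. The following sketch is an assumed example of that structure only; the goal, questions and metric names are invented and do not come from Basili [4] or this paper.

```python
# Illustrative GQM decomposition (hypothetical content): a business goal
# is refined into questions, and each question into metrics that answer it.

gqm = {
    "goal": "Reduce security-related service downtime",
    "questions": [
        {
            "question": "How quickly are vulnerabilities remediated?",
            "metrics": ["mean time to patch", "open critical vulnerabilities"],
        },
        {
            "question": "How often do incidents cause outages?",
            "metrics": ["incidents per quarter", "mean outage duration"],
        },
    ],
}

for q in gqm["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```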

In the following sections, we investigate two subcategories of the Level 0 metrics: security metrics for ISM in the organization and SDT metrics for products and systems.

C. Information Security Management in the Organization

Figure 2 shows the taxonomy of security metrics for information security management in the organization. In principle, we here follow the taxonomy definitions of [26] and [24]. However, we do not detail categories below Level 3, in order to leave room for future refinement and structuring. Security metrics for ISM can be divided into three subcategories: management, operational and information system technical security metrics. In most security metrics approaches, the metrics in this category are structured according to the security control area.

Figure 1. Business-level security metrics (Levels 0 and 1 of the taxonomy). The figure shows the following tree:

- L0: Business-level security metrics
  - L1: Security metrics for cost-benefit analysis
  - L1: Trust metrics for business collaboration
  - L1: Security metrics for business-level risk management
  - L1: Security metrics for information security management (ISM) in the organization
  - L1: Security, dependability and trust metrics for ICT products, systems and services

Figure 2. Security metrics for information security management in the organization (* = profiled subset of Level 1 metrics for products, systems and services). The figure shows the following tree:

- L1: Security metrics for information security management (ISM) in the organization
  - L2: Management security metrics
    - L3: ISM process/program metrics
    - L3: ISM-level risk management
    - L3: Resource and awareness management
  - L2: Operational security metrics
    - L3: Susceptibility of operational controls
    - L3: Effectiveness of operational controls
  - L2: Information system technical security metrics
    - L3: Technical security, dependability and trust metrics*
    - L3: Technical control metrics (incl. logs)

Figure 3. Security, dependability and trust metrics for products, systems and services (Levels 1 to 3 of the taxonomy; Levels 4 and 5 shown in one branch). The figure shows the following tree:

- L1: Security, dependability and trust metrics for products, systems and services
  - L2: Product/system/service life cycle management
    - L3: Conceive; Design; Realize; Service
  - L2: Product/system/service security rating or assurance
    - L3: Evaluation; Testing; Verification; Certification
  - L2: Product/system/service security engineering
    - L3: System-level technical security solution
      - L4 (design-level technical security solution, DL): SW/HW platform security; Application security; Network security
      - L5 (implementation-level technical security solution, IL): SW/HW platform security; Application security; Network security
    - L3: System-level technical risk management

Typical users of this category of metrics are the Chief Information Officers (CIOs) of an organization.

ISM-level management security metrics evaluate the ISM processes or programs and resources. This includes the evaluation of security controls, plans and policies, as well as certification and accreditation activities for the organization. Furthermore, human and technical resources are assessed. Human resource assessment typically concentrates on training, security awareness polls and the evaluation of human resource assignments [23]. Security metrics for ISM-level risk management activities can also be classified into this category.

In operational security metrics we are especially interested in the susceptibility and effectiveness of operational security practices (or controls). Susceptibility metrics assess the infrastructure's threats and vulnerabilities due to its existence in a certain environment [24]. Operational security metrics typically concentrate on incident response, the archiving process and the maintenance process of SW, HW and networking equipment. Furthermore, security documentation, data integrity and contingency planning are evaluated [24].

An organization's information system technical security metrics are a subcategory intended to assess the security of the technical products and systems used in the infrastructure, and the technical security controls. The Level 2 technical SDT (Security, Dependability and Trust) metrics can be a subset or an instance of the Level 1 SDT metrics for product life cycle management. Future work is needed in both of these categories to develop feasible profiling mechanisms that create synergy between the technical part of information security management and the product development activities.

The U.S. National Institute of Standards and Technology (NIST) has recently published several guides for information security management using security metrics. NIST SP 800-26 [26] gives guidelines on the security self-assessment of information technology systems based on the U.S. Federal IT Security Assessment Framework. This publication provides guidance on applying the Framework by identifying 17 control areas. In addition, the guide provides control objectives and techniques that can be measured for each area. NIST SP 800-53A [22] presents assessment methods and procedures representing a minimum level of due diligence for organizations assessing the security controls in their information systems. NIST SP 800-55 [27] provides guidance on how an organization, by using metrics, identifies the adequacy of in-place security controls, policies and procedures. This publication defines three types of information security metrics: (i) implementation metrics to measure the implementation of security policies, (ii) effectiveness/efficiency metrics to measure the results of security services delivery, and (iii) impact metrics to measure the business or mission impact of security activities and events. An example of an implementation metric is the percentage of NIST SP 800-53A control families for which policies exist (a hypothetical calculation of this kind is sketched below). Effectiveness and efficiency metrics are used to monitor the results of security control implementation for a single control or across multiple controls. For example, the percentage of security incidents caused by improperly configured access controls relies on information from or about several controls.
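The implementation-metric example quoted from NIST SP 800-55 above amounts to a simple coverage percentage. The sketch below illustrates that arithmetic only; the family names listed and their policy status are invented, not taken from SP 800-53A.

```python
# Hedged sketch of an implementation metric in the SP 800-55 sense:
# percentage of control families for which policies exist.
# The family list and the True/False policy flags below are hypothetical.

control_families = {
    "Access Control": True,
    "Audit and Accountability": True,
    "Configuration Management": False,
    "Incident Response": True,
    "Risk Assessment": False,
}

covered = sum(1 for has_policy in control_families.values() if has_policy)
implementation_metric = 100.0 * covered / len(control_families)
print(f"{implementation_metric:.0f}% of control families have policies")  # -> 60%
```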

The Federal Information Processing Standards (FIPS) Publication 199 [10] establishes security categories for both information and information systems. The categories are based on the potential impact on an organization should certain events that endanger its information and information systems occur. According to [30], the potential impact can be classified as low, moderate or high. According to Lennon of NIST [20], the universe of possible metrics, based on existing policies and procedures, will be quite large. Metrics must be prioritized to ensure that the final set selected for initial implementation facilitates the improvement of high-priority security control implementation. Based on current priorities, no more than 10 to 20 metrics at a time should be used; this ensures that an IT security metrics program remains manageable.

The Information Security Forum (ISF) [14] is a member-driven non-profit forum, established in 1989, concentrating on developing security standards and metrics. The ISF currently has almost 300 member organizations. It has established the Standard of Good Practice (SOGP) [15] and the accompanying Information Security Status Survey. The survey measures compliance with the SOGP and ISO/IEC 17799 standards. In addition, it offers a benchmark comparison with the other members of the ISF, in total or by business sector. Unfortunately, the survey is available only to members. The ISF has also developed a simpler metric called the Security Health Check.

D. ICT Products, Systems and Services

Probably the most challenging category of our taxonomy is the security, trust and dependability metrics for products, systems and services (see Fig. 3). It is of the utmost importance that we do not limit this category to security behavior but take trust relationships and dependability issues into account too, since all of these phenomena are highly dependent on each other. A very good investigation of the basic concepts and taxonomy of dependable and secure computing is presented by Avižienis et al. [3].

The product, system or service life cycle management subcategory includes SDT metrics for assessing the management of the different phases of the life cycle from the information security perspective. Here we use the terminology Conceive, Design, Realize and Service for the phases of the life cycle. During the Conceive phase, the development of security requirements defines the basis for measuring security later, by comparing the requirements with the actual design or system. The Design phase incorporates activities such as architectural and lower-level design, testing, analysis and validation. Many of these activities include metrics of their own, but in this category we are interested in the management of these activities. As an example of product life cycle security metrics, the Systems Security Engineering Capability Maturity Model (SSE-CMM), ISO/IEC standard 21827 [24], contains security metrics for assessing the maturity of security engineering processes and their results.

The product/system/service security rating subcategory includes metrics for different kinds of assurance activities: evaluation, testing, verification and certification. A number of collaborative governmental and industrial efforts have defined frameworks and standards for the security assurance, rating and assessment of vendor products.

The resulting standards form the basis of evaluations by neutral third parties besides manufacturers and procurers. The most widely known of these efforts is the Common Criteria (CC), the ISO/IEC 15408 international standard [18]. During the CC evaluation process, a numerical rating, the EAL (Evaluation Assurance Level), is assigned to the target product, with EAL1 being the most basic and EAL7 the most stringent level; the ordered nature of this rating scale is illustrated in the sketch at the end of this subsection. Each EAL corresponds to a collection of assurance requirements covering the complete development of a product with a given level of strictness. The CC standard is based on a combination of several other standards for information security, including the TCSEC (Trusted Computer System Evaluation Criteria) [28], ITSEC (Information Technology Security Evaluation Criteria) [16], CTCPEC (Canadian Trusted Computer Product Evaluation Criteria) [8] and FC (Federal Criteria for Information Technology Security) [30]. The TCSEC model was developed for the security solutions of mainframe computer operating systems in the 1980s, and its security performance classifications were based on the security characteristics of different operating systems. The cover of the TCSEC report was orange, which is why it is often referred to as the Orange Book. Many people know only this report, but there are in fact dozens of other reports in the same Rainbow Series. ITSEC is a similar European report with its own color, the White Book; this extensive report supports security evaluation work in large organizations. Interpretations of the TCSEC have been published to apply it to other contexts, such as the TNI (Trusted Network Interpretation of the TCSEC) [29].

The security solution subcategory of the SDT metrics for products, systems and services includes technical SDT metrics used at different abstraction levels during development time. This collection of metrics is used mainly by system security developers and security architects. We divide the security metrics for the product/system/service security solution into two main parts: security metrics for the system-level technical security solution and security metrics for system-level technical risk management. By technical risk management we particularly mean system development-time risk analysis and mitigation work based on threat and vulnerability identification and impact analysis, seen from the technical system perspective. By technical security solution we mean the actual functional and non-functional technical constructs of the system. SDT metrics for the system-level technical security solution can be detailed into respective design-level metrics emphasizing (i) SW/HW platform security, (ii) application security or (iii) network security; these three perspectives should be handled together in order to security engineer the system as a whole. Design-level security engineering metrics can be further detailed into appropriate implementation-level metrics, mainly representing vulnerability metrics.
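As referenced above, EAL ratings form an ordered scale from EAL1 to EAL7, so checking whether an evaluated product meets a required assurance level reduces to an ordered comparison. The following is an illustrative sketch only; the helper function is our invention, not part of the CC standard.

```python
# Minimal sketch: EAL ratings as an ordered scale (EAL1 basic ... EAL7
# most stringent), enabling "does this rating meet the requirement?"
# comparisons. The meets_requirement helper is a hypothetical example.

from enum import IntEnum

class EAL(IntEnum):
    EAL1 = 1
    EAL2 = 2
    EAL3 = 3
    EAL4 = 4
    EAL5 = 5
    EAL6 = 6
    EAL7 = 7

def meets_requirement(rated: EAL, required: EAL) -> bool:
    """A product's evaluated level satisfies any requirement at or below it."""
    return rated >= required

print(meets_requirement(EAL.EAL4, EAL.EAL3))  # -> True
print(meets_requirement(EAL.EAL2, EAL.EAL5))  # -> False
```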

NIST's Software Assurance Metrics and Tool Evaluation (SAMATE) project [6] seeks to help answer various questions on software assurance, tools and metrics. The metrics work carried out in SAMATE concentrates on metrics and measures for the software itself and for SSA (Software Security Assurance) tools. The project is also developing a common enumeration of software weaknesses and flaws.

V. DISCUSSION AND FUTURE WORK

As information security practices and their management have evolved into separate disciplines, it is challenging to try to construct bridges between the different perspectives: information security management, product security management, application security, network security and so on. However, especially in the ICT product industry, such bridges would be very beneficial in order not to waste resources reinventing the wheel. One way to construct bridges is to start developing composite (or hierarchical) security-oriented metrics that are aligned with common objectives and offer feedback between different user groups.

In our taxonomy there are three main categories: business management (L0), information security management (L1), and product, system and service development (L1). How can these practices be better connected with respect to information security issues? In an organization, the highest level where information security thinking is needed is business management (or, in other kinds of organizations, the management of the organization's defined goals). This level is common to both organizational information security management (ISM) and product security management. ISM practices in an organization manage the organization's information security processes, information systems and resources (personnel). From the technical point of view, ISM practices typically address the Service phase of the information system development life cycle. In other words, information systems are used and maintained under ISM practices, but usually not conceived, designed or realized. The information system itself might consist of networks, network equipment and various kinds of computation devices. Product, system or service development, by contrast, needs to security engineer the system under development in all phases of the life cycle: Conceive, Design, Realize and Service. These systems might be used as part of an organization's information system. Could metrics developed by product development teams and intended for the Service phase be used in ISM too? Could metrics for other phases of product development be used in ISM? Could product development learn from the experience of ISM in the form of common metrics? Looking at our taxonomy, it is easy to find similar questions that need to be answered by the information security, application security, network security, trust management and dependability research communities.

The feasibility of measuring security and of developing security metrics that represent actual security phenomena has been criticized in many contributions. In designing a security metric, one has to be conscious of the fact that the metric simplifies a complex socio-technical situation down to numbers or partial orders. Bellovin [5] remarks that defining metrics is hard, if not infeasible, because an attacker's effort is often linear, even in cases where exponential security work is needed. Another source of challenges is that luck plays a major role [7], especially in the weakest links of information security solutions.

Our future work includes further refinement and iteration of the initial taxonomy, and the development of security metrics offering evidence from the top of the taxonomy to the bottom and vice versa.

VI. CONCLUSIONS

We have presented a novel taxonomy for information security-oriented metrics, especially addressing the security needs of companies that produce information and communication technology products, systems or services. The taxonomy can be used as a basis for developing composite or hierarchical security, trust and dependability metrics that are aligned with common business objectives and, on the other hand, offer realistic security evidence for different user groups: business management, information security management, product, system and service security management, and technical system developers.

REFERENCES
[1] J. I. Alger, "On Assurance, Measures, and Metrics: Definitions and Approaches," Proc. of the Workshop on Information Security System Scoring and Ranking (WISSSR), ACSA and MITRE, Williamsburg, Virginia, May 2001 (proceedings published 2002).
[2] ANSI/ISA-TR99.00.01-2004, "Security Technologies for Manufacturing and Control Systems," American National Standards Institute, Washington, D.C., 2004.
[3] A. Avižienis, J.-C. Laprie, B. Randell and C. Landwehr, "Basic Concepts and Taxonomy of Dependable and Secure Computing," IEEE Trans. on Dependable and Secure Computing, Vol. 1, No. 1, Jan.-Mar. 2004.
[4] V. R. Basili and D. M. Weiss, "A Methodology for Collecting Valid Software Engineering Data," IEEE Transactions on Software Engineering, SE-10(6):728-738, November 1984.
[5] S. M. Bellovin, "On the Brittleness of Software and the Infeasibility of Security Metrics," IEEE Security & Privacy, Jul./Aug. 2006, p. 96.
[6] P. E. Black, "SAMATE's Contribution to Information Assurance," IAnewsletter, Vol. 9, No. 2, 2006.
[7] P. Burris and C. King, "A Few Good Security Metrics," META Group, Inc., Oct. 2000.
[8] Canadian System Security Centre, "The Canadian Trusted Computer Product Evaluation Criteria," Version 3.0e, January 1993, 233 p.
[9] F. El-Hassan, A. Matrawy, N. Seddigh and B. Nandy, "Experimental Evaluation of Network Security Through a Hierarchical Quantitative Metrics Model," Proc. of the 3rd Int. Conf. on Communication, Network and Information Security (CNIS 2006), Cambridge, MA, 2006, pp. 156-164.
[10] FIPS Publication 199, "Standards for Security Categorization of Federal Information and Information Systems," Federal Information Processing Standards Publication, 2004.
[11] R. Henning et al., Proceedings of the Workshop on Information Security System Scoring and Ranking: Information System Security Attribute Quantification or Ordering (Commonly but Improperly Known as "Security Metrics"), ACSA and MITRE, Williamsburg, Virginia, May 2001 (proceedings published 2002).

[12] G. Jelen, "SSE-CMM Security Metrics," NIST and CSSPAB Workshop, Washington, D.C., June 2000.
[13] I3P, Institute for Information Infrastructure Protection, www.thei3p.org.
[14] Information Security Forum (ISF), www.securityforum.org.
[15] Information Security Forum (ISF), "The Standard of Good Practice (SOGP)," http://www.isfsecuritystandard.com/index_ns.htm, 2005.
[16] "Information Technology Security Evaluation Criteria (ITSEC)," Version 1.2, Commission of the European Communities, 1991.
[17] ISO/IEC 9126-1:2001, "Software Engineering - Product Quality - Part 1: Quality Model," International Organization for Standardization, 2001.
[18] ISO/IEC 15408-1:2005, "Common Criteria for Information Technology Security Evaluation - Part 1: Introduction and General Model," International Organization for Standardization, 2005.
[19] ISO/IEC 17799:2005, "Information Technology - Security Techniques - Code of Practice for Information Security Management," International Organization for Standardization, 2005.
[20] E. B. Lennon (Ed.), "IT Security Metrics," ITL Bulletin, August 2003, National Institute of Standards and Technology, 2003.
[21] S. C. Payne, "A Guide to Security Metrics," SANS Institute Information Security Reading Room, June 2006.
[22] R. Ross, A. Johnson, S. Katzke, P. Toth and G. Rogers, "Guide for Assessing the Security Controls in Federal Information Systems," NIST Special Publication 800-53A, 2006.
[23] A. Sademies, "Process Approach to Information Security Metrics in Finnish Industry and State Institutions," VTT Publications 544, 2004, 89 p. + app. 2 p.
[24] N. Seddigh, P. Pieda, A. Matrawy, B. Nandy, I. Lambadaris and A. Hatfield, "Current Trends and Advances in Information Assurance Metrics," Proc. of the 2nd Annual Conference on Privacy, Security and Trust (PST 2004), Fredericton, NB, Oct. 2004.
[25] M. Stoddard et al., "Process Control System Security Metrics: State of Practice," I3P Research Report No. 1, Aug. 2005.
[26] M. Swanson, "Security Self-Assessment Guide for Information Technology Systems," NIST Special Publication 800-26, Nov. 2001.
[27] M. Swanson, N. Bartol, J. Sabato, J. Hash and L. Graffo, "Security Metrics Guide for Information Technology Systems," NIST Special Publication 800-55, Jul. 2003.
[28] United States Department of Defense, "Trusted Computer System Evaluation Criteria (TCSEC)," DoD Standard 5200.28-STD ("Orange Book"), 1985.
[29] United States National Computer Security Center, "Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria," Version 1, NCSC-TG-005, 1987.
[30] United States National Institute of Standards and Technology and National Security Agency, "Federal Criteria for Information Technology Security," Draft Version 1.0, Jan. 1993, 2 volumes.
[31] R. Vaughn, R. Henning and A. Siraj, "Information Assurance Measures and Metrics: State of Practice and Proposed Taxonomy," Proc. of the 36th Hawaii International Conference on System Sciences (HICSS'03), 2003.
[32] B. S. Yee, "Security Metrology and the Monty Hall Problem," Proc. of the Workshop on Information Security System Scoring and Ranking (WISSSR), ACSA and MITRE, Williamsburg, Virginia, May 2001 (proceedings published 2002).
