
Cyber Security Models

Taw and Hoffman's (1994) research indicates that 90% of the world's population will be living in developing countries within the next ten years. According to Marston (2005), these developing nation-states will not be able to carry out regular warfare attacks against the developed world and will resort to whatever capabilities they have at their disposal. Furthermore, Marston (2005) believes that technology will play a great role in waging low-intensity wars. According to Schmidt, "Cyber warfare completely evens the playing field," as developing nations and large nations with a formidable military presence can both launch equally damaging attacks over the Web (Georgia Tech, 2008).
Cyber warfare follows the regular principles of war in several respects, including having an objective, carrying out an offensive attack, maneuverability during the attack, the element of surprise, and simplicity of the operation (Glenn, 1998). However, there are some disparities between physical and cyber warfare. Table 1 presents a compilation of key differences based on Saydjari's (2004) findings.
Table 1. Key differences between physical and cyber warfare (Saydjari, 2004)

|          | Dimensions | Weapons                     | Manifestation                               | Speed                                                                      |
| Physical | Three      | Constrained by physics      | Clear                                       | Humanly perceivable                                                        |
| Cyber    | Hyper      | Non-linear damaging effects | Sometimes clear to detect and assess damage | Often outside the realm of possible human reaction (too slow or too fast)  |

Similar to preparations against physical warfare, cyber defense preparations should include four stages:
1. intrusion detection;
2. possible damage assessment;
3. selection of defense strategy;
4. execution of the strategy.
However, when an attack is underway, it is too late to exercise incident response plans; according to Walcott, one of the participants in the Cyber Storm II cyber exercise, it is important to know ahead of time what escalation plans are available (Australian Government, 2009). Therefore, it is necessary to prepare for a cyber attack beforehand and to have the ability to measure cyber security in place.
Singhal and Ou (2009) identified four enterprise analyses that would benefit from solid security metrics, namely:
1) Differential Analysis: establishing improvements in security configuration
2) Cost Analysis: identifying the price tag for security problems
3) Cost Management: determining the balance between cost assessment and security risk
4) Threat Prioritization: recognizing the highest vulnerability on the enterprise network
Schneidewind (2008) presents two models, risk-based and exponential-based, which he claims can be used as a framework for practitioners and researchers to advance the field. The Merriam-Webster (2010) dictionary defines a model as "a miniature representation of something"; as such, there is an inherent limitation to what a model represents. Schneidewind's risk model presents the three-way relationship between the probability of attack, the probability of vulnerability, and the possible consequences. He further elaborates the risk model with an exponential model, factoring in the time between attacks and the variability of the attacks. However, he also points out that, given the unpredictability of future attacks, it is impossible to validate his model against real-world attacks. Furthermore, Schneidewind's model was built on randomized data instead of empirically measured metrics.
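As a rough illustration of the two ideas, the sketch below combines a multiplicative risk relation with exponentially distributed times between attacks. It is a simplified reading of Schneidewind's models rather than his exact formulation, and the parameter values are assumed for the example only.
```python
import math

def risk_score(p_attack, p_vulnerability, consequence):
    """Multiplicative risk relation: likelihood of an attack, likelihood the
    attack finds an exploitable vulnerability, and the damage if it succeeds."""
    return p_attack * p_vulnerability * consequence

def p_attack_within(t, mean_time_between_attacks):
    """Exponential inter-attack model: probability of at least one attack
    arriving within t time units, given the mean time between attacks."""
    rate = 1.0 / mean_time_between_attacks
    return 1.0 - math.exp(-rate * t)

# Assumed example: attacks arrive on average every 30 days; a vulnerability is
# exploitable with probability 0.4 and a successful exploit costs 100 units.
p_attack = p_attack_within(t=7, mean_time_between_attacks=30)
print(round(risk_score(p_attack, 0.4, 100), 2))   # roughly 8.3
```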
Jennex (2008) proposes using a multi-layered defense mechanism consisting of six barriers, and performing barrier analysis in order to identify barrier placement, maintenance, and evaluation of cyber control effectiveness.
The barrier method is an excellent graphical tool for identifying important cyber assets and placing security mechanisms around them; however, the method does not guarantee that all necessary barriers have been raised prior to an attack.
Wang, Singhal, and Jajodia (2007) use a directed attack graph to depict relationships between network components, representing the significance of a resource in terms of caused damage, reconfiguration costs, and resistance to vulnerability. The authors further define a general framework for creating network security measures based on attack graphs. Singhal and Ou (2009) expand the attack graph by quantifying the risk of remaining vulnerability, as time is often required to address the root cause of a vulnerability.
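The sketch below illustrates the general idea of an attack-graph metric rather than the authors' actual framework: the asset names and exploit probabilities are assumptions, and the metric simply propagates the most likely compromise probability from an entry point to each reachable asset.
```python
# Each edge is (source condition, exploit success probability, resulting condition).
edges = [
    ("internet", 0.8, "web_server"),     # hypothetical web application exploit
    ("web_server", 0.5, "db_server"),    # pivot through SQL injection
    ("web_server", 0.3, "file_server"),  # weak file-share credentials
    ("db_server", 0.6, "domain_admin"),  # credential reuse
]

def path_probabilities(start, edges):
    """Propagate compromise probability along directed attack paths,
    keeping the highest probability found for each reachable asset."""
    best = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for src, p, dst in edges:
            if src == node:
                candidate = best[node] * p
                if candidate > best.get(dst, 0.0):
                    best[dst] = candidate
                    frontier.append(dst)
    return best

print(path_probabilities("internet", edges))
# highest-probability compromise estimate per asset, e.g. domain_admin ~ 0.24
```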
Lu, Tsui, and Park (2008) present anti-spam techniques to fight the social engineering technique of phishing. The authors describe seven techniques, namely spam filtering, remailers, e-postage, hashcash, sender authentication, domain-based authentication, and per-user sender authentication, that can be used to minimize the amount of spam that reaches users, thereby reducing the overall phishing risk.
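Of these, hashcash is the most algorithmic: the sender pays a small CPU cost per message by searching for a stamp whose hash has a required number of leading zero bits, which is expensive to mass-produce but cheap for the receiver to verify. The sketch below is a simplified, hypothetical version of the idea; the real hashcash stamp format carries additional fields such as a date and a random salt.
```python
import hashlib
from itertools import count

def mint_stamp(resource, bits=16):
    """Search for a counter whose SHA-1 digest has `bits` leading zero bits.
    The receiver verifies the stamp with a single hash computation."""
    for counter in count():
        stamp = f"1:{bits}:{resource}:{counter}"          # simplified stamp layout
        digest = hashlib.sha1(stamp.encode()).digest()
        value = int.from_bytes(digest, "big")
        if value >> (160 - bits) == 0:                     # top `bits` bits are zero
            return stamp

print(mint_stamp("alice@example.com", bits=16))
```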

Smith, Shin, and Williams (2008) address metrics for SQL injection input validation vulnerabilities. They expand SQL statement coverage, which is usually used to assess function-level testing, with two SQL injection input validation metrics: target statements and input variables. The authors conclude that even though the relationship between target statements and input variables is not established, using input variables in the computation allows the analyst to add weight to the target statement, thereby assigning a higher threat level to items with multiple input variables.
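The snippet below is a hypothetical illustration of this weighting idea rather than the authors' actual metric: each target statement is weighted by the number of input variables that reach it, so uncovered statements with many inputs stand out as higher threats.
```python
# Hypothetical target statements mapped to the user-controlled input variables
# that reach them, plus whether existing tests currently cover them.
targets = {
    "SELECT * FROM users WHERE name = ?": {
        "inputs": ["username"], "covered": True},
    "SELECT * FROM orders WHERE id = ? AND cust = ?": {
        "inputs": ["order_id", "customer_id"], "covered": False},
}

def weighted_coverage(targets):
    """Weight each target statement by its number of input variables, so
    statements reachable through more user inputs count as higher threat."""
    total = sum(len(t["inputs"]) for t in targets.values())
    covered = sum(len(t["inputs"]) for t in targets.values() if t["covered"])
    return covered / total

print(weighted_coverage(targets))   # 1 of 3 weighted units covered, about 0.33
```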
Arnes (2008) addresses network monitoring and intrusion detection using sensor technologies such as network sniffers and intrusion detection sensors. Arnes supports the view of other security specialists that currently available intrusion detection mechanisms present a challenge, since intrusion assessment applications produce false positives, thereby raising false alarms. In addition, sensor technologies must process up to 10 GB/s of data, which creates bottlenecks in pattern analysis and protocol reassembly.
Aime, Atzeni, and Pomi (2009) developed a model-based, four-step tool suite that allows analysts to perform what-if analysis. Their tool is composed of system and configuration models, risk graphs, data mining analysis, and detection of residual risk. The authors present their risk graph as a fault tree with nodes representing detectable, undetectable, and critical accidents; each accident is further associated with threat degradation and frequency. Aime, Atzeni, and Pomi (2009) present a risk graph that does not rely on the collection of large amounts of empirical data, but rather uses CPU and memory consumption to determine diagnoses of the network condition.
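The authors' node semantics are richer than this, but a minimal fault-tree evaluator conveys how such a risk graph can be computed; the gate types, event names, and probabilities below are assumptions made for the example only.
```python
# A tiny fault-tree evaluator: leaves are basic events with occurrence
# probabilities; gates combine the probabilities of independent children.
def evaluate(node):
    if "p" in node:                          # basic event (leaf)
        return node["p"]
    child_ps = [evaluate(c) for c in node["children"]]
    if node["gate"] == "AND":                # all children must occur
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    p_none = 1.0                             # OR gate: at least one child occurs
    for cp in child_ps:
        p_none *= 1.0 - cp
    return 1.0 - p_none

critical_accident = {
    "gate": "AND",
    "children": [
        {"p": 0.3},                                             # outage goes undetected
        {"gate": "OR", "children": [{"p": 0.1}, {"p": 0.05}]},  # either intrusion path succeeds
    ],
}
print(round(evaluate(critical_accident), 4))   # 0.3 * (1 - 0.9 * 0.95) = 0.0435
```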
Wang, Wang, Guo, and Xia (2009) use a software vulnerability index to reflect the level of security. They use Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS) to calculate the number of vulnerabilities within a software product. The authors then apply their formula for determining the vulnerabilities, weaknesses, and their frequency of occurrence in the software. Finally, the data undergoes average calculations and the resulting percent of weakness is determined. The key to determining the correct percentage lies in being able to identify all possible vulnerabilities within the software product.
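The authors' precise formula is not reproduced here; the following sketch only illustrates the general shape of such an index, averaging assumed CVSS base scores for a product and expressing the result as a percent of the maximum severity.
```python
# Hypothetical vulnerability entries for one software product with assumed
# CVSS base scores on the standard 0-10 scale.
cvss_scores = {"vuln_a": 7.5, "vuln_b": 4.3, "vuln_c": 9.8}

def weakness_percentage(scores):
    """Average CVSS severity normalized to a 0-100 'percent of weakness'."""
    if not scores:
        return 0.0
    return sum(scores.values()) / len(scores) / 10.0 * 100.0

print(round(weakness_percentage(cvss_scores), 1))   # (7.5 + 4.3 + 9.8) / 3 / 10 * 100 = 72.0
```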
Measuring cyber security is a difficult task, and combining the various techniques discussed above into a coherent predictive mechanism is a daunting project. However, there is a great need for cyber security foresight analysis in order to detect, understand, and make good security configuration decisions. Table 2 presents Verendel's (2009) classification of a large number of security publications between 1981 and 2008. The author points out that most publications concentrated on the economic aspects of cyber security, while issues of confidentiality, integrity, availability, and reliability are greatly underrepresented.

Table 2. Verendel's (2009) classification of security publications by perspective and target

| Perspective                            | Framework | System | Threat | Vulnerability |
| Economic                               | 33        | 35     | 23     | 26            |
| Confidentiality/Integrity/Availability |           |        |        |               |
| Reliability                            | 16        | 17     |        |               |
| Other                                  | 10        | 11     | 25     | 15            |
According to Pfleeger (2009), to date no set of coherent metrics or practical framework has been able to provide the necessary breadth and depth of analysis required by organizations to measure their cyber security position. The problem is further complicated by:
- inconsistent technology platforms;
- understanding how individual metrics fit into the larger framework puzzle;
- building measurements from available data rather than from necessary information;
- the need to assess organizational, human, and technical vulnerabilities with some sense of probability;
- the obligation to estimate the types and extent of damage from successful attacks;
- the need to measure the organizational ability to recover from a cyber attack.
Intrusion detection serves as the eyes of the cyber security system. There are two major types of intrusion detection, namely signature-based and anomaly-based detection. Their role is to alert cyber security personnel that a possible attack is underway. However, since signature-based intrusion detection relies on a previously recorded signature of the attack to indicate undesirable behavior, fast-moving attacks, such as the CodeRed worm that propagated within hours (Your Dictionary, 2009), may be too fast for today's systems to detect (Saydjari, 2004). Anomaly detection does not rely on signatures; however, the process consumes considerable system resources. In addition, there is also a risk of a hacker teaching the system that his illegitimate activities are nothing out of the ordinary (Axelsson, 2000).
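Axelsson's base-rate argument can be made concrete with a short calculation: because attacks are rare relative to benign traffic, even a detector with a high detection rate and a low false-positive rate produces alarms that are mostly false. The prevalence and rates below are assumed purely for illustration.
```python
# Worked base-rate example in the spirit of Axelsson (2000).
p_attack = 1 / 10_000        # assumed: one attack per 10,000 audited events
p_alarm_given_attack = 0.99  # assumed detection rate
p_alarm_given_benign = 0.01  # assumed false-positive rate

p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_benign * (1 - p_attack))
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm
print(f"{p_attack_given_alarm:.3%}")   # roughly 1% of alarms correspond to real attacks
```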
Saydjari (2004) points out that knowledgeable and well-informed humans create the best cyber security strategy, and that development focus should be shifted towards automating mundane tasks and building decision-aid tools. By simulating accurate hypothetical models of critical systems, security specialists can learn ahead of time the strategy and tactics for defending against potential attackers. Numerous models can be created depending on organizational needs. There are, however, some limitations associated with modeling and simulation of cyber security, namely:

Limited scope: models represent the network and its components on a much smaller scale; the Finite State Machine (FSM) is one such popular model (Wang, 2005).
Assumptions: certain assumptions must be made in order to constrain the model, and the trick lies in identifying the correct set of assumptions. Verendel (2009) classifies assumptions into four categories:
1. Independence: mainly present in larger models, the independence-of-events assumption is used to break the problem into smaller, workable pieces. Once independence is established, Markov and Poisson processes can be applied to the model.
2. Rationality: by assuming that security experts will act rationally, the model takes the shape of an optimization problem.
3. Stationarity: this assumption removes time-variant parameters for the system, threat, and vulnerability, thereby allowing analysis of a limited number of instances.
4. Additive: mainly present within larger hierarchical models, this assumption holds that the model is comprised of smaller, simpler parts.

Validation: validation allows judgment of the model's reasonableness with respect to the real system. Verendel (2009) classified validation into four categories:
1) Hypothetical: results are not related to an actual event.
2) Empirical: a method of systematic gathering of operational data.
3) Simulation: a method of simulating the target event or system.
4) Theoretical: a set of formal theoretical arguments to validate the results.
Verification: verification ensures that the model does what it is intended to do. Some models, especially simulation models, tend to be large computer programs; therefore, any verification technique used for the development of large computer programs may be useful for the model. Formal verification, however, requires formal specifications and well-defined security flaws as a precondition, which is difficult to satisfy because security flaws are hard to formalize (Wang, 2005).
By their nature, models are more abstract than the systems they represent. As a result, in order to make a model tractable, certain inaccuracies within the model definition may be necessary. Whatever model is created, the measures extracted from it will depend on the accuracy of the model's representation of the real system. Selecting reliable and usable metrics requires a certain level of trust, which is qualitative by nature. According to Wang (2005), there are no mathematical formulas that can be applied to obtain the level of trust.
Cyber security is a large topic, spanning technical, human, and organizational categories. The availability of trustworthy, semi-automated and automated metrics and models is still in the future, making cyber security more of an art than a science and placing higher responsibility on knowledgeable and well-trained cyber security personnel for security configuration design, maintenance, and defense.

References
Aime, M. D., Atzeni, A., & Pomi, P. C. (2008). The Risks with Security Metrics. ACM, pp. 65-70.
Arnes, A. (2008). Large-Scale Monitoring of Critical Digital Infrastructures.
Australian Government Attorney-General's Department. (2009). Cyber Storm III Fact Sheet. ag.gov.au.
Axelsson, S. (2000). The Base-Rate Fallacy and the Difficulty of Intrusion Detection. ACM, pp. 186-205.
Georgia Tech Information Security Center. (2008). Emerging Cyber Threats Report for 2009. GTISC Summit (pp. 1-9). Atlanta: GTISC.
Glenn, W. R. (1998). No More Principles of War? Parameters, US Army War College Quarterly, pp. 48-66.
Jennex, M. E. (2008). Cyber War Defense: Systems Development with Integrated Security.
Lu, H. Y., Tsui, C. J., & Park, J. S. (2008). Antispam Approaches Against Information Warfare.
Marston, D. (2005, May 25). Force Structure for High- and Low-Intensity Warfare: The Anglo-American Experience and Lessons for the Future. Retrieved March 3, 2010, from the Office of the Director of National Intelligence: http://www.dni.gov/nic/PDF_GIF_2020_Support/2004_05_25_papers/force_structure.pdf
Merriam-Webster. (2010). Dictionary.
Pfleeger, S. L. (2009). Useful Cybersecurity Metrics. IEEE, pp. 38-45.
Saydjari, O. S. (2004). Cyber Defense: Art to Science. Communications of the ACM, 47(3), pp. 52-57.
Schneidewind, N. F. (2008). Cyber Security Models.
Singhal, A., & Ou, X. (2009). Techniques for Enterprise Network Security Metrics. ACM, pp. 1-24.
Smith, B., Shin, Y., & Williams, L. (2008). Proposing SQL Statement Coverage Metrics. ACM, pp. 1-8.
Taw, J., & Hoffman, B. (1994). The Urbanization of Insurgency: The Potential Challenge to U.S. Army Operations. Washington, DC: RAND.
Verendel, V. (2009). Quantified Security is a Weak Hypothesis. ACM, pp. 37-49.
Wang, A. J. (2005). Information Security Models and Metrics. ACM, pp. 178-184.
Wang, J. A., Wang, H., Guo, M., & Xia, M. (2009). Security Metrics for Software Systems. ACM, pp. 1-6.
Wang, L., Singhal, A., & Jajodia, S. (2007). Toward Measuring Network Security Using Attack Graphs. ACM, pp. 49-54.
Your Dictionary. (2009). CodeRed I and II Worm. Retrieved March 5, 2010, from Hacker Definition: http://www.yourdictionary.com/hacker/code-red-i-and-ii-worm
