
Goal-Based Safety Standards: Opportunities and Challenges

Tim Kelly, University of York, Heslington, York, YO10 5DD UK

John McDermid, University of York, Heslington, York, YO10 5DD UK

Rob Weaver, University of York, Heslington, York, YO10 5DD UK

Keywords: safety standards, goal-based standards, safety evidence

Abstract

Issue 3 of UK Defence Standard (DS) 00-56, published at the end of 2004, is goal-based and replaces an earlier
issue of the standard which was much more prescriptive. In civil aerospace the standard for ground-based software
(SW01) is also goal-based. This move towards goal-based standards is intended to give system suppliers flexibility
in developing systems and in showing that they are safe, and hence to reduce the likelihood that standards become
increasingly inappropriate as technology progresses. However, this advantage, or opportunity, brings some attendant
challenges.

The challenges which arise include:

• Agreeing on appropriate means, and levels of evidence, for demonstrating safety, especially with new
technology;

• Contracting for a safety program where the set of safety activities and required evidence may not be
determined “up front”.

The paper discusses the opportunities and challenges of goal-based safety standards and proposes an approach to
managing safety programs which seeks to maximize the opportunities whilst containing the associated risks. The
approach is essentially evolutionary, and based on evolving the safety case through the development process.

Introduction

According to the Oxford English Dictionary (OED), a standard is “an authoritative or recognized exemplar of
correctness, perfection or some definite degree of any quality”; thus one would expect standards to be widely
accepted and stable. This is broadly true of standards for, say, physical constants, but is not the case in the area of
safety. Indeed there has been considerable evolution in safety standards over the last fifteen years. In the last five
years a small number of goal-based standards have emerged. This paper investigates this trend and asks whether or
not this might lead to establishing authoritative and stable standards.

There are many varieties of standards; at minimum we can distinguish process and product standards. In safety,
process standards are concerned with how systems are developed and assessed, e.g. methods of safety analysis and
criteria for risk assessment; product standards are concerned with particular system attributes, e.g. levels of
hardware redundancy or colors of displays. Our focus here is on process standards.

Historically many safety process standards have been prescriptive (i.e. tell people what to do) and/or proscriptive
(i.e. tell people what to avoid doing). In contrast, goal-based standards tell people what they need to achieve. At its
simplest, in safety, this is to establish safety requirements, design to meet the agreed requirements, and to show that
the agreed requirements have been met. Traditional standards are generally clear about what needs to be done, but
can be difficult to apply in novel circumstances. Goal-based standards offer more flexibility, but raise a number of
concerns. Most notably they introduce ambiguity about what is required to achieve compliance. The purpose of this
paper is to analyze the challenges and opportunities posed by goal-based standards and to ask whether the adoption
of such standards might help in a move towards the “authoritative or recognized exemplar of correctness” which the
OED says characterizes a standard.

The paper starts by considering why it is difficult to produce safety standards, then considers the evolution of a
number of standards, including Defence Standard (DS) 00-56, before considering the challenges, together with
responses to those challenges, and opportunities posed by the introduction of goal-based standards.

Why are Safety Standards Difficult?

There are large numbers of safety standards in use; Herrmann [1] identifies many of the software safety standards
which were in use at the end of the 1990s. There are many more standards which apply at the system level, or which
focus on particular technologies, and there have been several new standards published since. There are many reasons
for this diversity in standards; we focus here on issues of risk and technology.

Acceptable Risk: Safety is not absolute; a system is deemed safe when the level of risk it poses is acceptable, or
tolerable, in its intended context of use. The level of risk which is judged acceptable, or tolerable, depends on many
factors including:

• Who is put at risk, e.g. adults versus children, or factory workers versus the general public;

• The “dreadness” of the risk, e.g. nuclear hazards are generally less acceptable than those due to more
“everyday” causes such as road traffic, even where the “objective” level of risk is the same or lower;

• Whether or not the risk is taken on voluntarily, e.g. people accept higher risks hang-gliding than they might
with domestic goods;

• With military systems, whether the risks arise in war, peace, or in operations other than war (OOTW).

Thus, determining whether a system is “safe enough” is highly context dependent. For military systems this is
exacerbated by the fact that many different classes of individual may be at risk, and it is necessary to deal with a
range of operational circumstances including trials and OOTW.

For a narrow range of application with little variability in terms of who is put at risk, it may be possible to come up
with a clear set of risk acceptance rules. Arguably civil aerospace falls into this class; certainly guidance documents
such as DO178B [2] and ARP 4761 [3] are clear about acceptable rates of occurrence for different classes of
hazards. In contrast, military standards have to deal with many more situations and tools such as Hazard Risk
Indices (HRI), or risk matrices, give only some of the necessary flexibility. Standards such as DS 00-56 Issue 2 say
that the HRI table is an example, but it is rare for it to be modified for a particular system – in part due to the lack of
guidance on how to adapt the matrices for a given context of use of a system.
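
To make the discussion concrete, the sketch below shows how such an HRI matrix reduces to a simple severity/likelihood lookup. The categories and risk classes are hypothetical, invented for illustration rather than taken from DS 00-56 or any other standard; the point is that the matrix encodes fixed acceptability judgments which, as noted above, are rarely adapted to the context of use.

```python
# A minimal sketch of a Hazard Risk Index (HRI) lookup. The severity and
# likelihood categories and the risk classes below are illustrative only;
# a real matrix would be agreed per project and context of use.

SEVERITIES = ["catastrophic", "critical", "marginal", "negligible"]
LIKELIHOODS = ["frequent", "probable", "occasional", "remote", "improbable"]

# Rows follow SEVERITIES, columns follow LIKELIHOODS.
# "A" = intolerable ... "D" = broadly acceptable (hypothetical classes).
HRI_MATRIX = [
    ["A", "A", "A", "B", "C"],  # catastrophic
    ["A", "A", "B", "C", "C"],  # critical
    ["A", "B", "C", "C", "D"],  # marginal
    ["B", "C", "C", "D", "D"],  # negligible
]

def hri(severity: str, likelihood: str) -> str:
    """Return the risk class for a severity/likelihood pair."""
    return HRI_MATRIX[SEVERITIES.index(severity)][LIKELIHOODS.index(likelihood)]

print(hri("critical", "remote"))  # -> "C"
```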

A safety process standard cannot define acceptable levels of risk if it is to be general (to meet the OED definition).
However, it can define the goal of risk acceptance – obtaining agreement between the relevant parties (typically the
system supplier and customer, or certification authority). This has sub-goals of agreeing risk levels and agreeing
when enough evidence has been produced to show that the risks are at or below those levels.

Technology: Technology changes; in many modern systems “technology refresh” not only occurs during the system
life, but it may occur before the system even enters service. Standards also change, but generally they change
relatively slowly – for example, IEC 61508 [4] took more than ten years to produce and, after six years in issue, it is
already being updated. Prescriptive standards thus run the risk of always being “behind the curve” of technology and
this can have several adverse effects.

First, the standards may not properly regulate new technology, i.e. new technologies may be employed where there
is insufficient evidence to show that they have been employed in a safe manner (with attendant risk of accidents).

Second, beneficial new technologies might not be employed because there is no agreed way of assessing them,
within the confines of the relevant standard.

Third, unnecessary expense may be incurred because inefficient development and assessment techniques are used
when there are better technologies – which the relevant standard proscribes, or does not accept.

A safety process standard cannot define the technologies to be used, or avoided, if it is to be authoritative, hence
stable (to meet the OED definition). However, it could define the goal of providing appropriate and sufficient
evidence of safe use of the technology. This would have sub-goals of agreeing the types and combinations of
evidence which were appropriate for a given technology, in its context of use.

Other Issues: There are many other issues with current standards. One of the most problematic areas is Safety
Integrity Levels (SILs), see for example Fowler [5] and Redmill [6] who indicate some of the difficulties in setting
“sensible” SIL targets using the rules published in standards. There is a further problem not often remarked upon. In
many standards there are four SILs (or DALs in DO178B); these link risk levels or HRI to development processes.
The fact that there are four levels constrains the definition of risk levels, e.g. having five levels would be a problem,
and thus reduces flexibility in defining and reducing risk levels.
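
The constraint can be seen in a small sketch. The mapping below is hypothetical (the level names and assignments are ours, not those of any standard), but it shows the structural problem: with exactly four SILs, a risk scheme with a fifth level has nothing to map to without redefining the whole table.

```python
# Illustrative only: a fixed four-level SIL scheme of the kind found in many
# standards. The names and mapping are hypothetical, not quoted from
# IEC 61508 or DO178B.
SIL_PROCESS = {
    1: "basic development and verification rigour",
    2: "enhanced rigour",
    3: "high rigour",
    4: "highest rigour",
}

RISK_TO_SIL = {"low": 1, "medium": 2, "high": 3, "very high": 4}

def required_process(risk_level: str) -> str:
    # A fifth risk level (say "extreme") raises KeyError: the four-SIL
    # scheme leaves no room for it, which is the inflexibility noted above.
    return SIL_PROCESS[RISK_TO_SIL[risk_level]]

print(required_process("high"))  # -> "high rigour"
```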

For generality, a standard should be silent on matters where there is genuinely no consensus. Thus we would not
recommend having a goal for setting a SIL – but the rule of “being silent” includes not proscribing the use of SILs,
so they may be employed where it is genuinely helpful to do so.

Standards Evolution

Over the past decade many safety process standards have been evolving, and continue to do so.

For example, MilStd 882 has evolved from Issue C [7] in 1996 to Issue D now [8], with work under way on Issue E.
Defence Standard (DS) 00-56 has evolved through three issues in fifteen years (see below for more detail), as have
other associated standards.

In Civil Aerospace, DO178B (strictly a set of guidelines, not a standard) has been stable for some years; however,
annual guidance bulletins have been produced and work is now under way on DO178C (through RTCA Special
Committee 205 / EUROCAE Working Group 71). Similarly updates are being made to the safety process standards
including ARP 4761 (through SAE S18 / EUROCAE Working Group 63).

In a broader context, IEC 61508 is also being updated. All this change leads to a question – why is there such a
volume of activity?

Generally, there have been few changes in declared criteria for risk acceptance, and this does not seem to be a major
driver for change. There have been technological changes, and these are, for example, driving the updates to both
DO178B and ARP 4761. However, there is an additional factor: that of approaches to regulation.

In the USA, the Perry initiative required the removal of Defense standards, where possible. In the case of MilStd
882, the change was one of substantial reduction in scope and detail, rather than wholesale removal. In the UK
Defence arena, there has been a move to make standards “as civil as possible, only as military as necessary”, and
this is a driving factor behind the development of DS 00-56 Issue 3 [9]. In addition, in the UK, the regulation of
ground-based civil air traffic management systems has changed to become explicitly goal-based. This is reflected in
the Civil Aviation Authority procedures, CAP 670 [10], especially the software section known as SW01.

We focus our detailed analysis on DS 00-56 Issue 3 as it is a system standard, not just a software standard.

DS 00-56 Issue 3

DS 00-56 Issue 3 is set out in two parts. Part 1 gives requirements, and Part 2 gives guidance on meeting those
requirements. Part 2 is not mandatory, and other means of meeting the intent of Part 1 are acceptable. Part 1 is a
short standard, with fewer than twelve pages of mandatory requirements. It is based on the thirteen objectives set out
in their entirety below, which are then amplified into “shall statements” setting out specific goals.

a. All relevant safety legislation, regulations, standards and MOD Policy are identified.

b. All activities and products comply with the identified legislation, regulations, standards, MOD Policy and
specific contractual requirements.

c. Safety is considered from the earliest stage in a programme and used to influence all activities and
products. It is essential that safety risks and project risks are managed together.

d. Tasks that influence safety are carried out by individuals and organisations that are demonstrably
competent to perform those tasks.

e. Safety management is implemented as a key element of a harmonised, integrated systems engineering
approach.

f. An auditable Safety Management System is implemented that directs and controls the activities necessary
to ensure that a safe design results from the programme and the system is safe to operate throughout its life
(i.e. the acquisition cycle).

g. A Safety Case is developed and maintained that demonstrates how safety will be, is being and has been,
achieved and maintained.

h. Safety Case Reports, that summarise the Safety Case and document the status of safety management
activities, are delivered as necessary for effective oversight of safety management, demonstrating safety
requirements have been met.

i. All credible hazards and accidents are identified, the associated accident sequences are defined and the
risks associated with them are determined.

j. All identified risks are reduced to levels that are broadly acceptable or, when this is not possible,
tolerable and As Low As Reasonably Practicable (ALARP), unless legislation, regulations or MOD Policy
imposes a more stringent standard.

k. Interfaces between Safety Management Systems, Safety Cases, systems and organisations are identified
and effectively managed.

l. Changes to the operational, technological, legislative and regulatory environment, and any other changes
that may have an impact on safety, are monitored and managed.

m. Defect/failure reports and incident/accident/near-miss reports are monitored; remedial actions necessary to
ensure continued safety are identified and implemented.

It is intended that the above set of objectives is complete in that it identifies all that has to be done to achieve and
demonstrate safety. It is rather broader than the simple statement above – identify safety requirements, design to
meet the safety requirements, provide evidence that the requirements have been met – but that is the core of these
objectives. Objectives a, b and i are concerned with establishing requirements; objective j is concerned with meeting
safety requirements; and objectives g and h are concerned with demonstrating that safety has been achieved. Most of
the others are “ancillary” requirements giving confidence in the core process; objective m is included as the standard
is intended to apply through life.

Individual clauses in the standard expand on the objectives to set out goals. For example, section 10.5 amplifies objective i:

“10.5.1 The Contractor shall carry out Hazard Identification and Hazard Analysis to identify credible hazards and
accidents associated with the system and to determine the related accident sequences.

10.5.2 The Contractor shall demonstrate the adequacy of the Hazard Identification and Hazard Analysis process and
the suitability of the techniques employed.”

There are two other clauses in section 10.5, but they deal with change and with the management of hazards outside
the boundary of responsibility of the Contractor, so the above two clauses are the full set of mandatory requirements
relating to hazard identification. The standard is thus very simple: it sets out goals with the minimum of constraint
on how they might be achieved, but it is not explicit about what is “sufficient” to meet these goals. This is
deliberate.
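
As an illustration of the kind of record that could evidence clause 10.5.1, the sketch below shows one possible shape for a hazard log entry linking a hazard to its accident sequences and determined risk. Nothing here is prescribed by the standard; the structure and field names are our own.

```python
# One hypothetical shape for a hazard log entry supporting clause 10.5.1:
# credible hazards, the associated accident sequences, and the determined
# risk. DS 00-56 does not mandate this structure.
from dataclasses import dataclass, field

@dataclass
class AccidentSequence:
    description: str   # chain of events from hazard to accident
    accident: str      # the credible accident the sequence leads to

@dataclass
class HazardEntry:
    identifier: str
    hazard: str
    sequences: list[AccidentSequence] = field(default_factory=list)
    severity: str = "TBD"    # agreed category, e.g. "critical"
    likelihood: str = "TBD"  # agreed category, e.g. "remote"
    risk_class: str = "TBD"  # e.g. from an agreed risk matrix

entry = HazardEntry(
    identifier="HAZ-007",
    hazard="Uncommanded engine shutdown in flight",
    sequences=[AccidentSequence(
        description="Shutdown at low altitude; safe autorotation not achievable",
        accident="Loss of aircraft")],
)
```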

If DS 00-56 Issue 3 is applied to a military aircraft, it ought to be possible to use the hazard identification schemes in
civil aerospace standards and guidance, e.g. ARP 4761, where appropriate, i.e. these may apply to transport aircraft,
but not fast jets. Similarly, if a project is concerned with a railway system in a naval dockyard, it ought to be possible
to use the relevant standards, e.g. EN 50126 [11]. By leaving open the notion of what is sufficient, it is easier to
follow the “as civil as possible, only as military as necessary” philosophy, but at the cost of some uncertainty and
posing challenges – both technical and managerial.

Challenges and Responses

The general challenge posed by goal-based standards is one of knowing “how to comply”. The approach we
recommend is to build a safety case which includes explicit arguments about compliance. The following safety case
fragment, set out in the well-known Goal Structuring Notation (GSN), illustrates how objectives a and b from DS 00-
56 might be met, here expanding the scope to cover environmental issues as well as human and material loss.

[Figure 1 is a GSN goal structure. Top goal G1: “All applicable standards and legislation met”. It is supported by
three sub-goals: G1.1 “Identify Standards” (all applicable standards and legislation identified), G1.2 “Standards
Complied With” (relevant standards and legislation complied with) and G1.3 “Change in Applicable Standards” (the
effects of any changes in applicable standards on the design, manufacture, use, maintenance and disposal of the
service, system or equipment identified, and appropriate action taken). G1.1 is stated in the context of the relevant
technologies (software, EMC, etc.), types of materials, countries and environmental issues (noise, pollution, etc.),
and is supported by G1.1.1 “Identification” (all standards pertaining to any facet of the development, use,
maintenance or disposal of the system identified and recorded, covering design, development, manufacture,
operation and maintenance) and G1.1.2 “Maintenance of Information” (the list of applicable standards updated on
significant events, e.g. design changes, and periodically, at least annually), with the solution “List of Applicable
Standards” – a record of the standards which apply to the service, system or equipment.]

Figure 1 – Argument Regarding Applicable Clauses
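
For readers without GSN tooling, the structure of Figure 1 can also be captured as plain data. The sketch below is one hypothetical encoding; GSN defines the notation, but the classes and their names here are ours, and the node texts are paraphrased from the figure.

```python
# A hypothetical encoding of the Figure 1 goal structure as data. GSN itself
# is a graphical notation; this is just one way to record it in code.
from dataclasses import dataclass, field

@dataclass
class GsnNode:
    ident: str
    text: str
    children: list["GsnNode"] = field(default_factory=list)

g1 = GsnNode("G1", "All applicable standards and legislation met", [
    GsnNode("G1.1", "All applicable standards and legislation identified", [
        GsnNode("G1.1.1", "All standards pertaining to any facet of the "
                          "development, use, maintenance or disposal of the "
                          "system identified and recorded"),
        GsnNode("G1.1.2", "List of applicable standards updated on "
                          "significant events, e.g. design changes, and "
                          "periodically, at least annually"),
    ]),
    GsnNode("G1.2", "Relevant standards and legislation complied with"),
    GsnNode("G1.3", "Effects of changes in applicable standards identified "
                    "and appropriate action taken"),
])
```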

We now turn to specific challenges. As indicated in the abstract, two of the challenges which arise in meeting goal-
based standards are how to manage the overall safety program, and how to decide on what constitutes sufficient
safety evidence. We discuss these two points but start with a third issue, already introduced above, that of
determining the acceptable risk for a system.

Acceptable Risk: The challenge that arises is how to agree on acceptable or tolerable levels of risk, given that there
is no pre-determined risk acceptance scheme, e.g. a HRI matrix. This is problematic anyway, as agreeing on risk
boundaries is often a difficult moral question. However, it is exacerbated by a number of factors:

• The risk acceptance decisions have an impact on both sides of the contract boundary – on the cost, liability
and engineering activities of the supplier, and the liability and operational performance of the customer;

• The non-safety risks (e.g. project risks) of the various stakeholders are not in harmony, so a decrease in risk
for one party may increase the risk for another;

• Integrated Project Team (IPT) leaders often do not have expertise in safety engineering, but they are
formally responsible for risk-related decisions.

Arguably, these points still apply even without goal-based standards, but the introduction of goal-based approaches
brings these difficulties into sharper focus.

There are two possible responses to this challenge: use existing applicable standards, as outlined above, or negotiate
specific details for the project – but the second of these is not necessarily easy. There is guidance on this, e.g. in the
Health and Safety Executive document “Reducing Risks, Protecting People” [12], but it does not give “solutions”
for specific situations. Our recommended response here is for the supplier and customer (the MoD Duty Holder) to
record the reasons for choosing a specific set of risk criteria, and the rationale for accepting specific risks in the
safety case. Perhaps the only unusual element here is providing explicit justification for the safety criteria. Again,
this can be expressed in GSN, but we omit the details.

The problem of limitations in expertise is quite general, and we return to it below.

Program Management: A major challenge arises in program management. It is common, at least in the UK, to
contract for the development of major systems on a fixed-price basis. This has always been problematic; however,
the difficulty is exacerbated, or at least made more manifest, if contracting against a standard which sets goals
rather than prescribing processes. The first question is how contractors might comply with the standard. One of
the reasons for using a goal-setting approach is to allow contractor to use existing, and therefore (presumably) cost-
effective, practices. There is in fact a choice in the approach, which we can illustrate using the fragment of GSN
shown below (where the black diamond represents a choice in argument support strategies).
[Figure 2 is a GSN fragment. Top goal AC: “Clauses applicable for the Contract complied with explicitly or using
Contractor’s process, with justification”, stated in the context AC1 “Clauses for Contract Scope” (clauses applicable
due to contract scope identified and agreed between Duty Holder and Contractor, ratified by ISA), with the solution
“Applicable Clauses List” – a list of applicable and inapplicable clauses of the standard, with justification for
judging clauses inapplicable. The goal is supported through the justified strategy AC2 “Most Efficient Process Used
Compliance Strategy” (intent of standard met using Contractor’s process where possible, otherwise relevant clauses
met explicitly; where appropriate, all clauses met using Contractor’s process), which leads, via a 1-of-3 choice
(black diamond), to AC2.1 “Full Compliance” (Contractor’s process meets full intent of clauses), AC2.2 “Partial
Compliance” (Contractor’s process partially meets intent of clauses and additional work undertaken to address
shortfall, or shown not to be cost-effective) and AC2.3 “Direct Compliance” (Contractor implements process to
meet requirements of clauses).]

Figure 2 – Choice of Means of Compliance

Here the contractor’s safety process might be strongly based on civil standards, so this supports the “as civil as
possible” philosophy.

If an approach to meeting the standard can be agreed during the bid stage, this can greatly simplify the process of
estimating costs and consequent program management. However, if issues are open, e.g. what constitutes sufficient
safety evidence, it is still likely to be high risk to contract at a fixed price. Here a viable approach would be to phase
the development of the safety case, and to agree the price for the next phase based on agreed arguments set out at
that phase. A possible set of initial phases would be:

• Pre-contract – agreement of scope of safety process and broad means of compliance – Figures 1 and 2
address these issues;

• Initial – agreement of approach to each technical activity in the safety process, and the risk acceptance
criteria and process;

• Preliminary – agreement on the set of hazards, and the strategy for showing that each hazard has been
controlled.

To do this requires a different approach to contracting, whereby prices are agreed on a phased basis and the safety
argument is used as a major input to the pricing process, as sketched below.
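
One hypothetical way of making this concrete (ours, not the standard’s) is to treat each phase as a gate whose exit criterion is an agreed fragment of the safety argument, which then becomes the pricing input for the next phase:

```python
# Hypothetical record of phased contracting: each phase closes with an agreed
# safety-argument fragment, which is the major input to pricing the next
# phase. Phase names follow the list above; the structure is illustrative.
PHASES = [
    {"phase": "pre-contract",
     "agreed": "scope of safety process and broad means of compliance"},
    {"phase": "initial",
     "agreed": "approach to each technical activity; risk acceptance "
               "criteria and process"},
    {"phase": "preliminary",
     "agreed": "set of hazards; strategy for controlling each hazard"},
]

def pricing_basis(completed_phase: dict) -> str:
    # In this scheme, the argument agreed at the end of one phase defines
    # the scope, and hence the price, of the next.
    return f"Price next phase against: {completed_phase['agreed']}"

print(pricing_basis(PHASES[0]))
```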

Safety Evidence: As there are no SILs, nor prescriptions for specific SIL assignments, the question again arises of
how agreement is reached on what constitutes sufficient evidence. If the system of concern is based on a civil
platform, and is developed to civil standards, then there are three cases to consider:

• The civil process applies exactly, i.e. the assumptions behind the civil clearance about operational usage
and environment apply equally in the military environment;

• A variant of the civil process, i.e. the usage is close to that assumed in the civil process, but some aspects of
the usage require new evidence, which can be gained within the civil process;

• Military specific, i.e. the usage is unlike any that would be cleared in a civil process.

We briefly illustrate what might be done in the second case, then consider the military specific scenario.

Imagine that we are concerned with a military variant of a civil helicopter, and our focus is on the engine controller,
or FADEC (Full Authority Digital Engine Controller). Further, assume that the helicopter and FADEC are cleared
through the FAA and the European authorities (EASA), but that there are a number of changes to usage:

• The helicopter has a different equipment fit, but the all-up weight, power consumption of electrical
systems, etc. are the same as in the civil usage;

• The operational profile includes “hot and high”, a much greater risk of sand, salt water and ice ingestion,
and there is a new risk of plume ingestion (from stores).

These operational differences require some modification to the FADEC software:

• Modification to the control laws for “hot and high” operation;

• Anti-surge logic added to the FADEC to deal with plume ingestion; this is simply a software change
(modifications to the control laws) implemented without changing the sensor set.

Further assume that the software is developed to DO178B Level A, and that the software changes are handled using
standard practices (a large part of the DO178B accomplishment summary). The safety argument might be as follows:
[Figure 3 is a GSN argument. Top goal G0: “Use of engine in new environment is justified”, supported via strategy
S0 “Argument through current applicability of existing assessment for historical flight regime and new evidence
covering additional operational scenarios”, in the context of the existing assessment (C0) and the additional
operational scenarios (C1). G0 decomposes into G1 “Existing assessment applies in new environment”, supported
by G1.1 “All situations where changes invalidate certification identified” and G1.2 “No other aircraft changes; civil
clearance still applies”, and G2 “All changes have been shown to be safe”, supported by G2.1 “All software changes
properly handled in accomplishment summary (AS)”, G2.2 “Engine test regime addresses all changes” and G2.3
“Non-conformances in accomplishment summary which were acceptable in civil scenarios are acceptable in the
military scenario”.]

Figure 3 – Safety Argument for change of operational use


This is an example of the “as civil as possible, only as military as necessary” philosophy. Whilst this might seem an
obvious approach, previous UK MoD practice would probably have required expensive retrospective analysis of the
software against the requirements of the software safety standard DS 00-55 [13] (now superseded by DS 00-56
Issue 3).
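
A simple sketch of the partition underlying Figure 3 follows; the scenario names and clearance envelope are invented for illustration, not taken from any actual certification. It separates operational scenarios covered by the existing civil assessment (goal G1) from those requiring new evidence for the software changes (goal G2).

```python
# Illustrative only: partition military operational scenarios into those
# covered by the existing civil clearance (Figure 3, goal G1) and those
# needing new evidence for changes (goal G2). All data here is invented.
CIVIL_CLEARANCE_ENVELOPE = {"temperate", "cold", "maritime transit"}

MILITARY_SCENARIOS = {
    "temperate": None,                        # no change; civil evidence applies
    "hot and high": "modified control laws",  # software change, new evidence
    "plume ingestion": "anti-surge logic",    # software change, new evidence
}

covered = [s for s, change in MILITARY_SCENARIOS.items()
           if s in CIVIL_CLEARANCE_ENVELOPE and change is None]
needing_evidence = {s: change for s, change in MILITARY_SCENARIOS.items()
                    if change is not None}

print(covered)            # scenarios where the existing assessment applies
print(needing_evidence)   # changes to be shown safe, e.g. via DO178B practices
```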

Where software is military specific, e.g. if the helicopter alluded to above were adapted to carry out low-level flying
based on a terrain database, then appropriate evidence is required. DS 00-56 Issue 3 identifies some principles, e.g.
diversity in evidence to avoid “single errors” in the evidence undermining the overall safety argument. However, it
leaves the argument approach quite open.

Observations: There is a broader challenge hinted at above – finding the necessary skills and expertise to enable the
potential benefits of goal-based standards to be realized. Arguably, in a Defence context, most of the suppliers have
the requisite technical skill to manage a goal-based process. However, it requires similar levels of skill and expertise
in the customer community. This is not always to be found, and preserving the skills and ensuring consistent
decision-making in long-running projects will be a challenge. There is no simple response to this challenge, but it
seems likely that there will need to be three elements of any response:

• Education and training for all sides, especially customers, including IPTs. (The authors provided such
education and training for the CAA and some of the airports they regulate as part of their move to goal-
based standards);

• Active support for projects (IPTs) will be necessary to enable them to make the judgments required to
comply effectively with the standard and to provide continuity when staff change;

• Development of guidelines on a sector/technology specific basis for evidence deemed acceptable by the
MoD and deemed practical by industry.

None of these responses to the challenges of DS 00-56 Issue 3 are necessarily easy, but all are practical.

Opportunities

Many opportunities arise from adopting goal-based standards, and these provide a technical motivation for the
introduction of this approach. The opportunities include:

• Ability to use new technologies, if suitable arguments can be presented – for example this might include
the use of neural networks (which many standards proscribe) if using a development and assessment
philosophy such as that presented by Kurd [14];

• Ability to use existing civil systems and practices, where they are appropriate – including avoiding
expensive retroactive application of static code analysis to meet the requirements of DS 00-55;

• Ability to undertake safety work more cost-effectively, by allowing contractors to use existing established
safety practices;

• Fewer difficulties in operating international safety projects – DS 00-56 Issue 3 is compatible with MilStd
882D (and early drafts of MilStd 882E); a difference is that DS 00-56 Issue 3 asks for a safety case, but the
core requirements of safety management are very similar;

• The possibility of having a stable standard, which is widely applicable, and which is not affected when
technology changes – although it will need to be supported by technology and domain specific guidance to
cope with changes in good/best practice in industry.

Overall, the opportunity is to be able to implement more cost-effective and flexible safety processes, with minimal
deviation between defence and civil practices. The last point is very significant – considerable difficulty and cost
arises from variation in standards over time, between countries and between nations – indeed it can be seen as
moving towards the notion of a standard as defined by the OED.

Conclusions

There are currently very many system safety standards. To some extent, it is legitimate to have different standards
for different sectors and different technologies, but one cannot help but think that we have a “Tower of Babel”,
rather than necessary diversity. A good safety process standard would be simple, and contain stable, general
principles. Arguably, MilStd 882D most closely matches this notion – but it is rather too limited in what it says, not
addressing issues such as competency, safety cases, etc. – the latter being important in UK practice.

An over-arching aim in producing DS 00-56 Issue 3 was to produce a simple standard which can stand the test of
time. If it achieves this, it will be because the set of objectives identified above is necessary and sufficient for
effective safety processes. Are these the “right” objectives to achieve this aim? Only time will tell. However,
it is worth observing that the set of principles identified in work for the Australian Department of Defence [15] is
very similar to the objectives set out in DS 00-56 Issue 3. Further, the key practices in the safety and security
extensions to the CMMI [16] are also quite similar (and the differences with the CMMI are largely due to the
inclusion of security). This similarity is encouraging, and it may be that there is consensus on these issues in the near
future.

The adoption of a goal-based approach to safety has benefits and presents opportunities for future projects – mainly
in the area of flexibility and avoiding unnecessary costs. However, there are a number of challenges, many of which
we believe can be met by intelligent evolution of a safety case throughout a development program. Perhaps the
biggest challenge is the need to change commercial and contractual practices to enable projects to be costed in an
incremental manner, pricing the new phase as the old one ends. The MoD is currently working on contract terms for
the application of DS 00-56 Issue 3 which, we hope, will enable this form of contractual flexibility. Another challenge is
to ensure that the customer has enough understanding of safety issues to be able to play a proper part in decision
making, and the evolution of the safety case. This challenge is exacerbated for military projects where there is a
regular “turn-over” of personnel.

The opportunities and challenges can be thought of as advantages and disadvantages. Taking this view, we believe
that the advantages of goal-based standards outweigh their disadvantages, but we expect the transition to be difficult.
Perhaps one of the most useful things which could be done to smooth the transition would be to undertake a case
study, or to publish results from early applications of goal-based standards, to give a clearer idea of what compliance
with goal-based standards entails, and thus remove one of the sources of uncertainty in their use.

References

[1] Herrmann, D., Software Safety and Reliability, IEEE Computer Society Press, 1999.

[2] RTCA/DO178B: Software Considerations in Airborne Systems and Equipment Certification, RTCA Inc., 1992.

[3] ARP 4761: Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems
and Equipment, Society of Automotive Engineers, 1996.

[4] IEC 61508: Functional Safety of Electrical/Electronic/Programmable Electronic Safety Related Systems,
International Electrotechnical Commission, 1998.

[5] Fowler, D., Application of IEC 61508 to Air Traffic Management and Similar Complex Critical Systems –
Methods and Mythology, in Lessons in System Safety, Proceedings of the Eighth Safety-Critical Systems
Symposium, Springer Verlag, 2000.

[6] Redmill, F., Safety Integrity Levels – Theory and Problems, in Lessons in System Safety, Proceedings of the
Eighth Safety-Critical Systems Symposium, Springer Verlag, 2000.

[7] MilStd 882C System Safety Program Requirements (Change Notice 1), US DoD, 1996.

[8] MilStd 882D Standard Practice for System Safety, US DoD, 2002.

[9] DS 00-56 Issue 3: Safety Management Requirements for Defence Systems, Part 1: Requirements, Ministry of
Defence, 2004 (Published at Interim status).
[10] CAP 670: Air Traffic Services Safety Requirements, Civil Aviation Authority, 1998.

[11] CENELEC EN 50126: Railway applications – The specification and demonstration of dependability, reliability,
availability, maintainability and safety (RAMS), European Committee for Electrotechnical Standardisation, 1997.

[12] Reducing Risks, Protecting People (R2P2), Health and Safety Executive, HMSO, 2001.

[13] DS 00-55 Issue 2: Requirements for Safety Related Software in Defence Equipment, Ministry of Defence, 1996
(now superseded by [9]).

[14] Kurd, Z., Kelly, T.P., Safety Lifecycle for Developing Safety-critical Artificial Neural Networks, in Proceedings
of the 22nd International Conference on Computer Safety, Reliability and Security (SAFECOMP'03), 2003.

[15] Lindsay, P., Improved acquisition processes for safety-critical systems in the Australian Department of Defence,
in Proc. of the 6th Australian Workshop on Safety Critical Systems and Software, Conferences in Research and
Practice in Information Technology Vol. 3, Australian Computer Society, 2001.

[16] Ibrahim, L., et al., Safety and Security Extensions for Integrated Capability Maturity Models, US FAA,
September 2004.

Biography

J. A. McDermid, Ph.D., Professor of Software Engineering, Department of Computer Science, University of York,
York, UK, telephone - +44(0)1904 432726, facsimile - +44(0)1904 432708, e-mail – john.mcdermid@cs.york.ac.uk

John McDermid has been Professor of Software Engineering at the University of York since 1987 where he runs the
High Integrity Systems Engineering (HISE) research group. HISE studies a broad range of issues in systems,
software and safety engineering, and works closely with the UK aerospace industry. Professor McDermid is the
Director of the Rolls-Royce funded University Technology Centre (UTC) in Systems and Software Engineering and
the BAE SYSTEMS-funded Dependable Computing System Centre (DCSC). He also runs a collaborative research
programme with BAE Systems, QinetiQ and Rolls-Royce and has recently started a research programme in system
safety for Airbus, in conjunction with ONERA in Toulouse and OFFIS in Oldenburg. He has extensive experience
as a consultant, including advising the MoD on the development of DS 00-56 Issue 3. He is author or editor of 6
books, and has published about 300 papers.

T.P. Kelly, D.Phil., Lecturer, Department of Computer Science, University of York, York, UK, telephone -
+44(0)1904 432764, facsimile - +44(0)1904 432767, e-mail – tim.kelly@cs.york.ac.uk

Dr Tim Kelly is a Lecturer in software and safety engineering within the Department of Computer Science at the
University of York. He is also Deputy Director of the Rolls-Royce Systems and Software Engineering University
Technology Centre (UTC) at York. His expertise lies predominantly in the areas of safety case development and
management. His doctoral research focussed upon safety argument presentation, maintenance, and reuse using the
Goal Structuring Notation (GSN). Tim has provided extensive consultative and facilitative support in the production
of acceptable safety cases for companies from the medical, aerospace, railways and power generation sectors.
Before commencing his work in the field of safety engineering, Tim graduated with first class honours in Computer
Science from the University of Cambridge. He has published a number of papers on safety case development in
international journals and conferences and has been an invited panel speaker on software safety issues.

R. A. Weaver, Ph.D., Teaching Fellow, Department of Computer Science, University of York, York, UK, telephone
- +44(0)1904 434739, e-mail – rob.weaver@cs.york.ac.uk

Dr Rob Weaver is a Teaching Fellow in the Department of Computer Science at the University of York. Rob has
been a full-time academic since 1999. Rob’s research concerns the effect of software on system safety and the
construction and presentation of arguments in safety cases. This work has covered how evidence-based software
safety arguments can be constructed and how the assurance of safety arguments can be determined and presented.
He is a member of RTCA SC-205/EUROCAE WG-71 concerned with the update of DO178B/ED12B. He lectures
on the advanced MSc in safety critical systems engineering as well as presenting on industrial courses in System
Safety Engineering and Management in the UK and Europe.
