
Effective and Practical Risk Management Options for Computerised System Validation

R.D. McDowall*
McDowall Consulting, 73 Murray Avenue, Bromley, Kent BR1 3DJ, UK

Summary
Risk management and risk assessment for computerised systems validation is a key regulatory issue following the Food and Drug Administration's (FDA) re-assessment of the 21 Code of Federal Regulation (CFR) Part 11 regulations (Electronic Records and Electronic Signatures final rule). This paper reviews the GXP (i.e. Good Laboratory Practice (GLP), Good Clinical Practice (GCP) and Good Manufacturing Practice (GMP)) regulatory requirements and associated guidance documents and then focuses on the International Standards Organisation (ISO) 14971 risk management process and vocabulary for an overall risk management framework.

Several risk analysis methodologies are presented and assessed for their applicability for computerised system validation (CSV). The conclusion is that one methodology does not fit all situations and the prudent professional should select the best methodology applicable for the problem at hand. Finally, an overall risk management process flow for CSV is presented and discussed based on two questions: 'Do I need to validate the system?' and if so, 'How much work do I need to do?' A single integrated document is presented as an alternative to a full V-model for the validation of lower risk computer systems. Copyright © 2005 John Wiley & Sons, Ltd.

Key Words: Computerised System Validation; Risk Management; Risk Assessment; Risk
Analysis; Hazard Analysis and Critical Control Points (HACCP); Hazard Analysis and
Operability Analysis (HazOp); Fault Tree Analysis (FTA); Failure Mode Effect Analysis (FMEA);
Functional Risk Analysis (FRA); BS7799 IT Risk Assessment; NIST SP800-30 Risk Assessment;
Integrated Validation Document

*Correspondence to: R.D. McDowall, McDowall Consulting, 73 Murray Avenue, Bromley, Kent BR1 3DJ, UK. E-mail: r d mcdowall@compuserve.com

Copyright © 2005 John Wiley & Sons, Ltd. Qual Assur J 2005; 9, 196–227. DOI: 10.1002/qaj.339

Introduction

In 2002, the Food and Drug Administration (FDA) adopted a risk-based approach to regulatory compliance in pharmaceutical manufacturing when they started a review of their overall approach under the Good Manufacturing Practices (GMPs) for the 21st Century programme [1]. As part of this programme, 21 Code of Federal Regulation (CFR) Part 11 was reassessed, the scope narrowed and industry was encouraged to adopt a risk-based approach to interpretation of the regulation and validation of systems [2]. Under this approach, risk assessment and risk management are key, but emerging, components of computerised system validation (CSV), using the approach outlined in ISO 14971 [3].
In order to understand the regulatory rationale for risk assessment and risk management, this paper reviews the current regulations and guidances for industry issued by healthcare agencies. In addition, the currently available guidances written by the industry in collaboration with regulatory agencies on the subject are reviewed. The aim is to provide a concise summary of regulatory requirements and industry guidance upon which to base a risk management approach for CSV.

Next, the topic of risk management and its associated vocabulary is introduced and discussed. It is important to state that risk management is not a one-off process nor separate from CSV and the system development life cycle, but it is a continuous process and must be integrated within the overall lifecycle of system development. Under an overall risk management approach, there is the choice of risk analysis methodology. Several of the currently available methodologies are presented and discussed in the context of their suitability and applicability for CSV and information technology (IT) risk assessment.

Finally, a practical approach is suggested for risk management within CSV that attempts to answer the two key questions. The first question is: Does the system need validating? If yes, the second question is: How much validation is needed? A simple process flow is presented to give an overall context for the two questions. To answer the second question fully, one of two approaches is suggested: either full V-model validation (with subsequent adoption of one of the risk analysis methodologies discussed above) or a reduced validation for low risk systems using a single integrated document.

The purpose of this paper is to present and debate that different risk analysis methodologies are more appropriate to a given situation than a single one-size-fits-all approach.

Regulations and Guidance for Managing CSV Risk

In this section, the key sections from regulations, regulatory agency guidance documents or industry guidance are highlighted and discussed. The recent driver for risk management started with FDA's adoption of risk-based approaches to regulatory compliance in 2002. In 2003, the Agency started a review of 21 CFR Part 11 [4] (Electronic Records and Electronic Signatures) where the scope of the regulation was narrowed and risk-based validation was suggested. Therefore, the purpose of this part of the paper is to derive an overall approach to the risk management of validation of any computerised system operating in a regulated environment before starting any work. Failure to do this can result in over- or under-engineering any validation effort.

What do the regulators say?

FDA Guidance on Part 11 Scope and Application

In the section on validation the following statement is written [2]:

We recommend that you base your approach on a justified and documented risk assessment and a determination of the potential of the system to affect product quality and safety, and record integrity.

This one sentence has started a major change in the approaches used to validate computerised systems. However, as can be seen below, it now brings the FDA into line with the European Union (EU) GMP approach to CSV.

EU GMP Annex 11

In existence since 1992, clause 2 [5] of EU GMP Annex 11 states:

The extent of validation necessary will depend on a number of factors including the use to which the system is to be put, whether the validation is to be prospective or retrospective and whether or not novel elements are incorporated.
The two main factors that determine the extent of validation from this clause of the regulation are:

* What does the system do?
* Is the software commercial off-the-shelf (COTS), configurable COTS software or custom coded?

The combination of a customised system and automation of a high regulatory risk operation will require the most validation effort; the corollary is that a COTS application undertaking a low regulatory risk task should entail less validation effort.

International Conference on Harmonisation (ICH) Q7A

The ICH GMP regulations for Active Pharmaceutical Ingredients (APIs) [6] cover computerised systems mainly in section 5.4. Of direct interest to this debate are the following three clauses:

5.40: GMP related computerised systems should be validated. The depth and scope of validation depends on the diversity, complexity and criticality of the computerised application.

5.41: Appropriate installation qualification and operational qualification should demonstrate the suitability of the computer hardware and software to perform assigned tasks.

5.42: Commercially available software that has been qualified does not require the same level of testing. If an existing system was not validated at the time of installation, a retrospective validation could be conducted if appropriate documentation is available.

Again, a similar approach to EU GMP is advocated: the depth and scope (equivalent to 'extent of validation' in EU GMP) depends on what the system automates and the nature of the software. However, Q7A goes further by saying that commercially available software that has undergone installation qualification (IQ) and operational qualification (OQ) (as outlined in 5.41) does not require the same level of end user (performance qualification (PQ)) testing as a customised system.

FDA Quality System Regulation (21 CFR Part 820)

This is the GMP [7] for the medical device industry and it became effective in 1997. As it is a relatively recent regulation, there are specific requirements for validation of software used in the medical device itself or computer applications used either in the production of the medical device or the organisation's quality management system (QMS).

Design controls: Section 820.30(g)

Design validation. Each manufacturer shall establish and maintain procedures for validating the device design. . . . Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions. Design validation shall include software validation and risk analysis, where appropriate. . . .

Production and process controls: Section 820.70(i)

Automated processes. When computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol. All software changes shall be validated before approval and issuance. These validation activities and results shall be documented.

To help interpret these regulations, the Center for Devices and Radiological Health (CDRH) and the Center for Biologics Evaluation and Research (CBER) jointly produced a Guidance for Industry entitled 'General Principles of Software Validation' [8] – see below.
FDA General Principles of Software Validation

This Guidance for Industry [8], published by the FDA in 2002, is, in the opinion of the author, currently the best document on software validation written by the Agency. The essentials of risk management are contained in two main sections:

Section 4.8:

Validation coverage should be based on the software's complexity and safety risk – not on firm size or resource constraints. The selection of validation activities, tasks, and work items should be commensurate with the complexity of the software design and the risk associated with the use of the software for the specified intended use. For lower risk devices, only baseline validation activities may be conducted. As the risk increases additional validation activities should be added to cover the additional risk.

Section 6.1: How much validation is needed?

* The extent of validation evidence needed for such software depends on the device manufacturer's documented intended use of that software.
* For example, a device manufacturer who chooses not to use all the vendor-supplied capabilities of the software only needs to validate those functions that will be used and for which the device manufacturer is dependent upon the software results as part of production or the quality system.
* However, high-risk applications should not be running in the same operating environment with non-validated software functions, even if those software functions are not used.
* Risk mitigation techniques such as memory partitioning or other approaches to resource protection may need to be considered when high-risk applications and lower risk applications are to be used in the same operating environment.
* When software is upgraded or any changes are made to the software, the device manufacturer should consider how those changes may impact the 'used portions' of the software and must reconfirm the validation of those portions of the software that are used (see 21 CFR §820.70(i)).

Again, the extent of validation is dependent on the documented use of the software (there is a direct reference to a written specification document being available in 'documented intended use of that software'). The use of commercial software is acceptable, and if a function in the application is not used it need not be validated, provided that the use of the system is not high-risk (e.g. a Class III medical device). For these high-risk medical devices the software needs to be validated fully, as a malfunction may be life threatening. Equally so, for low-risk devices baseline validation activities need to be conducted.

Pharmaceutical Inspection Cooperation Scheme (PIC/S) Guidance for Computerised Systems

The PIC/S have published guidance on 'Good Practices for Computerised Systems in GXP Environments' [9] that provides good advice for risk management.

Section 4.3:

For GXP regulated applications it is essential for the regulated user to define a requirement specification prior to selection and to carry out a properly documented supplier assessment and risk analysis for the various system options.

Section 23.7:

GXP critical computerised systems are those that can affect product quality and patient safety, either directly (e.g. control systems) or the integrity of product related information (e.g. data/information systems relating to coding, randomisation, distribution, product recalls, clinical measures, patient records, donation sources, laboratory data, etc.). This is not intended as an exhaustive list.

PIC/S calls for a risk analysis of the documented system components and functions.
The guidance also notes that systems can impact product quality or patient safety directly or indirectly via the quality of information output. This is important to note as PIC/S is more detailed than the FDA's Part 11 guidance [2], which simply states that risk assessment needs to be performed on a system's impact on product quality, safety and record integrity but does not define what it means in any further detail.

Industry Guidance Documents

There are two industry guidance documents released in 2005 that are useful to consider in relation to risk. These are the draft ICH Q9 consensus guideline for quality risk management [10] and the good automated manufacturing practice (GAMP) Good Practice Guide for risk-based compliant electronic records and signatures [11].

ICH Q9 Quality Risk Management

This document has reached the second step of the ICH process and was released for consultation in March 2005. As the contents of this document could change before full adoption, the reader is advised to read the latest version available.

The purpose of the document is to serve as a foundational or resource document that is independent yet supports other ICH Quality documents by providing the principles and examples of tools of quality risk management. It proposes a systematic and formal approach to risk management but also recognises that ad hoc informal processes can be acceptable. However, in the context of computerised systems, the formal approach is discussed in this paper. The overall process flow for risk management is similar to that from ISO 14971, but for clarity the ISO process flow in Figure 3 will be used for consistency throughout this paper as there are some differences with ICH Q9. The great advantage of the current version of the document is its listing of the risk analysis methodologies in section 5 and the references in section 8, where it is interesting that the GAMP Guide [12] is not mentioned.

GAMP Risk-Based Approach to Compliant Electronic Records and Signatures

This GAMP Good Practice Guide [11] is the replacement for an older Guidance [13], issued under the original interpretation of 21 CFR Part 11, and is intended to supplement the existing GAMP Guide version 4 [12]. The Good Practice Guide aims to provide guidance about record integrity, security and availability of records throughout the records retention period. There is a relatively comprehensive interpretation of global predicate rule regulations to help interpret 21 CFR Part 11 under the FDA's guidance on Part 11 Scope and Application [2]. The risk management approach advocated in this best practice guide is the assessment of system risk and record risk.

The risk analysis approach advocated by the guide is simply a reprint of the GAMP Guide Appendix M3, which contains the risk assessment based on Failure Mode Effect Analysis (FMEA). However, in the introduction to the Good Practice Guide, it is noted that other methodologies can be used.

Regulatory Requirements and Guidance Summary

As presented and discussed above, the regulations are very explicit about CSV risk management; the extent of validation should be based on two main factors:

1. The functions that are automated by the system.
2. The nature of the software used in the system.

Therefore, to justify your extent of validation for any specific computer system, your approach needs to have a documented risk assessment to mitigate and manage the overall risk posed by the computerised system within acceptable and documented levels. However, with all regulations the regulators say what they want but, to avoid being prescriptive, not how to do it.

The industry guidance through GAMP and ICH provides a framework to carry out the risk assessment and mentions some of the risk analysis methodologies that can be used, including ISO 14971.
International Standards Organisation (ISO) and Risk Management

The FDA Part 11 Scope and Application guidance [2] references ISO 14971 [3] as the basis for risk assessment. Note that references in guidances for industry are informative and not mandatory in that they provide examples and relevant information rather than define the definitive approach. Alternative approaches are acceptable providing that applicable regulations are met. This ISO standard presents a framework for risk management for medical devices. Therefore, we will discuss the terms and definitions used in this standard, which also further references ISO Guide 73 [14]. When we complete this section, you will realise that the FDA actually require risk management rather than just risk assessment (Figure 1).

[Figure 1. Risk management terminology and relationships from ISO Guide 73 [15]: risk management comprises risk assessment (risk analysis, consisting of source identification and risk estimation, plus risk evaluation), risk treatment (risk avoidance, risk optimization, risk transfer and risk retention), risk acceptance and risk communication.]

Vocabulary issues

However, before we can discuss risk management in the context of computer validation, we need to have a common vocabulary for risk management as the regulations use different terms without defining what they mean. For example, risk assessment is used by the FDA in the Part 11 guidance [2], risk analysis in 21 CFR Part 820 [7] and risk analysis by the PIC/S guidance [9]. Do they want the same end result or are they different? In short, we do not know as there appears to be insufficient advice offered in these regulations and guidance documents. Therefore, we need to discuss and agree upon a common vocabulary for risk management and here is where ISO 14971 [3] and ISO Guide 73 [14] enter the scene.

ISO Guide 73 and ISO 14971: Risk Management Definitions

The following definitions are taken from ISO 14971 [3] and these should be read in conjunction with Figure 3 adapted from ISO Guide 73 [14]:

* Risk management: The systematic application of management policies, procedures and practices to the tasks of analysing, evaluating and controlling risk. From Figure 3, this is the overall process that is the subject of this paper.
* Risk assessment: The overall process of a risk analysis and risk evaluation. This is the major sub-process and comprises two elements, risk analysis and risk evaluation, as shown in Figure 3. This is the stated requirement of the FDA [2].
* Risk analysis: The systematic use of available information to identify hazards and estimate the risk.
* Risk evaluation: Judgement, on the basis of risk analysis, of whether a risk that is acceptable has been achieved in a given context.
* Risk: Combination of the probability of occurrence of harm and the severity of that harm.
* Harm: Physical injury or damage to the health of people, or damage to property or the environment. Note this is for a medical device; this needs to be interpreted as the consequences of a software error or malfunction of the system.
* Severity: Measure of the possible consequences of a hazard.
* Hazard: Potential source of harm.
* Risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, acceptable levels. Note that not all risks can be eliminated, but they are mitigated to within acceptable levels. What is acceptable will be determined by the operating environment and the functions that the computerised system automates.

Aims of Risk Management

The aim of the overall risk management process is shown in Figure 2. It is to take all the identified risks of a computer system and reduce them by mitigation activities, using different approaches or design, so that the residual risk is within manageable or acceptable limits. There are a number of options that can be used:

* Risk assumption: Accepting the potential risk and continuing to operate the IT system without any additional actions.
* Risk avoidance: Avoiding the risk by eliminating the risk cause and/or consequence, such as the addition of design features or procedural controls that prevent the risk from occurring.
* Risk limitation: Limiting the risk by implementing controls that minimise the adverse impact of a threat's exercising a vulnerability (e.g. use of supporting, preventive, detective controls) or by authorising operation for a limited time during which additional risk mitigation by other means is being put into place.
* Risk transference: Transferring the risk by using other options to compensate for the loss, for example purchasing insurance against the threat in carefully defined circumstances.

[Figure 2. Outcome of a risk management process: the total system risk is reduced by the risk management process through transferring, mitigating and accepting risk, leaving a residual risk within acceptable limits.]

It is important to understand that not all risks are at the same level. That is why the majority of risk analysis methodologies rank risk and deal with the highest priority/severity first, and may often leave lower risks as they are within an acceptable level.

There are additional definitions from IEEE Standard 1540 [16] (Software life cycle processes – Risk management) that should be included for consideration here:

* Acceptability: The exposure to loss (financial or otherwise) that an organisation is willing to tolerate from risk.
* Likelihood: A quantitative or qualitative expression of the chances that an event will occur.

It is important that acceptability be included in these definitions as it summarises succinctly the residual risk of a computerised system. Typically the downside could be misinformation or regulatory citations of a poorly validated system.

ISO 14971: Risk Management for Medical Devices

This standard provides an overall risk management framework to identify, analyse, mitigate and accept risk for medical devices.
Within this framework there are further definitions to consider; however, as will be pointed out below, there is not always a 1:1 relationship between ISO Guide 73 and ISO Standard 14971:

* Intended use/intended purpose: Use of a product, process or service in accordance with the specifications, instructions and information provided by the manufacturer. For a computerised system or software application, this means the user requirements specification (URS) or equivalent documents; even for a custom coded system, there needs to be a specification. This is imperative and non-negotiable as it is the starting point of the whole risk management process. It is still surprising that many people working in regulated industries fail to see the need for a properly written requirements specification from either the business or regulatory perspectives.
* Residual risk: Risk remaining after protective measures have been taken.
* Risk management file: Set of records and other documents, not necessarily contiguous, that are produced by a risk management process. This is the documented evidence required by the FDA and other regulatory agencies. It is important to ensure that the risk management file is integrated within the overall process of CSV.

Risk Analysis

The input to this process is a statement of intended use of the computerised system, i.e. a user requirements specification (URS) or equivalent document. As requirements may change during development or are refined, this is the major reason why risk assessment needs to be reviewed and updated. From the requirements, potential hazards are identified and the probabilities of risk for each hazard are estimated. The aim here is to ask two simple questions:

1. What can go wrong?
2. If something does go wrong: what is the probability of it happening and what are the consequences? [17]

The aim is to anticipate problems in the design phase to prevent them occurring in the operational system and improve the reliability of any computer system.

Risk Evaluation

Following on from the analysis phase, the evaluation process is in essence very simple: it asks the question does the risk need to be mitigated or not? If the answer is no, then the risk is accepted and nothing further is required. However, if there is a requirement for mitigation, then the risk moves into the next stage of the process: risk control. Typically only high-risk factors will pass to the next stage; however, this decision depends on the criticality of the project in question.

Risk Control

Once the high-risk tasks have been highlighted, it is then possible to prepare plans and countermeasures to overcome the risk. Note that it is not always possible to eliminate a risk as this may be impossible or require too much effort. However, sufficient work needs to be done to ensure that the impact of any identified risk is managed and is acceptable. For example, modifying the URS or functional design of specific features or functions of a system may be one way of controlling a defined risk. Equally so, implementation of user training may also be a method of avoiding or transferring a risk. Ultimately, the final outcome of this process is where risk has been reduced to an acceptable level that is documented appropriately.

Note that this is risk treatment in ISO Guide 73 and covers topics such as risk avoidance, risk optimisation, risk transfer and risk retention; risk control here also includes risk acceptance. Equivalent processes occur in ISO 14971; however, the framework diagram in Figure 3 does not make this clear immediately.
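As a concrete illustration of the evaluation and control decision described above, the short sketch below models a single entry in a risk register and applies the simple accept-or-mitigate logic. The RiskItem structure, the three-point scales and the acceptability threshold are assumptions made purely for illustration; they are not taken from ISO 14971 or any of the cited guidance.

    from dataclasses import dataclass

    SCALE = {"low": 1, "medium": 2, "high": 3}  # assumed three-point scale

    @dataclass
    class RiskItem:
        hazard: str           # what can go wrong?
        probability: str      # low / medium / high
        severity: str         # low / medium / high
        mitigation: str = ""  # design change, procedure, training, etc.

        def needs_mitigation(self, acceptable_score: int = 3) -> bool:
            # Risk evaluation: accept the risk if the combined score is within
            # the documented acceptability limit, otherwise pass it to risk control.
            return SCALE[self.probability] * SCALE[self.severity] > acceptable_score

    # Example: a high-severity, medium-probability hazard passes to risk control.
    item = RiskItem("Calculation error in result transformation", "medium", "high")
    if item.needs_mitigation():
        item.mitigation = "Verify calculations in qualification testing; lock configuration"

The multiplication of the two scores is simply one way of expressing the combination of probability and severity; any documented and justified scheme would serve the same purpose.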
Risk Management File

As the project progresses, a body of information about how the project risk has been managed and mitigated is amassed; it is important that this information is not forgotten or ignored.
[Figure 3. The risk management process adapted from ISO 14971 [3]: risk analysis, risk evaluation, risk control and the risk management file, with risk reassessed throughout; risk analysis and risk evaluation together form risk assessment, and all of these elements together form risk management.]

Reuse and update the information: it can be used to feed back into the risk cycles as shown in Figure 3. This portion of the risk management process incorporates the element of risk communication from ISO Guide 73.

Continuous process

Risk management is a continuous process that needs to be conducted at least early in the validation project, using outline specifications of intended purpose, and later in the project when the intended purpose has been completed. The reason is that a project starts with a high degree of uncertainty and hence high risk. As it progresses, uncertainty in some areas is reduced but in others it can increase, hence the need for repeating the risk assessment and planning approaches to counter any newly identified risks.

Possible Risk Analysis Methodologies for CSV

The choice of the risk analysis methodology is left by both the FDA and ISO to the individual organisation. There are a number of methodologies listed in ICH Q9 [10] and ISO 14971 [3]. The following are potential risk analysis methodologies that could be used within the ISO 14971 risk management framework; each will be discussed and their applicability for CSV assessed and summarised at the end of this section:

* Hazard analysis and critical control points (HACCP).
* Hazard and operability (HazOp) analysis.
* Fault tree analysis (FTA).
* Preliminary hazard analysis (PHA).
* FMEA.
* Failure mode, effect and criticality analysis (FMECA).

In addition there are other methodologies that have been used either for CSV or for IT infrastructure risk assessment, for example:

* Functional risk analysis (FRA).
* BS7799 risk assessment (guidance document PD 3002).
* NIST SP800-30 risk management guide for IT systems.

Currently the main emphasis for risk management in CSV is the GAMP methodology (similar to FMEA), which is suitable for complex and bespoke computer systems. However, other approaches are also acceptable and we will discuss some other main methodologies that could be used.

The key point to make with these methodologies is that they should be used in a predictive mode (anticipating events) rather than a reactive one (analysis after an event). As such, a methodology such as root cause analysis is not applicable to CSV risk management as it is applied after a 'significant event' to analyse it, to find the fundamental causes of the event and to implement changes to prevent a reoccurrence [18]. The focus of CSV risk management is on prevention by eliminating problems, leaving risk at an acceptable level.

Risk analysis entry criterion: a complete URS

To complete a successful risk assessment you must ensure that you have a URS or equivalent specification that reflects your intended use of any new or updated computerised system before you start. A URS is the key document around which all further risk management and validation work is predicated. If the URS is not complete or accurate then the risk analysis will be incomplete and will need to be updated as the definition of the system proceeds.
This is also important for an upgrade of an existing system, where new features need to be evaluated carefully and incorporated into the requirements documentation.

Team approach to risk assessment: performing the assessment

It is important to realise that a single individual typically cannot conduct a risk assessment; it is a multidisciplinary team approach. The team membership will vary with the system being implemented or developed, but a core team will consist of:

* Users of the system.
* Technical implementers (e.g. IT and/or engineering).
* Validation.
* Quality assurance.
* Project manager.

The team should be relatively small, say five, seven or nine people, so that it performs well rather than getting bogged down with detail, which can happen with larger groups. The odd number is deliberate so that there is an inbuilt majority. One member of the team should be a facilitator of the group, to ensure that every person can have their say, to avoid the group being dominated by personalities and to facilitate a consensus on risks being reached. The process can vary from brainstorming (FMEA/FMECA) to selection of one of two options (FRA).

Typically a room is dedicated for the risk assessment workshop and attendees should not be distracted by calls from their work colleagues, to ensure that their focus is on the risk assessment. The workshop scope needs to be defined and agreed in advance of the start. It is advantageous to start the workshop in the morning when people are most alert and before issues can have impacted them. A minimum of two workshops is needed, with time between them so that the workshop output can be written up and circulated for review.

Depending on the methodology used, either flip charts (alternatives are whiteboards with photocopy facilities or computer whiteboards) or a data projector showing the starting document templates are used to capture the workshop output. The URS or other specification document will be used as the input and to stimulate discussion and debate. Depending on the size of the group, the facilitator can collate the workshop output or another team member can act as the scribe for the workshop.

Hazard analysis and critical control points (HACCP)

HACCP is a systematic, proactive and preventive method for assuring product quality, reliability and safety that was developed by Pillsbury Foods to provide food for the astronauts of the NASA space programme. The FDA has adopted this methodology and has applied it to seafood (since 1995) and fruit juice (since 1998) and is expanding it to other food areas over which it has control. The US Department of Agriculture also uses HACCP for meat and poultry production. There is an FDA guidance document [19], and failure to conform to HACCP guidelines is the source of many warning letters on the FDA website for seafood and other food processors.

The methodology is a structured approach to analyse, evaluate, prevent and control the risk or the adverse consequences of identified hazards during the production and distribution of food products. The methodology consists of seven steps [10]:

1. Conduct a hazard analysis and identify preventive measures for each step of the process.
2. Determine the critical control points.
3. Establish critical limits for each of the control points.
4. Establish a system to monitor the critical control points.
5. Establish the corrective action(s) to be taken when monitoring indicates that the critical control points are not in a state of control.
6. Establish a system to verify that the HACCP system is working effectively.
7. Establish a record-keeping system.
The ICH Q9 document identifies the potential areas of use as mainly in process manufacturing and notes that HACCP is most useful when product and process understanding is sufficiently comprehensive to support identification of critical control points [10]. The key phrase here is 'sufficiently comprehensive': for many commercial applications this information may not be available, making the methodology inappropriate and possibly limiting its applicability to computerised systems validation.

HazOp analysis

A HazOp risk analysis is a bottom-up methodology that is intended to identify hazards and operability problems in the design of a process facility or plant. The key concept of HazOp is that the team investigates how a plant might deviate from the designed intent, thus identifying the hazards. The risk analysis methodology is based on the principle that several experts with different backgrounds can interact and identify more problems when working together as a team than when working separately and then combining their individual results. The methodology uses guide words to structure the analysis, such as:

* More or high, higher or greater (implying an excess).
* No, none, less or low, lower or reduced (implying insufficiency).

These guide words are compared with the intended design parameter, for example high + flow = high flow [20].
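To make the guide-word mechanism concrete, the short sketch below pairs guide words with design parameters to generate candidate deviations for the team to consider; the parameter names and the simple pairing are invented for illustration and are not drawn from the HazOp references cited above.

    # Minimal sketch of HazOp-style deviation generation, assuming a simple
    # pairing of guide words with design parameters (illustrative values only).
    GUIDE_WORDS = ["more", "no", "less"]
    PARAMETERS = ["flow", "temperature", "data transfer"]  # assumed examples

    deviations = [f"{word} {param}" for word in GUIDE_WORDS for param in PARAMETERS]
    # e.g. 'more flow', 'no data transfer' - each candidate deviation is then
    # examined by the team for causes, consequences and safeguards.
    print(deviations)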
If, in the process of identifying problems during a HazOp study, a solution becomes apparent, it is recorded as part of the HazOp result; however, care must be taken to avoid trying to find solutions which are not so apparent, because the prime objective of the HazOp is problem identification.

The success or failure of an individual HazOp analysis depends on several factors:

* The completeness and accuracy of drawings and other data used as a basis for the study (translating for CSV, these are the specification documents for the system).
* The technical skills and insights of the team.
* The ability of the team to use the HazOp approach as an aid to their imagination in visualising deviations, causes and consequences.
* The ability of the team to concentrate on the more serious hazards which are identified and not become sidetracked with minutiae.

ICH Q9 [10] notes that HazOp can be applied to manufacturing processes, equipment and facilities for drug substances and drug (medicinal) products. It has also been used primarily in the pharmaceutical industry for evaluating process safety hazards. However, as noted above, one of the keys to success if applied to computer validation is the completeness of the system specification; therefore, it is probable that this risk analysis methodology will remain with drug manufacturing processes and not be applied widely to computerised systems.

Fault tree analysis (FTA)

The FTA method is a top-down risk analysis methodology that assumes failure of the functionality or an undesired consequence of a product or process. FTA identifies various combinations of faults and possible events occurring in the system. Typically, FTA evaluates each failure one at a time and the results are displayed as a tree of faults with the corresponding failure mode. FTA relies on the process understanding of the experts to identify causal factors. At each level in the tree, combinations of fault modes are described with logical operators or gates [20]:

* AND gate (output event occurs if all input events occur simultaneously).
* OR gate (output event occurs if any one of the input events occurs).

The gates are linked with events to describe the actions, such as:

* Circle event (basic event with sufficient data).
* Diamond (undeveloped event).
* Rectangle (event represented by a gate).
* Triangle (transfer symbol).

It is the combination of gates and events that allows the top-down risk analysis of a design. ICH Q9 [10] notes that FTA can be used to establish the pathway to the root cause of a failure. FTA can be applied while investigating complaints or deviations to fully understand their root cause and to ensure that intended improvements will fully resolve the issue and not lead to other issues (i.e. solving one problem leads to the causing of a different one). FTA is a good method for evaluating how multiple factors affect a given issue and it is useful both for risk assessment and in developing monitoring programs.
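The gate logic lends itself to a simple computational sketch. The fragment below evaluates a two-level fault tree for an assumed laboratory data loss scenario; the tree structure, event names and probabilities are invented purely to illustrate how AND and OR gates combine basic events, and are not taken from the cited standards.

    # Minimal fault tree sketch: basic events feed OR/AND gates bottom-up.
    # Event names and probabilities are assumed values for illustration only.
    basic_events = {
        "disk_full": 0.01,
        "network_down": 0.02,
        "backup_missed": 0.05,
        "acquisition_error": 0.01,
    }

    def or_gate(*probs):
        # Output event occurs if any independent input event occurs.
        p = 1.0
        for x in probs:
            p *= (1.0 - x)
        return 1.0 - p

    def and_gate(*probs):
        # Output event occurs only if all independent input events occur.
        p = 1.0
        for x in probs:
            p *= x
        return p

    # Top event: data loss requires both a storage fault AND a failed safeguard.
    storage_fault = or_gate(basic_events["disk_full"], basic_events["network_down"])
    safeguard_fails = or_gate(basic_events["backup_missed"], basic_events["acquisition_error"])
    data_loss = and_gate(storage_fault, safeguard_fails)
    print(f"Estimated probability of top event (data loss): {data_loss:.4f}")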
Preliminary hazard analysis (PHA)

PHA is a risk analysis methodology that uses previous experience or knowledge of hazards and failures to identify future ones that might cause harm. It can also be used for estimating their probability of occurrence for a given activity, facility, product or system. The method consists of:

* Identification of the possibilities that the risk event happens.
* Qualitative evaluation of the extent of possible injury or damage to health that could result.
* Identification of any remedial measures that could be implemented.

ICH Q9 [10] notes that PHA might be useful when analysing existing systems or prioritising hazards where circumstances prevent a more extensive technique from being used. The methodology is most commonly used early in the development of a project when there is little information on design details or operating procedures; thus, it will often be a precursor to further studies. However, from the perspective of CSV it is unlikely that this methodology will be used widely as there are more established ones used with computerised systems. If the system is unique, then PHA may be inappropriate as it relies on a baseline of previous experience.

Failure mode effects analysis (FMEA)

FMEA provides for an evaluation of potential failure modes for processes and the likely effect on outcomes and/or product performance. Once failure modes are established, risk reduction can be used to eliminate, reduce or control the potential failures. It relies on product and process understanding. FMEA methodically breaks down the analysis of complex processes into manageable steps. It is a powerful tool for summarising the important modes of failure, factors causing these failures and the likely effects of these failures.

The methodology was primarily developed for material and equipment failures, but has also been used for human error, performance and software errors [20]. The process has three main aims:

* The recognition and evaluation of potential failures and their effects.
* The identification and prioritisation of actions to eliminate the potential failures or reduce their chances of occurring.
* The documentation of these identification, evaluation and corrective activities so that product quality improves over time.

FMEA was developed in the late 1940s for the US military and is described in US Military Standard 1629a [21]. The approach has been further developed for the US car industry as the Society of Automotive Engineers Standard J-1739 [22]; as such it is suitable for complex designs and processes. This methodology is the basis of the approach described in GAMP Guide Version 4, Appendix M3 [12] and is suggested for use for risk assessment in computer validation.

The broad FMEA approach will be described in this paper, but the reader must realise that there are both quantitative and qualitative modes of this methodology that can be applied depending on what outcome is required [17]; therefore the reader is encouraged to read further to gain more understanding. The following books are recommended:
* McDermott, Mikulak and Beauregard [23] for a simple overview of FMEA.
* Stamatis [17] for the FMEA vocabulary, organising the exercise, the detail of the technique and its application to several industries including software. Chapter 11 is devoted to FMEA of software and provides a number of questions to consider. There is also a detailed appendix on CD-ROM with further information and document templates. This book is the personal choice of the author on cost-benefit grounds.
* Dyadem Engineering Corporation [20] for FMEA of medical devices, including a CD-ROM containing trial copies of software for FMEA analysis.

The FMEA risk analysis methodology is shown as an overall process flow chart in Figure 4. The starting point for the process, as with other risk analysis methodologies, is the URS or equivalent specification document(s). Note that the numbers in the list below correspond to the equivalent stages in the process flow chart of Figure 4:

[Figure 4. Process flow for a failure mode and effects analysis: from the specification, 1. identify causes of failure; 2. assess severity of impact; 3. assess probability of occurrence; 4. classify risk; 5. assess failure detection by the system; 6. prioritise risk; 7. mitigate unacceptable risks.]

1. Identify potential risks in the system from both a business and a regulatory perspective. This is done by considering the functions documented in the URS. For each function identified the team should list all possible causes of failure (here is where paranoia comes into play and the process needs to be carefully facilitated). Equally so, if a function poses no apparent risk, this should also be documented.

2. Next, for each failure mode identified above, the severity of the failure needs to be assessed. Here there are differences in the way that FMEA can be used, either in quantitative or qualitative mode. The qualitative mode is preferable for the nature of the systems used in the pharmaceutical and medical device industries, unless the system is life threatening, when a more rigorous approach should be considered. Qualitative FMEA is outlined in the GAMP Guide Appendix M3 [12] and is discussed in this paper. For information on quantitative mode FMEA, the book by Stamatis [17] is recommended. The severity of the failure is assessed as one of the following options:
   * Low: System malfunctions without impact.
   * Medium: System malfunctions without impacting quality issues.
   * High: Significant impact (health issues, regulatory issues, data integrity/quality compromised).

3. For each failure the team needs to assess the probability of occurrence and this is usually one of the following:
   * Low.
   * Medium.
   * High.
In the early stages of a system design, there may not be enough knowledge of how the computerised system may handle and identify failures and errors. Therefore it may be prudent to allocate a medium probability of occurrence that can be refined later in the life cycle as the design is developed and refined.

4. Then the risk of each failure is classified by plotting the probability of occurrence (low, medium or high) versus the severity of the failure (also low, medium or high) using a 3 × 3 Boston grid as shown in Figure 5. The risk is classified into one of three levels:
   * Level 1 (high/high, medium/high or high/medium) = high-impact risk.
   * Level 2 (low/high, medium/medium or high/low) = medium-impact risk.
   * Level 3 (medium/low, low/low or low/medium) = low-impact risk.
   Each risk is formally classified using this grid to identify the most important ones versus the lower impact ones in a very structured manner.

5. Now the probability of the system detecting the failure is assessed as one of the following options:
   * Low: Detection is unlikely.
   * Medium: Moderate probability of detection.
   * High: Malfunction detection highly likely.

6. Prioritise the risk by plotting the risk classification (level 1, 2 or 3) versus the probability of detection (low, medium or high likelihood) in a second 3 × 3 Boston grid as shown in Figure 6. Now risk is prioritised as:
   * High: Low/level 1, medium/level 1 or low/level 2.
   * Medium: Low/level 3, medium/level 2 or high/level 1.
   * Low: Medium/level 3, high/level 3 or high/level 2.

7. Mitigation of unacceptable risks is now undertaken. The key question is what is unacceptable, which of course depends on the system and the functions it automates, but generally speaking the high priority risks need to be mitigated in some way through design modifications or procedural means. As many of the medium risks as possible should be addressed, and the low risk ones are generally left if the risk is acceptable. All of these decisions should be documented in the appropriate validation documentation.

Figure 5. Boston grid for classifying a risk (FMEA Step 4): severity of impact versus probability of occurrence.

    Probability of occurrence:   Low      Medium   High
    Severity High:               Level 2  Level 1  Level 1
    Severity Medium:             Level 3  Level 2  Level 1
    Severity Low:                Level 3  Level 3  Level 2

Figure 6. Boston grid used to prioritise risk (FMEA Step 6): risk classification versus probability of detection.

    Probability of detection:    Low      Medium   High
    Level 1:                     High     High     Medium
    Level 2:                     High     Medium   Low
    Level 3:                     Medium   Low      Low

Some FMEA schemes can have an additional stage to identify if the failure is due to the system itself or an operator. This minor refinement of the approach can be used to highlight issues outside the system such as user training and documentation of procedures.
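The two grids translate directly into small lookup tables. The sketch below encodes the classification and prioritisation mappings from Figures 5 and 6 so that a (severity, occurrence, detection) assessment yields a risk priority; it is offered only as an illustration of the grid logic described above, not as part of the GAMP Guide or any FMEA standard.

    # Classification grid (Figure 5): (severity, probability of occurrence) -> level.
    CLASSIFY = {
        ("high", "low"): 2,   ("high", "medium"): 1,   ("high", "high"): 1,
        ("medium", "low"): 3, ("medium", "medium"): 2, ("medium", "high"): 1,
        ("low", "low"): 3,    ("low", "medium"): 3,    ("low", "high"): 2,
    }

    # Prioritisation grid (Figure 6): (level, probability of detection) -> priority.
    PRIORITISE = {
        (1, "low"): "high",   (1, "medium"): "high",   (1, "high"): "medium",
        (2, "low"): "high",   (2, "medium"): "medium", (2, "high"): "low",
        (3, "low"): "medium", (3, "medium"): "low",    (3, "high"): "low",
    }

    def fmea_priority(severity: str, occurrence: str, detection: str) -> str:
        """Steps 4-6: classify the risk, then prioritise it by detectability."""
        level = CLASSIFY[(severity, occurrence)]
        return PRIORITISE[(level, detection)]

    # Example: a high-severity, medium-probability failure that the system is
    # unlikely to detect comes out as a high priority risk for mitigation.
    print(fmea_priority("high", "medium", "low"))  # -> 'high'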
ICH Q9 [10] notes that FMEA can be used to prioritise risks and monitor the effectiveness of risk control activities. FMEA can be applied to equipment and facilities, and might be used to analyse a manufacturing process to identify high-risk steps or critical parameters. When used at the early stages of the specification of a system it can also be used to improve the design of the system to mitigate or remove potential failure modes.

Limitations of FMEA

The limitations of FMEA can be summarised as follows [20]:

* Analysis of complex systems that have multiple functions consisting of a number of components can be tedious and difficult.
* Compound failure effects cannot be analysed.
* It can be costly and time consuming, unless carefully controlled.
* Successful completion requires expertise, experience and good team skills.
* Incorporating all possible factors influencing the system, such as human errors and environmental impacts, can make the analysis lengthy and requires a thorough knowledge of the characteristics and performance of the components of the system.
* Dealing with data redundancies can be time consuming.

In addition, the use of FMEA in the validation of commercially available or configurable software (GAMP categories 3 and 4) is overkill, as neither the design documentation for the application nor the source code is available to the validation team. As such the methodology is far more suitable for complex process equipment or category 5 software.

Failure mode, effect and criticality analysis (FMECA)

FMEA can be extended to incorporate an investigation of the degree of severity of the consequences, their respective probabilities of occurrence and their detectability, and might then become an FMECA. Again, to perform such an analysis, the product or process specifications should be established.

ICH Q9 [10] notes that FMECA application in the pharmaceutical industry will mostly be utilised on failures and risks associated with manufacturing processes; however, it is not limited to this application. The output of an FMECA is a relative risk 'score' for each failure mode that is used to rank the modes on a risk basis.

Functional risk assessment (FRA)

FRA is a simpler risk analysis methodology that has been developed specifically for CSV of commercially available software [24, 25]. The input to the process, as is the case with the other risk analysis methodologies, is a prioritised URS. The process flow in Figure 7 is described in the list below and the numbers in the figure correspond to the tasks below.

[Figure 7. Functional risk assessment process flow chart: from the specification, 1. prioritise requirements; 2. assess business and regulatory risk; 3. plot priority versus criticality for all requirements; 4. determine the test approach for high items.]

1. The URS requirements are prioritised as either mandatory (M) or desirable (D). A mandatory assignment means the requirement must be present for the system to operate; if desirable is assigned, then the requirement need not be present for operability of the system, it is simply nice to have [24].
Table 1. Part of a combined risk analysis and traceability matrix for a CDS

Req. No. | Data system feature specification | Priority M/D | Risk N/C | Test
3.3.01 | The CDS has the capacity to support ten concurrent users from an expected user base of 40 users | M | C | TS05
3.3.02 | The CDS has the capacity to support concurrently ten A/D data acquisition channels from an expected 25 total number of channels | M | C | TS05
3.3.03 | The CDS has the capacity to support concurrently ten digital data acquisition channels from an expected 25 total number of channels | D | N | –
3.3.04 | The CDS has the capacity to control concurrently ten instruments from an expected 20 total number of connected instruments | M | C | TS05
3.3.05 | The CDS has the capacity to simultaneously support all concurrent users, data acquisition and instrument connections whilst performing operations such as data reprocessing and reporting without loss of performance (maximum response time is ≤10 s from sending the request) under peak load conditions | M | C | TS05
3.3.06 | The CDS has the capacity to hold 70 GB of live data on the system | D | N | –

Priority: M/D = prioritisation of the requirement as either mandatory (M) or desirable (D). Risk: N/C = assessment of regulatory and/or business risk as either critical (C) or not critical (N). Test: traceability matrix entry; these requirements are tested under Test Script 5 (TS05); other requirements can be traced to installation of components or to an SOP.

2. The next stage in the process is to carry out a risk assessment of each function to determine if the function is business and/or regulatory risk critical (C) or not (N). This risk assessment methodology uses the tables from the URS with two additional columns added to them, as shown in Table 1. For a requirement to be assessed as critical, one or both of the following criteria need to be met. First, the requirement functionality poses a regulatory risk that needs to be managed; the basic question to ask here is: will there be a regulatory citation if nothing is done? For example, requirements covering security and access control, data acquisition, data storage, calculation and transformation of data, use of electronic signatures and integrity of data are areas that would come under the banner of critical regulatory risk. Second, a requirement can also be critical for business reasons, e.g. correctness of data output, performance of the system or system availability. A requirement for the availability of the system will adversely impact a chromatography data system (CDS) supporting continuous chemical production far more than the same system in an R&D environment.

The approach is shown in Table 1 in the fourth column from the left. Here, each requirement has been assessed as either critical or non-critical. All other requirements are assessed as non-critical in the FRA methodology.

3. The functional risk assessment approach is based on the combination of prioritised user requirements and the regulatory and/or business risk assessment. Plotting the two together produces the Boston grid shown in Figure 8. Requirements that are both mandatory and critical are the highest risk; medium are those that are mandatory and non-critical or desirable and critical, with desirable and non-critical as the lowest risk.

For most commercial systems, requirements fall mainly into the high- and low-risk categories. There will be a few requirements in the mandatory and non-critical quadrant of the grid, but few, if any, in the desirable but critical quadrant. This is logical.
If your requirement were only desirable, why would it be critical? If many requirements fall in this last quadrant, it may be an indication that the initial prioritisation was wrong. Therefore under this classification, only the software requirements classified as 'high' in the grid (mandatory and critical) will be considered for testing in the qualification of the system. No other requirement will be considered for testing [24]. Once the risk analysis has been completed, the traceability matrix can be included in the same document.

[Figure 8. Plot of prioritised functions versus risk assessment: mandatory and critical requirements = high risk; mandatory and non-critical, or desirable and critical, = medium risk; desirable and non-critical = low risk.]

4. The purpose of a traceability matrix is to show the coverage of testing or verification against a specific requirement. For a commercial application this matrix can be undertaken using the risk assessment by adding an additional column on the right of the table, as shown in Table 1 (the column labelled Test). As outlined in the FRA, only those functions that are classified as both mandatory and critical are considered for testing in the qualification phase of the validation. Therefore functions 3.3.03 and 3.3.06 are not considered for testing, as they do not meet the inclusion criteria. The remaining four requirements all constitute capacity requirements that can be combined together and tested under a single capacity test script, which in this example is called Test Script 05 (TS05). In this way, requirements are prioritised and classified for risk and the most critical ones can be traced to the PQ test script. As well as linking specific requirements to individual test scripts, a traceability matrix can also be used to link requirements to other deliverables such as standard operating procedures (SOPs), IQ or OQ documents and the vendor audit report. Other requirements can be verified by linking to the system configuration log (such as server requirements) or to written procedures.
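The FRA selection logic can be captured in a few lines. The sketch below classifies each requirement from its priority and criticality and picks out the mandatory-and-critical items for qualification testing, reproducing the outcome shown in Table 1; the data structure and field names are assumptions made for illustration rather than part of the published FRA method.

    # Minimal sketch of FRA classification (priority M/D x criticality C/N)
    # and selection of requirements for qualification testing.
    requirements = [
        # (req. no., priority, criticality) - values taken from Table 1
        ("3.3.01", "M", "C"),
        ("3.3.02", "M", "C"),
        ("3.3.03", "D", "N"),
        ("3.3.04", "M", "C"),
        ("3.3.05", "M", "C"),
        ("3.3.06", "D", "N"),
    ]

    def fra_risk(priority: str, criticality: str) -> str:
        if priority == "M" and criticality == "C":
            return "high"      # mandatory and critical: considered for testing
        if priority == "D" and criticality == "N":
            return "low"       # desirable and non-critical
        return "medium"        # mandatory/non-critical or desirable/critical

    # Only the high-risk requirements are traced to a PQ test script (e.g. TS05).
    to_test = [req for req, p, c in requirements if fra_risk(p, c) == "high"]
    print(to_test)  # -> ['3.3.01', '3.3.02', '3.3.04', '3.3.05']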
As outlined in the FRA, only those functions important that a few concepts are presented
that are classified as both mandatory and and discussed.
critical are considered for testing in the In the context of IT risk assessment vulner-
qualification phase of the validation. There- ability is defined as a flaw or weakness in system
fore functions 3.3.03 and 3.3.06 are not security procedures, design, implementation, or
considered for testing, as they do not meet the internal controls that could be exercised (acci-
inclusion criteria. Of the remaining four dentally triggered or intentionally exploited) and
requirements these all constitute capacity result in a security breach or a violation of the
requirements that can be combined together system’s security policy [27].
and tested under a single capacity test script, In addition, the status of the system being
which in this example is called Test Script 05 analysed for risk is important as it modifies the
(TS05). In this way, requirements are priori- approach that should be taken in the analysis.
tised and classified for risk and the most
critical one can be traced to the PQ test script. * If the system is being designed, the search
As well as linking specific requirements to for vulnerabilities should be driven from
individual test scripts, a traceability matrix an organisation’s security policies and

procedures together with the specification for the system and the vendors' or developers' security product analyses. This is the best and most cost-effective way to conduct the analysis, as controls can be designed into the system rather than added on to an existing system as an afterthought.
* If the system is being implemented, the identification of vulnerabilities should be expanded to include more specific information, such as the planned security features inherent within the system and how they are being implemented. Equally so, a second risk analysis can be conducted at this stage of a system being developed to build on the first risk analysis performed above.
* However, if the system is operational, then the process of identifying vulnerabilities should include an analysis of the system security features and the security controls, both technical and procedural, already used to protect the system, in addition to any intelligence from suppliers about the level of threats to components, e.g. the operating system (OS).

Both technical and non-technical controls can be classified as either preventive or detective [27], which are described as follows:

* Preventive controls inhibit attempts to violate security policy and include such controls as access control enforcement, encryption, and authentication.
* Detective controls warn of violations or attempted violations of security policy and include such controls as audit trails, intrusion detection methods, and checksums.

BS 7799 IT risk assessment methodology

The flow chart in Figure 9 depicts the BS 7799 risk assessment methodology described in PD 3002 [26]; this is a simpler and more IT-specific risk assessment methodology than that suggested by GAMP in their Good Practice Guide for IT Infrastructure [28].

Figure 9. BS 7799 IT risk assessment methodology (flow chart of eight steps: define scope and boundaries; asset identification and valuation; threat assessment; vulnerability assessment; identification of security controls; risk assessment; selection of controls to mitigate risk; risk acceptance).

1. Define scope and boundaries of system: The starting point of the BS 7799 risk assessment methodology is to define what is in scope and what is out of scope. In this way the part of the system covered by the individual risk assessment is identified and localised. For example, if a risk assessment needs to be conducted before a new wireless local area network (W-LAN) is implemented, then the site or specific buildings where the new W-LAN will be installed are specifically documented. Equally so, if a global wide area network (WAN) is being upgraded, then the WAN elements can be identified up to the site routers but the individual site LANs can be explicitly excluded.
This phase of the work is important as the definition of what is in and what is out of the scope of the risk assessment is the basis on which all other parts of this methodology are built. Equally so, it is also important to document what has been excluded from the scope of the risk assessment.
2. Asset identification and valuation: Once the scope and boundaries of the risk assessment have been established, the information assets contained within need to be identified and listed. This is typically done by listing the applications and data contained within the boundaries identified in the previous stage.
The list should not just be limited to regulatory systems but should also include business systems, e.g. manufacturing and distribution systems, laboratory systems, R&D systems and financial systems, and all the associated records and data. Once the list has been generated, the value of the records needs to be quantified in approximate terms. For example, if there is a problem and records or data are lost, what would the loss to the organisation be? What is the value of a production batch or the cost of repeating a clinical or non-clinical study?
3. Threat identification: See step 4, as in practice this stage can be merged into a single task.
4. Threat identification and vulnerability assessment: This is shown as two stages in Figure 9
but is considered here as a single process as it can be combined in a risk assessment workshop. The first stage is to identify the possible threats to the network and the information assets contained within the defined boundary. Then the impact of each
threat needs to be identified and documented. For example, if an unauthorised user were able to access a regulated network, what could that individual be able to do? Once identified, the probability or likelihood of each threat occurring should be assessed as high, medium or low.
5. Identification of security controls: Each of the threats needs to be assessed against the existing or planned controls for the network. This is a key stage and should be performed as a facilitated workshop where the threats and controls are discussed interactively. Here ideas and views can be harnessed effectively to debate the issues posed by each threat.
The output of this phase of the assessment is a considered opinion that the controls are adequate or that further controls are required. This phase needs to be documented carefully as it is the core of the risk assessment.
6. Risk assessment: Are controls adequate? If the controls for each threat are considered adequate, then this is documented and no further work is required. However, where there is still a risk, the threat is moved to the next stage as further controls are required.
7. Select controls to mitigate risk: Where the risk is unacceptable, then further technical or procedural controls are implemented.
8. Risk acceptance: If the residual risk for each of the identified risks and vulnerabilities is acceptable after appropriate controls have been devised, then this is documented and the process stops here. However, if the residual risk is unacceptable, then the process loops back to the identification of further controls and assesses what further ones are required.

In essence, this is the simplified BS 7799 risk management process; there is a more detailed risk assessment process described in PD 3002 [26], but there is not sufficient space to discuss it here and the reader is recommended to consult this publication for more information.
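The core of steps 3 to 8 is a documented loop over assets, threats and controls. The following Python sketch is a minimal illustration of that loop, assuming a simple asset–threat–control structure; the assets, threats and controls shown are invented examples and the sketch is not taken from PD 3002 [26].

```python
# Minimal sketch of the BS 7799 (PD 3002) assessment loop: for each listed
# asset, consider its threats and vulnerabilities, judge whether the existing
# or planned controls are adequate and, if not, flag the need for further
# controls before the residual risk can be accepted (steps 6 to 8).
# All assets, threats and controls below are invented examples.

assets = {
    "chromatography data": [
        {"threat": "unauthorised network access",
         "controls": ["unique user accounts", "role-based access"],
         "controls_adequate": True},
        {"threat": "loss of records on server failure",
         "controls": ["nightly backup"],
         "controls_adequate": False},
    ],
}

def assess(assets):
    outcomes = []
    for asset, threats in assets.items():
        for item in threats:
            if item["controls_adequate"]:
                # Step 8: document the decision and accept the residual risk
                outcomes.append((asset, item["threat"], "risk accepted"))
            else:
                # Step 7: further technical or procedural controls required,
                # then loop back to risk acceptance
                outcomes.append((asset, item["threat"], "further controls required"))
    return outcomes

for asset, threat, outcome in assess(assets):
    print(f"{asset}: {threat} -> {outcome}")
```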

NIST SP800-30 risk management guide for IT systems

The NIST SP800-30 [27] risk assessment methodology is shown diagrammatically in Figure 10 and describes the process in more detail. The numbers in the text below refer to the corresponding steps in the flow chart in Figure 10.

Figure 10. NIST SP800-30 risk assessment flowchart (process inputs to each stage are shown on the left and outputs on the right); the stages are system characterisation, threat identification, vulnerability identification, control analysis, likelihood determination, impact analysis, risk determination, control recommendations and results documentation.

1. Characterise the system: Similar to the BS 7799 risk assessment methodology, the NIST approach starts by defining the scope and boundaries of an IT system, including the resources and the information that constitute the system:
* System functions.
* Definition of the operational boundaries.
* Definition of the system components such as hardware, software, system connectivity, etc.
* System and data criticality (e.g. the system's value or importance to an organisation).
* Identification of the system owner and support personnel.
* Physical and logical security for the system.
* Other operational controls for the system and any monitoring utilities used.
If required, the list can be extended to include environmental controls, e.g. for computer hardware and communications equipment. However, if this is a known entity or has been the subject of a separate risk assessment, then this may be excluded, but the fact is documented along with the rationale for the exclusion.
2. Vulnerability identification: Next, the vulnerabilities of the defined system are identified. The goal of this step is to develop a list of those issues that could result in a security breach or a violation of the system's security. It is important to understand the status of the system at this stage (being designed, being implemented or operational) and apply the appropriate approach to the assessment of threats and controls as described earlier.

Some of the recommended ways of identifying system vulnerabilities are the use of vulnerability sources (from application vendors, security sources, etc.), the performance of system security testing or the use of a security requirements checklist that can be based on internal policies or publicly available ones. In addition, there may be software utilities that can be used to analyse, for example, the use of poor passwords on operational systems: a password cracker can be used to assess how many accounts could be compromised through the use of poor password selection.
3. Threat identification: Threats can be divided into three main sources: natural, human and environmental, as listed below [27].
Natural: Floods, earthquakes, tornadoes, landslides, avalanches, electrical storms, and other such events.
Human: Events that are either enabled by or caused by human beings, such as unintentional acts (inadvertent data entry) or deliberate actions (network-based attacks, malicious software upload, unauthorised access to confidential information). Some of the more common human sources are:
* Employees who are poorly trained or motivated.
* Poor internal security procedures.
* Disaffected employees.
* Criminal or industrial espionage attack.
* Hackers.
Environmental: Long-term power failure, pollution, chemicals, liquid leakage.
Note that a threat does not present a risk when there is no vulnerability of an IT system or network that can be exploited. For example, if a facility is sited in an earthquake zone such as the San Francisco bay area, then an earthquake poses a serious risk to the IT system. However, an earthquake poses no risk if a similar system is sited in a geologically stable location. Equally so, a San Francisco bay area facility has little vulnerability from avalanches.
4. Control analysis: Next, the system controls that have been or will be implemented are analysed to assess how effective they are likely to be. During this step, the personnel involved in the risk assessment determine whether the security requirements for the IT system, whether it is being designed, implemented or operational, are being met by existing or planned security controls. Similar to 21 CFR Part 11, there are technical and procedural controls that can be implemented, and it is the combination of the two that provides the ability of any system to minimise or eliminate the exploitation of any vulnerability by a threat. The NIST risk assessment describes three levels of security controls, namely management, operational and technical security [27].
Typically, the system security requirements can be presented in table form, with each requirement accompanied by an explanation of how the system's design or implementation does or does not satisfy that security control requirement. The outcome of this process is a security requirements checklist for the system.
5. Likelihood determination: To derive an overall likelihood rating that indicates the probability that a potential vulnerability may be exercised within the construct of the associated threat environment, the following governing factors must be considered:
* The motivation and capability of the source of the threat.
* The nature of the vulnerability of the system.
* The existence and effectiveness of current controls.
Using this information, the likelihood of a threat can be determined as one of three categories:
* High: The threat is highly motivated and sufficiently capable, and controls to prevent the vulnerability from being exercised are ineffective.
* Medium: The threat is motivated and capable, but controls are in place that
may impede successful exercise of the vulnerability.
* Low: The threat lacks motivation or capability, or controls are in place to prevent, or at least significantly impede, the vulnerability from being exercised.
6. Impact analysis: Before starting this stage, review the information from stage 1 of the process, such as system and data criticality; this puts the impact analysis into context, e.g. for a low-criticality, low-risk system versus a high one. The impact of an adverse security event can be described in terms of loss or degradation of any, or a combination of any, of the following three areas:
* Loss of system or data integrity.
* Loss of system availability, functionality or operational effectiveness.
* Loss of system or data confidentiality.
7. Risk determination: The impact of the risk is assessed qualitatively using one of the following terms:
* Low: Limited adverse effect on organisational operations, organisational assets, or individuals. The organisation should decide if corrective actions are required or whether it accepts the risk.
* Medium: Serious adverse effect on organisational operations, organisational assets, or individuals. Corrective actions are needed and should be implemented over a reasonable period of time.
* High: Severe or catastrophic adverse effect on organisational operations, organisational assets, or individuals. There is therefore an imperative need for corrective actions to resolve the issue rapidly, especially if the system is operational.
8. Control recommendations: Here, controls that could mitigate or eliminate the identified risks, as appropriate to the organisation's operations, are provided. The goal of the recommended controls is to reduce the level of risk to the IT system and its data to an acceptable level. The following factors should be considered in recommending controls and alternative solutions to minimise or eliminate identified risks:
* Effectiveness of recommended options (e.g. system compatibility).
* Legislation and regulation.
* Organisational policy.
* Operational impact.
* Safety and reliability.
The control recommendations are the results of the risk assessment process and provide input to the risk mitigation process, during which the recommended procedural and technical security controls are evaluated, prioritised, and implemented. It should be noted that not all possible recommended controls can be implemented to reduce loss. To determine which ones are required and appropriate for a specific organisation, a cost–benefit analysis, as discussed in section 4.6 of the NIST guide, should be conducted for the proposed recommended controls, to demonstrate that the costs of implementing the controls can be justified by the reduction in the level of risk. In addition, the operational impact (e.g. effect on system performance) and feasibility (e.g. technical requirements, user acceptance) of introducing the recommended option should be evaluated carefully during the risk mitigation process.
9. Document results: When the risk assessment process has been completed, the results should be formally reported and approved by management, as typically there will be corrective actions.
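Steps 5 to 7 in effect combine a likelihood rating with an impact rating to give a risk level. A minimal Python sketch of that combination is shown below; the qualitative matrix follows the high/medium/low descriptions given above, but it does not reproduce the numerical weighting scheme used in the NIST guide [27] and is for illustration only.

```python
# Qualitative likelihood x impact matrix, assumed for illustration only;
# SP800-30 uses a numerical weighting scheme that is not reproduced here.

RISK_MATRIX = {
    ("high", "high"): "high",
    ("high", "medium"): "medium",
    ("high", "low"): "low",
    ("medium", "high"): "medium",
    ("medium", "medium"): "medium",
    ("medium", "low"): "low",
    ("low", "high"): "low",
    ("low", "medium"): "low",
    ("low", "low"): "low",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine the likelihood of a threat exercising a vulnerability
    with the impact of the resulting event to give a risk level."""
    return RISK_MATRIX[(likelihood, impact)]

# Example: a motivated threat with partially effective controls (medium
# likelihood) against a system whose compromise would have a serious
# effect on operations (medium impact).
print(risk_level("medium", "medium"))  # medium -> corrective actions needed
```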

Summary of risk analysis methodologies

There is no universally applicable risk analysis methodology for computer system validation, as Table 2 demonstrates. Therefore, the onus is on a validation team to select the right tool for the job, using FMEA or FRA for applications and either the BS 7799 or NIST SP800-30 risk assessment approaches for infrastructure or the IT elements of a specific system. Furthermore, risk assessment methodologies can be combined if required, one for the application and an IT risk assessment.

Table 2. Applicability of different risk analysis methodologies to computer system validation

HACCP
* Process-based methodology for the food industry
* Limited use for CSV except where there is sufficiently comprehensive understanding to determine the critical control points of the system

HazOp
* Developed for evaluating manufacturing process safety hazards, equipment and facilities
* Limited applicability to CSV

FTA
* Structured top-down approach using gates and events
* Little application to software

PHA
* Conducted at the start of a project when information from similar projects is available
* Little application for computer system validation

FMEA & FMECA
* Well-established risk analysis methodology for design or process risk analysis
* Works well with complex computer systems and process equipment (category 5)
* Overly complex methodology for commercially available software (category 3 and more complex category 4 systems)

FRA
* Developed specifically for CSV of commercially available systems (category 3 and 4 software)
* Easy to understand and apply and quick to perform
* Not used for bespoke (custom) systems (category 5)

BS 7799 (PD 3002)
* Useful for infrastructure risk analysis, such as implementation of new technologies (e.g. W-LAN), to mitigate potential risks
* Information security management of existing or planned systems

NIST SP800-30
* Useful for infrastructure risk analysis, such as implementation of new technologies (e.g. W-LAN), to mitigate potential risks
* Information security management of existing or planned systems

Practical Approaches to Risk Management of Computer Validation of Applications

Now that the regulatory requirements and the risk management and risk analysis methodologies have been reviewed and discussed, we will consider a practical approach for CSV in this section of the paper. The areas that will be discussed in this section are shown diagrammatically in Figure 11:

* Do I need to validate my system? This will be a discussion of the Society of Quality Assurance (SQA) risk analysis questionnaire [29] from the mid-1990s. The aim is to produce a simple YES (must validate) or NO (no need to validate) response.
* If I need to validate the system: how much work is necessary? This needs to evaluate the use of the system and the nature of the software that is used to automate the process as the main factors in making a decision on the extent of validation based on the
discussion of the regulations presented earlier in this paper. The outcome of this decision matrix is either a high-risk system (full V-model validation approach) or a low-risk system. The low-risk system is suggested to be validated using a single integrated validation document, the rationale for which is based on the FDA's comment on baseline validation [8], provided it can be justified after an analysis of risk and complexity.

Figure 11. Process flow to decide if validation is required and what the extent of work is: the 'Do I need to validate?' decision criteria give a No or Yes answer; if Yes, the 'Extent of validation?' decision criteria classify the system as high risk (full validation) or low risk (reduced validation).

Balancing the costs of compliance and non-compliance

There is always a question of either 'how much must I do' or 'what is the minimum I can get away with' when it comes to validation of computer systems in a regulated environment. This can be summarised as balancing the cost of non-compliance (doing nothing and/or carrying the risk) versus the cost of compliance (doing the job right in the first place). It is important to understand the context of this in the validation of computerised systems.

Note well that the cost of compliance is always cheaper than the cost of non-compliance. If any reader is in doubt, I suggest that they read any of the recent consent decrees (e.g. [30, 31]). The cost of non-compliance can now be quantified in terms of hundreds of millions of US dollars for some companies.

Figure 12 shows the situation graphically. The vertical axes represent the cost of compliance and non-compliance, respectively. Note that the cost of compliance axis is smaller than the cost of non-compliance axis; this is deliberate, as doing the job right first time is the best way to work. On the bottom is the extent of compliance expressed as a percentage.

Figure 12. Balancing the costs of compliance and non-compliance [25] (vertical axes: cost of compliance and cost of non-compliance; horizontal axis: % compliance from 0% to 100%).

If all risk is to be removed from a system then you validate as much as possible, but this would take a lot of time and resource to achieve and would, for the majority of systems, be difficult to justify unless there were specific reasons for this, such as a critical medical device. However, if the main validation points are covered and a commercial system is implemented, then a more cost-effective validation can be accomplished in a shorter timeframe with less resource. Some risk may still exist, but it is managed risk rather than regulatory exposure. This is where the term 'acceptability' should be visible: the remaining risk is planned and managed and not randomly generated.

Do I need to validate a system?

The first question, shown in Figure 11, considers: should I validate my computerised system? The
key question is: what decision criteria should be used? To answer these questions, the Computer Validation Initiative Committee (CVIC) of the SQA developed a questionnaire to determine if a computer system should be validated or not [29]. This consists of 15 closed questions (the only possible answers to each question are either YES or NO). If you answer YES to any question, then you need to validate the system. The questions cover the whole of the regulated area of healthcare from development, submission, manufacturing and distribution.

As an example, the following questions from the CVIC document are applied to a CDS operating in either a regulated R&D or production environment [25]:

* Does the application or system directly control, record for use, or monitor laboratory testing or clinical data?
* Does the application or system affect regulatory submission/registration?
* Does the application or system perform calculations/algorithms that will support a regulatory submission/registration?
* Is the application or system an integral part of the equipment/instrumentation/identification used in testing, release and/or distribution of the product/samples?
* Will data from the application or system be used to support QC product release?
* Does the application or system handle data that could impact product purity, strength, efficacy, identity, status, or location?
* Does the application or system employ electronic signature capabilities and/or provide the sole record of the signature on a document subject to review by a regulatory agency?
* Is the application or system used to automate a manual QC check of data subject to review by a regulatory agency?
* Does the application or system create, update, or store data prior to transferring to an existing validated system?

If you answer YES to any question in this list, this triggers validation of the application, in this case a CDS. You should record this assessment in an authorised document so that it is defendable in an inspection. Equally important is documenting the NO answers that justify why you have not validated a system.

The questionnaire was written in the mid-1990s and therefore needs updating for the world of the 'new' 21 CFR Part 11; a question that could be added to this list might be:

* Do(es) the predicate rule(s) require a record?

This allows the validation team to decide if Part 11 applies to the system as well.

For those systems where the answers to the questions are NO, the work stops with the approval of the questionnaire stating that no validation is required. However, for the systems where validation needs to be performed, a more intriguing question arises: how much validation work is necessary to manage the risk posed by the system and the records it generates?

Risk classification: only high- and low-risk systems

In Figure 11, only two routes from the decision criteria are used to determine the extent of the validation work required: high- and low-risk systems. This is a deliberate approach based on practical issues. For example, in any classification, high- and low-risk systems are relatively easy to define and validate by either a V-model (full life cycle) or an integrated validation document approach, respectively.

However, the practical problem lies in defining a medium-risk system. Is this a low-rated high-risk system or a high-rated low-risk system? Moreover, how should a medium-risk system be validated? Should a simplified V-model approach or a more detailed integrated validation document approach be used for these systems? Therefore, for practical reasons, there are only two categories in this workflow.

If a system is evaluated as a high risk, the validation plan can be tailored to define exactly the tasks that will be undertaken and therefore this document provides a further mechanism to manage risk. The appropriate life cycle tasks and corresponding documented evidence to be
generated can be detailed in the validation plan, e.g. for an Enterprise Resource Planning (ERP) system there will be far more detail in terms of mapping the current business flows and deciding which ones are GXP relevant than for a standalone laboratory system. Both systems are high risk, but the amount of work for the ERP application is far greater than for the laboratory system.

Decision criteria for the extent of validation

The extent of validation required now depends on two major factors, as defined by EU GMP Annex 11 clause 2 [5]:

1. The use of the system.
2. The nature of the software.

It is the combination of these two criteria that determines the extent of validation needed for a system, as will be discussed now.

Use of the system

A system can be classified into either a high- or a low-risk category based on its use. Some examples of high- and low-risk system use are shown in Table 3; the GAMP Part 11 Good Practice Guide also lists more examples [11]. Where there are several uses of the system, the highest category ones are used for determining the risk evaluation, even if this high-risk use is a minor proportion of the work performed by the system.

Table 3. Risk assessment based on system use

High
* Data that are submitted directly to regulatory agencies or are included in regulatory submissions
* Data supporting batch release (e.g. certificate of analysis) of drug product, clinical trial material or API
* Stability data for drug products
* Data from or support to non-clinical laboratory studies
* Clinical trial study data
* Laboratory support to clinical studies

Low
* In-process monitoring of drug product and APIs
* Supportive data not directly submitted to regulatory agencies
* Pharmacology data
* In vitro data
* Research data
* Data generated in development of analytical methods

Nature of the software: GAMP software categories

The nature of the software can be defined using the software categories defined in the GAMP guidelines [12], where Appendix M4 provides a validation strategy for different classes of software and hardware. This concept is very important as it provides a key understanding of one of the major risk factors involved in the validation of any computerised system. There are five GAMP software categories:

* Category 1: OS: The strategy for the OS is to ensure that it is correctly installed and configured during the installation phase of the life cycle and then to test the OS implicitly during the OQ and PQ phases of the qualification. The assumption is that the correct functioning of the application implies that the OS works acceptably.
* Category 2: Firmware: This class of software consists of read-only memory (ROM) chips that are present in the system; typically this is qualified and calibrated where appropriate and not validated. The sole exception is where the firmware is custom built, and then this must be treated as category 5.
* Category 3: COTS software: This is software that is commercially available and used as-is from installation. The only changes that are made are defining the security levels in the application, linking to printers if used on a network and setting any other parameters to make it work in the operating environment.
* Category 4: Configurable COTS software: This is configurable software that is commercially available, and changes to the operation of the application are made by a variety of means. At its simplest the configuration is via hot buttons or switches provided by the vendor of the application (configuration or parameterisation). More complex ways of configuring the application are either to use a proprietary language that modulates the execution of the code or to configure and/or link workflows within the system.
* Category 5: Custom or bespoke software: This is the essence of 'novel elements', being software that is unique. This can range from larger systems that can be programmed in-house or outsourced to a software company, to an Excel macro that uses the software package as a basis in combination with Visual Basic programming to generate a specific application.

The nature of the application software is defined as either high or low; in essence, the higher the GAMP category, the higher the risk to the records (record vulnerability) contained within the system. The rationale is that the more unique the software, the less it is tested overall, including the experience gained at customer sites.

High:
* Custom software application, or one that includes a custom extension (e.g. macro) to an existing application (GAMP category 5).
* Commercially available software package that involves configuring predefined software modules and possibly developing customised modules (GAMP category 4).

Low:
* Commercially available standard non-configurable software package providing an off-the-shelf solution to a business or manufacturing process (GAMP category 3).

System criticality assessment

In answering the two regulatory questions of what the system automates and the nature of the software, the overall system risk can be assessed and hence the extent of validation required can be determined. Similarly to the functional risk assessment, the answers to the two questions are plotted in a 2 x 2 Boston grid, shown in Figure 13. Only systems with the combination of high regulatory risk and high record vulnerability (the top right quadrant of the grid), which results in the highest assessment, will require full validation. Systems falling in the other three quadrants will require low-risk validation.

Figure 13. Boston grid of system use (regulatory risk) versus nature of the software (record vulnerability): only the combination of high system use and high record vulnerability is rated high; the other three quadrants are rated low.

The combination of these two assessments can result in different approaches to validation for the same system, the only difference being the system use. A CDS used only for low-risk analysis, such as in-process analysis, could be validated using the reduced approach, whilst the same system used for release testing of APIs and final product would undergo full validation.
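The decision logic of Figure 13 can be expressed as a simple function. The sketch below is an illustration only: it assumes that GAMP categories 4 and 5 are treated as high record vulnerability and categories 1 to 3 as low, following the high/low descriptions above, and it is not a definitive classification tool.

```python
# Sketch of the Figure 13 decision: system use (regulatory risk) combined
# with the nature of the software (record vulnerability) gives the extent
# of validation. GAMP categories 4 and 5 are treated here as high record
# vulnerability and categories 1-3 as low; this mapping is an assumption
# for illustration, not a GAMP definition.

def record_vulnerability(gamp_category: int) -> str:
    return "high" if gamp_category >= 4 else "low"

def extent_of_validation(system_use: str, gamp_category: int) -> str:
    """Return 'full' only for high regulatory risk use combined with
    high record vulnerability; otherwise a reduced validation applies."""
    if system_use == "high" and record_vulnerability(gamp_category) == "high":
        return "full (V-model) validation"
    return "reduced validation (integrated validation document)"

# The TOF-MS example discussed below: regulated submission data (high use)
# but GAMP category 3 software, so a reduced validation is justified.
print(extent_of_validation("high", 3))
```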

High-risk systems

Systems falling in the high-risk category should use a V-model validation that requires the use of a risk analysis methodology within the ISO 14971 risk management framework. Rather than a single risk assessment methodology to fit all situations, as suggested by the GAMP Forum [12, 28, 32], a more practical approach of selecting the right methodology for the right system is advocated by the author. Although the GAMP documents suggest this approach and state that other approaches are acceptable, many organisations implement a GAMP risk assessment 'one size fits all' approach.

The following approach is suggested:

* Complex and custom-built systems should use FMEA [12].
* COTS and configurable COTS software should use functional risk assessment [24, 25].
* If required, both of the above risk assessments could be supplemented by either of the IT risk assessments to evaluate exposure to IT vulnerabilities.

The overall approach to be used for risk assessment should be documented in the validation plan for the system, along with the defined life cycle elements and anticipated documented evidence.

Low-risk systems

The approach for low-risk systems is a validation that uses a single integrated document that defines the intended purpose of the system along with any applicable predicate rule and 21 CFR Part 11 requirements. It also tests these requirements in test procedures within the same document.

Control of low-risk validation is documented either in a specific SOP or in a Validation Master Plan (VMP) as defined by EU GMP Annex 15 [5], although the latter is the personal preference of the author.

Integrated validation document

The integrated validation document (Validation Lite approach) takes the key elements of the system development life cycle and condenses them into a single document. The specification of the intended use and the testing sections are clearly separated within this document and pre-approved before use. As stated above, the overall control for the validation is covered in a VMP or SOP.

The main sections of the integrated validation document are:

* Introduction, including system use.
* System description.
* Referenced documents: included here are the system-specific documents, including approaches such as calibration, maintenance and daily control measures to ensure that the system works as intended. Where there are vendor IQ and OQ documents, these are included to demonstrate that the system has been qualified.
* Definition of user requirements: only requirements that define the intended purpose of the system are documented here. Each requirement must be written so that it can be tested or verified. Typically the requirements are grouped, e.g. system functions, security and access control, definition and protection of electronic records, audit trail, etc.
* Test preparation.
* Procedures to test user requirements, to collate documented evidence and to compare results versus acceptance criteria; there is also a section on the assumptions, exclusions and limitations of the testing approach.
* Test execution notes.
* Test summary report.
* System sign-off and release for use.

The intended use and test procedures are written, reviewed and updated, and then the whole document is approved before execution of testing. Requirements traceability is also included in the document: instead of a prioritisation, there is a pointer to the test procedure where each requirement is tested. After testing and collation of the documented evidence, the document's final section is a summary report and release section for operational use.
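One way to picture the integrated validation document is as a single structure in which each requirement carries a pointer to the test procedure that verifies it. The outline below is a hypothetical sketch only; the section keys follow the list above, but all identifiers, requirement texts and test procedure names are invented.

```python
# Hypothetical skeleton of a single integrated validation document.
# Section keys follow the list above; identifiers are invented examples.

integrated_validation_document = {
    "introduction_and_system_use": "Standalone laboratory instrument data system",
    "system_description": "...",
    "referenced_documents": ["calibration SOP", "maintenance log", "vendor IQ/OQ"],
    "user_requirements": [
        # Traceability: each requirement points to the test procedure in
        # which it is verified, instead of carrying a prioritisation.
        {"id": "URS-01", "text": "Only authorised users can log on", "test": "TP-02"},
        {"id": "URS-02", "text": "Electronic records are protected", "test": "TP-03"},
    ],
    "test_preparation": "...",
    "test_procedures": {"TP-02": "...", "TP-03": "..."},
    "test_execution_notes": [],
    "test_summary_report": None,   # written after execution
    "system_release": None,        # sign-off and release for use
}

# Simple completeness check: every requirement must trace to a test procedure.
untraced = [r["id"] for r in integrated_validation_document["user_requirements"]
            if r["test"] not in integrated_validation_document["test_procedures"]]
assert not untraced, f"Requirements without a test procedure: {untraced}"
```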

Example of a low-risk system validation

As an example of this approach, a time-of-flight (TOF) mass spectrometer (MS) is used as a standalone system in an analytical laboratory. The instrument is used for elemental analysis to support:

* Impurity identification for process chemistry for APIs.
* Identification of unknown and known compounds.

The system creates and stores electronic records. Therefore, the answer to the first question 'do I need to validate the system?' is YES. Now the question is 'how much should be done?'

From the regulatory risk perspective, the data from the instrument are used in Investigational New Drug (IND) and New Drug Application (NDA) submissions, thus the assessment is high. However, the software used for the instrument is GAMP category 3, which is low. Plotting both criteria in the Boston grid in Figure 13, the system risk rating is low and hence a reduced validation is performed.

When collecting information about the system to prepare the integrated validation document, do not forget to assess what regular maintenance and calibration is performed. Often laboratory systems can be calibrated or checked (e.g. by performing a system suitability test) on a daily basis before any analysis is performed, and this can be used to justify and support some of the approaches to testing. The key aim is to ensure the integrity of data generated and maintained by the system, and backup of records should always be included in the requirements and test suite.

Conclusions

This paper reviews the regulations for computerised systems and discusses some of the practical options available for risk management and risk assessment for CSV. There is not a single risk assessment methodology for CSV that is applicable to all systems in all situations. Therefore the best, most prudent and cost-effective approach is to select the appropriate methodology for the computerised system rather than adopt a one size fits all approach.

In advocating this approach, the philosophy of Albert Einstein is adopted: keep it as simple as possible – but no simpler. In essence, use the right tool for the right job. Therefore the following approach to risk assessment and risk management is advocated:

* Highly configured and customised (bespoke) systems should use FMEA as the best risk analysis methodology for CSV.
* COTS or configurable COTS applications should use a simplified risk analysis methodology, FRA, to build on the testing that the vendor has performed.
* IT security and infrastructure should use either the BS 7799 or NIST SP800-30 risk assessment methodologies. Either of these methodologies can be used in isolation or in combination with a risk assessment of the application itself.

Useful Web sites for Risk Management

General sites for risk management:

* FDA: http://www.fda.gov
* 21 CFR Part 11 web site dealing with all aspects of Part 11, including risk management specifically; there is a good library and a list server for questions: http://www.21cfrpart11.com
* International Conference on Harmonisation (ICH), Q9 quality risk management document: http://www.ich.org

HACCP:

* FDA site for HACCP covering resources and information for food: http://www.cfsan.fda.gov/lrd/haccp.html
* US Department of Agriculture and FDA resource and training site with links to other HACCP resources: http://www.nal.usda.gov/fnic/foodborne/haccp/index.shtml

Fault tree analysis (FTA):

* Institution of Electrical Engineers overview of FTA and also FMEA: http://www.iee.org/Policy/Areas/Health/hsb26c.cfm
* Sandia National Laboratories, Centre for System Reliability: http://reliability.sandia.gov/Reliability/Fault Tree Analysis/fault tree analysis.html

Failure mode effect analysis (FMEA):

* GAMP Forum for availability of advice documents and good practice guides: http://www.ispe.org/gamp/
* FDA, CDRH, Design Guidance for Medical Devices – guidance from 1997: http://www.fda.gov/cdrh/comp/designgd.pdf

Functional risk analysis:

* Bob McDowall's web site for general CSV education; articles can be downloaded from the library: http://www.rdmcdowall.com

BS 7799 risk assessment:

* British Standards Institute (BSI) web pages for purchase of PD3002: http://www.bsi-global.com/ICT/Security/pd3002.xalter

NIST SP 800-30 risk assessment:

* National Institute of Standards and Technology (NIST) Computer Security Resource Centre for download of SP 800 series publications: http://csrc.nist.gov/publications/nistpubs/index.html

Acknowledgement

The author would like to thank Sion Wyn for his comments on the draft manuscript.

References

1. US Food and Drug Administration. Pharmaceutical cGMPs for the 21st Century: A Risk-Based Approach. Washington, DC, 2002. (http://www.fda.gov/oc/guidance/gmp.html)
2. US Food and Drug Administration. Guidance for Industry: 21 CFR Part 11; Electronic Records; Electronic Signatures – Part 11 Scope and Application. Washington, DC, 2003.
3. International Standards Organisation. ISO Standard 14971 – Medical Devices – Application of Risk Management to Medical Devices. International Standards Organisation: Geneva, 2000.
4. US Food and Drug Administration. Fed Regist 1997; 62:13430.
5. Medicines Control Agency. Rules and Guidance for Pharmaceutical Manufacturers and Distributors 2002 (6th edn). The Stationery Office: London, 2002.
6. International Conference on Harmonisation (ICH). ICH Q7A – Good Manufacturing Practice for Active Pharmaceutical Ingredients (CPMP/ICH/4106/00). Geneva, 2000.
7. US Food and Drug Administration. Fed Regist 1996; 61:52601.
8. US Food and Drug Administration. Guidance for Industry: General Principles of Software Validation. Washington, DC, 2002.
9. Pharmaceutical Inspection Convention/Scheme (PIC/S). Good Practices for Computerised Systems in 'GXP' Environments. PIC/S: Geneva, 2003.
10. International Conference on Harmonisation (ICH). ICH Q9 (Step 2) Quality Risk Management. Geneva, 2005.
11. GAMP Forum. A Risk-Based Approach to Compliant Electronic Records and Signatures. International Society for Pharmaceutical Engineering: Tampa, FL, 2005.
12. GAMP Forum. Good Automated Manufacturing Practice (GAMP) Guide Version 4. International Society for Pharmaceutical Engineering: Tampa, FL, 2001.
13. GAMP Forum. Good Practice and Compliance for Electronic Records and Signatures. Part 2: Complying with 21 CFR Part 11, Electronic Records and Electronic Signatures. International Society for Pharmaceutical Engineering: Tampa, FL, 2001.
14. International Standards Organisation. Risk Management – Vocabulary – Guidelines for Use in Standards. International Standards Organisation: Geneva, 2002.
15. International Standards Organisation. Risk Management – Vocabulary – Guidelines for Use in Standards (ISO/IEC Guide 72: 2002). Geneva, 2002.
16. Institute of Electrical and Electronics Engineers. Software Engineering Standards. IEEE: Piscataway, NJ, 2001.
17. Stamatis DH. Failure Mode and Effect Analysis: FMEA from Theory to Execution (2nd edn). ASQ Press: Milwaukee, WI, 2003.
18. Ammerman M. The Root Cause Analysis Handbook – A Simplified Approach to Identifying, Correcting and Reporting Workplace Errors. Productivity Press: New York, NY, 1998.
19. US Food and Drug Administration. Hazard Analysis and Critical Control Point Principles and Application Guidelines. Washington, DC, 1997.
20. Dyadem Engineering Corporation. Guidelines for Failure Modes & Effects Analysis for Medical Devices. CRC Press: Boca Raton, FL, 2003.
21. US Department of Defense. Military Standard (MIL-STD-1629a) Procedures for Performing a Failure Mode, Effects and Criticality Analysis. US Department of Defense: Washington, DC, 1980.
22. Society of Automotive Engineers. Standard J-1939 Failure Mode Effect Analysis. Society of Automotive Engineers: Troy, MI, 1993.
23. McDermott REM, Mikulak RJ, Beauregard MR. The Basics of FMEA. Productivity Press: New York, NY, 1996.
24. McDowall RD. Validation of chromatography data systems. In Computer Systems Validation: Quality Assurance, Risk Management and Regulatory Compliance for Pharmaceutical and Healthcare Companies, Wingate G (ed.). Interpharm/CRC: Boca Raton, FL, 2004; 465.
25. McDowall RD. Validation of Chromatography Data Systems. Royal Society of Chemistry: Cambridge, 2005.
26. British Standards Institute. Guide to BS 7799 Risk Assessment (PD 3002–2002). British Standards Institute: London, 2002.
27. National Institute of Standards and Technology. Risk Management Guide for Information Technology Systems (Special Publication 800-30). NIST: Gaithersburg, MD, 2002.
28. GAMP Forum. Good Practice Guide – IT Infrastructure Control and Compliance. International Society for Pharmaceutical Engineering: Tampa, FL, 2005.
29. Society of Quality Assurance. Computer Validation Initiative Committee (CVIC), Risk Assessment/Validation Priority Setting. Charlottesville, VA, 1997.
30. US District Court for Illinois. Abbott Consent Decree of Permanent Injunction. Chicago, IL, 1999.
31. US District Court of New Jersey. Schering Plough Consent Decree of Permanent Injunction. Newark, NJ, 2002.
32. GAMP Forum. Best Practice Guide – Laboratory Systems. International Society for Pharmaceutical Engineering: Tampa, FL, 2005.