
PROCESS DESIGN
UNDERSTANDING THE PRODUCT AND PROCESS
LIFECYCLE APPROACH TO PROCESS VALIDATION: FDA STAGE 1
FDA Lifecycle Approach to Process Validation: What, Why, and How? .......... 3
The Forgotten Origins of Quality by Design .......... 14
Understanding Physicochemical Properties for Pharmaceutical Product Development and Manufacturing: Dissociation, Distribution/Partition, and Solubility .......... 19
Understanding Physicochemical Properties for Pharmaceutical Product Development and Manufacturing II: Physical and Chemical Stability and Excipient Compatibility .......... 30
Patent Potential .......... 42
First Steps in Experimental Design: The Screening Experiment .......... 46
First Steps in Experimental Design II: More on Screening Experiments .......... 54
A Further Step in Experimental Design (III): The Response Surface .......... 63
Estimation: Knowledge Building with Probability Distributions .......... 71
Understanding Hypothesis Testing Using Probability Distributions .......... 86
PQ = Confirmation .......... 102

FDA Lifecycle Approach to Process Validation: What, Why, and How?

Paul L. Pluta

PQ Forum provides a mechanism for validation practitioners to share information about Stage 2 process qualification in the validation lifecycle. Information about supporting activities such as equipment and analytical validation is shared. The information provided should be helpful and practical so as to enable application in actual work situations.

Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please contact column coordinator Paul Pluta at paul.pluta@comcast.net or managing editor Susan Haigney at shaigney@advanstar.com with comments, suggestions, or topics for discussion.

ABOUT THE AUTHOR
Paul L. Pluta, Ph.D., is a pharmaceutical scientist with extensive industrial development, manufacturing, and management experience. Dr. Pluta is also an adjunct associate professor at the University of Illinois-Chicago College of Pharmacy. Dr. Pluta may be contacted by e-mail at paul.pluta@comcast.net. For more author information, go to gxpandjvt.com/bios.

KEY POINTS
The following key points are discussed:
• The US Food and Drug Administration issued Process Validation: General Principles and Practices in January 2011, which has given widespread visibility to the lifecycle approach concept.
• The process validation guidance integrates strategy and approaches to provide a comprehensive approach to validation. Three stages in the lifecycle approach are identified. The lifecycle concept links development, validation performance, and product or process maintenance in a state of control during routine commercial production.
• Understanding the sources of variation and control of variation commensurate with risk is a key component of the lifecycle approach.
• FDA has provided recommendations for the general lifecycle and Stages 1, 2, and 3. Specific expectations are discussed.
• Stage 1 (Process Design) may be generally described as process understanding. Stage 1 work is ultimately reflected in the master production record and control records.
• Stage 2 (Process Qualification) may be described as validation performance. This stage comprises demonstration of final process performance by means of conformance lots. Stage 2 confirms the development work of Stage 1 Process Design.
• Stage 2 specific recommendations are provided for design of a facility and qualification of utilities and equipment, process performance qualification (PPQ), the PPQ protocol, and PPQ protocol execution and report.
• Stage 3 (Continued Process Verification) may be simply described as maintaining validation. This stage comprises the ongoing commercial manufacturing of the product under the same or equivalent conditions as demonstrated in Stage 2 Process Qualification.
• The integration of development work, process conformance, and continuing verification provides assurance that the product or process will consistently remain in control throughout the entire product lifetime.
• The lifecycle approach integrates various strategies, approaches, and expectations that had been mentioned in multiple previously published documents, guidelines, and presentations for many years.
• The concepts identified in the respective stages of the FDA process validation guidance (understanding, performance, and maintenance) serve as a model for all areas of validation and qualification.
• The new guidance affects many areas of site validation programs including organizational aspects, validation performance specifics, risk analysis, training, and documentation.
• Senior and functional management support is needed to transition organizations to the lifecycle approach to validation. Risk analysis is key to development and prioritization of a suitable program that will be embraced and supported.

INTRODUCTION
The US Food and Drug Administration issued Process Validation: General Principles and Practices (1) in January 2011. This guidance has given widespread visibility to the lifecycle approach concept. Validation managers are now responding to questions and comments about the guidance from their colleagues. The following discusses these and other areas of concern raised by attendees at validation meetings in Montreal (2010), Philadelphia (2010), and Amsterdam (2011). These are relevant hands-on questions from people who face validation problems every day. Topics addressed in this discussion include the following:
• What is different about the lifecycle approach? What is its emphasis compared to the 1987 FDA process validation guidance (2)?
• Why the lifecycle approach? Is it really a new approach?
• Should the lifecycle approach be applied to other areas of validation and qualification? What about using the lifecycle approach to other processes and to equipment, HVAC, computer systems, and other qualifications?
• How does the guidance affect our current validation programs? What areas need to be modified to be compliant with the new guidance?

THE LIFECYCLE APPROACH
The January 2011 process validation guidance (1) has integrated information, strategy, and approaches discussed in various US and international documents to provide a comprehensive approach to validation (i.e., the lifecycle approach). The guidance provides specific and detailed recommendations for each stage of the lifecycle approach.
The definition of process validation stated in the 2011 guidance is as follows: "Process validation is defined as the collection and evaluation of data, from the process design stage throughout production, which establishes scientific evidence that a process is capable of consistently delivering quality product. Process validation involves a series of activities taking place over the lifecycle of the product and process."
The guidance describes process validation activities in the following three stages:
• Stage 1 (Process Design): The commercial process is defined during this stage based on knowledge gained through development and scale-up activities.
• Stage 2 (Process Qualification): During this stage, the process design is confirmed as being capable of reproducible commercial manufacturing.
• Stage 3 (Continued Process Verification): Ongoing assurance is gained during routine production that the process remains in a state of control.
These sections of the 2011 guidance clearly identify the key difference between the lifecycle approach and validation as described in the 1987 FDA guidance. The 2011 lifecycle approach to process validation encompasses product and process activities beginning in development and continuing throughout the commercial life of the product. The 1987 definition and subsequent discussion in the guidance placed major emphasis on the validation protocol, testing, results, and documentation (what is now considered to be Stage 2 in the lifecycle approach). Development work and post-validation monitoring were not emphasized in the 1987 guidance.

Approach to Process Validation: Stages 1, 2, and 3
The approach to process validation stated in the 2011 guidance clearly emphasizes contemporary concepts and expectations for pharmaceutical manufacturing. The manufacturer should have great confidence that the performance of manufacturing will consistently produce active pharmaceutical ingredients (APIs) and drug products meeting expected attributes. This confidence is obtained from objective information and data from laboratory, pilot, and commercial-scale studies (i.e., the work of Stage 1). After completion of Stage 1 development, Stage 2 Process Qualification confirms the work of Stage 1. After successful Stage 2 performance, Stage 3 Continued Process Verification maintains the validated state. The guidance states:
"The lifecycle concept links product and process development, qualification of the commercial manufacturing process, and maintenance of the process in a state of control during routine commercial production. This guidance supports process improvement and innovation through sound science."
Successful validation depends on knowledge and understanding from product and process development. Specific key areas mentioned in the guidance include the following:
• Understanding the sources of variation
• Detecting the presence and degree of variation
• Understanding the impact of variation on the process and ultimately on product attributes
• Controlling the variation in a manner commensurate with the risk it represents to the process and product.

FDA Recommendations
The 2011 guidance discusses several areas and provides specific details. These include recommendations for the general lifecycle and Stages 1, 2, and 3. The entire recommendations section of the guidance is provided online at FDA.gov.
General considerations. These considerations are applicable to all stages in the lifecycle. For example, an integrated team approach that includes expertise from multiple disciplines and project plans is recommended. The support of senior management is termed essential. Other general topics discussed include the initiation of studies to further understand product and process during the lifecycle, attribute evaluation, and the need for higher levels of control for parameters associated with higher risk.
Stage 1: Process design. This stage may be generally described as process understanding. Studies are conducted during this stage to develop and characterize the product and process. The work of Stage 1 should be commensurate with the identified or expected risk for the product and process. Stage 1 recommendations address development activities that will ultimately be reflected in the master production record and control records. The guidance clearly states the goal of Stage 1: to design a process suitable for routine commercial manufacturing that can consistently deliver a product that meets its quality attributes. The following two topics are discussed:
• Building and capturing process knowledge and understanding. This section discusses the role of product development and uses terminology common to the quality-by-design (QbD) initiative: quality attributes, design of experiments (DOE) studies, and so on.
• Establishing a strategy for process control. This section addresses reducing input variation, adjustment for input variation during processing, and related topics.
Stage 2: Process qualification. This stage may be simply described as validation performance. This stage is most similar to the traditional definition and performance of validation. The testing of Stage 2 should be commensurate with the risk identified for the product and process.
Stage 2 comprises demonstration of commercial process performance by means of conformance lots. This stage confirms the development work of Stage 1. Successful Stage 2 performance demonstrates that the proposed manufacturing process is capable of reproducible commercial manufacture. Process performance qualification (PPQ) conformance lot manufacturing includes increased testing to demonstrate acceptability of the developed formulation and process.
The 2011 validation guidance provides several specific recommendations for the respective stages of process validation. Validation managers must become familiar with these requirements and incorporate them into their site training programs. The guidance discusses the following in Stage 2.
Facility, utilities, and equipment. The FDA 2011 guidance specifies the following regarding facility, equipment, and utilities:
• Utilities and equipment construction materials, operating principles, and performance characteristics must be appropriate for their specific use.
• Utility systems and equipment must be built and correctly installed, according to manufacturers' directions, and then properly maintained and calibrated.
• Utility systems and equipment must be qualified to operate in the ranges required in processing. The equipment should have been qualified under production-level loads and for production-level durations. Testing should also include interventions, stoppage, and start-up as is expected during routine production.
The 2011 guidance provides specific expectations for a plan to qualify facility, equipment, and utilities, as follows:
• The plan should include risk management to prioritize activities and documentation
• The plan should identify
(1) Studies or tests to use
(2) Criteria appropriate to assess outcomes
(3) Timing of qualification activities
(4) Responsibilities
(5) The procedures for documenting and approving the qualification
• Change evaluation policy
• Documentation of qualification activities
• Quality assurance (QA) approval of the qualification plan.
The above is a clear directive to the site validation approval committee (VAC) as to FDA's expectations for facilities, equipment, and utilities qualification.
Process performance qualification. The PPQ is intended to confirm the process design and development work and demonstrate that the commercial manufacturing process performs as expected. This stage is an important milestone in the product lifecycle. The PPQ should be based on sound science and experience. The PPQ should have a higher level of testing and sampling. The goal of the PPQ is to demonstrate that the process is reproducible and will consistently deliver quality products.
PPQ protocol. A written protocol is essential and should discuss the following:
• Manufacturing conditions, process parameters, process limits, and raw material inputs
• How data are to be collected and evaluated
• Testing and acceptance criteria
• Sampling plan including sampling points and number of samples
• Number of samples should demonstrate statistical confidence (a simple illustration follows this list)
• Confidence level based on risk analysis
• Criteria for a rational conclusion of whether the process is acceptable
• Statistical methods used to analyze data
• Provision to address deviations and non-conformances
• Design of facilities, qualification of equipment and facilities
• Personnel training and qualification
• Verification of sources of materials and containers and closures
• Analytical method validation discussion
• Approval by appropriate departments and the quality unit.
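The sampling-plan bullets above tie the number of samples to a statistical confidence level chosen from risk analysis. Neither the guidance nor this article prescribes a particular calculation; as one minimal, commonly used illustration (an assumption of this sketch, not a requirement from the source), the zero-failure binomial relationship connects the number of conforming samples n, the defect rate p to be ruled out, and the confidence level C through C = 1 - (1 - p)^n:

import math

def confidence(n: int, p: float) -> float:
    """Confidence that the true defect rate is below p, given n
    randomly drawn samples with zero observed failures."""
    return 1.0 - (1.0 - p) ** n

def samples_needed(conf_level: float, p: float) -> int:
    """Smallest zero-failure sample size demonstrating conf_level
    confidence that the defect rate is below p."""
    return math.ceil(math.log(1.0 - conf_level) / math.log(1.0 - p))

# Illustrative values only: 95% confidence of <5% defects needs 59
# conforming samples; 99% confidence of <1% defects needs 459.
print(samples_needed(0.95, 0.05))       # 59
print(samples_needed(0.99, 0.01))       # 459
print(round(confidence(59, 0.05), 3))   # about 0.95

Tighter defect rates or higher confidence levels, as indicated by the risk analysis, drive the required sample size up quickly, which is why the protocol should state the rationale for the level chosen.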
PPQ protocol execution and report. Protocol execution should not start until the protocol has been approved. Changes to the approved protocol must be made according to established procedures. The routine manufacturing process and procedures must be followed (i.e., usual conditions, personnel, materials, environments, etc.). The PQ report should do the following:
• Discuss and cross-reference all aspects of the protocol
• Summarize and analyze data
• Evaluate unexpected observations and additional data not specified in the protocol
• Discuss deviations and non-conformances
• Describe corrective actions
• State a clear conclusion whether the process is validated or, if not, what should be done to validate the process
• Be approved by appropriate departments and the quality unit.
Stage 3: Continued process verification. This stage may be simply described as maintaining validation, or maintaining the validated state. Maintenance activities of Stage 3 should be commensurate with the risk identified for the product and process.
Assuming good development of the process, identification of potential variation, and control of same, the manufacturer must maintain the process under control over the product lifetime (i.e., the work of Stage 3). This control must accommodate expected changes in materials, equipment, personnel, and other changes throughout the commercial life of the product based on risk analysis.
Stage 3 comprises the ongoing commercial manufacturing of the product under the same or equivalent conditions as demonstrated in Stage 2. This phase continues throughout the entire commercial life of the product or process. Specific topics discussed in this section include the following:
• Ongoing program to collect and analyze process data, including process trends, incoming materials, in-process material, and finished products
• Statistical analysis of data by trained personnel
• Procedures defining trending and calculations
• Evaluation of inter-batch and intra-batch variation (a simple illustration follows this list)
• Evaluation of parameters and attributes at PPQ levels until variability estimates can be established
• Adjustment of monitoring levels based on the above
• Timely assessment of defect complaints, out-of-specification (OOS) findings, deviations, yield variations, and other information
• Periodic discussion with production and quality staff on process performance
• Process improvement changes
• Facilities, utilities, and equipment must be maintained to ensure process control.
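As a minimal sketch of the trending and inter-batch/intra-batch items above (illustrative data and an assumed method-of-moments approach, not anything prescribed by the guidance), the following separates within-batch and between-batch variance components from replicate assay results and derives simple 3-sigma limits for trending batch means:

import numpy as np

# Hypothetical replicate assay results (% label claim) for several batches.
batches = {
    "A1": [99.1, 99.4, 98.8],
    "A2": [100.2, 99.9, 100.4],
    "A3": [99.6, 99.2, 99.5],
    "A4": [100.8, 100.5, 100.9],
}

data = np.array(list(batches.values()), dtype=float)  # shape: (batches, replicates)
k, n = data.shape
batch_means = data.mean(axis=1)
grand_mean = data.mean()

# One-way ANOVA on a balanced design (method-of-moments variance components).
ms_within = data.var(axis=1, ddof=1).mean()            # intra-batch mean square
ms_between = n * batch_means.var(ddof=1)               # inter-batch mean square
var_within = ms_within
var_between = max((ms_between - ms_within) / n, 0.0)   # truncate negative estimates at zero

print(f"intra-batch SD: {np.sqrt(var_within):.3f}")
print(f"inter-batch SD: {np.sqrt(var_between):.3f}")

# Simple 3-sigma trending limits for future batch means of n replicates.
sd_batch_mean = np.sqrt(var_between + var_within / n)
print(f"trend limits: {grand_mean - 3 * sd_batch_mean:.2f} "
      f"to {grand_mean + 3 * sd_batch_mean:.2f}")

A large inter-batch component relative to the intra-batch component would point to lot-to-lot sources (raw materials, set-up) rather than sampling or assay noise, which is the kind of signal the Stage 3 program is meant to surface.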


WHY THE LIFECYCLE APPROACH?
For manufacturing processes to be truly validated, each of the stages must be addressed and integrated. This integration of development work, process conformance, and continuing verification provides assurance that the product or process will consistently remain in control throughout the entire product lifecycle. Process validation must not be considered a one-time event or a focused one-time task performed just prior to commercial launch that emphasizes only the manufacture of three conformance lots. Acceptable manufacture of three conformance batches must not be interpreted as completion of validation. These lots cannot truly represent the future manufacturing process with unexpected and unpredictable changes. Conformance lots are often inadvertently biased (i.e., they may utilize well-characterized and controlled API and excipients, be manufactured under well-controlled conditions, be monitored by expert individuals, and be performed by the most experienced or well-trained personnel: all best-case conditions). It is highly unrealistic to contend that the manufacture of three conformance lots under best-case conditions conclusively predicts successful manufacturing over the product lifetime. True process validation must be a process that is never completed and is always ongoing.

Is This Really a New Approach?
The lifecycle approach to process validation is not really a new approach or a new concept (3). In an interview with FDA investigator Kristen Evans published in the Journal of Validation Technology in February 2000, the investigator commented on the failure of manufacturers to recognize a lifecycle approach to validation (see Sidebar).

Sidebar: Excerpt from an interview with FDA investigator Kristen Evans published in the Journal of Validation Technology, Volume 6, No. 2, February 2000.
Q. What are some of the major process validation problems you have seen during your inspections of manufacturing facilities in the United States?
A. I think, as a whole, the failure to recognize the lifecycle approach to validation. We see many firms, for whatever reason, thinking that once they complete their prospective three-batch validation, that's the end and they're on their way. I like to say that prospective validation is not the end. It's not the beginning of the end; it is hopefully the end of the beginning. But, clearly, it's an ongoing process. It requires a concerted effort to really maintain confidence in the process and to be able to demonstrate that at any given time. So, when we conduct our inspections, we want to know how the firm gives itself, and therefore us, the confidence that a given process on that day is under control. And you're not simply saying, "Well, we validated it a few years ago," or "We're going to do our annual review in a couple of months, and that will show us," but rather, systems are in place at any given time to show from a big picture that it's validated. As opposed to general problems discussed in the previous paragraph, a more specific problem that we see is a lack of scientific rationale in the protocols and acceptance criteria. At least there is a lack of documentation of such rationale, which is what we're expecting to see. We want the process to be there, that you've come up with a scientific study, a protocol, this is what you're attempting to show, and this is why, and this is how it's going to be evaluated, and then just simply executing that. That's documentation of the scientific rationale.
Note: The above comments are the personal opinions of Mr. Evans and are not FDA policy.

The three-stage lifecycle description of process validation as discussed in the FDA process validation guidance integrates various strategies, approaches, and expectations that had been mentioned in several published documents, guidelines, and presentations. FDA representatives have openly discussed the lifecycle approach to process validation for several years (4,5,6). The draft process validation guidance that formally introduced the lifecycle approach for industry comment was published in 2008 (7). The lifecycle approach overcomes the checklist approach to process validation, whereby process validation is considered to be a one-time event. Encouraging comprehensive process understanding improves root cause analysis when manufacturing problems occur. Successfully manufacturing three validation lots without sufficient process understanding does not provide good assurance that the manufacturing process will consistently yield an acceptable product throughout the product commercial life.


The September 2006 FDA Quality Systems Approach to Pharmaceutical CGMP Regulations (8) clearly discusses expectations for maintenance of the validated state. In discussing performance and monitoring of operations, the regulations state, "An important purpose of implementing a quality systems approach is to enable a manufacturer to more efficiently and effectively validate, perform, and monitor operations and ensure that the controls are scientifically sound and appropriate." Further, "Although initial commercial batches can provide evidence to support the validity and consistency of the process, the entire product lifecycle should be addressed by the establishment of continual improvement mechanisms in the quality system." Thus, in accordance with the quality systems approach, process validation is not a one-time event, but an activity that continues through a product's life. This document also discusses trend analysis, corrective action and preventive action (CAPA), change control, and other quality systems programs.
The FDA Pharmaceutical cGMPs for the 21st Century - A Risk-Based Approach (9) states the following: We have begun updating our current thinking on validation under a Cross-Agency Process Validation workgroup led by CDER's Office of Compliance Coordinating Committee with participation from CDER, CBER, ORA, and CVM. In March of this year, FDA began this process issuing a compliance policy guide (CPG) entitled Process Validation Requirements for Drug Products and Active Pharmaceutical Ingredients Subject to Pre-Market Approval (CPG 7132c.08, Sec 490.100) (10). The CPG stresses the importance of rational experimental design and ongoing evaluation of data. The document also notes that achieving and maintaining a state of control for a process begins at the process development phase and continues throughout the commercial phase of a product's lifecycle. The CPG incorporates risk-based approaches with respect to inspectional scrutiny and use of advanced technologies, and by articulating more clearly the role of conformance batches in the product lifecycle. The document clearly signals that a focus on three full-scale production batches would fail to recognize the complete story on validation.
In the 2004 revision of FDA Compliance Policy Guide Sec. 490.100, Process Validation Requirements for Drug Products and Active Pharmaceutical Ingredients Subject to Pre-Market Approval (CPG 7132c.08), expectations for validated processes are clearly stated. Before commercial distribution begins, a manufacturer is expected to have accumulated enough data and knowledge about the commercial production process to support post-approval product distribution. Normally, this is achieved after satisfactory product and process development, scale-up studies, equipment and system qualification, and the successful completion of the initial conformance batches. Conformance batches (sometimes referred to as validation batches and demonstration batches) are prepared to demonstrate that, under normal conditions and defined ranges of operating parameters, the commercial scale process appears to make acceptable product. Prior to the manufacture of the conformance batches the manufacturer should have identified and controlled all critical sources of variability. FDA has removed reference to manufacture of three lots as a requirement for validation in this document.
The process validation guidance is consistent with FDA QbD principles. The various QbD presentations and publications strongly encourage demonstrations of process understanding for both API and drug product (11,12,13,14). In the 2006 FDA Perspective on the Implementation of Quality by Design (QbD), a QbD system is defined as follows:
• The API or drug product is designed to meet patient needs and performance requirements
• The process is designed to consistently meet critical quality attributes
• The impact of starting raw materials and process parameters on quality is well understood
• The process is evaluated and updated to allow for consistent quality over time
• Critical sources of process variability are identified and controlled
• Appropriate control strategies are developed.
The various FDA guides to inspections (15,16,17), all issued during the 1990s, emphasized the development phase of the validated process and associated documentation. This included documented experiments, data, results, control of the physical characteristics of the excipients, particle size testing of multi-source excipients, and determination of critical process parameters. Development data serves as the foundation for the manufacturing procedures, and variables should be identified in the development phase. Raw materials were identified as a source of lot-to-lot variation, as were equipment or processes that could impact product effectiveness or product characteristics (i.e., the validated state must be maintained).


Some of the key concepts in the 2011 process validation guidance were originally mentioned in the FDA 1987 guidance. For example, the 1987 guidance states, "...adequate product and process design...quality, safety, and effectiveness must be designed and built into the product..." and "During the research and development (R&D) phase, the desired product should be carefully defined in terms of its characteristics, such as physical, chemical, electrical, and performance characteristics." In addition to discussing actual validation protocols, the document mentions several post-validation considerations, as follows: "...quality assurance system in place which requires revalidation whenever there are changes in packaging, formulation, equipment, or processes which could impact product effectiveness or product characteristics, and whenever there are changes in product characteristics. The quality assurance procedures should establish the circumstances under which revalidation is required."
The 2011 process validation guidance clearly states its consistency with International Conference on Harmonisation (ICH) Q8, Q9, and Q10 documents (18). These documents provide current global thinking on various aspects of the product lifecycle from development through commercialization. They provide a comprehensive and integrated approach to product development and manufacturing to be conducted over the lifecycle of the product. ICH Q8 discusses information for regulatory submission in the ICH M4 Common Technical Document format (19). ICH Q8 describes a comprehensive understanding of the product and manufacturing process that is the basis for future commercial manufacturing, including QbD concepts. ICH Q9 provides a systematic approach to quality risk management through various risk assessment tools. ICH Q9 also suggests application of risk management methods to specific functions and business processes in the organization. ICH Q10 complements Q8 and Q9, and discusses the application of the various quality system elements during the product lifecycle. Elements of the quality system include process performance monitoring, CAPA, change control, and management review. These quality system elements are applied throughout the various phases of the product lifecycle.
The 2000 ICH Q7 Good Manufacturing Practice Guide for Active Pharmaceutical Ingredients (20) discusses activities conducted prior to and post validation. For example, ICH Q7 states that critical parameters or attributes should be identified during development, and these critical process parameters should be controlled and monitored. Non-critical parameters should not be included in validation. Regarding post validation, there should be periodic review of validated systems.

Medical Device Validation Guidance
Although the 2011 process validation guidance does not apply to medical devices, medical device documents espouse an equivalent comprehensive approach to process validation. In the January 2004 Global Harmonization Task Force (GHTF) Study Group 3, Quality Management Systems - Process Validation Guidance (21), activities conducted during product or process development to understand the process are described. For example, "The use of statistically valid techniques such as screening experiments to establish key process parameters and statistically designed experiments to optimize the process can be used during this phase." This document also describes activities conducted post-validation to maintain the product or process. For example, "Maintaining a state of validation by monitoring and control including trend analysis, changes in processes or product, and continued state of control of potential input variation such as raw materials." Tools including statistical methods, process capability, control charts, design of experiments, risk analysis, and other concepts are described.
The 1997 FDA Medical Device Quality Systems Manual (22) further emphasizes activities to be conducted post validation. It states, "Process and product data should be analyzed to determine what the normal range of variation is for the process output. Knowing what is the normal variation of the output is crucial in determining whether a process is operating in a state of control and is capable of consistently producing the specified output. Process and product data should also be analyzed to identify any variation due to controllable causes. Appropriate measures should be taken to eliminate controllable causes of variation...Whether the process is operating in a state of control is determined by analyzing day-to-day process control data and finished device test data for conformance with specifications and for variability."
The 1997 Guide to Inspections of Medical Device Manufacturers (23) states, "It is important to remember that the manufacturer needs to maintain a validated state. Any change to the process, including changes in procedures, equipment, personnel, etc. needs to be evaluated to determine the extent of revalidation necessary to assure the manufacturer that they still have a validated process."
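The "state of control" analysis described in the Medical Device Quality Systems Manual excerpt above is commonly operationalized with a control chart, one of the statistical tools the GHTF guidance names. The sketch below is a minimal individuals-chart calculation on made-up day-to-day results (an illustration of the general technique, not something specified in the cited documents):

import numpy as np

# Hypothetical day-to-day test results for one output characteristic.
x = np.array([10.2, 10.4, 9.9, 10.1, 10.3, 10.0, 10.2, 11.6, 10.1, 9.8])

center = x.mean()
moving_range = np.abs(np.diff(x))
sigma_hat = moving_range.mean() / 1.128      # d2 constant for subgroups of size 2

ucl = center + 3 * sigma_hat                 # upper control limit
lcl = center - 3 * sigma_hat                 # lower control limit

out_of_control = np.where((x > ucl) | (x < lcl))[0]
print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
print("points outside the limits:", out_of_control.tolist())

Points outside the limits, or non-random patterns within them, are the trigger for the investigation and revalidation decisions the inspection guide describes.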


APPLYING THE LIFECYCLE APPROACH
The concepts identified in the respective stages of the FDA process validation guidance, namely process design (understanding), process qualification (performance), and continued process verification (maintaining validation), serve as a model for all areas of validation and qualification. Although not specifically mentioned in the FDA guidance, the sequence of understanding, performance, and maintaining the validated state is certainly applicable and desirable for other processes in pharmaceutical manufacturing including packaging, cleaning, analytical, and so on. Further applying this sequence to equipment qualification, HVAC, computer systems, and other areas is also appropriate and desirable. Presentations on these associated topics at validation meetings have already been structured according to this model. The installation qualification-operational qualification-performance qualification (IQ/OQ/PQ) model (24) and the ASTM E2500 (25) model are consistent with the sequence of understanding, qualifying, and then maintaining qualification through calibration, preventive maintenance, change control, and associated activities. Applying the Stages 1, 2, and 3 sequence of activities to all validation and qualification unifies the site approach to project management activities, standardizes expectations, facilitates training, and generally simplifies organizational thinking.

THE EFFECT ON CURRENT VALIDATION PROGRAMS
A major concern of validation practitioners gets to the bottom line: How does the 2011 guidance affect current validation programs, and how can the new guidance be implemented?

Organizational Aspects
The lifecycle approach to process validation requires commitment from many areas in the organization. The lifecycle approach must become part of organizational strategy. This will require a comprehensive and continuing view of validation rather than focus on the performance of the usual three conformance lots ("job done"). Many firms organize their operations in distinct silos (e.g., R&D, manufacturing, and quality). The silos create barriers to communication and cooperation. The R&D organization develops the product. After development is completed, the product is transferred to manufacturing. Commercial operations personnel adjust the process and make it ready for validation and routine production. The validation function coordinates process validation. After the conformance lots are successfully completed, the validation effort is finished. Manufacturing then continues routine commercial production with oversight by the quality unit or the qualified person (QP). Often there is minimal ongoing constructive interaction between R&D, validation, manufacturing, and quality during the product lifetime.
The lifecycle approach to validation is clearly different from the situation described above. Product R&D and technical support should approach their work as supporting the entire product lifecycle including commercial manufacturing. They must be involved in monitoring and maintenance of the validated state. Their work should provide the technical basis or justification for all aspects of manufacturing including any changes and necessary improvements. The validation group should coordinate the process qualification stage of manufacturing based on technical development work, and should participate in determining the ongoing control strategy. The validated state must be maintained through process monitoring, technical data evaluation, and change control. Manufacturing fixes or tweaks should be evaluated by technical people, and should ideally be supported by data or sound technical judgment whenever possible. R&D should be involved in process improvements and provide the technical justification for these improvements. Organizations should foster development of a continuous business process beginning in R&D and continuing throughout the entire product lifecycle, with ongoing collaboration and communication among all relevant organizational areas. The lifecycle approach to process validation must become a comprehensive organizational effort.

Validation Performance Specifics
The 2011 guidance describes many specific details and expectations for Stage 2 and Stage 3. Validation and quality managers should evaluate their practices and procedures regarding these specifics. FDA recommendations for Stage 2 PPQ protocol-related activities are substantial. FDA recommendations for Stage 3 post-validation monitoring are significantly different from a traditional Annual Product Review approach. Deficiencies in site programs should be identified and corrective actions or improvements prioritized. Risk to the patient and to the organization should be considered in prioritization.

Risk Analysis
Risk assessment has a critical role in all of the activities described herein. All activities conducted in the organization should be conducted with risk in mind. ICH Q9 describes various risk assessment methods and potential applications of risk assessment.
There are numerous applications of risk management used during the entire process validation lifecycle. Examples cited in ICH Q9 relevant to process validation include product and process development; facilities and equipment design; hygiene aspects in facilities; qualification of equipment, facility, or utilities; cleaning of equipment and environmental control; calibration and preventive maintenance; computer systems and computer-controlled equipment; and so on. In brief, risk assessment helps to identify the most important potential problems in all three stages of process validation, and then addresses these problems appropriately. There should be consistency between the risk-based activities in all three stages of process validation. Risk management must become pervasive in the organization.

Training
The issuance of the 2011 FDA guidance requires appropriate training for all involved in validation-related activities. All involved in validation and validation- or qualification-related activities must be aware of the 2011 process validation guidance and the concepts therein. Personnel who previously considered themselves to be apart or distant from commercial product validation (e.g., development scientists) must now be included in validation training.
Especially important are personnel who write validation plans, protocols, results, and associated documents. These writers must think comprehensively, incorporating pre-validation development information as well as considerations for post-validation maintenance of the validated state into their validation documentation.
Also critically important for training are the site VAC members. There must be a clear understanding and agreement among VAC members and the validation group as to their functions and responsibilities. Clearly stating the responsibilities of the VAC provides focus and expectations for the VAC. It also provides clear expectations for those submitting protocols, validation plans, and other documents for VAC review and approval. VAC members must maintain awareness of, and compliance with, the 2011 process validation guidance. The VAC members should consider themselves to be surrogate FDA (or other regulatory agency) auditors. The VAC should assume responsibility for site preparedness for future regulatory audits of the validation function. Future audits will certainly include concepts and recommendations stated in the 2011 process validation guidance.

Terminology
The terminology associated with the various phases of validation has had minor variations over the years. The 2011 process validation guidance describes process design, process qualification, and continued process verification stages in the validation lifecycle. Stage 2 Process Qualification includes PPQ manufacturing of commercial lots. The 1987 FDA validation guidance describes installation and operational qualification, process performance qualification, and product performance qualification. Product lots manufactured in the process qualification phase were termed conformance lots. PPQ batches have also been named demonstration lots, qualification lots, PQ lots, and validation lots in past years. The Stage 2 process qualification phase also includes equipment, facilities, and utilities qualification.
While the variety of terminology used may cause difficulties in communicating, the intent of all validation programs is the same: sequential process understanding, validation performance, and maintaining the validated state as described herein comprise the validation lifecycle continuum. Validation programs addressing these phases of the product or process lifecycle, no matter what specific terminology is used or how they are categorized in documentation, will meet the expectations of robustness, repeatability, and reliability for validated processes. Regulatory investigators are knowledgeable and able to interpret different organizational terminology as long as the sequence of process understanding, validation performance, and maintaining the validated state is demonstrated.

Documentation
All work associated with process validation in all stages of the validation lifecycle must be documented. This includes product and process design, experimental and development studies for process understanding, risk analysis in development, designed experiments, process parameter optimization, validation and qualification protocols, and process monitoring to maintain the validated state. Development scientists must understand that their work is integral to the validation lifecycle. Development reports may be requested in regulatory audits. Summary documents are recommended, especially when multiple documents must be integrated by the reader. All work associated with equipment, facilities, and utilities qualification and analytical validation must also be documented.
Document quality is important; in many cases, documents are reviewed literally years after they are written and long after authors have moved on to new careers inside or outside of the company.


All associated documents must be readily available. Documents are often required to be quickly retrieved in regulatory audits. Document storage in an easily accessible centralized location is recommended.

Analytical
The guidance briefly discusses expectations for analytical methodology in process validation. It states that process knowledge depends on accurate and precise measuring techniques. Analytical areas supporting early Stage 1 R&D work must be aware that their methods and data may be subject to inspection in validation audits. Test methods must be scientifically sound (e.g., specific, sensitive, accurate), suitable, and reliable. Analytical instruments must function reliably. Analytical method development reports must be available for auditor review. Procedures for analytical methods, equipment maintenance, documentation practices, and calibration practices should be documented or described. Current good manufacturing practice regulations (CFR 210 and 211) must be followed as appropriate for batch release of commercial lots.

Management Support
The support of senior management and the respective functional management of affected areas in the organization is critical to implementing the lifecycle approach. Management in the organization must become familiar with the 2011 validation guidance and its ramifications. Transitioning organizations to the lifecycle approach to validation cannot be completed without management support. Employees provide what management expects. Validation and quality professionals should help their management to assess the status of their organizations. Deficiencies must be corrected and enhancements implemented. Validation and quality professionals should prioritize activities based on risk to patient and organization. Economic impact must also be considered. A balance of risk, cost, and compliance considerations is key to development of a suitable validation program that will be embraced and supported.

CONCLUSIONS
FDA issued Process Validation: General Principles and Practices in January 2011, which has given widespread visibility to the lifecycle approach concept. This comprehensive approach to validation has significant and far-reaching ramifications. Three stages in the lifecycle approach have been identified. The lifecycle concept links development, validation performance, and maintenance of the process in a state of control during routine commercial production. FDA has provided recommendations for the general lifecycle and Stages 1, 2, and 3, including specific details for each of the stages. Stage 1 (Process Design) may be generally described as process understanding. Stage 1 work will ultimately be reflected in the master production record and control records. Stage 2 (Process Qualification) may be described as validation performance. This stage comprises demonstration of commercial process performance by means of conformance lots and confirms the development work of Stage 1. Stage 2 details are also provided for design of a facility and qualification of utilities and equipment, PPQ, the PPQ protocol, and PPQ protocol execution and report. Stage 3 (Continued Process Verification) may be simply described as maintaining validation. This stage comprises the ongoing commercial manufacturing of the product under the same or equivalent conditions as demonstrated in Stage 2. Stage 3 continues throughout the entire commercial life of the product or process. Understanding the sources of variation and control of variation commensurate with risk should be applied during all stages of validation. The integration of development work, process conformance, and continuing verification provides assurance that the product or process will consistently remain in control throughout the entire product lifetime.
The lifecycle approach is not a new concept. This approach as described in the guidance integrates various strategies, approaches, and expectations that had been mentioned in several previously published documents, guidelines, and presentations. The concepts identified in the respective stages of the FDA process validation guidance (understanding, performance, and maintenance) serve as a model for all areas of validation.
Implementation of the lifecycle approach in site validation programs has significant ramifications for the organization. Organizational functions previously distant from commercial processes are now integral to ongoing performance. Post-validation monitoring of process performance, including timely responsiveness to data trends, is an expectation. The lifecycle approach affects many areas of validation programs including organizational aspects, validation performance guidance specifics, risk analysis, training, and documentation. Senior and functional management support is needed to transition organizations to the lifecycle approach to validation. Risk analysis is key to development and prioritization of a suitable validation program that will be embraced and supported.


REFERENCES
1. FDA, Guidance for Industry, Process Validation: General Principles and Practices, January 2011.
2. FDA, Guideline on General Principles of Process Validation, May 1987.
3. Evans, Kristen and Vincent, D., "Ask the FDA: Questions and Answers with Current and Former Agency Personnel. Critical Factors Which Affect Conducting Effective Process Validation: One FDA Investigator's View," Journal of Validation Technology, Volume 6, No. 2, February 2000.
4. McNally, Grace E., "Lifecycle Approach Process Validation," GMP by the Sea, Cambridge, MD, August 26, 2008.
5. McNally, Grace E., "Lifecycle Approach Process Validation," GMP by the Sea, Cambridge, MD, August 29, 2007.
6. Famulare, Joseph, "Benefits of a Pharmaceutical Quality System," PDA/FDA Joint Conference, Bethesda, MD, November 2, 2007.
7. FDA, Draft Guidance for Industry, Process Validation: General Principles and Practices, November 2008.
8. FDA, Quality Systems Approach to Pharmaceutical CGMP Regulations, September 2006.
9. FDA, Pharmaceutical cGMPs for the 21st Century - A Risk-Based Approach, Final Report, Fall 2004, September 2004.
10. FDA, Compliance Policy Guide 7132c.08, Section 490.100, Process Validation Requirements for Drug Products and Active Pharmaceutical Ingredients Subject to Pre-Market Approval.
11. Nasr, Moheb, "FDA Perspective on the Implementation of Quality by Design (QbD)," 9th APIC/CEFIC, Prague, Czech Republic, October 10, 2006.
12. Yu, Lawrence X., Robert Lionberger, Michael C. Olson, Gordon Johnston, Gary Buehler, and Helen Winkle, "Quality by Design for Generic Drugs," PharmTech.com, October 2, 2009.
13. Winkle, Helen H., "Implementing Quality by Design," PDA/FDA Joint Regulatory Conference, September 24, 2007.
14. Yu, Lawrence X., "Pharmaceutical Quality by Design: Product and Process Development, Understanding, and Control," Pharmaceutical Research online, January 10, 2008.
15. FDA, Guide to Inspections of Oral Solid Dosage Forms: Pre/Post Approval Issues for Development and Validation, January 1994.
16. FDA, Guide to Inspections of Topical Drug Products, July 1994.
17. FDA, Guide to Inspections, Oral Solutions and Suspensions, August 1994.
18. www.ICH.org
19. ICH, M4: The Common Technical Document for the Registration of Pharmaceuticals for Human Use, www.ich.org.
20. ICH, Q7: Good Manufacturing Practice Guide for Active Pharmaceutical Ingredients, November 2000.
21. GHTF, Study Group 3, Quality Management Systems - Process Validation Guidance, Edition 2, January 2004.
22. FDA, Medical Device Quality Systems Manual, January 1997.
23. FDA, Guide to Inspections of Medical Device Manufacturers, Process Validation, 21 CFR 820.75, December 9, 1997.
24. ISPE GAMP Testing SIG, "Key Principles and Considerations for Testing of GxP Systems," Pharmaceutical Engineering, Vol. 25, No. 6, Nov.-Dec. 2005.
25. ASTM, ASTM E2500-07, Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment, August 2007.

ARTICLE ACRONYM LISTING
API: Active Pharmaceutical Ingredient
CAPA: Corrective Action and Preventive Action
DOE: Design of Experiments
FDA: US Food and Drug Administration
ICH: International Conference on Harmonisation
IQ: Installation Qualification
OOS: Out of Specification
OQ: Operational Qualification
PPQ: Process Performance Qualification
PQ: Performance Qualification
QbD: Quality by Design
QP: Qualified Person
R&D: Research and Development
VAC: Validation Approval Committee

Originally published in the Spring 2011 issue of The Journal of Validation Technology


The Forgotten Origins of Quality by Design

John McConnell, Brian K. Nunnally, and Bernard McGarvey

Analysis and Control of Variation is dedicated to revealing weaknesses in existing approaches to understanding, reducing, and controlling variation and to recommending alternatives that are not only based on sound science but also demonstrably work. Case studies will be used to illustrate both problems and successful methodologies. The objective of the column is to combine sound science with proven practical advice.

Reader comments, questions, and suggestions will help us fulfill our objective for this column. Case studies illustrating the successful reduction or control of variation submitted by readers are most welcome. Please send your comments and suggestions to column coordinator John McConnell at john@wysowl.com.au or journal coordinating editor Susan Haigney at shaigney@advanstar.com.

ABOUT THE AUTHORS
John McConnell is owner and director at Wysowl Pty Ltd. in Queensland, Australia. He may be reached at john@wysowl.com.au. Brian K. Nunnally, Ph.D., is in charge of process validation at Pfizer in Sanford, NC. He may be reached at nunnalb@wyeth.com. Bernard McGarvey, Ph.D., is Process Modeling Group Leader, Process Engineering Center at Eli Lilly and Company in Indianapolis, IN. He may be reached at mcgarvey_bernard@lilly.com. For more author information, go to gxpandjvt.com/bios.

KEY POINTS DISCUSSED
The following key points are discussed:
• Quality by design can be traced to the original work of Dr. Joseph M. Juran.
• Juran believed that nearly all problems (e.g., deviations, quality defects, etc.) are built into the system, and that they can be traced to inadequate design of the process. Therefore, if addressed during the design phase, they can be largely designed out.
• QbD starts with the quality target product profile. Juran calls this defining the customer needs.
• QbD then proceeds with identification of critical quality attributes (CQAs). Juran calls this development of product features. These should be reviewed by both customer and producer.
• Internal service groups must feel competition from outsourced services to continually improve their services, reduce variation, and positively impact product throughout the product lifecycle.
• Once CQAs have been defined, the manufacturing process to meet these CQAs can be designed, including technology and human components.
• Sampling and sample handling variability should be quantified and minimized.
• Capability (Cpk) is the measure of a process's ability to meet its targets or specifications (a simple calculation is sketched after this list).
• The final aspect of QbD is designing a control strategy, which Juran called developing process controls.
• Understanding the impact of variation, and designing it out wherever possible, is an important part of QbD.
• The dominant variables of a process can be described as set-up, components, time, and worker elements.
• Knowing which is likely to be dominant significantly impacts on design.
• Designing a control strategy that ensures a robust process depends on understanding the sources of variation for each step (or assay) and controlling them. Understanding which category a variable falls into helps to determine how best to attack variability.
• Juran's work, the basis for QbD, is now described in ICH Q8 (R2), Pharmaceutical Development.
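The capability index named in the key points above has a standard textbook definition: Cp compares the specification width to six process standard deviations, and Cpk is the smaller of the two one-sided margins divided by three standard deviations. A minimal sketch with illustrative numbers follows (the specification and process values are invented for this example):

def process_capability(mean: float, sd: float, lsl: float, usl: float):
    """Standard Cp and Cpk for a two-sided specification.

    Cp compares the specification width to the process spread;
    Cpk additionally penalizes off-center operation."""
    cp = (usl - lsl) / (6 * sd)
    cpk = min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))
    return cp, cpk

# Hypothetical assay: spec 95.0-105.0 % label claim, mean 99.0, SD 1.2.
cp, cpk = process_capability(mean=99.0, sd=1.2, lsl=95.0, usl=105.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cp = 1.39, Cpk = 1.11

The gap between Cp and Cpk in the example comes entirely from the off-center mean, which is exactly the kind of variation QbD aims to design out.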


tJurans work, the basis for QbD, is now described Figure 1: The Juran trilogy (adapted from 1).
in ICH Q8 (R2), Pharmaceutical Development.

INTRODUCTION
The US Food and Drug Administration and the
International Conference on Harmonisation (ICH)
have embraced the concept of building quality into Quality Quality
pharmaceutical manufacturing processes. This Planning Control
has been called quality by design (QbD). Many
pharmaceutical scientists are not aware of the origins
of QbD, nor are they familiar with its originator, Dr.
Joseph Juran.
The QbD concept was originally published in
1985. The seminal work is Juran on Quality by Design
(1). This book outlines the reasons and methodology
for planning quality into a manufacturing process.
Quality
Juran did not use pharmaceuticals or medical devices
Improve-
in his book, but all of the principles are present and ment
have been adapted into ICH Q8 (R2), Pharmaceutical
Development (2).

BASICS OF QBD our customers is to be a customer, as Juran would


For Juran, quality problems are planned that way probably say. We need to think like a patient and
(i.e., they are built into the system) (1). He believed not forget the patient in all of our work. The quality
that all quality defects were a result of a poorly cannot be built into the drug until the parameters
planned quality system. The Juran trilogy is shown needed are defined (e.g., the route of administration,
in Figure 1. We have shown this as an interlocking dosage, strength, etc.)
sequence, as we are sure Juran believed it to be.
The corollary to this, and the most important Critical Quality Attributes
aspect for the pharmaceutical and medical device Once the QTPP is defined, work on defining the
industry, is that the quality problems can be critical quality attributes (CQAs) can commence.
prevented by understanding the process better for Juran would call this process the development of
the determination of design space, identification product features. His advice rings true with todays
of critical quality attributes, and defining an development. Product development requires
appropriate control strategy. not only functional expertise; it also requires
the use of a body of quality-related know-how
Quality Target Product Profile (1). Development scientists need a thorough
QbD starts with an understanding of the quality understanding of manufacturing and quality as their
target product profile (QTPP). This is defined as a data and experiments form the basis of the processes
prospective summary of quality characteristics of and methods used during commercial manufacture.
a drug product (or medical device, drug substance, Juran would be supportive of the efforts made by
etc.) that ideally will be achieved to ensure the manufacturing to have influence on the development
desired quality, taking into account safety and process and by development to better understand the
efficacy of the drug product (or medical device, needs of commercial manufacturing and the product
drug substance, etc.) (2). Juran would call this released to the marketplace. It is the job of both to
defining the customer needs. Juran states, the goal ensure the planned approach includes work to guard
should be customer satisfaction rather than mere against external failures. These include product and
conformance to stated needs (1). Translated for the process design-related issues, carryover from previous
pharmaceutical industry (note: medical devices are processes, and degradation (2). The carryover of
included but will not be further explicitly discussed), previous processes (think toolbox technologies) can
this would mean a focus on producing medicines contribute to future quality issues by embedding these
that improve the lives of the patients we serve. Our into new processes.
industry is not involved in the production of widgets; The CQAs are those product (or drug substance)
we produce life-saving and life-enhancing medicines characteristics having an impact on product quality
designed to improve, save, and elongate the lives of and should be studied and controlled (2). Juran
our customers. In this light, the best way to serve advises that this criticality needs to be reviewed by

PROCESS VALIDATION Process Design 15


John McConnell, Brian K. Nunnally, and Bernard McGarvey

Figure 2: Pharmaceutical adapted Juran process model (adapted from 1). [Diagram: Inputs (QTPP, CQAs) feed Product Development and the Process, which yield Outputs (Process Features).]

parameters, output characteristics, and assays. Variability should be identified and controlled to ensure patient and business success. This is not an end, but should be considered to be a process encompassing the entire lifecycle history of the product and process.

Critical Process Parameters
Once CQAs have been defined, the manufacturing process to meet these CQAs can be selected or designed (2). This process includes both technology as well as the human components (1). The human factor is built into the process and must be considered in the design to error proof the human
element. Too many deviations are seen that are
both customer and supplier (1). This means both blamed on human root causes when, in fact,
the patient and the business are stakeholders. The these are process design elements that could have
patient benefits from a robust process that ensures designed the problem out. We have been told about
those attributes central to the safety and efficacy of companies where up to 50% of the root causes are
the drug are predictable. Likewise, the business human error. For these situations, we predict these
benefits through reduction of cost and increase in same 50% of the deviations will happen again due to
volume (see Little's Law [3,4]), lowered frequency of inadequate corrective and preventative actions.
deviations, improved utilization of human capital Juran offered that process design should be goal
and fixed assets, and less rework and waste. The oriented, systemic, capable, and legitimate (1). A
CQAs will be dependent on the various inputs pharmaceutical adapted Juran model of process
to the processJuran predicted this as well (1). development is shown in Figure 2. A process cannot
Minimizing the variability in these inputs (including meet a target not specified. The following are
process parameters and raw materials, sampling, and examples:
analytical) will improve the ability to determine and tWhat are the quality goals to be met?
maintain the CQAs. tHow much variation can be tolerated from
the process?
Internal Monopolies tSampling?
In his discussions of the development of product tAnalytical methods?
features, Juran has an interesting sidebar on internal tWhat output is needed?
monopolies. Internal monopolies were once the tHow can the process be scaled to meet demand
rule in the pharmaceutical industry. Services such (higher or lower)?
as clinical trial management, active pharmaceutical
ingredient (API) manufacture, and testing were All of these questions (and more) must be
all done in-house by the company's employees. discussed to prepare a comprehensive set of
With the increase in outsourcing, there is really goals for the process. The process is a series of
no such thing as an internal monopoly. All of the interconnected systems; each one having an
services mentioned previously are being performed impact on the other. For instance, variability
by contract organizations. Juran predicted the in analytical results used by the process as part
beginnings of this when he advised that competitive of the manufacture (e.g., used by the process to
information from outside suppliers of similar determine how the next step will be run) have a
services could be acquired to ensure the highest larger impact on the overall variability than those
quality, speed, and value are being obtained (1). that are not used to run the process. It is important
This put pressure (used in its least pejorative sense) to remember that the laboratory tests the sample,
on the internal monopoly to demonstrate its not the process. Sampling (and sample handling)
competitiveness. This benchmarking is critical to the variability cannot be ignored and should be
survival of the internal monopoly. A word of caution quantified and minimized as it ultimately affects
on outsourcing: While it seems to be all benefit the overall variability of the process. Capability is
with no cost, this is not always the case. Part of crucial to meeting the process goals. Capability is
the agreement between the company and the service the measure of a process's ability to meet its targets
providers has to include tough variability reduction or specifications. The key aspect of capability (CpK)
(per Deming's approach to quality) of operating is variability, as shown in the following equation:

16 PROCESS VALIDATION Process Design


John McConnell, Brian K. Nunnally, and Bernard McGarvey

CQAs (2). Juran did not utilize the terminology, but


did describe the different types of variables present
in a process. He described the concept of dominant
variables, defined as those variables that are more
CpK = min[(USL - x̄)/3σ, (x̄ - LSL)/3σ]
where important than others. The variables described
USL is the upper specification limit, include the following:
LSL is the lower specification limit, tSet-up dominant variables. These are
x̄ is the mean, and variables affecting processes that are dependent
σ is the standard deviation. on the setup and confirmation of the setup to
provide stability and reproducibility. Equipment
There are only three ways to improve capability, and supplier differences can contribute to
as follows: variability for these operations. Automated
tMove the specification away from the assays and packaging machines fall into the set-
mean. This is a difficult task to accomplish up dominant category.
in the pharmaceutical industry without tTime dominant variables. These are variables
significant data and lengthy regulatory agency affecting processes that change over time and
discussions. need to be adjusted based on evaluation of
tMove the mean away from the specification. control parameters. An example of this variable
For double-sided specifications, this only is ultra filtration operations. It is important to
works if the mean is not centered. be vigilant against over-control (e.g., needlessly
tDecrease the variability. Given the fact adjusting the process based on the last result
that this is multiplied by three, this has the obtained).
greatest power to improve capability. In an tComponent dominant variables. These are
extreme understatement, Juran suggested this variables affecting processes in which the main
evaluation had merit (2). More than this, it variable is the quality of the inputs or materials.
is a critical part of process understanding. The variability in drug substance can be a large
source of variability for a drug product process.
It is important to remember that it is impossible Many chemical reactions fall in this category
to know or understand the capability of an as well. In these instances reducing variability
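To make the capability discussion above concrete, here is a minimal sketch (in Python, with hypothetical assay numbers) of the CpK calculation; it also shows how strongly the variability term drives the index.

```python
def cpk(usl, lsl, mean, sigma):
    """Process capability index: the lesser of the distances from the mean
    to each specification limit, expressed in units of 3 standard deviations."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Hypothetical assay values (% label claim): specifications 95.0-105.0, mean 99.5
print(cpk(usl=105.0, lsl=95.0, mean=99.5, sigma=1.2))   # ~1.25
print(cpk(usl=105.0, lsl=95.0, mean=99.5, sigma=0.6))   # ~2.50 -- halving sigma doubles CpK
```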
unpredictable process. Finally, the process upstream is critical (Littles law).
must be legitimate. Juran states, ...if there is the tWorker dominant variables. These are
approval of those to whom responsibility has been variables affecting processes where the quality
delegated (1). This is not a common concern in the attributes are determined by the skill of the
pharmaceutical industry where clear accountability worker. Many types of assays fit into this
is often present. category. Non-automated processes can be
These are critical steps that are important to the included here as well. Ensuring operators
determination of the CQAs. The planning associated or analysts performing the same functions
with these steps should also include planning of the or on different shifts have identical practices
quality system. This plan includes alarm strategy, is important to reducing variability in this
redundancy, and design to improve robustness and category.
decrease variability. The final process including
manufacturing, sampling, and analytical should be Designing a control strategy that ensures a robust
designed and demonstrated to be in control (e.g., process depends on understanding the sources of
predictable), appropriate for its intended use, and variation for each step (or assay) and controlling
capable of meeting the defined CQAs (and thus the them. Understanding which category a variable falls
QTPP). into helps to determine how best to attack variability.
The closest comparison to the design space concept
Control Strategy is what Juran referred to as the Data Base (1). He
The final aspect is designing a control strategy described this as a compendium of lessons learned
(2) that Juran called developing process controls from human experiences. He referred to the exercise
(1). This should be done in concert with process of reviewing the information as the Santayana
development. The concept of design space comes Review named after George Santayana who stated,
into play at this point. So much has been written those who cannot remember the past are doomed
and presented on design space that we will not to repeat it (1). The purpose of this process is to
discuss this in great detail in this article. Design improve decision-making by using the lessons of
space is the relationship of the process inputs to the the past. For the pharmaceutical industry, the

PROCESS VALIDATION Process Design 17


John McConnell, Brian K. Nunnally, and Bernard McGarvey

design space is reviewed and documented into REFERENCES


process experience reports that are summarized in 1. Juran, Joseph M., Juran on Quality by Design, The Free
the development history section of the common Press, 1992.
technical document format of the dossier. Having 2. ICH, Q8 (R2), Pharmaceutical Development, 2009.
this information codified in a single location is 3. Nunnally, Brian K, and J.S. McConnell, Six Sigma in the
critical to resolve process upsets and to continue Pharmaceutical Industry, Taylor and Francis, 2007.
process improvements post marketing. 4. McConnell, John, Brian K. Nunnally, and Bernard
McGarvey, Having It All, Journal of Validation Technology,
FINAL THOUGHTS Vol. 16, No. 2, Spring 2010. JVT
The concept initially developed by Joseph Juran
is the origin of the modern QbD movement in ARTICLE ACRONYM LISTING
the pharmaceutical industry. All of the relevant CQA Critical Quality Attributes
parts are there; long before they were published in FDA US Food and Drug Administration
the ICH. A review of these lessons forms the first ICH International Conference on Harmonisation
principles understanding of QbD. QbD Quality by Design
QTPP Quality Target Product Profile

Originally published in the Summer 2010 issue of The Journal of Validation Technology

18 PROCESS VALIDATION Process Design


Deliang Zhou

Understanding Physicochemical
Properties for Pharmaceutical
Product Development and
ManufacturingDissociation,
Distribution/Partition, and Solubility
Deliang Zhou

Welcome to Product and Process Design. form characteristics, quality attributes, and so
Product and Process Design discusses scientific and on. Work conducted in development supporting
technical principles associated with pharmaceutical product and process design stage must be based
product development useful to practitioners in on scientific and technical principles. Disciplines
validation and compliance. We intend this column to supporting this work include basic chemistry and
be a useful resource for daily work applications. The pharmacokinetics including drug
primary objective for this feature: Useful information. absorption, metabolism, distribution, and excretion;
Information developed during product and biopharmaceutics information, and so on. The
process design is fundamental to drug and principles associated with these areas are broad
product development and all subsequent activities and complex. Also, the technical language and
throughout the product lifecycle. The quality-by- mathematics associated therein may be esoteric and
design (QbD) initiative encourages understanding intimidating. This column addresses topics relevant
of product and process characteristics and process to product and process design with these difficulties
control based on risk analysis. Process design in mind. It is our challenge to present these topics
is the first stage of pharmaceutical process clearly and in a meaningful way so that our readers
validation as described in the recent US Food will be able to understand and apply the principles
and Drug Administration process validation draft in their daily work applications.
guidance. This stage comprises activities that Reader comments, questions, and suggestions are
support product and process knowledge leading needed to help us fulfill our objective for this column.
to a consistent manufacturing process throughout Please send your comments and suggestions to column
the product lifecycle. Product development coordinator Yihong Qiu at qiu.yihong@abbott.com
work provides much of the information in the or to journal coordinating editor Susan Haigney at
product and process design stage, including active shaigney@advanstar.com.
pharmaceutical ingredient information, dosage

ABOUT THE AUTHOR
Deliang Zhou, Ph.D., is an associated research investigator in Global Formulation Sciences, Global Pharmaceutical R&D at Abbott Laboratories. He may be reached at deliang.zhou@abbott.com. Yihong Qiu, Ph.D., is the column coordinator of Product and Process Design. Dr. Qiu is a research fellow and associate director in Global Pharmaceutical Regulatory Affairs CMC, Global Pharmaceutical R&D at Abbott Laboratories. He may be reached at qiu.yihong@abbott.com.
For more information, go to gxpandjvt.com/bios

PROCESS VALIDATION Process Design 19


Deliang Zhou

KEY POINTS
The following are key points addressed in this discussion:
• Understanding the physicochemical properties of the active pharmaceutical ingredient (API) is fundamental to pharmaceutical product development, manufacturing, and stability
• Most pharmaceuticals are organic molecules, many of which are weak acids or weak bases that partially dissociate in aqueous solution
• Dissociation may be characterized by acidity dissociation constant (Ka) or basicity dissociation constant (Kb), and their corresponding pKa and pKb
• Buffer systems such as citrate and phosphate buffers that resist pH changes upon the addition of small quantities of acid or base are based on the dissociation behavior of weak acids and weak bases
• Acid/base dissociation has significant influence on almost all other physicochemical properties including salt formation, API purification, formulation, processibility, solubility, stability, mechanical properties, and biopharmaceutical performance
• Distribution or partition refers to the relative solubility of a drug between aqueous and non-aqueous immiscible liquids
• Distribution or partition is characterized by logD or logP
• The distribution/partition phenomenon is important in manufacturing and laboratory extraction processes to separate organic and inorganic compounds. Partition is the foundation underlying various chromatographic techniques
• Distribution/partition is important in determining the biological, pharmacological, pharmaceutical, and toxicological activities of a drug molecule. It is widely used in designing drug candidates to possess appropriate absorption, distribution, metabolism, and excretion (ADME) properties
• n-Octanol/water is the most commonly used system to characterize drug partitioning
• A solution is a single-phase mixture consisting of two or more components, usually solute and solvent
• A solution in which the solute is above the solubility limit (supersaturated) is not thermodynamically stable and will ultimately lead to solute crystallization
• The United States Pharmacopeia has defined solubility definitions such as very soluble, freely soluble, slightly soluble, and other terms
• Molecular interactions important in the solution process include Van der Waals forces, hydrogen bonds, and ionic interactions
• Temperature greatly influences the solubility of a drug molecule and is always noted when solubility values are reported
• The solubility product of an insoluble compound is the basis for applications of the common ion effect
• The aqueous solubility of a weak acid or base is strongly pH-dependent due to relative species abundances at different pH
• The solubility of polymorphs can be significantly different because they have different free energies. The metastable polymorph can convert to the more stable polymorphs through solution-mediated transformation. The solubility difference among various solid forms is one of the driving forces responsible for phase changes during pharmaceutical processing.
• Solubility plays a central role in a number of issues related to drug dissolution, absorption, formulation and process development and manufacture, as well as stability
• Solubility is of paramount significance to the biological performance of a drug molecule. The physiology of the gastrointestinal (GI) tract plays an important role in drug solubilization, including the pH environment and solubilization by bile salts. pH change is particularly relevant for weakly acidic or basic drugs. Solubility changes along the GI tract could also cause changes such as formation of salt, free acid or base, and hydrate, amorphous, and polymorph conversion.
• Solubility is extensively utilized in crystallization and purification of active pharmaceutical ingredients and intermediates. Experimental conditions, such as pH, solvent, antisolvent, temperature, and other factors may be modified to optimize processes, control solid forms, and maximize yields.
• Solubility is one of the underlying reasons for many process-induced phase transformations such as hydrate formation during wet granulation and amorphous transition during drying.

20 PROCESS VALIDATION Process Design


Deliang Zhou

subjects is important for scientists and engineers Similar treatments can be applied to a weak base:
involved in pharmaceutical product development B + H2O ⇌ BH+ + OH-
and manufacturing.
This column discusses basic physical properties In this case, the proton is transferred from the
including: Dissociation of weak acids and bases, water to the base, generating a hydroxyl ion. The
distribution and partitioning between aqueous and basicity dissociation constant is then as follows:
non-aqueous liquids, and solubility. This column Kb = [BH+][OH-] / [B]
starts with basic definitions, follows with important
considerations, and completes each discussion with
practical relevance. Particular attention is given to By convention, the concentration term of water
the contributions of dissociation to distribution and is dropped because it is in large excess and its
solubility. The significance of these basic properties concentration does not change appreciably. It is
is highlighted in the contexts of biopharmaceutics, also well known that water itself is very weakly
pharmaceutical development, manufacturing, and dissociable to produce a proton and a hydroxyl ion
stability. as follows:
H2O ⇌ H+ + OH-
DISSOCIATION AND THE DISSOCIATION
CONSTANT The self-dissociation constant of water is often
The vast majority of pharmaceuticals are essentially designated as Kw = [H+][OH-], which equals
organic molecules, a large fraction of which are 1 × 10^-14 at 25 °C. Substituting the Kw in the previous
weak acids or weak bases. Typical acidic groups equation, one obtains the following:
include carboxylic acid (COOH), sulfonic acid Kb = Kw[BH+] / ([H+][B]) = Kw / Ka
(SO3H), sulfonamide (SO2NH), and phenols
(C6H5OH). Primary, secondary, and tertiary
amines, as well as pyridines and other aromatic where Ka is the acidity constant of BH+, the
amines, are the most frequently encountered basic conjugate acid of the base B, as shown below:
groups. These molecules tend to dissociate partially BH+ ⇌ H+ + B
in aqueous solutions, where equilibrium exists
between the various ionized and unionized species. Ka = [H+][B] / [BH+]
The Ionization Equilibrium
The dissociation of a monoprotic weak acid, HA, Therefore, the basicity constant of a weak
can be represented as follows: base can be represented by the acidity constant
HA ⇌ H+ + A- of its conjugate acid as: pKa = pKw - pKb, as is
conventionally done. Basicity increases with
Similar to chemical equilibrium, the dissociation increasing pK a, contrary to the acidity.
equilibrium is defined as follows: Acids and bases can be categorized as monoprotic,
Ka = [H+][A-] / [HA] diprotic, or polyprotic based on the number of
protons they can accept or donate. Multiple ion
equilibriums exist simultaneously for a polyprotic
Where Ka is known as the acidity dissociation electrolyte, and each has its own dissociation
constant or simply as the ionization constant, and the equilibrium. For example, phosphoric acid
bracket [ ] represents concentration of a particular dissociates in the following three steps:
species at equilibrium. The concentrations, instead H3PO4 ⇌ H+ + H2PO4- (Ka1)
of activities, are used throughout because of their
simplicity, and appropriateness for the purpose of H2PO4- ⇌ H+ + HPO4^2- (Ka2)
this general discussion. Similarly to the notion
of pH, pKa is more commonly used to denote HPO4^2- ⇌ H+ + PO4^3- (Ka3)
the negative logarithm of the acidity dissociation
constant. The pKa value, often a small positive
number, is more convenient because most Ka values
are so small. It is obvious that the acidity decreases
with pKa. A typical organic carboxylic acid has pKa ~ These stepwise pKas for phosphoric acid are 2.21,
5, while phenols and sulfonamides are weaker, with 7.21, and 12.7, respectively. The acid becomes
typical pKa values of 8-10. progressively weaker.
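As a small worked example of the relationships above (pKa = -log10 Ka, and pKa + pKb = pKw for a conjugate acid/base pair), the sketch below converts an assumed Ka of 1 × 10^-5 into its pKa and the pKb of its conjugate base; the numbers are illustrative only.

```python
import math

PKW_25C = 14.0  # ionic product of water at 25 deg C, expressed as pKw

def pka_from_ka(ka):
    """pKa is the negative base-10 logarithm of the acidity constant."""
    return -math.log10(ka)

def pkb_of_conjugate_base(pka):
    """For a conjugate acid/base pair, pKa + pKb = pKw."""
    return PKW_25C - pka

# A typical carboxylic acid with Ka ~ 1e-5 (pKa ~ 5)
pka = pka_from_ka(1.0e-5)
print(pka)                         # 5.0
print(pkb_of_conjugate_base(pka))  # 9.0 -> Kb of the carboxylate ~ 1e-9
```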

PROCESS VALIDATION Process Design 21


Deliang Zhou

Figure 1: Species distribution as a function of pH for succinic acid. [Plot of species fraction (0 to 1.0) versus pH (0 to 12) showing the H2Suc, HSuc-, and Suc2- curves; HSuc- peaks near pH 5.]

Species distribution and buffer capacity are direct results from dissociation and are discussed next.

For example, consider the following equation for a weak acid:

pH = pKa + log([A-]/[HA]) = pKa + log([base form]/[acid form])

This equation is known as the buffer equation or the Henderson-Hasselbalch equation, which describes how pH changes with the ratio of its species concentrations. The right most part of the equation is more general as it also applies to a weak base. It can be demonstrated that the maximum buffer capacity occurs at pH = pKa, where concentrations of the acid and base forms are equal. The exact buffer capacity can be obtained by differentiating the Henderson-Hasselbalch equation:

β = d[base]/dpH = 2.303 Ctot [H+]Ka / ([H+] + Ka)^2
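A small numerical check of the buffer-capacity expression just given can be written in a few lines of Python; the pKa (4.76, acetate-like) and total buffer concentration (0.1 M) used below are illustrative assumptions only.

```python
def buffer_capacity(ph, pka, c_tot):
    """beta = 2.303 * C_tot * [H+]*Ka / ([H+] + Ka)**2 for a weak acid/base pair."""
    h = 10.0 ** (-ph)
    ka = 10.0 ** (-pka)
    return 2.303 * c_tot * h * ka / (h + ka) ** 2

pka, c_tot = 4.76, 0.1  # illustrative acetate-like buffer
for ph in (3.76, 4.76, 5.76):
    print(ph, round(buffer_capacity(ph, pka, c_tot), 4))
# beta is largest at pH = pKa (about 0.058 here) and falls off on either side.
```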

Species Distribution In Solution This is the exact reason that most buffers are
In an aqueous solution of a weak electrolyte various used around their pKa values in order to achieve
species, such as the unionized parent, the proton, their intended function. Unlike the strong acid and
and the ion, coexist. The relative fraction of each base buffers, a weak acid and base buffer allows
species depends on its pKa value and the solution pH, the buffering pH to be maintained while adjusting
which can be calculated based on the dissociation the buffer capacity by changing the total buffer
equilibrium. For example, consider a monoprotic concentration, Ctot.
acid HA:
Significance Of Dissociation
log([A-]/[HA]) = pH - pKa
Acid/base dissociation is a simple concept, yet
has significant influences on almost all other
physicochemical properties. Salt formation is
Therefore, the ionized species will dominate at well built upon the dissociation concept, which
pH > pKa, while the unionized neutral molecule has been extensively exploited during active
will dominate at pH < pKa. To preferably target a pharmaceutical ingredient (API) manufacture for
certain species, either ionized or unionized, we purpose of purification, processibility, and other
often follow the rule of two pH units from pKa, aspects of pharmaceutical development such as
because the ratio is either 100 to 1 or 1 to 100. This solubility, stability, mechanical properties, and
is particularly useful in separation science such as biopharmaceutical performance. A general rule
high performance liquid chromatography (HPLC), of thumb for salt formation is that the difference
because mixed mode partition can cause peak in pK a between the drug and the counter ion
broadening or even shouldering. Similar equations should be at least three to increase the chance of
can be worked out for bases. An example species success. Dissociation can also significantly impact
distribution is shown in Figure 1 for succinic acid. stability, because the ionized and unionized species
The hydrogen succinate displays a maximum usually have different intrinsic reactivitytheir
around pH of 5. respective rates of degradation may be significantly
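The "two pH units from pKa" rule of thumb mentioned above is easy to verify numerically. The sketch below computes the ionized fraction of a hypothetical monoprotic weak acid (pKa assumed to be 5) from the dissociation relationship just described.

```python
def fraction_ionized_acid(ph, pka):
    """Fraction of a monoprotic weak acid present as A- at a given pH.
    Follows from [A-]/[HA] = 10**(pH - pKa)."""
    ratio = 10.0 ** (ph - pka)
    return ratio / (1.0 + ratio)

pka = 5.0  # illustrative carboxylic acid
for ph in (3.0, 5.0, 7.0):
    print(ph, round(fraction_ionized_acid(ph, pka), 3))
# ~0.01 at pKa - 2, 0.50 at the pKa, and ~0.99 at pKa + 2 -- the 100:1 / 1:100 rule.
```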
different. The more stable form of the API can thus
Buffer Capacity Of Weak Electrolyte be selected for product development. One or both
Solutions of these forms can be catalyzed by the proton or
Weak acids and bases are often used as buffering hydroxyl ions, as evidence from the pH-stability
agents (e.g., citrate buffer, phosphate buffer). A profiles. Buffering agents have been frequently
buffered solution, namely, refers to its ability used to maintain a formulation at the desired
to resist pH changes upon the addition of small pH for optimal stability, which is particularly
quantities of acid or base. Quantitatively, buffer important for parenteral and other liquid
capacity, β, is the ratio of the added strong base (or formulations.
acid) to the resulting pH change.

22 PROCESS VALIDATION Process Design


Deliang Zhou

DISTRIBUTION AND PARTITION

Figure 2: Example pH-LogD profile for a weak base with pKa of 9. [Plot of LogD (-4 to 4) versus pH (0 to 12).]

When an immiscible liquid, such as hexane or n-octanol, is added to an aqueous drug solution, the drug molecules will distribute themselves in each of the two immiscible phases. This phenomenon is known as distribution or partition. At equilibrium, the ratio of the concentrations in each phase is constant, which is a characteristic of the drug molecule itself and the nature of the two phases. This ratio is called the distribution coefficient of the drug as follows:

D = Co/Cw

where Co and Cw are the equilibrium concentration of molecule in the oil and in the water, respectively. The distribution coefficient can also be approximated as the solubility ratio. Because distribution coefficients are usually large for organic molecules, they are often converted to
organic molecules, they are often converted to
logarithms, known as LogD. Following the
convention, the natures of the partitioning phases from inorganics by extracting with an immiscible
are often noted, such as LogD n-octanol / water or LogD solvent. Separation of organic mixtures can
n-octanol / pH 7.4 buffer
. be possible if their LogD values are sufficiently
The partition coefficient, LogP, is usually different, or by manipulating their LogD values via
reserved to the partitioning of the neutral or pH modification.
unionized species, which can be deemed as the Partition is the foundation underlying various
intrinsic distribution coefficient of a molecule chromatographic techniques. In HPLC, a solute
(as to be shown, LogD is pH-dependent with molecule is repeatedly partitioned between the
ionizable molecules). Another term, called cLogP, stationary (column) and the mobile (eluent) phases.
refers to calculated LogP value based on molecular The number of equivalent partitioning process is so
fragmentation algorithms. large that minute differences in the solute molecules
can be magnified and thus successfully separated.
LogD For Weak Electrolytes The pH of the mobile phase is an important factor
An ionized species has much less affinity to the to optimize the interactions between solute and the
oil phase than the corresponding neutral species. stationary phase and between the solute and the
It could be well assumed that only the unionized mobile phase, because ionized and unionized species
species distributes in both phases while the ionized have enormously different LogD values. Both single
species is concentrated in the aqueous phase. The mode and mixed mode elution have been utilized in
pH-distribution relationship can then be derived. modern HPLC development.
For example, the following equation holds for a Distribution/partition also plays an important
monoprotic weak base: role in determining the biological, pharmacological,

LogD = LogP + log


( Ka
[H+] + Ka ) pharmaceutical, and toxicological activities of a
drug molecule.
The distribution/partition characteristics are
Figure 2 shows such a plot for a weak organic directly related to the lipophilicity of a molecule.
base with a LogP of 4 and a pK a of 9. LogD Lipophicility characterizes the affinity of a
decreases linearly with pH with a slope of negative molecule to lipid, fat, or oil. It is well known
unity at pH < pK a. It even becomes negative below that the cell membrane has the phospholipid
pH 5, which renders itself practically not extractable bilayer structure, which consists of various
by oil phase and can be utilized in separation. phosphoglycerides. There is a similarity between
water/oil partitioning of a molecule and its ability
Applications Of Distribution/Partition to penetrate cell membrane. As a perquisite for a
The distribution/partition phenomenon is common molecule to function biologically, the molecule
and has been utilized to a great extent by various itself has to be able to pass the cell membrane
industrial and laboratory extraction processes. and penetrate into the cell. Indeed, Hansch et al.
Organic compounds can be readily separated (1) established a correlation between n-octanol

PROCESS VALIDATION Process Design 23


Deliang Zhou

and water partition coefficient and biological Partitioning Systems


activities in the era of quantitative structure activity For historical reasons, the n-octanol/water partition
relationship (QSAR). Since then, the n-octanol/ coefficient has prevailed compared to other solvent
water distribution/partition coefficient has been systems in the pharmaceutical fields. The octanol/
well established in the pharmaceutical industry and water system was initially thought to model
other related fields. essential properties of typical biological membrane.
In pharmaceutical sciences, distribution/ However, a thorough literature review has not
partition coefficient is now widely used in resulted in compelling arguments. Partition
designing drug candidates to possess appropriate between water and other immiscible solvents,
absorption, distribution, metabolism, and excretion such as hexane, cyclohexane, heptane, isooctane,
(ADME) properties. The majority and the most dodecane, hexadecane, and chloroform have also
convenient route of oral administration of a drug appeared in the research literature, but far less
requires the drug molecule to be able to cross the frequently.
gut wall membrane and get absorbed into the blood From a physicochemical property point of
stream, and similar requirements hold true for view, the n-octanol molecule is characterized by
routes of administration other than intravenous a hydrophobic backbone (i.e., the C8 group) and a
applications. In the absence of carrier mediated hydrophilic hydroxyl head. The hydroxyl group can
absorption (nutrients or nutrient-like molecules) be both a hydrogen-bond donor and a hydrogen-
or paracelluar diffusion (small hydrophilic bond acceptor. In addition, water-saturated octanol
molecules), passive diffusion in which the drug has ~25 mol % water, that has led to the suggestion
crosses the bilayer of the intestinal epithelium of the 1:4 tetrameric structure. However, it was
remains the primary mechanism for most drug later determined by X-ray diffraction analysis
absorption. A molecule needs to be sufficiently that a cluster structure is more evident where ~16
lipophilic to be able to cross the gastrointestinal octanol molecules point their hydroxyl groups
(GI) membrane. Therefore, an appropriate LogD to the water cluster by forming an extensive
value is required for passive permeability in the GI hydrogen-bonding network (3). Therefore, a cluster
epithelium. After a drug gets absorbed into the structures interaction with a drug molecule can
blood stream, the drug molecules are then carried be complicated and may not capture all the lipid
to the various tissues and distributed throughout permeation characteristics of a molecule.
the body. This process can be deemed similarly The proposition of using multiple systems to
as partition. It is generally true that a hydrophilic better characterize biological membrane modeling
drug is distributed primarily in the plasma, while a has also appeared, such as the critical quartet
lipophilic molecule is more extensively distributed system (4). However, there has not been much
in peripheral tissues and organs. It has been well development in this area. The quartet system is
known that sufficient lipophilicity is necessary for a composed of n-octanol (a hydrogen-bond acceptor
drug molecule to pass the thick lipid bilayer called and donor), chloroform (a hydrogen-bond donor),
blood-brain-barrier (BBB), a tight layer of glial cells alkane (an inert medium), and propylene glycol
surrounding the capillary endothelial cells in the dipelargonate (a hydrogen-bond acceptor).
brain and spinal cord. Therefore, the distribution
coefficient is of vast importance in designing SOLUBILITY
molecules to have pharmacological activities in the A solution is a single-phase mixture consisting of
brain. A recent survey (2) on drug metabolism has two or more components. A binary solution is
also suggested that lipophilic molecules are usually composed of only two components, while a ternary
more extensively metabolized in the body, while solution is composed of three components. The
hydrophilic ones are often excreted unchanged in liquid component is usually called the solvent,
the urine or bile, relating to its ability to access the and the dissolved component is the solute. But the
enzymes responsible for metabolism. Therefore, distinction is not absolute and becomes vague when
the LogD/LogP values can provide tremendous two liquid components are concerned.
insight on the drugs relevant fates in the body. A drug may not be dissolved in a solvent at
Distribution/partition has also been useful in any concentration. Indeed, above the solubility
environmental sciences. Hydrophobic chemicals limit, a solution will not be thermodynamically
tend to stay in the environment longer and are more stable. Still widely employed today as the most
difficult to clean up. They also tend to preferably reliable method, solubility can be determined
get into various living species. Hence, the partition by equilibrating excess solids with a solvent.
coefficient is an important consideration in Equilibrium is deemed reached when the solution
environmental regulations. concentration reaches an asymptotethe dissolving

24 PROCESS VALIDATION Process Design


Deliang Zhou

Table: USP designations on solubility.


Parts of solvent required for 1 part of solute | Solubility in mg solute per mL of solvent | USP terminology
Less than 1 part | >1 g/mL | Very soluble
1-10 parts | 100-1000 mg/mL | Freely soluble
10-30 parts | 33.3-100 mg/mL | Soluble
30-100 parts | 10-33.3 mg/mL | Sparingly soluble
100-1000 parts | 1-10 mg/mL | Slightly soluble
1000-10,000 parts | 0.1-1 mg/mL | Very slightly soluble
More than 10,000 parts | <0.1 mg/mL | Practically insoluble
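As a small illustration of the table above, the sketch below maps a measured aqueous solubility (in mg/mL) onto the USP descriptive terms; the threshold list simply mirrors the middle column of the table.

```python
# (lower threshold in mg/mL, USP term); checked from most to least soluble
USP_SOLUBILITY_TERMS = [
    (1000.0, "Very soluble"),
    (100.0,  "Freely soluble"),
    (33.3,   "Soluble"),
    (10.0,   "Sparingly soluble"),
    (1.0,    "Slightly soluble"),
    (0.1,    "Very slightly soluble"),
]

def usp_term(solubility_mg_per_ml):
    """Return the USP descriptive term for an aqueous solubility in mg/mL."""
    for threshold, term in USP_SOLUBILITY_TERMS:
        if solubility_mg_per_ml >= threshold:
            return term
    return "Practically insoluble"

print(usp_term(250))    # Freely soluble
print(usp_term(0.02))   # Practically insoluble
```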

solvent contains the maximum amount of solute. units for drugs are mg/mL or g/mL, because
At this point, the solution is said to be saturated solubility usually falls in these concentration
with the solute and the solution is known as ranges. It should also be noted that the United
a saturated solution. The solution concentration States Pharmacopeia (USP) has a special solubility
equals the solubility in a saturated solution. In the designation system, as shown in the Table.
language of thermodynamics, the free energy of the
solute in solution equals that of the solid solute. A Solubility And Molecular Interactions
solution is unsaturated when concentration is less Lets take a look at schematic solution formation
than the solubility, while a supersaturated solution process at the microscopic level. According to
refers to one whose concentration is higher than the Hesss law, energy change of a process is path
solubility. One might be curious as how a solution independent, only the initial and final states being
could become supersaturated because macroscopic of importance. This path independence is true for
dissolution ceases once its concentration reaches all state functions, such as enthalpy, free energy,
the solubility. Well, there are many scenarios where and entropy. Therefore, the solution formation can
this could happen. For example, temperature can be deemed to be composed of the following three
significantly affect solubility. Therefore, one can essential steps:
purposely create an unsaturated or supersaturated tStep 1. A solute molecule is liberated from
solution by merely adjusting the temperature of the its crystal lattice. Work is needed to overcome
solution. the lattice energy. This process could be
A supersaturated solution is thermodynamically approximated as the melting of the solute
unstable and will lead to solute crystallization crystal. A high melting temperature and
when adequate time is given. The reason is that the melting enthalpy usually imply high lattice
free energy of solute molecules in a supersaturated energy.
solution is always higher than those in the
solid phase, and the crystallization process is
spontaneous by going from a higher free energy
state to a lower free energy state. However, rate of
crystallization (kinetics) could vary. Crystallization
needs to start with nucleation, a process where tStep 2. A solvent cavity is created, large
nuclei are formed, that serves as the sites for enough to hold the solute molecule. This
crystal growth. A supersaturated solution may process needs to overcome the cohesive
be kinetically stable for a long duration without attractions among solvent molecules.
crystallization. Parent crystals, dusts, and other
exogenous materials can in many cases serve as the
nucleation sites and are often utilized in laboratory
and industrial crystallization processes.
Solubility is often reported in units such as
weight by volume (w/v, e.g., mg/mL), weight-by-
weight (w/w), molar fraction, molarity, or molality. tStep 3. Finally, a solution is formed by placing
These units can be inter-converted when adequate the solute molecule into the solvent cavity created
information is given. The most frequently used in step 2. Energy is released resulting from

PROCESS VALIDATION Process Design 25


Deliang Zhou

Figure 3: A pH-solubility profile for a weak acid with pKa of 4 and intrinsic solubility of 1 µg/mL. [Log-scale plot of solubility (1.E+00 to 1.E+07) versus pH (0 to 10).]

is called the dipole-induced dipole or Debye force. Finally, transient polarization exists in nonpolar molecules because of the electronic cloud movements in the molecule. This induced dipole-induced dipole interaction is called the dispersion or London force. All these van der Waals forces decrease quickly with distance (1/r^6) and are usually in the magnitude of 1-10 kcal/mol. On an individual basis, they are weak and follow the rank order Keesom force > Debye force > dispersion force. However, they cannot be ignored for a collection of molecules. Even the weakest dispersion force is sufficient to cause the condensation of nonpolar gas molecules and crystallization of nonpolar liquids.
tHydrogen bonds. Hydrogen bonds refer to the interaction between a hydrogen atom and a strongly electronegative atom such as fluorine, oxygen, and nitrogen. The nature
solvent-solute interaction. Hopefully this energy of hydrogen-bond is largely electrostatic
is large enough to compensate the work needed in interaction between the small size and large
the previous two steps. It should be pointed out, electrostatic field of the hydrogen atom, and the
though, that the energy in the first two steps need strong electronegativity of the acceptor atoms.
not be entirely compensated, because the entropy Hydrogen bond requires certain direction, is
gained after mixing is also favorable, and can a fairly strong intermolecular interaction (2-7
negate part of the energy consumed during steps kcal/mol), and is always in addition to the van
1 and 2. der Waals forces. It exists in water, hydrogen
fluoride, and ammonia, and is responsible
for their abnormally high boiling points and
+ high melting points in their corresponding
homologous series.
tIonic interactions. Ionic interaction is
Tremendous insight can be gained on solubility the electrostatic interaction (attraction and
by just considering these processes. The solubility repulsion) between ions of the opposite or same
of a compound can be low due to: high melting charges. They are the primary interactions
point; high melting enthalpy; lack of solute- responsible for ionic crystals. Solubility of
solvent interaction; and too strong solvent-solvent ionizable molecules also depends partially
cohesion. on the ion-dipole and ion-induced dipole
For a specific compound and a specific solid interactions between the solute and solvent. A
form, the melting temperature and enthalpy is polar solvent has high dielectric constant (e.g.,
given and can not be changed. However, the water) that can reduce the ion-ion interaction.
solvent-solute interaction and solvent-solvent Therefore, ionizable compounds such as salts
cohesion can be modulated and an appropriate usually are more soluble in polar solvents than
solvent system may be designed to achieve desired in non-polar solvents.
solubility. For this reason, the following molecular
interactions are important: The empirical rule on solubility states like
tVan der Waals forces. Van der Waals forces dissolves like. The rule itself is a bit vague;
include both dipole-dipole, dipole-induced however, one may make judgment calls by
dipole, as well as induced dipole-induced dipole examining the interactions stated previously. A
interactions. A molecule having permanent polar molecule is likely more soluble in polar
dipole tends to interact with another dipole by solvents. When a drug molecule is capable of
pointing to opposite pole and this interaction forming hydrogen bonding, a solvent capable of
is known as dipole-dipole or Keesom forces. satisfying such requirements is likely to provide
Permanent dipoles are also able to induce an improved solubility. In the absence of any polar
electric dipole in nonpolar but polarizable group, a drug molecule is probably more soluble in
neighboring molecules, and this interaction a nonpolar solvent such as alkanes and benzene.

26 PROCESS VALIDATION Process Design


Deliang Zhou

A drug molecule that contains both polar and where s and aq. represent solids and aqueous,
nonpolar groups may require a solvent or a respectively. Its solubility product is as follows:
mixed solvent capable of providing simultaneous Ksp = [Ag+][Cl-]
interactions to both parts of the molecule. Therefore, the solubility of silver chloride can be
reduced if additional chloride (e.g., NaCl) or silver
Effect Of Temperature (e.g., silver nitrate) ions are introduced into the
Temperature greatly influences the solubility of a solution. This phenomenon is known as the common
drug molecule; therefore, the temperature is always ion effect.
noted when solubility values are reported.
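The common ion effect described above can be illustrated numerically. The sketch below estimates the solubility of AgCl from its solubility product, first in pure water and then with added chloride; the Ksp value (about 1.8 × 10^-10 at 25 °C) is a commonly quoted literature figure, and activity corrections are ignored for simplicity.

```python
import math

KSP_AGCL = 1.8e-10  # approximate literature value at 25 deg C

# In pure water: [Ag+] = [Cl-] = sqrt(Ksp)
s_water = math.sqrt(KSP_AGCL)

# In 0.010 M NaCl: [Cl-] ~ 0.010 M dominates, so [Ag+] ~ Ksp / 0.010
s_nacl = KSP_AGCL / 0.010

print(f"AgCl solubility in water:      {s_water:.2e} mol/L")  # ~1.3e-05
print(f"AgCl solubility in 0.01 M Cl-: {s_nacl:.2e} mol/L")   # ~1.8e-08, roughly 700-fold lower
```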
The influence of temperature on solubility pH-Solubility Profile
depends on the heat of solution. Solubility The aqueous solubility of a weak acid or base is
decreases with temperature when the heat of strongly pH-dependent due to relative species
solution is negative (i.e., when the dissolution is abundances at different pH. The concentration
an exothermic process). Solubility increases with ratio between the ionized and unionized species is
temperature if the solution process absorbs heat related via the Henderson-Hasselbalch equation,
(endothermic). This observation is consistent with as discussed previously. The overall solubility can
the Le Chateliers principle on equilibrium. The then be derived for a weak acid or base, noting that
magnitude of the heat of solution determines the the unionized species is in equilibrium with the
steepness of solubility change when temperature excess solid (i.e., at the intrinsic solubility S0). For
is altered. The quantitative relationship is well example, the total solubility of a weak acid is as
captured by the Vant Hoff equation. follows:
Some molecules have a more complicated S = S 0 (1 + 10pHpKa)
temperature-solubility relationship. For example, S = S0 (1 + 10^(pH - pKa))
phenol shows an upper consolute temperature
(UCP) of 66.8 °C, above which it can be mixed with A typical pH-solubility plot is shown in Figure 3
water at any ratio. Triethylamine demonstrates a for a weak acid with pKa of 4 and intrinsic solubility
lower consolute temperature (LCT) in water of ~18 °C, of 10 µg/mL. Below pH 4, solubility is essentially
below which the two are miscible in all proportions. constant and is equal to the intrinsic solubility.
Nicotine-water system shows both an UCP and exponentially with pH. The solubility will not,
LCT. The existence of UCP and LCT has something however, increase monotonically without a limit.
to do with the solute-solvent interactions and Salt formation could also take place; therefore,
their temperature dependences. For example, the the solubility reaches a plateau (shown in Figure
hydrogen-bonding interaction could be destroyed at 3 as the blue dash line), the value of which is
higher temperature; therefore, an LCT could form. determined by the solubility product of the salt.
Sometimes solute can form complexes with The broken red line is just an extension of the
solvent called solvates. It is then the solvated form pH-solubility profile if a salt is not formed.
that controls the equilibrium solubility. A solvate
is usually less soluble in the same solvent than the Solubility Of Polymorphs And Solvates
unsolvated form due to thermodynamic reasons. A Polymorphism is the existence of multiple crystal
solvate may become unstable and dissociate above forms by the same chemical entity. They exist as
a certain temperature. Therefore, the temperature- different molecular packings in the crystal lattice.
solubility line may break its trend. Sodium sulfate The reasons leading to polymorphism are complex
is an example of this type of interaction. but are related to the ability of a molecule to
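The Van't Hoff relationship referred to above can be used to extrapolate solubility over a modest temperature range once the heat of solution is known; the integrated form is ln(S2/S1) = -(ΔHsol/R)(1/T2 - 1/T1). The sketch below assumes an endothermic heat of solution of +25 kJ/mol purely for illustration.

```python
import math

R = 8.314  # J/(mol*K)

def solubility_at_t2(s1, t1_k, t2_k, dh_sol_j_mol):
    """Integrated van't Hoff equation: ln(S2/S1) = -(dH/R) * (1/T2 - 1/T1)."""
    return s1 * math.exp(-(dh_sol_j_mol / R) * (1.0 / t2_k - 1.0 / t1_k))

# Assumed: S = 1.0 mg/mL at 25 deg C, endothermic heat of solution +25 kJ/mol
print(round(solubility_at_t2(1.0, 298.15, 310.15, 25_000.0), 2))  # ~1.5 mg/mL at 37 deg C
```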
efficiently pack itself in a unit cell with different
Solubility Product And The Common Ion conformation, orientation, or hydrogen-bonding
Effect motifs. Crystal hydrate is the incorporation
When a salt is dissolved, it dissociates into the of water molecules in a drug crystal unit cell.
respective anion and cation. At equilibrium, the Similarly, a solvate could be formed. In addition,
product of the concentrations of these ions is amorphous form refers to the non-crystalline solid
constant. This is known as the solubility product, when the long-range order in a crystal is destroyed.
often represented as K sp. For example, the All these can be collectively called the solid forms
dissolution of silver chloride is as follows: of a drug molecule.
AgCl(s) ⇌ Ag+ (aq.) + Cl- (aq.) The solubility of polymorphs can be significantly
different because they have different free energies.
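The pH-solubility expression for a weak acid discussed in the preceding section, S = S0(1 + 10^(pH - pKa)), can be tabulated directly; the sketch below also caps the curve at an assumed salt plateau to mimic the behavior shown in Figure 3 (all numbers are illustrative).

```python
def weak_acid_solubility(ph, pka, s0_ug_ml, salt_plateau_ug_ml=None):
    """Total solubility of a monoprotic weak acid: S = S0 * (1 + 10**(pH - pKa)).
    If a salt forms, solubility levels off at the plateau set by its Ksp."""
    s = s0_ug_ml * (1.0 + 10.0 ** (ph - pka))
    if salt_plateau_ug_ml is not None:
        s = min(s, salt_plateau_ug_ml)
    return s

# Illustrative weak acid: pKa 4, intrinsic solubility 1 ug/mL, salt plateau 1e5 ug/mL
for ph in (2, 4, 6, 8, 10):
    print(ph, round(weak_acid_solubility(ph, pka=4.0, s0_ug_ml=1.0,
                                         salt_plateau_ug_ml=1.0e5), 1))
# Flat near S0 below the pKa, rises ~10-fold per pH unit above it, then hits the salt plateau.
```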

PROCESS VALIDATION Process Design 27


Deliang Zhou

For example, the solubility of ritonavir Form II is The physiology of the GI tract plays an important
about 1/4 of that of Form I at 5 °C (5). The most role in drug solubilization, including the pH
stable form has the least free energy and lowest environment and the solubilization by various
solubility. The free energy difference between two bile salts. For example, the human stomach is
polymorphs is related to the following solubility characterized by a low pH of 1.5-2 under fasted
ratio: conditions and pH 2-6 under fed conditions.
The pH increases going down to duodenum (pH
ΔG(I→II) = RT ln(SI/SII) 4.5-5.5), small intestine (pH 6-6.5), ileum (pH 7-8),
and colon (pH~5.5-7). The small intestine is also
The metastable polymorph can convert to the characterized by a high concentration of bile salts
more stable polymorphs through solution-mediated and the large surface area attributing to its villi and
transformation, where the solution concentration micro-villi structures. The small intestine serves
is higher than the solubility of more stable form as the primary site of drug absorption. On the
thus providing the thermodynamic driving force contrary, the large intestine does not possess the
for conversion. The ritonavir Form I to Form II villi structure. It also lacks the fluid to solubilize
transition in the liquid-filled capsules is such an a drug and is challenging for drug absorption,
example. Still, the amorphous form, being the except for certain highly soluble and highly
most energetic, has the highest solubility. permeable molecules. The change in pH, bile salt
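The free-energy difference implied by a solubility ratio is a one-line calculation. The sketch below uses the roughly four-fold ritonavir Form I/Form II solubility ratio at 5 °C cited above; treating the ratio as approximate, the result is on the order of a few kJ/mol.

```python
import math

R = 8.314  # J/(mol*K)

def delta_g_from_solubility_ratio(s_ratio, temp_k):
    """Magnitude of the free-energy difference between two polymorphs:
    |dG| = R * T * ln(S_more_soluble / S_less_soluble)."""
    return R * temp_k * math.log(s_ratio)

# Ritonavir at 5 deg C (278 K): Form I is roughly 4x more soluble than Form II
dg = delta_g_from_solubility_ratio(4.0, 278.15)
print(round(dg / 1000.0, 1), "kJ/mol")  # ~3.2 kJ/mol favoring Form II
```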
Solvates usually have less solubility in the concentration, fluid contents, motility, as well as
same solvent than the unsolvated form because the residence time, all affect the solubility and in
equilibrium dictates so due to the large excess vivo dissolution of a drug molecule, and influences
of solvent. Therefore, a solvated form is often its oral absorption. The pH change is particularly
discovered during the crystallization in a particular relevant for weakly acidic or basic drugs. Solubility
solvent. There also exists a critical solvent activity, changes along the GI tract could also cause
above which the solvate is more stable while solid form changes, such as salt formation,
below which the unsolvated form is more stable. parent formation, hydrate formation, as well as
Specifically for hydrates, the critical relative humidity polymorphic conversion, all complicating the drug
(RH) or water activity comes to play a significant absorption process.
role, because moisture is ubiquitous during storage Appropriate solubility and solvent system are
and because water is the most frequently used also required to develop other formulations such as
solvent in wet granulation. It should also be noted parenteral, subcutaneous, transdermal, and aerosol
that a solvate has higher solubility in other miscible formulations.
solvents, because the dissociated solvent molecules Solubility is extensively utilized in crystallization
drag the drug molecules into solution. and purification of active pharmaceutical
The solubility difference among various solid ingredients and/or their intermediates.
forms is one of the driving forces responsible for Experimental conditions, such as pH, solvent,
phase changes during pharmaceutical processing. antisolvent, temperature, and other factors may be
modified individually or in combination in order to
Significance Of Solubility optimize the process. Solubility plays an important
Solubility plays a central role in a number of issues role in controlling the solid forms as well as the
related to drug dissolution, absorption, formulation yields.
and process development and manufacture, as well Amorphous forms have been extensively utilized
as stability. to improve drug bioavailability because they have
Solubility is of paramount significance to the higher apparent solubility (6).
biological performance of a drug molecule. Drug Solubility is one of the underlying reasons for
molecules, when presented as oral solid dosage many process-induced phase transformations (7).
forms, need first to dissolve in the GI fluid before For example, hydrate formation can occur during
absorption can take place. The dissolution rate of a wet granulation. The hydrate may then partially
solid is proportional to its solubility, diffusivity, and or fully dehydrate during drying, resulting in
surface area. Therefore, the higher the solubility, formations of partially amorphous or less ordered
the faster a molecule can get into solution, and the molecular form. A highly water-soluble, low
higher its concentration in the GI fluid. In turn, dose drug may completely dissolve during wet
this leads to a higher rate of absorption, because granulation and turn into amorphous upon drying.
the passive diffusion is driven by the concentration All these changes can have profound effects on
gradient across the GI membrane. product attributes such as stability, dissolution, and
in vivo performance. These phase changes can be

28 PROCESS VALIDATION Process Design


Deliang Zhou

scale, time, and equipment dependent due to their 4. Leahy, D. E.; Morris, J. J.; Taylor, P. J. and Wait, A. R.,
kinetic nature, which is worrisome from the quality Membranes and Their Models: Towards a Rational
control point of view. The solubility and solubility Choice of Partitioning System, Pharmacochem. Libr.
differences among the various solid forms are 16, 75-82, 1991.
responsible for, and are the keys to, understanding 5. Bauer, J.; Spanton, S.; Henry, R.; Quick, J.; Dziki, W.;
and resolving these transformations as they provide Porter, W. and Morris, J., Ritonavir: An Extraordinary
the thermodynamic driving forces for change. E x a mple of C on for m at ion a l Pol y mor ph i sm ,
Pharmaceutical Research, 18, 859-866, 2001.
SUMMARY 6. Law, D.; Schmitt, E. A.; Marsh, K. C.; et al., Ritonavir-
Dissociation, distribution/partition, and PEG 8000 Amorphous Solid Dispersions: In vitro and
solubility constitute the fundamental physical in vivo Evaluations, Journal of Pharmaceutical Sciences,
properties of a drug molecule. They influence, 93, 563-570, 2004.
directly or indirectly, almost every aspect in 7. Zhang, G. G. Z.; Law, D.; Schmitt, E. A. and Qiu, Y.,
drug development: Absorption, distribution, Phase Transformation Considerations During Process
metabolism and excretion, formulation Development and Manufacture of Solid Oral Dosage
development, processing development, stability, Forms, Advanced Drug Delivery Reviews, 56, 371-390,
product manufacture, and regulatory concerns. 2004. JVT
Many issues during pharmaceutical development
can be related to characteristics in these basic ARTICLE ACRONYM LISTING
properties in one way or another. Awareness ADME Absorption, Distribution, Metabolism,
and knowledge of these subjects is important Excretion
to understand and solve various issues during API Active Pharmaceutical Ingredient
pharmaceutical development, manufacturing, and BBB Blood-Brain-Barrier
stability. FDA US Food and Drug Administration
GI Gastrointestinal
HPLC High Performance Liquid Chromatography
REFERENCES Ka Acidity Dissociation Constant
1. Hansch, C. and Leo, A., Substituent Constants for Kb Basicity Dissociation Constant
Correlation Analysis in Chemistry and Biology, Wiley- LCT Lower Consolute Temperature
Interscience, New York, 1979. QSAR Quantitative Structure Activity Relationship
2. Custodio, J. M.; Wu, C.Y. and Benet, L. Z., Predicting RH Relative Humidity
Drug Disposition, Absorption/Elimination/Transporter UCP Upper Consolute Temperature
Interplay and the Role of Food on Drug Absorption, USP United States Pharmacopeia
Advanced Drug Delivery Reviews 60, 717-733, 2008.
3. Franks, N. P., Abraham, M. H., and Lieb, W. R.,
Molecular Organization of Liquid n-Octanol: An X-ray
Diffraction Analysis, Journal of Pharmaceutical Sciences,
82, 707-712, 1993.

Originally published in the Spring 2009 issue of The Journal of Validation Technology

PROCESS VALIDATION Process Design 29


Deliang Zhou

Understanding Physicochemical Properties


for Pharmaceutical Product Development
and Manufacturing II:

Physical and Chemical Stability


and Excipient Compatibility
Deliang Zhou

Product and Process Design discusses scientific and • Pharmaceutical processing such as comminution


technical principles associated with pharmaceutical (milling), compaction, granulation, drying, and
product development useful to practitioners in coating may induce partial or complete phase
validation and compliance. We intend this column to conversion, which could lead to inconsistent drug
be a useful resource for daily work applications. The product quality if not understood and properly
primary objective for this feature: Useful information. controlled
Reader comments, questions, and suggestions are • Drug degradation may be categorized as
needed to help us fulfill our objective for this column. thermolytic, oxidative, and photolytic
Please send your comments and suggestions to column • Hydrolysis accounts for the majority of reported
coordinator Yihong Qiu at qiu.yihong@abbott.com or drug degradations and it is common for a broad
to coordinating editor Susan Haigney at shaigney@ category of organic molecules derived from
advanstar.com. weak functional groups such as carboxylic acids.
Moisture, temperature, and pH may greatly impact
KEY POINTS the rate of hydrolysis.
The following key points are discussed in this article: • Oxidation of drugs is the next greatest cause of
• Stability of pharmaceutical products, including degradation. Three primary mechanisms exist
physical and chemical stability, is a core quality for oxidative degradations: nucleophilic and
attribute potentially impacting efficacy and safety electrophilic, electron transfer, and autoxidation.
• Phase transformation is a common form of physical Nucleophilic and electrophilic oxidations are
instability. Polymorphic transition; solvation and typically mediated by peroxides. Transition metal
desolvation; salt and parent conversion or salt and catalyzes oxidation via electron transfer process.
salt exchange; and amorphization and devitrification Autoxidation involves free-radical initiated chain
are the common types of phase transformations. reactions. A single free-radical can cause oxidation
Phase transformation can occur via solid-state, melt, of many drug molecules. Autoxidation is often
solution, or solution-mediated mechanisms. autocatalytic and non-Arrhenius.

For more Author ABOUT THE AUTHOR
information, Deliang Zhou, Ph.D., is an associated research investigator in Global Formulation Sciences, Global
go to Pharmaceutical R&D at Abbott Laboratories. He may be reached at deliang.zhou@abbott.com.
gxpandjvt.com/bios Yihong Qiu, Ph.D., is the column coordinator of Product and Process Design. Dr. Qiu is a
research fellow and associate director in Global Pharmaceutical Regulatory Affairs CMC, Global
Pharmaceutical R&D at Abbott Laboratories. He may be reached at qiu.yihong@abbott.com.

30 PROCESS VALIDATION Process Design


Deliang Zhou

• Photolytic degradation occurs only when light is INTRODUCTION
absorbed. Excited states of drug molecules may Stability is a core quality attribute for any viable drug
have enhanced reactivity. Photolytic degradation product. While drug molecules tend to degrade over
is complex but can usually be mitigated by time like other chemicals, stability of drug products is
packaging. broader than chemical degradation alone. Any aspect
• Reaction order, catalysis, and pH-rate profile are related to the change of the physical, chemical, and
some of the basic concepts in chemical kinetics, biopharmaceutical properties of a drug product could
which are helpful to the basic understanding of drug be classified as instability.
degradation Generally speaking, instability of pharmaceuticals
• Drug degradation in solid dosage forms is includes but is not limited to: loss of active ingredient
often determined by the surface characteristics potency, increase in levels of impurities, change in
of the active pharmaceutical ingredient (API) appearance (e.g., color, shape), change in mechanical
and excipient particles, or collectively the strength (e.g., tablet hardening/softening), change
microenvironment in dissolution/release rate, change in content
• Defects, disordered or amorphous contents in API, uniformity (suspensions in particular), alteration in
and excipients pose significant concerns on drug bioavailability, or change in other pharmaceutical
degradations elegances. These instabilities may impact the
• Moisture profoundly affects chemical stability by handling, purity, bioavailability, safety, and efficacy
direct involvement in a reaction or by catalyzing of a drug product, or may be merely cosmetic changes
a reaction via increasing molecular mobility and which can lead to poor patient acceptance.
facilitating the creation of microenvironments. This column discusses the key stability concepts
Moisture preferentially penetrates into amorphous in drug products, with an emphasis on the basic
or disordered regions and may greatly enhance physical and chemical principles applicable to all
chemical degradation. stages of product development and manufacture.
• Process-induced phase transformation of API This column discusses the key stability concepts
is often a leading cause of chemical instability, found in many textbooks and review articles (1-6).
particularly when significant amorphous content is The role of physical changes in chemical
generated degradation is often inadequately acknowledged.
• Compatibility is an important factor for excipient More in-depth treatments on these topics may be
selection. Timely identification of excipient chemical stability of a drug product and cannot be
incompatibility can result in significant savings in ignored. This discussion begins with an introduction
time and resources. Excipients can directly react of the basic concepts and general mechanisms on
with drug molecules, which may be judiciously physical and chemical stability of drugs, followed by
avoided based on the chemical structures excipient compatibility, and finally a brief overview
and physicochemical properties of the drug of remedies to instability.
molecules and excipients. Excipients can also
enhance drug degradation by creating a favorable PHYSICAL STABILITY OF DRUGS
microenvironment such as pH, moisture, metal In principle, any stability issue not involving
ions, or simply by providing large accessible chemical changes can be considered physical in
surfaces. nature. On the surface, the physical instability may
• Fundamental understanding of general aspects of the basic concepts and general mechanisms on
of chemical degradation as well as specific mechanical strength of dosage forms, content
physicochemical properties of a drug molecule is key uniformity, dissolution rate, and bioavailability.
to its stabilization. Stability issues may be mitigated These phenomena may arise from various root
by proper solid form selection. Formulation and causes; phase transformation is frequently the leading
engineering approaches may also be used to mitigate cause of these problems.
a stability issue. A drug molecule may exist in different solid forms,
• Validation and compliance personnel should be In principle, any stability issue not involving
aware of the physical and chemical properties amorphous phases. A primary task of preformulation
of the active drugs for which they have investigation is to select an appropriate solid form
responsibility. They should become especially which bears viable properties in various aspects.
knowledgeable about active drugs and products Depending on the specific circumstances, a solid
that may be highly prone to phase transformation form may be selected with particular emphasis on
and chemical degradation, and apply this its ability to improve one or more characteristics of
knowledge in process control and change the drug molecule, such as solubility and dissolution
management. rate, melting, polymorphism, purity, mechanical

PROCESS VALIDATION Process Design 31


Deliang Zhou

properties, manufacturability, and chemical stability. active pharmaceutical ingredient (API) manufacture,
For example, the bioavailability of a poorly water- formulation development, drug product manufacture,
soluble compound may be greatly improved by using and storage. Changes in the environmental humidity
an amorphous phase. Salts have been conventionally could inadvertently cause hydration or dehydration
used to improve properties such as purity, melting to occur. In aqueous solutions, the hydrated form is
temperature, crystallinity, hygroscopicity, and/or usually less soluble than the anhydrous form because
chemical stability, as well as the bioavailability of the of the nature of equilibrium. The solubility difference
parent form. A number of properties (e.g., solubility between hydrated and anhydrous forms is the driving
and dissolution, spectroscopic, mechanical, chemical) force behind solution-mediated hydrate formation,
have been shown to be affected by polymorphism. which could occur during various wet processes or
When modification of a particular property is aimed during in vivo dissolution.
via solid form selection, properties in other aspects Salt and Parent Conversions or Salt and
should be balanced to facilitate pharmaceutical Salt Exchange. Salts may be converted to their
development. It should be understood that a solid parent forms or to salts of different counterion
form change during product manufacturing and during processing and storage. The propensity of
storage could defeat the primary purpose for its salt form conversion is determined by pKa, crystal
selection. lattice energy, solubility and solubility relationship,
and other experimental conditions. For example,
Types of Phase Transformations salts of very weak acids or bases may readily convert
The following paragraphs describe the types of phase back to the parent forms upon exposure to moisture
transformations. or water. The potential of salt conversion during
Polymorphic Transition. When a molecule wet granulation and other wet processes is more
can be packed differently in the crystal lattice, significant than other dry processes and should
those crystalline forms are called polymorphs. be watched closely. Similar to other solid form
Polymorphic transition refers to the conversion conversions, change of a salt to its parent, or to
between polymorphs. Polymorphs differ in free different salt forms, can profoundly alter the quality
energy and could impact melting, hygroscopicity, attributes of a drug product.
solubility and dissolution, bioavailability, chemical Amorphization and Devitrification.
stability, mechanical, morphological, and flow When the long-range order in a crystal lattice
properties. Any two polymorphs are related is destroyed, an amorphous phase is then
thermodynamically as either enantiotropes generated. Partially or fully amorphous phases
or monotropes. Enantiotropy exists when may be generated (amorphization) by various
one polymorph is more stable below a critical methods including many unit operations such
temperature (transition temperature, Tt) and the other as milling, melting, wet granulation, drying, and
more stable above Tt. In monotropy, one polymorph compaction. Amorphous phases have higher free
is always more stable. Under any circumstances, energy than their crystalline counterparts and are
there is always a most (more) stable polymorph. often exploited to improve the bioavailability of
Solvation and Desolvation. This is related poorly water-soluble compounds. However, the
to the conversion between the solvated and reversion to crystalline forms (devitrification)
unsolvated crystal forms, or solvated forms of is thermodynamically favorable based on the
different stoichiometry, with hydration/dehydration thermodynamic principle. When the use of an
being the most typical. Similarly to polymorphs, amorphous phase is desired, the physical stability
differently solvated forms usually differ in becomes critical because crystallization could
various physicochemical properties. This type lead to a decrease in bioavailability and thus lose
of form conversion can affect chemical stability, the intended advantage of the amorphous state.
pharmaceutical elegance, and other quality attributes. When crystalline forms are desired, however, the
Particularly, the differences in apparent solubility and formation of even a small amount of amorphous
dissolution rate could impact the oral absorption attributes of a drug product.
for many poorly or borderline soluble compounds. because this may bring undesired effects to drug
An important concept for hydration and dehydration products such as enhanced degradation.
is the critical relative humidity (RH), below which
the anhydrous form is more stable and above which Mechanisms of Phase Transformations
the hydrate is more stable. Similarly, critical RH also The conversion of metastable form to stable form
exists between hydrates of different stoichiometry. is thermodynamically favored, which poses a
This concept is vital to the stability of hydrates concern for any system where a metastable form
because moisture has to be dealt with everywhere: is employed. The transformation of a stable

32 PROCESS VALIDATION Process Design


Deliang Zhou

form to a metastable form, on the other hand, is PHASE TRANSFORMATIONS DURING


thermodynamically unfavorable, and is possible PHARMACEUTICAL PROCESSING
only when there is an energy input from the Phase transformations can occur in a number of
environment. The transformation kinetics in both pharmaceutical unit operations, including milling,
cases, however, vary greatly, which are compound- wet granulation/spray drying/lyophilization, drying,
specific and depend on the experimental and compaction, and therefore may impact drug
conditions. product manufacturing and performance (7). Because
The primary mechanisms for phase conversions of their very kinetic nature, the extents of these
include solid-state, melt, solution, and solution- phase transformations may be time and equipment
mediated transformations. dependent. These phase transformations, when not
Solid-State. Certain phase transitions occur in controlled properly, often lead to inconsistent product
the solid-state without passing through intervening quality.
transient liquid or vapor phases. These solid-state The application of mechanical and possibly the
transitions are generally nucleated by defects or strain. accompanying thermal stresses during operations such
The physical characteristics of the solids, such as as milling, dry granulation, and compaction, may
particle size, morphology, crystallinity, mechanical lead to phase transformation such as polymorphic
treatments, and presence of impurities may greatly transition, dehydration and desolvation, or
affect the rate of such transformations. vitrification via the solid-state or melt mechanisms.
Melt. A solid form is destroyed upon melting The rate and extent of these phase transitions will
and may not regenerate when cooled. The melt depend on characteristics of the original solid phase
is often supercooled to generate an amorphous and the energy input from these processes. Caffeine,
phase, which may or may not crystallize during sulfabenzamide, and maprotiline hydrochloride
further cooling. Even when it crystallizes, it have been reported to undergo polymorphic
usually goes through metastable forms, and may transformations during compression.
or may not convert to the most stable form on the A number of wet processes, such as wet granulation,
experimental time scale. A potential change in lyophilization, spray drying, and coating, may induce
crystalline form is therefore likely. The final solid phase transitions via solution or solution-mediated
phase may depend on: thermodynamic stability transformation. Important factors governing the
among different forms; the relative nucleation and likeliness and the extent of conversion include:
crystal growth rate of each form; and the kinetics of solubility of the drug substance in the liquid, duration
form conversion. The solid phases formed in such a in contacting with liquid, other operation parameters
process are usually accompanied by relatively high such as temperature, drying conditions, and the
amount of disorders. Impurities or excipients are specific properties of the drug molecule. The presence
also likely to affect the kinetics of crystallization of other excipients may also significantly influence
and phase transformation. Presence of excipients the propensity and rate of a phase transformation.
may cause eutectic melting at temperatures much All the aforementioned phase transitions, such as
lower than the melting of pure API and give rise to polymorphic conversion, hydration and dehydration,
potential phase transformations. salt/parent conversion or salt/salt exchange, and
Solution. When solvents are involved in vitrification and crystallization may occur. For
processing, solid API may dissolve partially or example, a highly water-soluble drug may completely
completely. Subsequent solvent removal may cause dissolve in granulating liquid and subsequently
the dissolved portion to change its solid form similar produce amorphous phase during drying that may
to a melt. It is important to note that formation of a or may not convert back to the original solid form.
metastable form (crystalline or amorphous) is often The formation of a hydrate is also likely to occur in
likely, especially when the original solid form is wet granulation. Subsequent drying may dehydrate
completely dissolved. Particular attention should be the formed hydrate. In any solid-solid conversion,
paid to drug molecules that have high solubility in the however, it is very likely that a less ordered form will be
processing solvents. produced even if conversion is close to completeness.
Solution-Mediated. When a metastable solid Therefore, various amounts of amorphous content may
form comes into contact with a solvent during processing, exist in the end product, which could cause further
it may convert to more stable forms due to concerns on its chemical stability. In film coating, due
solubility difference between the metastable form to the relatively short contacting time between solid
and the stable form. As opposed to the solution surface and liquid, the likelihood of solid form change
mechanism, the solution-mediated mechanism is much lower. However, the potential for phase
only allows the transition from a meta-stable form transformation can be significant when an active is
to a more stable form. present in the coating solution.

PROCESS VALIDATION Process Design 33


Deliang Zhou

CHEMICAL DEGRADATION OF DRUGS particular, the presence of hydrogen or hydroxyl ions


Chemical degradation represents one of the most likely catalyzes the hydrolytic reactions. Each of
important stability aspects of pharmaceuticals, these may have different reactivity and may require
because inadequate stability not only limits the different conditions.
shelflife of a drug product but also potentially Transacylation is also likely when appropriate
impacts efficacy and patient safety. other functional groups (i.e., alcohols, amines,
An important aspect of preformulation esters, etc.) are present in the environment, either as
characterization is to examine the chemical stability solvent or solvent residuals, or more commonly, as
of new drug candidates, assess the impact of stability excipients or impurities.
on pharmaceutical development and processing, and Other thermolytic degradation pathways include
design strategies to stabilize an unstable molecule. The rearrangement, isomerization and epimerization,
understanding of commonly encountered degradation cyclization, decarboxylation, hydration/dehydration,
pathways, basic concepts in chemical kinetics, and and dimerization and polymerization.
characteristics of chemical degradation in drug
products are key to the overall success of formulation Oxidative Degradation
development and scale up. Oxidation accounts for 20-30% of reported drug
degradations and is secondary only to hydrolysis.
Common Pathways of Drug Degradation Oxidation can proceed with three primary
Drug degradation can be generally divided mechanisms: electrophilic/nucleophilic, electron
into thermolytic, oxidative, and photolytic. By transfer, and autoxidation.
examining structural features of a drug molecule, A drug molecule may be oxidized by electrophilic
possible degradation routes and products may be attack from peroxides, which is the typical scenario
predicted to a certain extent. These may then aid in of nucleophilic/electrophilic mechanism. Similarly,
the design and execution of stability studies. the electron transfer process shares certain features
of the nucleophilic/electrophilic process, except that
Thermolytic Degradation an electron is transferred from a low electron affinity
Thermolytic degradation refers to those that are driven donor (e.g., drug molecule) to an oxidizing agent via
by heat or greatly influenced by temperature, to which the mediation of transition metal. Fe3+ and Cu2+ are
the following Arrhenius relationship often applies: commonly used to probe such mechanisms.
The autoxidation process involves the initiation of
k = A exp(−Ea/RT) free radicals, which propagates through reactions with
oxygen and drug molecule to form oxidation products.
where k is the rate constant, T is the absolute temperature, Because of the complexity of the reaction mechanism,
Ea is the activation energy, A is the frequency factor, non-Arrhenius behavior is frequently observed. The
and R is the universal gas constant. Any degradation three stages of autoxidation may be represented as the
mechanism can be considered thermolytic if high following:
temperature enhances the rate.
Typical activation energies for drug degradation • Initiation:
fall in the range of 12-24 kcal/mol (8). The Q10 rule
states that the rate of a chemical reaction increases In• + RH → InH + R•
approximately two- to threefold for each 10 °C rise
in temperature. The exact value of this increase may • Propagation:
be determined based on the value of the activation
energies and is the theoretical basis behind the R• + O2 → ROO• (fast)
International Conference on Harmonisation (ICH)
conditions. ROO• + RH → ROOH + R• (rate-limiting)
Hydrolysis forms a subset of thermolytic
degradation reactions and is the most common • Termination:
drug degradation pathway. Hydrolysis accounts for
more than half of the reported drug degradation. R• + R• → R-R
Derivatives of relatively weakly-bonding groups such
as esters, amides, anhydrides, imides, ethers, imines, R• + ROO• → ROOR
oximes, hydrazones, semicarbazones, lactams,
lactones, thiol esters, sulfonates, sulfonamides, and Free radicals may be generated by homolytic
acetals can undergo hydrolysis both in solution cleavage of a chemical bond via thermal, photolytic, or
and in the solid state in the presence of water. In chemical processes. A drug free radical is formed when

34 PROCESS VALIDATION Process Design
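As a numerical companion to the Arrhenius relationship and the Q10 rule above, the short sketch below computes the rate increase for a 10 °C rise starting from 25 °C across the quoted 12-24 kcal/mol activation-energy range; the reference temperature is an assumption made only for this example.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def rate_increase_for_10C(Ea_kcal_mol, T_celsius=25.0):
    """Arrhenius ratio k(T + 10 C) / k(T) for a given activation energy."""
    T1 = T_celsius + 273.15
    T2 = T1 + 10.0
    return math.exp((Ea_kcal_mol / R) * (1.0 / T1 - 1.0 / T2))

for Ea in (12.0, 18.0, 24.0):  # span of typical activation energies cited above
    print(f"Ea = {Ea:4.1f} kcal/mol -> k(35 C)/k(25 C) ~ {rate_increase_for_10C(Ea):.1f}")
```

With these inputs the factor runs from roughly 2 at 12 kcal/mol to just under 4 at 24 kcal/mol, consistent with the approximately two- to threefold increase stated for the middle of that range.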


Deliang Zhou

one of its hydrogen atoms is abstracted by the initial


Figure: pH-rate profile of hydrolysis of aspirin (adapted from reference 10); log kobs (s-1), ranging from about -7 to -3, plotted against pH from 0 to 10.
free radical. The formed drug free radical then quickly
reacts with oxygen to form a peroxy free radical. The
latter can abstract a hydrogen atom from another drug
molecule so that the drug molecule gets oxidized
(forming hydroperoxide) and a new drug free radical
is re-generated. This chain reaction can continue until
the free radical is terminated. A single free radical
can, in principle, cause the oxidation of many drug
molecules in its lifetime. Therefore, the autoxidation
is typically auto-catalytic in nature: its initial rate may
be low; however, the rate increases quickly due to the
gradual buildup of the free radicals.
Multiple oxidation mechanisms can co-exist. For
example, peroxide can undergo homolytic dissociation
at high temperature to form free radicals; the presence
of trace amount of transition metal could react with
peroxides (e.g., oxidation degradants) and form
free radicals. It may be true that multiple oxidative
processes occur simultaneously in many cases.
Basic Concepts in Chemical Kinetics
Photolytic Degradation The following sections describe the basic concepts of
Chemical reactions may occur upon light absorption chemical kinetics.
because light carries certain energy: Reaction Order. The order of a reaction is often
used to describe the relationship between reaction rate
E = hν = hc/λ and the concentration of various species. For example,
where h is Planck's constant, c is the speed of light, the hydrolysis of aspirin under acidic conditions is
ν is the frequency, and λ is the wavelength. The first-order with respect to both aspirin and hydrogen
Grotthuss-Draper law states that photochemical ion, but is an overall second-order reaction:
reaction can occur only after light is absorbed. A
−d[Aspirin]/dt = k[Aspirin][H+]
simplistic calculation indicates that the energy
associated with UV-visible light corresponds roughly
to the bond energies in typical organic molecules. For Degradations of most pharmaceuticals are second-
example, the weakest single bond is roughly ~35 kcal/ order in nature, because degradation usually involves
mol (e.g., O-O bond) and the strongest single bond two reacting molecules, one of which is the drug
corresponds to ~100 kcal/mol (e.g., C-H bond). molecule itself. However, the concentration of the
Photophysical processes refer to the changes in other component (i.e., the hydrogen ion, hydroxyl
molecular orbitals after light absorption (excitation). ion, or the buffer species) is usually in large excess and
Properties of the excited molecules are expected to can be considered as constant throughout. Therefore,
differ from those of ground states and are generally apparent first-order reactions are often reported for
more reactive. For example, the radical-like structures most pharmaceuticals. The apparent rate constant,
in this case, includes the contribution from the
of some of the excited states make (photo) oxidation
concentration of the other reactants. Apparent zero-
favorable. For more detailed discussions on the fates of
order degradations may arise in cases where drug
a molecule upon light absorption and the potentially
concentrations are maintained as a constant (e.g.,
increased chemical reactivity, readers may refer to the
suspensions).
textbook by Turro (9). Catalysis. A catalyst is a substance that influences
Photolytic degradation is directly initiated by the rate of a reaction without itself being consumed. A
light absorption; therefore, temperature has a positive catalyst promotes a reaction while a negative
negligible effect. In fact, some photo reactions can catalyst demotes a reaction.
occur even at absolute zero. Photolytic degradation Catalysis occurs by changing the activation energy
is not uncommon but may be minimized during of a reaction but not changing the thermodynamic
manufacturing, shipping, and storing of drug products nature of the reaction. A catalyst can only influence
by appropriate packaging. However, mechanisms of the rate, but not the equilibrium of the reaction. In
photolytic degradations are usually more complex. some cases, a reaction product can catalyze the rate,
which is often termed as autocatalysis. The free-radical
initiated oxidation certainly is such an example.
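The "simplistic calculation" mentioned above can be reproduced in a few lines by converting photon energy, E = hν = hc/λ, to a per-mole value and comparing it with the quoted single-bond energies of roughly 35-100 kcal/mol. The wavelengths below are chosen only for illustration.

```python
# Energy of one mole of photons, E = N_A * h * c / wavelength, in kcal/mol.
N_A = 6.022e23        # Avogadro's number, 1/mol
h = 6.626e-34         # Planck's constant, J*s
c = 2.998e8           # speed of light, m/s
J_PER_KCAL = 4184.0

def photon_energy_kcal_per_mol(wavelength_nm):
    energy_per_photon_J = h * c / (wavelength_nm * 1e-9)
    return energy_per_photon_J * N_A / J_PER_KCAL

# Compare with single-bond energies of ~35 kcal/mol (O-O) to ~100 kcal/mol (C-H).
for nm in (254, 320, 400, 700):
    print(f"{nm:4d} nm -> {photon_energy_kcal_per_mol(nm):6.1f} kcal/mol per mole of photons")
```

UV light near 254 nm comes out at just over 110 kcal/mol and red light near 700 nm at about 40 kcal/mol, which is why absorption of UV-visible light can be sufficient to break or rearrange bonds in drug molecules.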

PROCESS VALIDATION Process Design 35
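Because most drug degradations end up being reported as apparent (pseudo) first-order, a convenient derived quantity is the time to 10% potency loss, t90 = ln(10/9)/kobs. The sketch below uses hypothetical rate constants purely to show the arithmetic; it is not a substitute for real stability data.

```python
import math

def t90_days(kobs_per_day):
    """Time for an apparent first-order reaction to reach 90% of initial potency."""
    return math.log(10.0 / 9.0) / kobs_per_day

def fraction_remaining(kobs_per_day, t_days):
    """C/C0 for apparent first-order loss: C = C0 * exp(-kobs * t)."""
    return math.exp(-kobs_per_day * t_days)

# Hypothetical apparent rate constants spanning slow to fast degradation.
for kobs in (1e-4, 1e-3, 1e-2):  # 1/day
    print(f"kobs = {kobs:.0e}/day -> t90 ~ {t90_days(kobs):7.1f} days, "
          f"remaining after 2 years ~ {fraction_remaining(kobs, 730.0):.1%}")
```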


Deliang Zhou

Hydrogen ions and/or hydroxyl ions are


often involved directly in the degradation of
pharmaceuticals. When the concentration of hydrogen
ion or hydroxyl ion appears in the rate equation, the
reaction is said to be subject to specific acid-base
catalysis. Drug degradations are often determined in
buffered solutions and studied by monitoring the drug
itself. As a result, the degradation kinetics is often
apparent first-order with the apparent rate constant in pH-rate profiles can provide tremendous insights on
the following form: the nature of a reaction and can serve as a very useful
tool in developing solution formulations, lyophilized
kobs = k0 + kH[H+] + kOH[OH−]
For a reaction subject to specific acid catalysis, a plot forms.
of the logarithm of the apparent first-order rate constant
with respect to pH gives a straight line of slope of −1, DEGRADATION IN SOLID DOSAGE FORMS
while a specific base catalysis generates a straight line of Drug degradations in solid dosage forms are certainly
slope of +1. When both are present, a V or U-shaped more complex than those in solution and have some
pH-rate profile is often observed. unique features related to the state of the matter.
General acid-base catalysis occurs when the Topochemical reactions are a class of reactions that
buffering components catalyze a reaction. Either the are directly related to the molecular packing in the
acidic or basic components of the buffer, or both, crystal lattice. Certain molecular rearrangements
can catalyze a reaction. General acid-base catalysis are required in order for a reaction to take place
often causes deviation of the rate-pH profile from the truly in the solid state (i.e., in the crystal lattice).
expected unit slopes. For example, the photo-dimerization of cinnamic
pH-Rate Profiles. The pH dependence of the rate acid derivatives requires certain minimum distance
constant of degradation of a compound can be concisely between the double bonds. As a result, only
represented by a plot of kobs vs. pH. one form of p-methylcinnamic acid is feasible for this
In general, rate of drug degradation can be represented dimerization in solid state (11). Topochemical
by summing up all possible pathways, including reactions can be identified if a reaction does not occur
intrinsic, specific acid-base catalysis, general acid-base in solution or is much slower in solution or if the
catalysis, etc., as follows: reaction products are different from those obtained
in solution. For example, the rearrangement of
kobs = k0 + kH[H+] + kOH[OH−] + k1[buffer species 1] +
k2[buffer species 2] + ... = k0 + Σi kiCi
shows a reaction rate that is 25 times slower in its
The pH-rate profile provides a summary of the melt than in the solid state (12). Because the crystal
primary features of a specific degradation. Specific acid- structure of a solid phase determines its chemical
base catalysis is designated by the straight line with slope reactivity in the case of topochemical reaction,
of negative or positive unity, while general acid-base solid-state forms (e.g., polymorphs, solvates, salts,
catalysis may be indicated by the apparent deviation of co-crystals) frequently exhibit different reactivity.
the slopes. Commonly, many drug molecules are weak methyl p-dimethylaminobenzenesulfonate to the
acids or weak bases. Ionized species may have different p-trimethylammoniumbenzenesulfonate zwitterion
reactivity, which is often revealed as a sigmoid in the occur mostly around the surface of a drug particle.
rate-pH profile. Therefore, the surface characteristics of API
For example, hydrolysis of aspirin (10) (see Figure) particles and the excipientsor collectively, the
is subject to specific acid catalysis at pH < 2, and microenvironmentare of vital importance to the
specific base catalysis at pH >10. The ionization understanding and remedy of drug degradation.
causes a difference in reactivity in pH range of 2.5-4. Solids often contain defects or strains, where
The broad shoulder in the pH range of 4.5-9 is caused molecules have higher free energy. These high energy
by intramolecular catalysis from the ionized carboxylate group. are not topochemical in nature. Drug degradations
of a reaction. Therefore, disorders in crystals may
be a key point in the understanding of solid-state
reactions of drugs. To extend this concept further,
the amorphous phase is highly energetic, is expected,
and has been found to enhance chemical reactivity.
Often crystalline API is used in drug development.
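A small sketch of how the specific acid-base catalysis expression kobs = k0 + kH[H+] + kOH[OH-] generates the V- or U-shaped pH-rate profile discussed above. The three rate constants are hypothetical, chosen only so that the acid branch, the base branch, and the pH of maximum stability are all visible; general (buffer) catalysis and ionization of the drug itself are deliberately omitted.

```python
import math

# Hypothetical rate constants for a drug subject to specific acid-base catalysis.
k0 = 1e-8    # uncatalyzed (water) term, 1/s
kH = 1e-4    # specific acid catalysis, 1/(M*s)
kOH = 1e1    # specific base catalysis, 1/(M*s)
Kw = 1e-14   # ionic product of water at 25 C

def kobs(pH):
    """Apparent first-order rate constant: kobs = k0 + kH[H+] + kOH[OH-]."""
    H = 10.0 ** (-pH)
    OH = Kw / H
    return k0 + kH * H + kOH * OH

profile = [(0.5 * i, kobs(0.5 * i)) for i in range(0, 21)]  # pH 0 to 10 in 0.5 steps
for pH, k in profile:
    print(f"pH {pH:4.1f}   log kobs = {math.log10(k):6.2f}")

best_pH, best_k = min(profile, key=lambda point: point[1])
print(f"pH of maximum stability on this grid: about {best_pH:.1f}")
```

With these assumed constants the acid branch falls with a slope of -1, the base branch rises with a slope of +1, and the minimum sits near pH 4.5, which is the kind of information a measured pH-rate profile provides when choosing a solution pH or a microenvironmental pH modifier.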

36 PROCESS VALIDATION Process Design


Deliang Zhou

However, a small amount of disordered or degradation of ABT-232 was reported when wet
amorphous content is apparent, particularly after granulated (16).
various pharmaceutical operations such as milling, The presence of moisture also facilitates the
wet granulation, drying, and compaction. Even when creation of microenvironment for drug degradation.
the degree of crystallinity is not affected, a solid-state The pH of the microenvironment is often a key
reaction can be enhanced by rendering larger surface parameter that greatly influences drug degradation.
area (smaller particle size) because of the increased The pH-rate profile can provide significant insight
concentrations of possible defects. Regardless of in this case. Microenvironmental pH could be
how perfect the surface may be, surface itself can altered by solid forms, such as salts, or by buffering
be deemed as a type of defect based on the notion agents, such as citric acid and sodium phosphate, or
of crystal orders. It is not uncommon that small other excipients such as magnesium stearate. For
API particles (particularly via milling) exacerbate a example, stability of moexipril (17) was improved
stability problem. by wet granulation with basic buffering agents. This
Moisture has a profound effect on drug modification of microenvironmental pH is one of the
degradation in solid dosage forms, either as a modes of drug-excipient interactions.
reactant (e.g., hydrolysis) or as a promoter or both. The change of API solid phases in a dosage form
The catalytic role of water in drug degradation is can have profound impacts on its chemical stability.
related to its ability as an excellent plasticizer. Water Even when topochemical reaction is not observed,
greatly increases molecular mobility of reactants the surface characteristics of solid forms can differ
and enhances drug degradation and makes it more significantly, which affects its ability to adsorb,
solution-like. In the worst scenario, water can interact, and react. More importantly, solid-form
form a saturated solution and maximize the rate of conversion in dosage forms is often incomplete,
degradation. and leaves significant amount of disordered
Various mechanisms exist for water molecules or amorphous regions behind. The extent of
to interact with a solid. Water molecules can be amorphous content can be significant in the case of
incorporated into crystal lattice through hydrogen wet granulation where a soluble API may dissolve
bonding and/or van der Waals interactions. Generally, completely and remain as amorphous upon drying,
lattice water is not a cause of chemical instability. or hydrate formation may occur but dehydrate upon
Some compounds rely on the interaction of water to drying while leaving behind a significant portion
form a stable crystal thus improving their chemical of amorphous phase. Amorphous content may also
stability. Water can be also adsorbed to the surface as result from other processing steps such as milling and
a monolayer and as multilayers. Water molecules in compaction. All these process-induced amorphous
the monolayer may behave significantly differently content, in combination with typical moisture
than those in the second or third layers. Water contents, could be detrimental to the chemical
molecules beyond these 2-3 layers are expected to stability of pharmaceutical dosage forms, especially
behave like bulk water. A more thorough discussion when the dose strength is low. The process-induced
on water-solid interactions in pharmaceutical systems phase transformations often depend on equipment,
can be found elsewhere (13). time, operational conditions, as well as raw material
Tightly bound water (e.g., hydrate, monolayer) attributes, which makes scale up as well as quality
has decreased activity and is therefore not control challenging.
detrimental to chemical stability. The loosely bound
water (other than the monolayer) or free water is EXCIPIENT COMPATIBILITY
believed to enhance a chemical reaction. In addition, Excipient selection is of great importance to
pharmaceutical solids usually contain various defects accomplish the target product profile and critical
and various degrees of disordered, amorphous quality attributes. Excipient functionality and
regions. Water molecules are preferentially absorbed compatibility with the active drug are two important
into the interior of these regions (sorption). Because considerations.
water is an efficient plasticizer, it significantly Incompatibility may be referred to as the
increases molecular mobility in the amorphous phase undesirable interactions between drug and one or
and, therefore, enhances degradation. A simplified more components in formulation. Incompatibility
calculation indicates that the typical moisture content may adversely alter the product quality attributes,
in pharmaceuticals (e.g., a few percent) far exceeds including physical, chemical, biopharmaceutical, or
the estimate by mode of adsorption (14, 15). Hence, other properties.
water sorption into the disordered or amorphous Excipient compatibility studies are usually
regions is a realistic concern for the stability of conducted in the early phase of drug product
pharmaceutical dosage forms. Greatly enhanced development. Their objective is to further

PROCESS VALIDATION Process Design 37
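One way to reproduce the "simplified calculation" cited above (references 14 and 15) is to estimate how much water a surface monolayer could actually hold. The specific surface area and the cross-sectional area per adsorbed water molecule used below are assumed, order-of-magnitude values; the only point is that a monolayer corresponds to a small fraction of a percent of water, so moisture contents of a few percent must reside largely in disordered or amorphous regions.

```python
# Rough estimate of the water content corresponding to a surface monolayer.
# Assumed, order-of-magnitude inputs:
A_WATER_NM2 = 0.106   # cross-sectional area of one adsorbed water molecule, nm^2
M_WATER = 18.015      # molar mass of water, g/mol
N_A = 6.022e23        # Avogadro's number, 1/mol

def monolayer_water_percent(ssa_m2_per_g):
    """Approximate %w/w of water needed to cover the powder surface as a monolayer."""
    molecules_per_g = (ssa_m2_per_g * 1e18) / A_WATER_NM2   # 1 m^2 = 1e18 nm^2
    grams_water_per_g = molecules_per_g / N_A * M_WATER
    return 100.0 * grams_water_per_g

for ssa in (0.5, 1.0, 5.0):  # assumed specific surface areas, m^2/g
    print(f"SSA = {ssa:3.1f} m2/g -> monolayer ~ {monolayer_water_percent(ssa):.3f} % w/w")
```

Even at 5 m2/g the monolayer amounts to only about 0.14% w/w, far below the few percent of moisture commonly measured, which is why sorption into amorphous or disordered regions is the realistic concern raised above.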


Deliang Zhou

characterize the stability profile of the drug molecule with a number of drugs including acetaminophen,
in typical formulation design, identify potential codeine, sulfadiazine, and polyethylene glycol (PEG).
material incompatibility, characterize the degradants, Norfloxacin is reported to react with magnesium
understand the mechanisms of degradation, and stearate to form stearoyl amide (18). Similarly,
provide guidance to formulation development. duloxetine reacts with hydroxypropyl methylcellulose
A carefully planned and executed compatibility acetate succinate (HPMCAS) (cellulosic coating
study may result in significant savings in time and polymer) to form succinamide.
resources. In addition, as expected by quality-by- Impurities in excipients can be a major concern
design (QbD) principles, excipient compatibility to cause incompatibilities. It is well known that
information is required by the regulatory agencies to traces of peroxides in povidone, crospovidone,
justify the selection of formulation components and hydroxypropyl cellulose (HPC), dicalcium
to assess the overall risk. phosphate, PEG, polysorbates, and a number of
It does not mean, however, that one absolutely modified excipients containing polyoxyethylene
cannot use an incompatible excipient. In some units are responsible for the oxidation of a number
cases, a small amount of incompatible excipient may of drugs including ceronapril and raloxifene.
be acceptable based on other benefits it provides. Traces of low molecular weight aldehydes such as
There are also situations where there is simply no formaldehyde and/or acetaldehyde may exist in a
other alternate ingredient and one has to live with number of excipients such as starch, polysorbate,
an incompatible excipient. However, it is still glycols, povidone, crospovidone, ethylcellulose,
essential to understand the associated risks and their which could condensate with amine drugs. The
mitigations. 5-hydroxylmethyl-2-furfuraldehyde impurity in
lactose poses a similar concern. Transition metals
Direct Reactions Between Drug and may be present in a number of excipients. As
Excipient mentioned previously, these may catalyze the
Probably the most known excipient incompatibility oxidations of many drugs. In principle, it is very
is the Maillard reaction between a primary or beneficial to have some basic understanding on the
secondary amine and a reducing sugar such as production, isolation, and purification process of
lactose and glucose, resulting in the production of each excipient in order to anticipate the possible
browning spots. From a mechanistic point of view, impurities and the potential risks to drug product
the reactivity of the amine is related to its electronic stability.
density. Therefore, salt formation of the amines Complexes between drug and excipients have also
usually improves stability. Like most solid-state been reported, which could impact the dissolution,
reactions, amorphous characteristics of both the solubility, and bioavailability of a drug product.
amines and the reducing sugar enhance the reactivity, Tetracycline is reported to interact with CaCO3
a concern when using spray-dried lactose. to form an insoluble complex. Diclofenac and
Interactions between acidic drugs and basic salicylic acid have also been reported to complex
excipients and vice versa have been reported. with Eudragit polymers. The complexation between
Examples include the interactions between weakly basic drugs (e.g., amines) and anionic
indomethacin and sodium bicarbonate, ibuprofen super-disintegrants such as croscarmellose sodium
with magnesium oxide, and citric acid/tartaric acid and sodium starch glycolate has been reported;
with sodium bicarbonate, the latter of which is metformin-croscarmellose sodium is an example.
utilized in effervescent tablets.
Drug-counterion interactions may also occur. Drug Degradations Enhanced by
Seproxetine maleate was reported to form a Michael Excipients
addition product. In a broad sense, drug-buffer Drug degradation usually occurs on the surface or
interactions can be grouped into this category. the interfaces; therefore, the surface characteristics
Examples of the latter include the interactions of API particles and excipients may be of vital
between dexamethasone 21-phosphate and sodium importance. Direct reactions between a drug
bisulfite, and between epinephrine and sodium molecule and excipient do not frequently occur and
bisulfite. may be anticipated based on knowledge of drug
Transacylation is another major group of drug- degradation pathways and understanding of the
excipient interaction. A carboxylic moiety can drug molecule. However, the presence of an inert
react with alcohols, esters, or other carboxylic excipient could lead to adsorption of drug molecules
derivatives to form esters, amides, etc., whether to the surface of excipient particles. Drug molecules
presented in excipients or in drug molecules. For adsorbed onto a surface are expected to have higher
example, aspirin can undergo transesterification mobility and increased reactivity. This increase

38 PROCESS VALIDATION Process Design
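An excipient compatibility screen of the kind described above is, in practice, a structured grid of binary API-excipient blends held under a few stress conditions and then assayed. The sketch below merely shows how such a design matrix might be enumerated; the excipient list, drug:excipient ratios, storage conditions, and the optional water spike are illustrative assumptions, not recommendations taken from this article.

```python
from itertools import product

# Hypothetical inputs for a binary compatibility screen (illustrative only).
excipients = ["lactose monohydrate", "microcrystalline cellulose",
              "magnesium stearate", "croscarmellose sodium", "povidone"]
ratios = {"filler-like": "1:5 drug:excipient", "lubricant-like": "5:1 drug:excipient"}
conditions = ["40 C/75% RH open", "40 C/75% RH closed", "50 C dry"]
water_spike = [False, True]   # a small added-water arm often accelerates reactions

samples = []
for excipient, condition, wet in product(excipients, conditions, water_spike):
    ratio = (ratios["lubricant-like"] if "stearate" in excipient
             else ratios["filler-like"])
    samples.append({"excipient": excipient, "ratio": ratio,
                    "condition": condition, "water added": wet})

print(f"{len(samples)} binary samples to prepare and assay (e.g., by HPLC):")
for sample in samples[:4]:   # show the first few entries
    print(" ", sample)
```

Laying the screen out this way makes it easy to see where the sample count comes from (excipients x conditions x water arms) and to trim the grid deliberately, rather than discovering an incompatibility late in development.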


Deliang Zhou

in accessible surface area is expected to contribute • What is the type (e.g., hydrolysis, oxidation, or
to increased degradation even for an otherwise others) of degradation?
inert excipient. The excipient may be considered • Is a general mechanism available?
a heterogeneous catalyst in this case. Often low • What factors are generally influential?
strength formulations are reported to have increased • Is there a general stabilization approach (e.g., pH,
degradation attributed to the increased accessible anti-oxidant)?
surface area to drug ratio. In addition, the presence • To what extent is the rate of degradation
of excipient might also change the moisture, affected by moisture and/or temperature?
microenvironment pH, and other characteristics • Could any of the processing steps play a role?
which are important considerations regarding drug
product stability. Because fixing a stability issue in late development
Microenvironmental pH may be an important phase could be costly both in terms of project delays
factor on drug degradation in dosage forms. Many and resources, early identification and remediation
excipients are acidic or basic, such as citric acid, of stability issues is beneficial and preferred.
tartaric acid, stearic acid, sodium bicarbonate, Degradation studies are vital in the preformulation
sodium carbonate, magnesium oxide, and assessments of a drug candidate. These studies often
magnesium stearate. In addition, many otherwise employ forced degradation studies (i.e., exposure to
neutral excipients may be weakly acidic or weakly strong acidic pH, strong basic pH, various oxidants,
basic due to the processes used in the production light, etc.) in order to delineate the intrinsic stability
and treatments thereof. All these can cause an profile, potential degradants, and degradation
acidic or basic microenvironment around the drug pathways. By utilizing information obtained from
particles, which is facilitated by the presence of the preformulation studies, instability concerns can
moisture. Depending on the particular case, a drug often be addressed and mitigated before formulation
molecule may be more or less stable in a particular activities start. In the author's experience, selection
pH environment. For example, reduced stability of viable solid forms can be made based on stability
of aspirin was noted in the presence of acidic in the solid state. Solid forms, particular salt forms,
excipients such as stearic acid or basic excipients allow one to modify the surface characteristics of
such as talc, calcium succinate or calcium the API, such as crystallinity, moisture sorption,
carbonate. Benzazepine hydrolysis was enhanced surface pH, and solubility, all of which may play
in the presence of microcrystalline cellulose, a significant role in drug degradation. Based
dicalcium phosphate, magnesium stearate, and on this principle, a number of solid forms have
sodium stearyl fumarate. Moexipril and lansoprazole been successfully selected to provide viable
(19) have been reported to be stabilized by basic stability profile for otherwise somewhat labile
excipients. compounds.
The moisture content and other attributes of A good understanding on the degradation
excipients may also influence drug degradation. mechanism or even the general features of the
Generally, it is believed high moisture content from class of the reactions can be very helpful to
excipients may be detrimental to chemical stability. the stabilization of a formulation. Hydrolytic
However, in a number of cases, improved stability degradations have been frequently minimized
has been attributed to more hygroscopic excipients by modification of the microenvironment pH via
that may preferentially absorb moisture in the acidic or basic excipients. Antioxidants have been
formulation. The characteristics of moisture may routinely used to alleviate oxidative degradations.
differ from one excipient to another as suggested In the latter case, however, the optimal antioxidant
by spectroscopic evidence, which could potentially can only be selected based on the understanding of
explain how excipients affect stability differently. oxidation mechanisms. For example, free-radical
scavengers such as butylated hydroxytoluene (BHT)
General Approaches to Stabilization are better for autoxidation, while ascorbic acid may
Once instability is observed, efforts should be be more suitable for nucleophilic/electrophilic type
made to identify the degradation products which of oxidation. Combination of antioxidants may be
may suggest possible degradation mechanisms. used and have been reported to be more effective,
Anticipation of the nature of degradation based particularly when there is a mix of oxidative
on molecular structure and prior knowledge processes. When metal ions play a significant role
can greatly assist the efforts and avoid excipient in oxidation, a chelate (such as EDTA or citric acid)
incompatibilities. Preformulation degradation may be used.
studies are essential to obtain useful information. When necessary, engineering approaches can be
Relevant questions include the following: used alone or in addition to those mentioned above.

PROCESS VALIDATION Process Design 39


Deliang Zhou

For example, coating, double granulation, and provide a viable means to the modulation of API
bi-layer tablet approaches may be used to separate characteristics and thus can significantly affect its
incompatible components in a formulation. When chemical stability. Phase transformation during
moisture plays a significant role, packaging or pharmaceutical processing is often a likely cause of
packaging with desiccant is routinely used to chemical degradation of the drug product even if
achieve viable shelf life of a drug product. For the starting API is stable. In particular, the small
drug molecules that react slowly with oxygen, amount of disordered or amorphous content, in
packaging combined with an oxygen scavenger can combination with typical moisture absorption in
be effective. Packaging has been routinely used to pharmaceutical products, could greatly impact drug
alleviate the photolytic (light) degradation of many degradation. Excipients influence drug degradation
drug products. by direct interactions or modification of the
microenvironment. A thorough understanding
PHASE TRANSFORMATIONS, CHEMICAL of the physicochemical principles is critical to the
DEGRADATION, AND EXCIPIENT anticipation, identification, and remediation of
COMPATIBILITYIMPLICATIONS FOR stability issues during formulation development,
VALIDATION AND COMPLIANCE manufacturing, and storage of drug products.
Validation and compliance personnel should
be knowledgeable of the physical and chemical REFERENCES
properties of the active drugs for which they have 1. Baertschi, S. W., Pharmaceutical Stress Testing. Predicting
responsibility. They should be especially aware Drug Degradation, Taylor & Francis Group: Boca Raton,
of active drugs and products that may be highly Florida, 2005.
prone to phase transformation and chemical 2. Yoshioka, S. and Stella, V. J., Stability of Drugs and Dosage
degradation (i.e., they must be aware of high Forms, Kluwer Academic/Plenum Publishers: New York,
risk drugs and products). This information is NY 10013, 2000.
routinely determined during the pharmaceutical 3. Zhou, D.; Porter, W. and Zhang, G. G. Z., Drug Stability
development process and is often filed in regulatory and Stability Studies, Developing Solid Oral Dosage Forms -
submissions. The manufacturing and processing Pharmaceutical Theory & Practice, Qiu, Y., Chen, Y., Zhang,
of high-risk drugs should receive heightened G., Liu, L., Porter, W., Eds.; Academic Press: San Diego,
vigilance by compliance personnel to minimize CA, 2009, pp 87-124.
the risk of phase transformations and/or chemical 4. Narang, A. S.; Rao, V. M. and Raghavan, K. S., Excipient
degradation. Any changes to the manufacturing Compatibility, Developing Solid Oral Dosage Forms
process of susceptible APIs and products should be Pharmaceutical Theory & Practice; Qiu, Y., Chen, Y., Zhang,
carefully evaluated. Referring to previous excipient G., Liu, L., Porter, W., Eds.; Academic Press: San Diego,
compatibility studies is highly recommended CA, 2009, pp 125-145.
prior to any changes in formulation. Changes 5. Hovorka, S. W. and Schneich, C., Oxidative Degradation
to high risk processes such as wet granulation, of Pharmaceuticals: Theory, Mechanisms and Inhibition,
milling, and processes with solvents should receive J Pharm Sci 2001, 90, 253-269.
heightened scrutiny as part of the change control 6. Waterman, K. C.; Adami, R. C.; A lsante, K. M.;
program. Laboratory evaluations of proposed Hong, J.; Landis, M. S.; Lombardo, F. and Roberts,
changes may be necessary in advance of large-scale C. J., Stabilization of Pharmaceuticals to Oxidative
changes. Validation of high-risk process changes Degradation, Pharm Dev Technol, 2002, 7, 1 - 32.
should include appropriate testing to assure the 7. Zhang, G. G. Z.; Law, D.; Schmitt, E. A. and Qiu, Y.,
acceptability of the process change. Fundamental Phase Transformation Considerations During Process
knowledge obtained during development should be Development and Manufacture of Solid Oral Dosage
available to validation and compliance personnel Forms, Adv Drug Del Rev 2004, 56, 371-390.
throughout the product lifecycle and should be 8. Connors, K. A., Chemical Kinetics: The Study of Reaction
utilized as necessary to evaluate formulation and Rates in Solution, Wiley-VCH Publishers: New York, 1990,
process changes. p 191.
9. Turro, N. J., Modern Molecular Photochemistry, University
SUMMARY Science Books: Sausalito, CA, 1991.
Stability of drug product is complex due to the 10. Garrett, E. R., Prediction of Stability in Pharmaceutical
multi-particulate and heterogeneous nature of Preparations. IV. The Interdependence of Solubility and
pharmaceutical products. Drug degradation Rate in Saturated Solutions of Acylsalicylates, J Am Pharm
in solid dosage forms is mostly determined Ass 1957, 46, 584-586.
by the surface characteristics of both the API
and the excipient particles. Solid-state forms

40 PROCESS VALIDATION Process Design


Deliang Zhou

11. Schmidt, G. M. J., Topochemistry. III. The Crystal Angiotensin-converting Enzyme Inhibitor, Moexipril
Chemistry of Some Trans-cinnamic Acids, J Chem Soc Hydrochloride: Dry Powder vs Wet Granulation, Pharm
1964, 2014-2021. Res 1990, 7, 379-83.
12. Sukenik, C. N.; Bonopace, J. A.; Mandel, N. S.; Bergman, 18. Florey, K., Analytical Profiles of Drug Substances, Academic
R. C.; Lau, P. Y. and Wood, G., Enhancement of a Press: New York, 1991, p 588.
Chemical Reaction Rate by Proper Orientation of Reacting 19. Tabata, T.; Makino, T.; Kashihara, T.; Hirai, S.; Kitamori, N.
Molecules in the Solid State, J Am Chem Soc 1975, 97, and Toguchi, H., Stabilization of a New Antiulcer Drug
5290-5291. (lansoprazole) in the Solid Dosage Forms, Drug Dev Ind
13. Zografi, G., States of Water Associated with Solids, Drug Pharm 1992, 18, 1437-47. JVT
Dev Ind Pharm 1988, 14, 1905-1926.
14. Ahlneck, C. and Zografi, G., The Molecular Basis of ARTICLE ACRONYM LISTING
Moisture Effects on the Physical and Chemical Stability API Active Pharmaceutical Ingredient
of Drugs in the Solid State, Int J Pharm 1990, 62, 87-95. BHT Butylated Hydroxytoluene
15. Kontny, M. J.; Grandolfi, G. P. and Zografi, G., Water HPC Hydroxypropyl Cellulose
Vapor Sorption of Water-soluble Substances: Studies HPMCAS Hydroxypropyl Methylcellulose Acetate
of Cr ystalline Solids Below their Critical Relative Succinate
Humidities, Pharm Res 1987, 4, 104-12. ICH International Conference on Harmonisation
16. Wardrop, J.; Law, D.; Qiu, Y.; Engh, K.; Faitsch, L. and Ling, PEG Polyethylene Glycol
C., Influence of Solid Phase and Formulation Processing QbD Quality by Design
on Stability of Abbott-232 Tablet Formulations, J Pharm RH Relative Humidity
Sci 2006, 95, 2380-92.
17. Gu, L.; Strickley, R. G.; Chi, L. H. and Chowhan, Z. T.,
Drug-excipient Incompatibility Studies of the Dipeptide

Originally published in the Summer 2009 issue of The Journal of Validation Technology

PROCESS VALIDATION Process Design 41


John F. Bauer

Patent Potential
John F. Bauer

Pharmaceutical Solids discusses scientific principles associated with pharmaceutical solids useful to practitioners in validation and compliance. We intend this column to be a useful resource for daily work applications. The key objective for this column: Usefulness. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Case studies illustrating principles associated with pharmaceutical solids submitted by readers are most welcome. Please send your comments and suggestions to column coordinator John Bauer at Consultjbnow@gmail.com or to journal coordinating editor Susan Haigney at shaigney@advanstar.com.

KEY POINTS
The following key points are discussed in this article:
• Pharmaceutical compounds with specific physical properties may be patentable
• It is important to show a correlation between the specific physical property and a useful benefit in manufacturing, stability, or some other area
• Although polymorphism is a common phenomenon, the existence of stable polymorphs is not obvious
• It is important to identify an analytical method in the patent that can be used to demonstrate infringement.

INTRODUCTION
The various physical properties of pharmaceuticals and their impact on the behavior and usefulness of these materials have been discussed in earlier installments of Pharmaceutical Solids. It is in some cases painfully obvious that having an active pharmaceutical ingredient (API) or individual excipients with the wrong set of physical properties can negatively impact the ability to successfully manufacture a dosage form or delivery system. Furthermore, there are situations when having a particular property or set of properties can greatly increase the quality and throughput of the manufacturing process. When such examples of favorable ranges for a physical property exist, they may qualify for patenting. In this column we will investigate the concept of patenting materials with specific physical properties.

Constitutional Basis
Article I, Section 8, Clause 8 of the Constitution of the United States of America states, "The Congress shall have power to promote the progress of science and useful arts by securing for limited times to the authors and inventors the exclusive right to their respective writings and discoveries." This is the basis of today's patent system in the United States. However, the Constitution directs neither the criteria to be used in judging patentability nor does it define the limited time to be granted the inventor.

The United States Code (35 U.S.C. Sec. 101) clarifies the criteria somewhat when stating that "whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvements thereof, may obtain a patent therefore."

In the committee notes accompanying the patent act of 1952, it is noted that Congress intended that the statutory subject matter include "anything under the sun that is made by man." As broad as this statement sounds, there are some things that are specifically not included, such as the following:
• Laws of nature. Sir Isaac Newton would not have been allowed to patent gravity.
• Physical phenomena. This exclusion would have prevented Sir Edmund Hillary from patenting any rock formations he found on Mt. Everest.
• Abstract ideas. Thus denying Shirley MacLaine patents on reincarnation and aliens.

For more Author information, go to gxpandjvt.com/bios

ABOUT THE AUTHOR
John F. Bauer, Ph.D., is president of Consult JB LLC Pharmaceutical Consultants. Dr. Bauer has
more than 30 years of pharmaceutical industry experience, including work in solid-state chemistry,
analytical chemistry, stability, pharmaceutics, regulatory CMC, patents, and litigation. He may be
reached at Consultjbnow@gmail.com.

PATENTABILITY
Pharmaceuticals with unique physical properties fit the requirements for patentability. The patent criteria are that the discovery or invention must be new, useful, and non-obvious. It is easily understood that a new chemical entity never prepared before would fit these criteria. Consequently, the first patent usually filed for a new drug is for the compound itself and is called a composition of matter patent. This is generally filed as soon as possible after the initial discovery in order to prohibit others from developing the same compound. When granted, the patent gives the patentee the right to exclude others from making, using, selling, offering to sell, or importing the invention for a limited period of time. In the mid 1990s, the time frame of exclusivity was redefined in the United States such that patents filed prior to June 1995 carry 17 years of exclusivity starting at the date of issuance, and patents filed after June of 1995 received 20 years of exclusivity from the date of filing.

Because development of a pharmaceutical from discovery to market can average 10 years, the actual exclusivity of a marketed drug is significantly shorter than the 20 years defined by Congress. For this reason it is important that scientists throughout the development chain take note of any truly new, useful, and non-obvious aspects of the compound and/or product that might be patentable.

PATENTS BASED ON PHYSICAL PROPERTIES
The physical properties of a pharmaceutical have been considered to fall into the category of patent-potential material. Probably the most obvious and most widely patented physical aspect of pharmaceuticals is the crystal form.

Although polymorphism is coming to be seen as a reasonably common and widespread phenomenon among organic compounds, it is not obvious. There is not presently a way of predicting for any particular compound whether or not other stable crystal forms (such as polymorphs or solvates) exist. Several computer programs are in commercial use that can estimate the energy of various crystal packings and speculate as to whether other polymorphs of a compound are possible. However, these programs cannot demonstrate that another form can be made and remain stable under normal storage conditions. Because of this, the discovery of a new polymorph continues to meet the non-obvious requirement. Once it is produced for the first time, it is certainly new, so the remaining qualification is usefulness. As described in an earlier column (1), various polymorphs and solvates, especially hydrates, can possess significantly different properties from each other. These physical properties are the usual basis of their usefulness and patentability. Additionally, these physical properties can, in and of themselves, be the subject of additional patents. As the Figure illustrates, patents can be thought of as umbrellas that protect the innovator's discovery. They can be of various sizes insofar as they protect the product to varying extents. The composition of matter patent on a pharmaceutical provides the greatest protection, because it excludes others from using the compound in any way and in any form. As a subset of that protection, one can potentially receive patents protecting individual aspects of the compound such as its solid form and individual physical properties. These umbrellas do not provide such broad protection but can often extend slightly beyond the composition of matter patent and thus extend exclusivity, depending on when their benefits are discovered.

EXAMPLE PATENTS ON PHYSICAL PROPERTIES
The following are examples of patents on physical properties of pharmaceuticals.

Ranitidine
In the mid 1990s, GlaxoSmithKline's ulcer treatment Zantac was the beneficiary of a second polymorph. Ranitidine hydrochloride, the active ingredient in Zantac, was found to exist in a second polymorphic form. This second form was easier to manufacture and, therefore, had a demonstrable usefulness. The patent (2) on the second form extended Glaxo's exclusivity on Zantac.

Terazosin
A similar, more involved situation existed with Abbott's antihypertensive drug Hytrin. The original composition of matter patent for terazosin, the active ingredient, was on an anhydrate (3), a form in which there were no water molecules involved in the crystal structure. This form had high solubility but was also hygroscopic and tended to absorb water from the atmosphere, making it difficult to handle during manufacturing. Later a hydrate form was discovered that was new, non-obvious, and had the usefulness of not being hygroscopic. Abbott obtained a patent on the hydrate form (4). The drawback of this form was lowered solubility. Still later a second anhydrate form was found that was not hygroscopic and had increased solubility; Abbott was also able to obtain a patent on this form (5). This is a good example of how changes in physical form can lead to beneficial changes in physical properties that qualify for patenting.

Figure: Depiction of how supplemental patents can extend product exclusivity (nested coverage labeled Composition of Matter, Solid Form, and Individual PP, shown against Exclusivity Time; PP = physical property).

Ritonavir
The previous examples involve different crystal forms. There are times when the non-crystalline or amorphous form can also be useful. In most situations, amorphous drug has non-favorable properties (6). It is hygroscopic, often less stable, and can be hard to handle in manufacturing. However, in the case of ritonavir, an important protease inhibitor for the treatment of AIDS, the crystalline forms are not bioavailable from the solid state and must be dosed in solution. The amorphous form, on the other hand, is very soluble and is bioavailable in a solid oral dosage form; this property presents a strong case for patent eligibility. These examples emphasize the necessity of continually observing the manufacturing and formulation processes with a view for unique properties of the solid forms that can be useful and can present potential patent possibilities.

Sirolimus
Although solid form is the most obvious and predominant basis for physical property patents, other characteristics can also qualify if the non-obvious and usefulness criteria can be met. Particle size is another well-known physical characteristic that can affect manufacturing and performance of pharmaceutical preparations. However, some patent examiners would consider some properties related to particle size, such as increased dissolution rate with smaller particle size, as being obvious to one skilled in the art. More individual and unique relationships between one's individual process and these properties must be established. This is usually done by comparison with drug having other particle size ranges. A good time to obtain these data is when establishing the design space for specifications, during development, process justification studies, and studies supporting process validation. In spite of this possible objection, patents have been obtained with claims relating specifically to particle size and/or surface area. For example, Wyeth has a patent on the macrolide antibiotic sirolimus, the active ingredient in Rapamycin. The patent claims sirolimus particles having a D90 value of from about 2 microns to about 1 micron (7). D90 means 90% of the material is within this particle size range.

Wyeth based the usefulness and non-obviousness for its claim on the observation that compositions of sirolimus with conventional excipients show unpredictable dissolution rates, irregular bioavailability, and stability problems. These present very real obstacles to processing. Using drug within the specified particle size range minimizes these problems. Noticing this correlation during process development allowed Wyeth to obtain exclusivity on this valuable manufacturing advantage.

Nifedipine
Surface area can also be the basis for patenting and product exclusivity, as in the case of Bayer's patent on nifedipine capsules. The claim reads "an effective amount of nifedipine crystals with a specific surface area of 1.0 to 4 m2/g" (8).

As mentioned previously, an ideal time to monitor for patent potential of physical properties is while establishing the design space and specifications during process justification. The reason for this is that patent claims should not be limited only to the parameters used in manufacturing, but should cover the entire range of parameters that achieve the stated usefulness and non-obviousness cited in the patent. A patent that can be easily circumvented provides very little exclusivity.

ANALYTICAL CONSIDERATIONS
Another important aspect to be considered is the ability to demonstrate infringement. One may be able to obtain a patent on a valuable property. However, if there are no clear-cut analytical methods that can demonstrate infringement, then the patent protection will be minimal. For example, a patent on a drug patch that claims 70% of drug is in the solid state on the patch and 30% is dissolved may not be easily enforceable because of the analytical challenge of demonstrating such a distribution. Once a patent is issued, it is the responsibility of the patent holder to enforce the exclusivity. A patent does not give the inventor the right to market the product. It only excludes others from using the invention. In the area of pharmaceuticals, the potential marketer of a drug must certify to the US Food and Drug Administration that its product and process do not infringe anyone else's patents in what is called a notice of certification. Neither the FDA nor the US patent

department has any obligation to help the inventor demonstrate infringement or enforce exclusivity rights. It is, therefore, very important to prepare a patent application with proof of infringement in mind. As mentioned before, a patent is only an impressive government document if it cannot be enforced. An analytical method should be described in the body of the patent that is demonstrative of and specific for the property cited in the patent claim. Some examples are x-ray powder diffraction or solid-state nuclear magnetic resonance for detecting small amounts of a claimed crystal form in a drug sample. In the case of crystal form patents, analytical sensitivity is important. X-ray powder diffraction can generally detect about 1% of a second form. Particle size and surface area especially require very specific method descriptions concerning instruments and operating parameters, because results for these properties can vary significantly with technique and sample preparation. If this kind of pre-thought and care are put into the patent application preparation, long and often unsuccessful patent defenses can be avoided, and patents on physical properties can be extremely beneficial.

REFERENCES
1. Bauer, John F., Pharmaceutical Solids: Polymorphism - A Critical Consideration in Pharmaceutical Development, Manufacturing, and Stability, Journal of Validation Technology, Autumn 2008.
2. United States Patent 4,672,133, June 1987.
3. United States Patent 4,026,894, May 1977.
4. United States Patent 4,251,532, February 1981.
5. United States Patent 5,294,615, March 1994.
6. United States Patent 5,294,615, March 1994.
7. Bauer, John F., Pharmaceutical Solids - The Amorphous Phase, Journal of Validation Technology, Summer 2009.
8. United States Patent 5,264,446, November 1993. JVT

Originally published in the Autumn 2009 issue of The Journal of Validation Technology

First Steps in Experimental Design: The Screening Experiment
John A. Wass

Statistical Viewpoint addresses principles of statistics useful to practitioners in compliance and validation. We intend to present these concepts in a meaningful way so as to enable their application in daily work situations. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments to coordinating editor Susan Haigney at shaigney@advanstar.com.

KEY POINTS
The following key points are discussed in this article:
• Design of experiments (DOE) consists of three basic stages: screening (to identify important factors), response surface methodology (to define the optimal space), and model validation (to confirm predictions)
• A critical preliminary step in the screening stage is for subject matter experts to identify the key list of factors that may influence the process
• A DOE design consists of a table with rows that represent experimental trials and columns (vectors) that give the corresponding factor levels. In a DOE analysis, the factor level columns are used to estimate the corresponding factor main effects
• Interaction columns in a design are formed as the dot product of two other columns. In a DOE analysis, the interaction columns are used to estimate the corresponding interaction effects
• When two design columns are identical, the corresponding factors or interactions are aliased and their corresponding effects cannot be distinguished
• A desirable feature of a screening design is orthogonality, in which the vector products of any two main effect or interaction columns sum to zero. Orthogonality means that all estimates can be obtained independently of one another
• DOE software provides efficient screening designs with columns that are not aliased and from which orthogonal estimates can be obtained
• Full-factorial designs include all combinations of factor levels and provide a predictive model that includes main effects and all possible interactions
• Fractional factorial (screening) designs include fewer trials and may be more efficient than the corresponding full factorial design
• The concept of aliasing is one of the tools that can be used to construct efficient, orthogonal, screening designs
• Center points are often included in screening designs to raise the efficiency and to assess lack of model fit due to curvature
• The order of running and testing experimental trials is often randomized to protect against the presence of unknown lurking variables
• Blocking variables (such as day or run or session) may be included in a design to raise the design efficiency

For more Author information, go to gxpandjvt.com/bios

ABOUT THE AUTHOR
John A. Wass, Ph.D., is a consulting statistician with Quantum Cats Consulting in the Chicago area, as well as a contributing editor at Scientific Computing and administrator of a regional statistical software group. He may be reached by e-mail at john.wass@tds.net.

• Factor effects in screening designs may be missed because they were not included in the screening experiment, because they were not given sufficiently wide factor ranges, because the design was underpowered for those factors, because trial order was not properly randomized or blocked, or because of an inadequate model.

INTRODUCTION
In days of old (i.e., the author's undergraduate years), we were introduced to the joys of manual calculations and analysis of variance (ANOVA). Experimenters would change one factor at a time and identify what they felt were optimal processing conditions. With the advent of personal computers and the dissemination of more efficient techniques by statisticians, problems of increasing complexity were solved. This not only enlightened the basic researcher, but permitted scientists and engineers to design more robust products and processes. In fact, the statistical design of experiments (DOE) has been called the most cost-effective quality and productivity optimization method known. In this brief introduction, we will concentrate on practical aspects and keep mathematical theory to a minimum.

The advent of DOE brought a modicum of order to the wild west of one-factor-at-a-time changes. The technique has many variations but consists of the following three basic stages:
• Screening: to exclude extraneous effects considered as noise
• Response surface methodology: to finely define the optimal result space
• Model validation: to confirm predictions.

Each is quite important. In this paper we will concentrate on the first stage, screening design and analysis.

There are many commercial software packages, either standalone or as modules within general statistics programs, that will support DOE. Some of these are listed later in this discussion. Each has its unique strengths and weaknesses. For this paper, JMP 8 has been used. The principles will be the same for most programs, although the user interfaces, algorithms, and output will vary.

THEORY
The literature of DOE is replete with names such as full factorial, fractional factorial, runs, power, levels, and interactions. In addition, we have categorical and continuous factors and a variety of design names. Fortunately, in screening we usually confine ourselves to the fractional factorial designs. Unfortunately, as with everything in real life, there is a price to pay for every extra bit of information required.

We can show an experimental design as a table. An example is presented in Figure 1. Each row in the table corresponds to an experimental trial. The table columns indicate the levels of the experimental factors. For screening designs we usually consider only two levels, usually coded +/-1. In this notation, + represents high and - represents low. We may also include other columns that indicate interactions among the factors. The columns giving the experimental factor levels permit us to estimate main effects, and the interaction columns permit us to estimate interaction effects. We will say more about main effects and interaction effects below. In addition to factor level and interaction columns, we may record one or more columns of measured variable values that result from each trial. We refer to these dependent variables as the experimental responses.

Screening designs are useful as they are a practical compromise between cost and information. Their main contribution is in suggesting which of many factors that may impact a result are actually the most important. Because screening designs require fewer runs, they are far less costly than the more informative full-factorial designs, where the practitioner uses all combinations of factor levels. It has been suggested that no more than 25% of the total budget for DOE be spent on the screening runs. Screening runs are usually a prelude to further experimentation, namely the response surface and confirmatory runs, where specific information is gained around target (desired) outcomes.

Key Assumption For Screening Studies
In screening designs, we make the assumption that our real-world processes are driven by only a few factors, the others being relatively unimportant. This usually works quite well, but it is a crucial assumption that requires careful consideration by subject matter experts. Also keep in mind that the fractional factorial designs may be upgraded to full factorial designs (main effects plus all interactions) if there are only a few main effects. This allows us to observe interactions at a reasonable cost.

Number Of Runs
With screening designs, responses are taken only for a small fraction of the total possible combinations to reduce the number of runs and thus cost. The total number of runs is calculated by raising the number of levels to the power of the number of factors (e.g., for three factors at two levels each we have runs = 2^3 = 2x2x2 = 8). This is actually a full factorial design, as we are testing all combinations of factor levels.
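As a quick illustration of the run-count arithmetic above (a sketch that is not part of the original article; the coded factor names are hypothetical), a two-level full factorial can be enumerated in a few lines of Python:

    # Sketch: enumerate a two-level full factorial and count its runs.
    from itertools import product

    factors = ["A", "B", "C"]        # three factors at two coded levels
    runs = list(product([-1, 1], repeat=len(factors)))
    print(len(runs))                 # 2**3 = 8 runs, as in the text
    for run in runs:
        print(dict(zip(factors, run)))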

Full factorial designs allow us to build predictive models that include the main effects of each factor as well as interactions. This brings us to three important concepts of these models: interaction (the effects of one factor on another), orthogonality (all factors are independent of one another), and aliasing (when the effects due to multiple factors cannot be distinguished).

Interactions
One of the more important things that practitioners need to know about is that main factors may affect each other in ways known and unknown (i.e., interaction among effects). For example, the interaction of two reagents in a chemical process may be a significant driver of the overall process (think enzyme and substrate). In deciding which are important, statistically and physically, it is necessary to consult with the bench scientists and technicians to get a handle on what is already known and suspected to be important to the process. Too few factors risk missing something important. Including too many factors will render the screening more costly and lead to a lack of orthogonality due to aliasing. Aliasing occurs when two columns in the design (referred to by statisticians as a vector of the input space) are identical, or when one column is identical to another formed from the interaction of two columns (i.e., the vector or dot product of two columns).

Figure 1 presents a design table for a full factorial design in three factors. This design requires eight trials (rows) and has three factors (A, B, C) for which we can estimate main effects. It is also possible to estimate all possible two-way interactions with this design (AB, AC, BC) as well as the single three-way interaction ABC that is shown in the design table. If, later, the design is augmented with a fourth factor D (all runs not shown below), we have a problem. Now the contribution to any measured outcome (effect) from ABC is indistinguishable from D and, therefore, we do not know if the driver was D or ABC.

Figure 1: Aliasing in a simple design.

Run   A   B   C   ABC   D
1     -   -   -    -    -
2     -   -   +    +    +
3     -   +   -    +    +
4     -   +   +    -    -
5     +   -   -    +    +
6     +   -   +    -    -
7     +   +   -    -    -
8     +   +   +    +    +
Sum   0   0   0    0    0

The A, B, and C columns give the levels (coded +/-) of four experimental design factors. The ABC interaction column is formed as the dot product of the A, B, and C columns. Notice how each row in the ABC column is formed as a product of the corresponding levels of A, B, and C. In the analysis of such a design, the ABC interaction column would be used to estimate the corresponding ABC interaction effect.

The sums of the levels of the A, B, C, and ABC columns all equal zero. This means that the design is balanced. Balanced designs give more precise estimates of main and interaction effects than unbalanced designs.

Orthogonality
Further, the dot product of any two of the columns A, B, C, or ABC will also sum to zero (try it and see). This more subtle design characteristic is called orthogonality and is critical to good experimental design. To understand why orthogonality is so important, we return to our concept of aliasing. Aliasing is the extreme absence of orthogonality. It is impossible to separately estimate the effects corresponding to two design columns that are aliased. In a sense, such estimates are 100% correlated (the statistical term is confounded). In contrast, when two design columns are orthogonal, the corresponding effect estimates have zero correlation. This means that errors in estimating one effect do not, on average, bias our estimate of the other effect. Orthogonal designs prevent us from accidentally confusing the effects of two different factors.

DOE software provides us with orthogonal (or nearly orthogonal) screening designs in which the main effect and interaction columns are not aliased. These allow us to estimate the corresponding effects without worrying about possible correlations. This lack of correlation usually implies that the estimates are independent.

Figure 2 gives a good mental image of the value of orthogonality and the independence it provides in our estimates. Let the three mutually perpendicular (orthogonal) axes X, Y, and Z represent dimensions along which three estimates from some experimental design may lie. The results of an experiment may then be indicated as a point in that three-dimensional space. If we repeat the experiment many times, the resulting estimates will form a cluster of points in space, centered about the true effects being estimated. With orthogonal designs, the cluster of points will be spherical in shape, indicating a lack of correlation (or independence) among the estimates. In designs with aliasing, the points will fall along a one-dimensional line or two-dimensional plane that indicates a complete correlation of three or two effects, respectively. In between these extremes will be designs that will produce ellipsoid-shaped clusters whose axes are not parallel to X, Y, and Z. Such designs are less efficient than fully orthogonal designs and may result in misinterpretation of screening results.
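The balance, orthogonality, and aliasing properties claimed for Figure 1 can be checked numerically. The following is a minimal sketch (assumed, not taken from the article) using NumPy:

    # Rebuild the Figure 1 design and verify its properties.
    import numpy as np
    from itertools import product

    X = np.array(list(product([-1, 1], repeat=3)))  # columns A, B, C (8 runs)
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    ABC = A * B * C                                 # three-way interaction column
    D = ABC.copy()                                  # factor D set to the ABC levels

    print(A.sum(), B.sum(), C.sum(), ABC.sum())     # all zero: the design is balanced
    print(A @ B, A @ C, B @ C, A @ ABC)             # all zero: the columns are orthogonal
    print(np.array_equal(D, ABC))                   # True: D is aliased with ABC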

Similarly, when the columns in our experimental design are unaliased and when the vector products of any two columns sum to zero, the corresponding effect estimates are mutually independent. This means that random errors in estimating one effect will not (on average) bias the estimate of another effect. Designing for orthogonality is excellent protection against aliasing.

Figure 2: Orthogonality (three mutually perpendicular axes X, Y, and Z).

Aliasing
The concept of aliasing is also useful in constructing efficient fractional factorial designs. The design in Figure 1 includes an ABC interaction column. In the real world, two-way interaction effects may well be present, but three-way interactions such as ABC are often considered unlikely to be important. This is referred to as the sparsity of effects principle. Based on our knowledge of the process, we may be willing to assume that the ABC interaction effect is not present. In that case we might consider including factor D in our experiment and setting its levels to those indicated by ABC. In this way we can estimate the main effects of all four factors, although we sacrifice our ability to learn about the ABC interaction. This is an example of using aliasing as a design tool. We indicate this aliasing mathematically by writing ABC=D. This is not meant to imply that the effects of D are the same as the interaction of ABC, but only that our experiment cannot distinguish the main effect of D from the ABC interaction. We have now created a design with four factors in only eight trials (a full factorial design with four factors would have required 2^4=16 trials).

As this is an introduction, we will not delve into the more advanced concepts of saturation, efficiency, foldover, and the more complex screening designs. Any of the textbooks cited in the reference section may be consulted for these topics.

GENERAL TECHNIQUES
In most modern software it is a straightforward matter to design an experiment and analyze the results (with a little practice and much consulting of the manual). It's the decision as to what factors to include that is critical. It is strongly advised that you not throw in everything and the kitchen sink for fear of missing something. This is where consultation with subject matter experts like the process engineers and bench scientists familiar with the process or product is crucial.

Figure 3: A typical screening design.

Run   Pattern   X1   X2   X3   Y
1     ---       -1   -1   -1   *
2     --+       -1   -1    1   *
3     000        0    0    0   *
4     -+-       -1    1   -1   *
5     +-+        1   -1    1   *
6     ++-        1    1   -1   *
7     +++        1    1    1   *
8     +-+        1   -1    1   *
9     +--        1   -1   -1   *
10    --+       -1   -1    1   *
11    -++       -1    1    1   *
12    000        0    0    0   *
13    -+-       -1    1   -1   *
14    +--        1   -1   -1   *
15    -++       -1    1    1   *
16    ++-        1    1   -1   *
17    +++        1    1    1   *
18    ---       -1   -1   -1   *

Figure 3 is an example of an experimental design produced by the JMP software. The software allows fast generation of an experimental sheet showing the factors and their levels, plus a column for the results. This design has three input factors (X1-X3) and a single output (Y). The levels indicate low (-1), median (0), and high (1) levels of the factor. The trials for which all factors are at their median level are called center points and may represent target or control factor settings. The 000 points are the center points and are useful for testing linearity in the process; this is a simple and inexpensive way to check for curvature. We now have nine design points, each replicated twice to yield 18 runs. This design is a full factorial design in which each of the nine trials in the full factorial is replicated twice. Replication is sometimes needed to provide sufficient statistical power. These factor levels are obviously coded, but the actual numbers could be entered and more factors screened against multiple outputs, if desired.
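A layout like Figure 3 can be assembled directly: the eight corner runs of a 2^3 factorial plus a center point, each run twice, with the run order randomized. The following is a sketch of that construction (the article itself generated the design in JMP; the random seed below is an arbitrary choice):

    # Sketch: replicated 2^3 factorial with a center point, in random run order.
    from itertools import product
    import numpy as np
    import pandas as pd

    points = list(product([-1, 1], repeat=3)) + [(0, 0, 0)]        # 8 corners + 1 center
    design = pd.DataFrame(points * 2, columns=["X1", "X2", "X3"])  # 2 replicates = 18 runs
    design = design.sample(frac=1, random_state=1).reset_index(drop=True)  # randomize order
    design["Y"] = np.nan             # response column, filled in after the runs are made
    print(design)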

An Example
The medical diagnostics industry makes great use of DOE for myriad products. In this example, we will look at the design and analysis of a clinical laboratory kit pack for an unspecified analyte. We wish to determine and minimize the product variability given certain known conditions of reagent concentration and physical/environmental factors for the chemical reactions occurring in the kit pack. The input factors are as follows:
• Reagent 1
• Reagent 2
• Enzyme
• Temperature
• Mixing speed.

In this example, the two reagents are combined with the enzyme in a reaction vessel at a specified temperature. The mixture is agitated at several mixing speeds, and the researcher wants to know how important each of the factors (i.e., reagents, enzyme, temp, and mix speed) is to the concentration of the final product.

All are held at either low, median, or high levels. The single output (response) variable is the concentration of the product. We will minimize the variability of the concentration measurement (i.e., for the given inputs we wish to hold the measured concentration to be within a stated reference range with minimal variability). This reference range could be from the clinical range for that particular analyte in human fluids or dictated by the company quality document.

With five factors, the full factorial design would require 2^5 = 32 trials; however, that is about twice as many trials as our budget will allow. Also, our statistician informs us that the addition of center points in the design will increase efficiency, and replicates are necessary to evaluate random error. Our experiment will require four sessions to complete, and we are concerned that experimental conditions may vary from session to session. We would like to design the experiment in some way so that any session differences are not misconstrued as factor effects. So our statistician suggests the use of blocking in our design. Blocking is the arranging of experimental units (the runs) in groups, or blocks, that are similar to one another.

We block our designs to reduce random noise between sessions (in this case), as the experiment was carried out over several time periods and we expect the data within any one block to be more homogeneous than that between blocks. Thus, greater precision is obtained by doing within-block comparisons, as inter-block differences are eliminated. Also, runs are randomized within a block, and block order may be randomized if necessary. Note that although this screen is designed for maximal efficiency and precision, only main (first-order) effects may be estimated. After gaining management approval for the cost, we obtain the design using JMP software. Note that the block column here indicates the session in which to perform each trial (see Figure 4).

Notice that we have five factors and that a full factorial design would require 2^5 or 32 runs, yet our design only has 18. The reduction comes in an aspect of DOE called design efficiency. This is a measure of the efficiency of any given design in covering the design space, compared to a 100% efficient design that would cover all of the points needed to extract maximal information about the dependency of the results upon the input factors. Modern DOE software employs highly efficient algorithms, based upon some very complex mathematics, that will reduce the number of points needed while minimally impacting efficiency.

Having designed the experiment, we go into the lab and collect the data, then return to the JMP design table and fill in the Y column of the table (see Figure 5).

The data can be analyzed in JMP. The first step is to specify the model. The model can then be automatically generated as an effect screen using JMP's standard least squares platform. The model contains a list of all the factor effects that we wish (and are able) to estimate. Our estimates will include the five main factor effects, the block effect, and the two-way interaction between reagents 1 and 2 (see Figure 6). Running this model shows us an actual vs. predicted plot for the product (see Figure 7).

As the p-value from the analysis of variance shows that the model is very significant (p << 0.01), the R-squared is 92%, and the estimate of the standard deviation of the process noise (RMSE) is quite low, we are initially happy with the model fit. However, Figure 8 shows the parameter estimates that test the significance of the individual factors to the overall model.

Based upon the usual p-value cutoff of 0.05, only the effects of the two reagents were statistically significant. The intercept is usually not of interest in the physical sense of the model, as it only gives information about the 0 point of the x-axis.

As the chemists suspected, the reagents are quite important to the product and the mix speed was not. However, the lack of enzyme and temperature effects is surprising and may indicate a flaw in the design. Either the wrong values of those factors were chosen

Figure 4: Example problem screening design.

Pattern Block Reagent 1 Reagent 2 Enzyme Temp MixSpeed Y


1 00000 1 0 0 0 0 0 *
2 +--++ 1 1 -1 -1 1 1 *
3 ++--- 1 1 1 -1 -1 -1 *
4 -++-+ 1 -1 1 1 -1 1 *
5 --++- 1 -1 -1 1 1 -1 *
6 +-+-- 2 1 -1 1 -1 -1 *
7 +++++ 2 1 1 1 1 1 *
8 ----+ 2 -1 -1 -1 -1 1 *
9 -+-+- 2 -1 1 -1 1 -1 *
10 -++-+ 3 -1 1 1 -1 1 *
11 ++--- 3 1 1 -1 -1 -1 *
12 00000 3 0 0 0 0 0 *
13 +--++ 3 1 -1 -1 1 1 *
14 --++- 3 -1 -1 1 1 -1 *
15 ----+ 4 -1 -1 -1 -1 1 *
16 -+-+- 4 -1 1 -1 1 -1 *
17 +-+-- 4 1 -1 1 -1 -1 *
18 +++++ 4 1 1 1 1 1 *

(the range was not sufficiently wide to see the effect), the design was underpowered for those factors (power in this case is taken as: not enough runs to discern a difference given that one truly exists), or both. The one interaction and the block tests cannot truly be discerned (as to the true effect on product), as we do not have the resolution to do so with this design. We can test the power, however, for the surprises in the main effects (see Figure 9).

These are both low, and we would need more data to determine true lack of significance. For guidance on what constitutes adequate power, see the Tips, Tricks, and Pitfalls section. The Pareto plot (see Figure 10) shows the results graphically.

The prediction profiler (see Figure 11) shows us how to set factors to maximize our product and the mean and 95% confidence interval for the mean response.

Our task is now to determine the best settings for the temperature and enzyme factors and to augment the design with enough data to ensure adequate power. Once this is achieved, we can zero in on maximal setting and minimal variation using response surface methodology.

Other Designs
Fractional and full factorials are not the only screening designs available. The following are among the most popular:
• Plackett-Burman. Resolution is higher than simple factorial and this is a good technique for large numbers of input factors. It is also more flexible, as the runs need not be a power of 2. All columns are balanced and pairwise orthogonal.
• Box-Behnken. An efficient technique for modeling quantitative three-level factors. These designs hold some of the factors at their center points and are easily blocked.
• Box-Wilson. These are central composite designs that may be used when significant interaction is suspected. They have the desirable property of rotatability, where the predicted response may be estimated with equal variance in any direction from the center of the design space.

TIPS, TRICKS, AND PITFALLS
Experimental design is an art. There are many detailed techniques and lessons learned from years of experience that novices may have to gradually learn on their own, the hard way. The following comments are meant to minimize the pain for experimenters who wish to take advantage of these modern experimental tools.

Curvature
First and foremost is the fact that screening is a linear process and as such does not take curvature, and higher order interactions that generate curvature, into account.
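One simple way to act on this curvature warning is to compare the replicated center points against the factorial points before trusting a purely linear screening model. The sketch below is illustrative only; it is not from the article, and the response values are made-up placeholders:

    # Sketch: a rough center-point curvature check.
    import numpy as np
    from scipy import stats

    y_corners = np.array([1.9, 2.2, 1.7, 1.4, 2.0, 2.9, 1.0, 1.6])  # +/-1 factorial runs
    y_center = np.array([2.3, 2.5])                                  # replicated 000 runs

    # A center-point mean far from the factorial mean, relative to replicate noise,
    # suggests curvature that a first-order screening model will miss.
    diff = y_center.mean() - y_corners.mean()
    t_stat, p_value = stats.ttest_ind(y_center, y_corners, equal_var=False)
    print(diff, p_value)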

Figure 5: Test data.

Run  Pattern  Block  Reagent 1  Reagent 2  Enzyme  Temp  Mix Speed  Y
1 00000 1 0 0 0 0 0 2.4
2 +--++ 1 1 -1 -1 1 1 2
3 ++--- 1 1 1 -1 -1 -1 2.3
4 -++-+ 1 -1 1 1 -1 1 1.8
5 --++- 1 -1 -1 1 1 -1 1.5
6 +-+-- 2 1 -1 1 -1 -1 1.9
7 +++++ 2 1 1 1 1 1 3
8 ----+ 2 -1 -1 -1 -1 1 0.9
9 -+-+- 2 -1 1 -1 1 -1 1.6
10 -++-+ 3 -1 1 1 -1 1 1.9
11 ++--- 3 1 1 -1 -1 -1 2.6
12 00000 3 0 0 0 0 0 2.6
13 +--++ 3 1 -1 -1 1 1 2.2
14 --++- 3 -1 -1 1 1 -1 1.4
15 ----+ 4 -1 -1 -1 -1 1 1.1
16 -+-+- 4 -1 1 -1 1 -1 1.8
17 +-+-- 4 1 -1 1 -1 -1 2.1
18 +++++ 4 1 1 1 1 1 3.1

Figure 6: Data analysis.

Figure 7: Actual by predicted plot (Y Actual vs. Y Predicted; RSq = 0.92, RMSE = 0.253, P = 0.0019).
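The article's model fitting was done in JMP's standard least squares platform. As a rough open-source parallel (an assumption on my part, not the authors' workflow), the 18 runs listed in Figure 5 can be fit by ordinary least squares in Python; the column names below are simply my labels for the Figure 5 columns:

    # Sketch: effect screen for the Figure 5 data with statsmodels OLS.
    import pandas as pd
    import statsmodels.formula.api as smf

    rows = [  # Block, Reagent1, Reagent2, Enzyme, Temp, MixSpeed, Y
        (1, 0, 0, 0, 0, 0, 2.4), (1, 1, -1, -1, 1, 1, 2.0), (1, 1, 1, -1, -1, -1, 2.3),
        (1, -1, 1, 1, -1, 1, 1.8), (1, -1, -1, 1, 1, -1, 1.5), (2, 1, -1, 1, -1, -1, 1.9),
        (2, 1, 1, 1, 1, 1, 3.0), (2, -1, -1, -1, -1, 1, 0.9), (2, -1, 1, -1, 1, -1, 1.6),
        (3, -1, 1, 1, -1, 1, 1.9), (3, 1, 1, -1, -1, -1, 2.6), (3, 0, 0, 0, 0, 0, 2.6),
        (3, 1, -1, -1, 1, 1, 2.2), (3, -1, -1, 1, 1, -1, 1.4), (4, -1, -1, -1, -1, 1, 1.1),
        (4, -1, 1, -1, 1, -1, 1.8), (4, 1, -1, 1, -1, -1, 2.1), (4, 1, 1, 1, 1, 1, 3.1),
    ]
    df = pd.DataFrame(rows, columns=["Block", "Reagent1", "Reagent2",
                                     "Enzyme", "Temp", "MixSpeed", "Y"])

    # Five main effects, the session (block) effect, and the Reagent1 x Reagent2 interaction.
    fit = smf.ols("Y ~ Reagent1 + Reagent2 + Enzyme + Temp + MixSpeed"
                  " + Reagent1:Reagent2 + C(Block)", data=df).fit()
    print(fit.summary())    # e.g., the Reagent 1 coefficient comes out near 0.45

Because the coded factors are balanced across the sessions, the reagent coefficients should closely match the parameter estimates discussed in the text regardless of how the block term is coded.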

If the model renders a poor fit, and the graph of the data appears to suggest curvature (points clustered above and below the line), the experimenter is well advised to go to a more complex model (e.g., add a center point or quadratic). Also note that screening may underestimate the true number of main factors, depending upon choices and basic knowledge of the process. Lack-of-fit should also be carefully assessed, as by its very nature screening will evidence lack of fit for any curved process despite the fact that the selections of main factors were complete. Bottom line: there are problems with only looking at first order effects.

Randomization
One of the more important concepts in all of DOE is to randomize the order of runs. This averages over noise that may be due to a specific time of run, type of chemical prep, temperature, pressure, or other condition that may be concentrated in space and time. When runs may be dependent upon long physical startup times, the experimenter may choose to block certain factors.

Power
Here we refer to statistical power (i.e., the probability of detecting a difference if one truly exists). Most software will perform this test and it should never be ignored. In certain cases, if the power is found to be unacceptably low, it might dictate rerunning the entire experiment with a larger number of runs. Acceptable power is determined by the experimenter or may be dictated by company or governmental rules. As a rule of thumb: for purely research purposes, 70-80% may be adequate. For process development, 90+% is desirable, and in certain critical cases (e.g., HIV diagnostics) 98% may be the least that is acceptable.

Aliasing
As was mentioned, due to the low number of runs and resolution in most screening designs, we encounter situations where it is difficult to assign effects unambiguously to single factors. To get around this problem the experimenter may consider the following: select different design generators, increase the number of runs, or hold one or more factors constant.

Replication
One of the most important concepts in statistics is replication. The use of replication provides an estimate of the magnitude of noise. An adequate number of replicates will ensure a more precise estimate of the variation about a process mean, as well as facilitating the detection of true differences. For screening, we can usually afford but a single replicate, and in some cases it may not be necessary to replicate all of the points in a design. The heavy replication of points is usually left to the response surface methodologies, where we really need to get a sharper picture of the reduced design space. Replication is one of the most underappreciated necessities of statistical analyses and the one that will generate the most headaches if ignored.

SOFTWARE
There are numerous software products available to assist the practitioner in design and analysis of their experiments. The author has had experience with the following commercial packages:
• JMP (www.jmp.com)
• Design Expert (www.statease.com)
• MODDE (www.umetrics.com)
• Unscrambler (www.camo.no)
• Minitab (www.minitab.com)
• SYSTAT (www.systat.com)
• STATISTICA (www.statsoft.com)
• GenStat (www.vsni.co.uk).

CONCLUSIONS
Modern experimental design is sometimes art as well as science. It is the objective of this column to acquaint the reader with the rudiments of the screening design, introduce them to the nomenclature, and supplement the learning experience with a real-world example. Furthermore, as the process is not exact, several rules of thumb are given to provide guidance in those situations where hard and fast rules may not be available.

GENERAL REFERENCES
S.R. Schmidt and R.G. Launsby, Understanding Industrial Designed Experiments (4th ed.), Air Academy Press, 1997.
G.E.P. Box, J.S. Hunter, and W.G. Hunter, Statistics for Experimenters (2nd ed.), Wiley Interscience, 2005.
D.C. Montgomery, Design and Analysis of Experiments (5th ed.), John Wiley, 2001.
JMP Design of Experiments Guide, Release 7, SAS Institute Inc., 2007.
ECHIP Reference Manual, Version 6, ECHIP Inc., 1983-1993.
Deming, S.N., Quality by Design (Part 5), Chemtech, pp 118-126, Feb. 1990.
Deming, S.N., Quality by Design (Part 6), Chemtech, pp 604-607, Oct. 1992.
Deming, S.N., Quality by Design (Part 7), Chemtech, pp 666-673, Nov. 1992. JVT

Originally published in the Spring 2010 issue of The Journal of Validation Technology

First Steps in Experimental Design II: More on Screening Experiments
John A. Wass

Statistical Viewpoint addresses principles of statistics useful to practitioners in compliance and validation. We intend to present these concepts in a meaningful way so as to enable their application in daily work situations. The comments, questions, and suggestions of the readers are needed to help us fulfill our objective for this column. Please contact our coordinating editor Susan Haigney at shaigney@advanstar.com with comments, suggestions, or manuscripts for publication.

KEY POINTS
The following key points are discussed:
• Design of experiments (DOE) consists of three basic stages: screening to identify important factors, response surface methodology to define the optimal space, and model validation to confirm predictions.
• A critical preliminary step in the screening stage is for subject matter experts to identify the key list of factors that may influence the process.
• A DOE design consists of a table whose rows represent experimental trials and whose columns (vectors) give the corresponding factor levels. In a DOE analysis, the factor level columns are used to estimate the corresponding factor main effects.
• Interaction columns in a design are formed as the dot product of two other columns. In a DOE analysis, the interaction columns are used to estimate the corresponding interaction effects.
• When two design columns are identical, the corresponding factors or interactions are aliased and their corresponding effects cannot be distinguished.
• A desirable feature of a screening design is orthogonality, in which the vector products of any two main effect or interaction columns sum to zero. Orthogonality means that all estimates can be obtained independently of one another.
• DOE software provides efficient screening designs whose columns are not aliased and from which orthogonal estimates can be obtained.
• Fractional factorial screening designs include fewer trials and may be more efficient than the corresponding full factorial design.
• The concept of aliasing is one of the tools that can be used to construct efficient, orthogonal, screening designs.
• Center points are often included in screening designs to raise the efficiency and to provide a measure of replication error and lack of model fit.

For more Author information, go to gxpandjvt.com/bios

ABOUT THE AUTHOR
John Wass is a consulting statistician with Quantum Cats Consulting in the Chicago area as well as
a contributing editor at Scientific Computing and administrator of the Great Lakes JMP Users Group.
He may be reached by e-mail at john.wass@tds.net.

• The order of running and testing experimental trials is often randomized to protect against the presence of unknown lurking variables.
• Blocking variables (such as day or run or session) may be included in a design to raise the design efficiency.
• Factor effects in screening designs may be missed because they were not included in the screening experiment, because they were not given sufficiently wide factor ranges, because the design was underpowered for those factors, because trial order was not properly randomized or blocked, or because of an inadequate model.

INTRODUCTION
This article is the second in a series that deals with the specialized types of screening designs (1). These designs have been developed to most efficiently accept many inputs that may or may not be relevant to the final product and reduce this list to those few that are most important. Once the results are confirmed, the analyst proceeds to the response surface designs to map the fine detail in the area of optimal response (i.e., decide on the most desirable values of the inputs to get the optimal output of whatever is being manufactured or controlled). The three most important targets usually sought are optimal concentrations, variance reduction, and robustness.

THEORY
Most screening designs are class III (resolution III) designs, where main effects are not aliased (confounded) with each other, but the main effects are aliased with the two-way interactions. Factor levels are briefly discussed in the following sections. At this point the reader may wish to review the previous article in the series (1) to re-examine the importance of randomization and replication. Randomization ensures the independence of the observations. Replication assesses variation and more accurately obtains effect estimates.

Before the models (designs) are run, it may be advantageous to decide on design level, blocking (if any), and data transformation (if necessary). Let's examine transformations first, as this is a common problem.

Data Transformation
Transformations are usually employed to stabilize the response variance, make the distribution of the response variable more normal, or improve the fit of the model to the data (2). Note that more than one of these objectives may be simultaneously achieved, and the transformation is many times done with one of the power family (y* = y^λ, where λ is the transforming parameter to be determined; e.g., if λ = 1/2, take the square root of the response variable). The most useful has been found to be the Box-Cox procedure, which estimates λ and other model parameters simultaneously by the method of maximum likelihood. Modern software does this automatically. If the analyst prefers to choose the value of λ, simple values are preferred because, for example, the real-world differences between λ = 0.50 and λ = 0.58 may be small, but the square root is much easier to interpret. Also, if the optimal value of λ is determined to be close to one, no transformation may be necessary.

Blocking
It is often advantageous to minimize or eliminate variability contributed by factors of little or no interest even though they affect the outcome. These nuisance factors may be reduced by a technique called blocking. By grouping these nuisance variables and reducing system variability, the precision of factor (of interest) comparisons is increased. In the example of the chemical reaction in our previous article, if several batches of the substrate are required to run the design, and there is batch-to-batch variability due to supplier methodology, we may wish to block the substrate by supplier, thus reducing the noise from this factor. We tend to consider a block as a collection of homogeneous conditions. In this case, we would expect the difference between different batches to be greater than those within a single batch (supplier). Please note that if it is known or highly suspected that within-block variability is about the same as between-block variability, paired analysis of means will be the same regardless of which design may be used. The use of blocking here would reduce the degrees of freedom and lead to a wider confidence interval for the difference of means.

Figure 1: Screening design and test data.
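Returning to the transformation discussion above, the Box-Cox estimate of λ can be obtained by maximum likelihood with SciPy. This is a minimal sketch (assumed, not from the article); the response values are hypothetical:

    # Sketch: maximum likelihood estimate of the Box-Cox lambda.
    import numpy as np
    from scipy import stats

    y = np.array([0.8, 1.1, 1.9, 2.4, 3.6, 5.0, 7.9, 11.2])   # responses must be positive
    y_transformed, lam = stats.boxcox(y)

    print(lam)              # a value near 1 suggests no transformation is needed
    print(np.round(y_transformed, 3))

In practice, a simple nearby value of λ (for example 0.5, the square root) is often preferred over the exact estimate because it is easier to interpret, as noted above.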

Figure 2: Actual by predicted plot.

Parameter Estimates

Term Estimate Std Error t Ratio Prob>|t|


Intercept 2.00375 0.06 33.40 <.0001*
Reagent 1 0.45 0.063246 7.12 0.0001*
Reagent 2 0.3125 0.063246 4.94 0.0011*
Enzyme 0.1375 0.063246 2.17 0.0614
Temp 0.125 0.063246 1.98 0.0835
Mix Speed 0.05 0.063246 0.79 0.4520
Block[1] -0.00375 0.1 -0.04 0.9710
Block[2] -0.15375 0.107703 -1.43 0.1913
Block[3] 0.13625 0.1 1.36 0.2102
Reagent 1*Reagent 2 0.0375 0.063246 0.59 0.5696

Figure 3: Actual by predicted plot.

Parameter Estimates

Term Estimate Std Error t Ratio Prob>|t|


Intercept 2.0111111 0.059471 33.82 <.0001*
Reagent 1 0.45 0.063078 7.13 <.0001*
Reagent 2 0.3125 0.063078 4.95 0.0004*
Enzyme 0.1375 0.063078 2.18 0.0519
Temp 0.125 0.063078 1.98 0.0731
Mix Speed 0.05 0.063078 0.79 0.4447
Reagent 1*Reagent 2 0.0375 0.063078 0.59 0.5642

Figure 4: Actual by predicted plot.

Parameter Estimates
Term Estimate Std Error t Ratio Prob>|t|
Intercept 2.1291667 0.02533 84.06 <.0001*
Reagent 1 Biased 0.5 0.029806 16.78 <.0001*
Reagent 2 0.3125 0.021076 14.83 <.0001*
Enzyme 0.1375 0.021076 6.52 0.0003*
Temp[-1] -0.304167 0.030623 -9.93 <.0001*
Temp[0] 0.3583333 0.044432 8.06 <.0001*
Temp[-1]:Mix Speed[-1] Biased -0.1 0.042152 -2.37 0.0494*
Temp[1]:Mix Speed[-1] Zeroed 0 0 . .
Block[1] -0.0575 0.033984 -1.69 0.1345
Block[2] -0.1 0.036505 -2.74 0.0289*
Block[3] 0.0825 0.033984 2.43 0.0456*
Reagent 1*Reagent 2 0.0375 0.021076 1.78 0.1184

Factor Levels
The last preliminary item of importance is choosing factor levels. There are an infinite number of values for any continuous variable, and a restricted, but usually large, number for categorical variables. In general, when the objective is to determine the small number of factors that are important to the outcome or characterize the process, it is advisable to keep factor levels low; usually two works well. This is because we are designing k^F runs (F = number of factors, k = number of levels) in a factorial-type experiment, and as the levels of each factor rise, the number of runs increases dramatically. The drama intensifies further if interactions are included.

TECHNIQUES: THE DESIGNS
The following are three widely-used screening designs:
• Randomized blocks and fractional factorial designs
• Nested and split-plot designs
• Plackett-Burman (P-B) designs.

Randomized Blocks and Fractional Factorial Designs
As was stated, similar batches of relatively homogeneous units of data may be grouped. This grouping restricts complete randomization, as the treatments are only randomized within the block. By blocking we lose degrees of freedom, but we have eliminated sources of variability and hopefully gained a better understanding of the process. We cannot always identify these nuisance factors. But by randomization, we can guard against the effects of these factors, as their effects are spread or diluted across the entire experiment.

If we remember our chemical process experiment from the previous article, we had two reagents with two levels of an enzyme, temperature, and mix speeds. We added a center point to check for curvature and ran a single replicate for each point and blocked across four days (see Figure 1). We put all main factors plus an interaction into the model (see Figure 2).

Parameter estimates in Figure 2 told us that the blocks were not significant, and when we rerun the model without a block effect, we see the results in Figure 3.

Although parameter estimates in Figure 3 show little difference, the enzyme component is closer to significance. Again this may represent a power problem or design flaw (we needed a wider enzyme range). In this example, we may not have needed to block, but it is always wise to test if an effect is suspected or anomalous results are encountered.

Fold-over. There is a specialized technique within this group called fold-over. It is mentioned because the analyst may find it useful for isolating effects of interest. It is performed by switching certain signs in the design systematically to isolate these effects.
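A full fold-over can be written down directly: every sign in the original fraction is reversed and the mirror-image runs are appended. The sketch below is assumed, not taken from the article, and uses a 2^(3-1) half fraction with generator C = AB:

    # Sketch: full fold-over of a resolution III half fraction.
    import numpy as np

    X = np.array([[-1, -1,  1],     # columns A, B, C with C = A*B
                  [ 1, -1, -1],
                  [-1,  1, -1],
                  [ 1,  1,  1]])

    X_foldover = -X                       # reverse every sign
    X_combined = np.vstack([X, X_foldover])
    print(X_combined)                     # 8 runs; main effects are now free of
                                          # two-factor interaction aliases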

Figure 5: Residual by predicted plot.

Figure 6: Normal plot.

The signs are changed (reversed) in certain factors of the original design to isolate the one of interest in an anti-aliasing strategy. The name derives from the fact that it is a fold-over of the original design. The details are beyond the scope of this introductory article but may be found in standard references (2, 3).

Latin-square design. Yet another specialized technique used with fractional factorial designs is the Latin-square design. This design utilizes the blocking technique on one or more factors to reduce variation from nuisance factors. In this case the design is an n x n square, where the number of rows equals the number of columns. It has the desirable property of orthogonality (independence of the factors, great for simplifying the math and strengthening the conclusions). Unfortunately, the row and column arrangement represents restrictions on randomization, as each cell in the square contains one of the n letters corresponding to the treatments, and each letter can occur only once in each row and column. The statistical model for this type of design is an effects model and is completely additive (i.e., there is no interaction between the rows, columns, and treatments).

Saturated design. One last term that the novice may bump up against is the concept of a saturated design, unfortunately all too common in the industrial world. This refers to a situation where the analyst is attempting to include many variables in the model and has few runs (translating ultimately to too few degrees of freedom) to support the analysis. This allows for estimation of main effects only. In some cases, all interactions may be aliased with the main effects, thus condemning the analyst to missing important factors and interactions. If it is not possible to increase the number of runs, it is a good idea to call in subject

Nested and Split-Plot Designs
These designs are widely used in many industries. They introduce the need for random factor designation and the joys of variance component analysis. The former refers to those factors that may be taken as an adequate representative of a larger population. For example, if two instruments are used in an experiment to characterize performance because only two were available, the results may not be generalized to the population of 100 instruments that were manufactured, and the factor instrument is not random; therefore, we classify it as a fixed effect. If, however, 20 instruments were available and 7 or 8 were chosen at random, the results are much more likely to represent the population and the factor may be considered random. As the minimal number needed may be calculated from sampling theory and power studies, a statistician may be consulted if there are no industry recommendations.

Variance component analysis involves the calculation of expected mean squares of error to determine how much of the total system variance is contributed by each term in the model (including the error term).

Nested design. When levels of an effect B only occur within a single level of an effect A, then B is said to be nested within A (4). This may be contrasted with crossed effects, which are interactions (i.e., the results of one factor are dependent upon the level of another factor). Nested designs are sometimes referred to as hierarchical designs.

In our example of the chemical process, if we only had several temperatures and mix speeds available, we might wish to check the effects of using only certain mix speeds with certain temperatures. This is easily done by nesting mix speed within temperature, designated as mix speed [temp]. When the model is
matter experts (SMEs) (usually chemists or engineers) analyzed this way, we get the results in Figure 4.
to assist in eliminating variables. The fit is better only because we now have


categorical variables, less to fit, and many other factors are significant. We have, however, lost degrees of freedom by nesting terms, and this may negatively affect power. We would then use residual (error) analysis as our diagnostic tool, followed by standard checks such as normal probability plots, outlier checks, and plotting the residuals versus fitted values (see Figures 5 and 6).
Both of the diagnostics in Figures 5 and 6 exhibit problems (i.e., increasing residuals with predicted values and many off-axis values on the normal plot). These may be due to singularities during calculations (e.g., terms going to infinity or division by zero). We may wish to increase the runs to see if the increasing degrees of freedom will stabilize the values.
Split-plot design. In some experiments, due to real-world complications, the run order may not be amenable to randomization and we need to use a generalization of the factorial design called the split-plot design. The name refers to the historical origins of the design in agriculture and posits splitting some factor into sub-factors due to some problem with running a full factorial design or data collection method (e.g., different batches on different days). Therefore, we are running the experiment as a group of runs where within each group some factors (or only one) remain constant. This is done as it may be very difficult or expensive to change these factors between runs. In our example, we can declare the enzyme prep and temperature as difficult to change. The software then optimally designs the experiment around 5 plots in just 10 runs, far fewer than even a fractional factorial design (see Figure 7).
It declares only the plots as random so they take up all of the variance. The plots are split by the enzyme and mix speed, as these have been declared hard to change and are the subplots. As we have only the one random effect, we test all others as fixed effects (see Table).

Table: Fixed effect tests.
Source  Nparm  DF  DFDen  F Ratio  Prob > F
Reagent1  1  1  2  243.0000  0.0041*
Reagent2  1  1  2  14.0833  0.0087*
Enzyme  1  1  3  0.0388  0.8564
Temp  1  1  3  0.0580  0.8252
Mix Speed  1  1  2  2.0833  0.2857

The results are essentially the same as for the fractional factorial, as the design may contain similar flaws.

Plackett-Burman (P-B) Designs
P-B designs are specialized screening designs where the number of runs is not required to be powers of two. If there are funds for extra runs whose number does not increase by a power of two, these designs are ideal, as they also generate columns that are balanced and pairwise orthogonal. They are based upon a very flexible mathematical construct called a Hadamard matrix, where the number of runs increases as a multiple of four and thus will increase much more slowly than the fractional factorial. Note that these are Resolution III designs where the main effects are not aliased with each other but are aliased with any two-way interactions. The great advantage of these designs is the ability to evaluate many factors with few runs. The disadvantages involve the assumptions made (i.e., that any interactions are not strong enough to mask main effects and that any quadratic effects are closely related to factor linear effects). Although these assumptions usually hold, it is always best to try to verify them with any available diagnostics.
Again, for our system the P-B design is seen in Figure 8. The analysis results are seen in Figure 9.
It appears that P-B may not be a good design choice here, as it requires more runs and is less sensitive to the present data structure than simpler designs.

SYNOPSIS: BUT WHICH DO I USE?
The following provides a pathway for application and selection of appropriate screening designs. The following questions are addressed:
• Which screening design should be used when resources are a consideration?
• Which screening design should be used when flexibility is needed regarding variables to be tested?
• What are the advantages of the respective designs regarding special needs (e.g., reducing noise, blocking, and other needs)?

Figure 10 is a flow diagram that may be used to answer these questions.

SOFTWARE
There are numerous software products available to assist the practitioner in design and analysis of their experiments. The author has had experience with the following commercial packages:
• Design Expert (www.statease.com)
• GenStat (www.vsni.co.uk)
• JMP (www.jmp.com)
• Minitab (www.minitab.com)
• MODDE (www.umetrics.com)
• STATISTICA (www.statsoft.com)
• SYSTAT (www.systat.com)
• Unscrambler (www.camo.no).
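As a concrete illustration of two ideas discussed above, the sign reversal behind fold-over and the Hadamard-matrix construction behind P-B designs, here is a minimal Python sketch. It is a generic example under assumed column choices, not code or a design from the article.

# Minimal sketch: build an 8-run two-level design from a Hadamard matrix and
# form its full fold-over by reversing every sign (generic illustration).
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)              # 8 x 8 matrix of +1/-1 entries
design = H[:, 1:5]           # drop the all-ones column; keep four balanced,
                             # pairwise-orthogonal columns as factors A-D
foldover = -design           # fold-over: reverse all signs of the original runs
combined = np.vstack([design, foldover])   # 16 runs in total

print(combined.sum(axis=0))  # all zeros: each factor column stays balanced
print(combined.T @ combined) # diagonal matrix: the columns remain orthogonal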


Figure 7: Screening design and test data.

Figure 8: Plackett-Burman design.


Figure 9: Plackett-Burman design results.


Figure 10: Screening design decision matrix.
[Flow diagram: If resources (time/money/personnel) are a concern, use a fractional factorial or Plackett-Burman design; if they are not, a full factorial may be used. If flexibility in run numbers is needed, use Plackett-Burman; if not, a fractional factorial. For special needs: to reduce extraneous noise, use blocking or Latin squares; to isolate aliased effects, use fold-over; to allow for embedded levels, use a nested design; when the run order cannot be randomized, use a split plot.]

CONCLUSIONS
Modern experimental design is sometimes art as well as science. It is the objective of this column to acquaint the reader with the rudiments of the screening design, introduce them to the nomenclature, and supplement the learning experience with a real-world example.

REFERENCES
1. Wass, John A., Statistical Viewpoint: First Steps in Experimental Design - The Screening Experiment, Journal of Validation Technology, Volume 16, Number 2, Spring 2010.
2. D.C. Montgomery, Design and Analysis of Experiments (5th ed.), John Wiley, 2001.
3. G.E.P. Box, J.S. Hunter, and W.G. Hunter, Statistics for Experimenters (2nd ed.), Wiley Interscience, 2005.
4. JMP Design of Experiments Guide, Release 7, SAS Institute Inc., 2007.

GENERAL REFERENCES
S.R. Schmidt and R.G. Launsby, Understanding Industrial Designed Experiments (4th ed.), Air Academy Press, 1997.
G.E.P. Box, J.S. Hunter, and W.G. Hunter, Statistics for Experimenters (2nd ed.), Wiley Interscience, 2005.
D.C. Montgomery, Design and Analysis of Experiments (5th ed.), John Wiley, 2001.
JMP Design of Experiments Guide, Release 7, SAS Institute Inc., 2007.
ECHIP Reference Manual, Version 6, ECHIP Inc., 1983-1993.
Deming, S.N., Quality by Design (Part 5), Chemtech, pp 118-126, Feb. 1990.
Deming, S.N., Quality by Design (Part 6), Chemtech, pp 604-607, Oct. 1992.
Deming, S.N., Quality by Design (Part 7), Chemtech, pp 666-673, Nov. 1992. JVT

ARTICLE ACRONYM LISTING


DOE Design of Experiments
P-B Plackett-Burman
SME Subject Matter Experts

Originally published in the Winter 2011 issue of The Journal of Validation Technology


A Further Step in
Experimental Design (III):
The Response Surface
John A. Wass

Statistical Viewpoint addresses principles of statistics useful to practitioners in compliance and validation. We intend to present these concepts in a meaningful way so as to enable their application in daily work situations.
Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send any comments to managing editor Susan Haigney at shaigney@advanstar.com.

KEY POINTS
The following key points are discussed:
• Design of experiments (DOE) consists of three basic stages: screening (to identify important factors), response surface methodology (to define the optimal space), and model validation (to confirm predictions).
• A critical preliminary step in the screening stage is for subject matter experts to identify the key list of factors that might influence the process.
• A DOE design consists of a table whose rows represent experimental trials and whose columns (vectors) give the corresponding factor levels. In a DOE analysis, the factor level columns are used to estimate the corresponding factor main effects.
• Interaction columns in a design are formed as the dot product of two other columns. In a DOE analysis, the interaction columns are used to estimate the corresponding interaction effects.
• When two design columns are identical, the corresponding factors or interactions are aliased and their corresponding effects cannot be distinguished.
• The order of running and testing experimental trials is often randomized to protect against the presence of unknown lurking variables.
• Blocking variables (e.g., day or run or session) may be included in a design to raise the design efficiency.
• Factor effects may be missed because they were not included in the original screening experiment, because they were not given sufficiently wide factor ranges, because the design was underpowered for those factors, because trial order was not properly randomized or blocked, or because of an inadequate model.
• Unusual interactions and higher-order effects occasionally may be needed to account for curvature and work around regions of singularity.
• Where there are inequality constraints (e.g., areas where standard settings will not work), special designs are needed.
• The designs may become rather challenging, and a statistician becomes an invaluable part of the team when considering problems of non-normal responses, unbalanced data, specialized covariance structures, and unusual or unexpected physical or chemical effects.
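The column-product and aliasing points above can be made concrete with a small sketch (not from the article): the interaction column is the run-by-run product of two factor columns, and when that product coincides with another design column the two effects cannot be separated.

# Minimal sketch: an interaction column and aliasing in a 4-run half fraction
# (generic illustration; the generator choice is arbitrary).
import numpy as np

A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
C = A * B                     # generator C = AB defines the half fraction
AB = A * B                    # interaction column: run-by-run product of A and B
print(np.array_equal(AB, C))  # True: the AB column is identical to the C column,
                              # so the AB interaction is aliased with factor C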

ABOUT THE AUTHOR
John A. Wass is a consulting statistician with Quantum Cats Consulting in the Chicago area, as
well as a contributing editor at Scientific Computing, and administrator of the Great Lakes JMP Users
Group. He served for a number of years as a senior statistician at Abbott Laboratories (in both phar-
maceuticals and diagnostics). John may be contacted by e-mail at john.wass@tds.net.


Figure 1: Surface point. [Three-dimensional response surface of abrasion plotted against silica and silane.]
Figure 2: Box-Behnken design. [Design-space diagram with coded levels -1 and +1.]
Figure 3: Central composite design. [Design-space diagram.]

Figure 4: CCD data.
Run  Pattern  Reagent 1  Reagent 2  Enzyme  Temp  Mix Speed  Y
1  a0000  -2  0  0  0  0  1.2
2  -+-+-  -1  1  -1  1  -1  0.9
3  0A000  0  2  0  0  0  1.35
4  0000a  0  0  0  0  -2  1.4
5  A0000  2  0  0  0  0  1.85
6  -++++  -1  1  1  1  1  2.6
7  +--+-  1  -1  -1  1  -1  1.5
8  ---++  -1  -1  -1  1  1  1.2
9  ++---  1  1  -1  -1  -1  1.3
10  ++++-  1  1  1  1  -1  4
11  000A0  0  0  0  2  0  2.6
12  00a00  0  0  -2  0  0  0.4
13  +-+--  1  -1  1  -1  -1  2
14  00A00  0  0  2  0  0  1.8
15  000a0  0  0  0  -2  0  0.6
16  -++--  -1  1  1  -1  -1  1.1
17  -+--+  -1  1  -1  -1  1  0.9
18  0000A  0  0  0  0  2  0.8
19  +---+  1  -1  -1  -1  1  1.5
20  00000  0  0  0  0  0  2.2
21  ++-++  1  1  -1  1  1  2.5
22  --++-  -1  -1  1  1  -1  2.3
23  0a000  0  -2  0  0  0  1.5
24  +-+++  1  -1  1  1  1  2.7
25  +++-+  1  1  1  -1  1  3.3
26  00000  0  0  0  0  0  3
27  -----  -1  -1  -1  -1  -1  0.9
28  --+-+  -1  -1  1  -1  1  2

INTRODUCTION
Response surface methodology (RSM) is the development of the specific types of special designs to most efficiently accept a small number of inputs (relative to screening designs) that are known to be relevant to the final product and optimize a process result to a desired target (1). Once the results are confirmed, the analyst's load becomes lighter (excepting in the case of non-reproducibility or results drifting out of specification). In effect, the response surface maps the fine detail in the area of optimal response (i.e., determines the most desirable values of the inputs to get the optimal output of whatever is being manufactured, controlled, or studied). The three most important targets usually sought are optimal concentrations, variance reduction, and robustness (2). The adequacy of the model


is most often checked by residual analysis, influence diagnostics, and lack-of-fit testing (3). JMP 9 is utilized herein for the design and analysis of an industrial example (4).

THEORY
Many response surface designs are collections of specialized statistical and mathematical techniques that have been well implemented in software using efficient algorithms (4, 5). In many real world cases the output includes more than one response, and these need not be continuous functions. Let's examine the case of a chemical engineer who wishes to maximize an important property (y) based on given levels of two chemical inputs, (x1) and (x2). The desired property is now a function of the two chemical entities plus error (ε), as follows:

y = f(x1, x2) + ε

The surface itself (the expected response) is represented by the following:

η = f(x1, x2)

The response surface is usually displayed graphically as a smoothly curving surface, a practice that may obscure the magnitude of local extremes (see Figure 1).
In many problems using RSM, the experimenter does not know the exact mathematical form of the relationship between the input and output variables and, therefore, must find a workable approximation. The first guesstimate is a low order polynomial (e.g., first order model), as follows:

y = β0 + β1x1 + β2x2 + ... + βnxn + ε

Obviously, this is a linear model that will not accommodate curvature. If curvature is suspected, a higher order polynomial may be tried. The following is a second order model:

y = β0 + Σ βi xi + Σ βii xi² + ΣΣ βij xi xj + ε

where we sum over i, and over i < j in the cross-product term. The above two models are found to work well in a variety of situations. However, although these may be reasonable approximations over the entire design space, they will not be an exact fit in all regions. The goal is to find a smaller region of interest where they fit well. A proper experimental design will result in the best estimate of the model parameters. The response surface is usually fitted by a least squares procedure, and optimal values are pursued through sequential algorithms such as the method of steepest ascent. The details of these algorithms are beyond the scope of this series.

Figure 5: Actual by predicted plot. [Y actual versus Y predicted; P=0.0275, RSq=0.63, RMSE=0.6706.]
Summary of Fit: RSquare 0.626836; RSquare Adj 0.407328; Root Mean Square Error 0.670639; Mean of Response 1.764286; Observations (or Sum Wgts) 28.
Analysis of Variance: Model: DF 10, Sum of Squares 12.843426, Mean Square 1.28434, F Ratio 2.8556, Prob > F 0.0275*. Error: DF 17, Sum of Squares 7.645859, Mean Square 0.44976. C. Total: DF 27, Sum of Squares 20.489286.
Lack of Fit: Lack of Fit: DF 14, Sum of Squares 4.8958594, Mean Square 0.349704, F Ratio 0.3815, Prob > F 0.9083. Pure Error: DF 3, Sum of Squares 2.7500000, Mean Square 0.916667. Total Error: DF 17, Sum of Squares 7.6458594. Max RSq 0.8658.

Another caveat is the danger of using the software tools with an incomplete understanding of the problem. It is sometimes tempting to overlook expert advice from the chemists or engineers and try to force-fit a simple solution. As in the real world, we often find extremely messy data. It might be necessary to use a cubic factor, a three-way interaction, or a highly-customized design. At this point the reader may wish to review the previous article in the series (Journal of Validation Technology, Winter 2011) to re-examine the use of data transformations, blocking, and factor levels.
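To make the first- and second-order models above concrete, the sketch below builds the quadratic model matrix for two factors and fits it by least squares in Python. The data are synthetic placeholders generated only for illustration; the article's own fits were done in JMP 9 on the Figure 4 design.

# Minimal sketch: least squares fit of a two-factor second-order model
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 + error
# (synthetic placeholder data, not the article's results).
import numpy as np

rng = np.random.default_rng(1)
x1 = np.array([-1, 1, -1, 1, -1.4, 1.4, 0, 0, 0, 0])   # CCD-style points
x2 = np.array([-1, -1, 1, 1, 0, 0, -1.4, 1.4, 0, 0])
y = 2 + 0.5*x1 + 0.3*x2 - 0.2*x1**2 + 0.4*x1*x2 + rng.normal(0, 0.1, x1.size)

M = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
pred = M @ coef
print(np.round(coef, 3))                               # estimated coefficients
print(round(float(np.corrcoef(y, pred)[0, 1]**2), 3))  # unadjusted R-squared

The fitted surface can then be explored numerically, for example by a grid search or by the method of steepest ascent mentioned above, to locate promising settings.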


TECHNIQUES: THE DESIGNS
There are a variety of specialized designs and analytic techniques available for RSM (6). In this review, we will concentrate on four of the more popular (i.e., commonly used). One will be chosen to analyze standard but messy industrial data.

Figure 6: Residual by predicted plot. [Y residual versus Y predicted.]

Figure 7: Sorted parameter estimates.
Term  Estimate  Std Error  t Ratio  Prob>|t|
Enzyme  0.5041667  0.136894  3.68  0.0018*
Temp  0.3625  0.136894  2.65  0.0169*
Reagent1  0.3416667  0.136894  2.50  0.0231*
Enzyme*Enzyme  -0.108594  0.132547  -0.82  0.4240
Reagent2  0.0916667  0.136894  0.67  0.5121
Enzyme*Temp  0.10625  0.16766  0.63  0.5347
Reagent1*Enzyme  0.06875  0.16766  0.41  0.6869
Reagent2*Reagent2  -0.027344  0.132547  -0.21  0.8390
Reagent1*Temp  0.03125  0.16766  0.19  0.8543
Reagent1*Reagent1  -0.002344  0.132547  -0.02  0.9861

Figure 8: Normal plot. [Normalized estimates (Orthog t) versus normal quantiles; Enzyme, Temp, and Reagent1 fall off the reference lines. Blue line is Lenth's PSE, from the estimates population. Red line is RMSE, Root Mean Squared Error from the residual.]

Factorial Designs
These are one of the most common designs used in situations where there are several factors and the experimenter wishes to examine the interaction effects of these on the response variable(s). The class of 2^k factorial designs is often used in RSM and finds wide application in the following three areas (6):
• As a screening design prior to the actual response surface experiments
• To fit a first-order response surface model
• As a building block to create other response surface designs.
For more details on these designs, the reader is again referred to the article on screening designs in this series.

D-Optimal Designs
This is a class of designs based on the determinant of a matrix, which has an important relationship to the moment matrix (M):

M = X′X/N,

where X′ is the transpose of X, the design matrix of inputs, and N is the number of rows in X.
It is noted that the inverse of the moment matrix, M⁻¹ = N(X′X)⁻¹, contains the variances and covariances of the regression coefficients scaled by N divided by the variance. Therefore, control of the moment matrix implies control of the variances and covariances, and here is the value of these D-optimal designs (5).

Box-Behnken Designs
These are three-level designs for fitting response surfaces. They are made by combining 2^k factorial designs with incomplete block designs. They possess the advantages of run efficiency (i.e., a small number of required runs) and rotatability. This last property ensures equivalence of variance for all points in the design space equidistant from the center. The design is constructed by placing each factor, or independent variable, at one of three equally spaced values from the center. By design then, the estimated variances will depend upon the distance of the points from the center (Figure 2).
It is noticed that the Box-Behnken design has no points at the vertices or corners of the design space. This is a plus when it is desired to avoid those areas because of engineering constraints but unfortunately places a higher prediction uncertainty in those areas.

Central Composite Designs
Central composite design (CCD) is a popular and


efficient design for fitting a second-order model (Figure 3). The CCD most often consists of a 2^k factorial (n factorial runs), 2k axial runs, and nc center runs. The experimenter must specify the distance from the axial runs to the design center (this is done by many software platforms) and the number of center points. As it is desirable to have a reasonably stable variance at the points of interest, the design should have the property of rotatability.

Figure 9: Prediction profiler. [Profiler traces for Reagent1, Reagent2, Enzyme, and Temp; desirability 0.98832 at the settings Reagent1 0.978, Reagent2 0.022, Enzyme 1, Temp 1.]
Figure 10: Surface plot. [Predicted Y over Reagent1 and Enzyme.]
Figure 11: Surface curvature. [Rotated view of the Figure 10 surface showing its curvature.]

CCD: An Example of the Response Surface Technique
For an example, the use of the central composite design is employed in yet another industrial chemical process. The chemists are now concerned with a formulation that is only slightly different than that used in the screening design, and concentrations have changed due to knowledge gathered from the first series of experiments. We now wish to use this new knowledge to optimize the process. The experiment is designed in JMP 9 software, then run, and the data appear as seen in Figure 4.
The design includes 28 runs with a center point as well as face centers on the design hypercube (there are more than three dimensions). When these data are analyzed, it is apparent that, while the model is significant (analysis of variance [ANOVA] p = 0.0275) and there is no significant lack of fit (p = 0.9083), the adjusted R2 is low (~0.41) and more terms might need to be added to the model (Figure 5).
There is some confidence in the integrity of the data as the plot of errors (by increasing Y value) shows randomness rather than any defining patterns that would indicate a problem (Figure 6).
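The adjusted R2 of about 0.41 quoted above follows directly from the whole-model R2, the 28 observations, and the 10 model degrees of freedom reported with Figure 5. A one-line check in Python:

# Check of the adjusted R-squared for the CCD fit summarized in Figure 5:
# R2_adj = 1 - (1 - R2) * (n - 1) / (n - 1 - p)
r2, n, p = 0.626836, 28, 10
print(round(1 - (1 - r2) * (n - 1) / (n - 1 - p), 6))  # 0.407328, matching the
                                                       # Summary of Fit value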


Figure 12: Data table output.
Run  Reagent1  Reagent2  Enzyme  Y
1  L2  L3  L1  2.1
2  L1  L2  L2  1
3  L1  L2  L1  0.8
4  L3  L3  L1  2
5  L1  L1  L3  1.7
6  L2  L1  L1  1.6
7  L2  L3  L3  2.4
8  L3  L2  L2  2.4
9  L3  L1  L2  2.2
10  L1  L2  L3  1.2
11  L3  L3  L2  2.5
12  L3  L2  L1  1.9
13  L2  L3  L2  2.2
14  L3  L1  L3  2.8
15  L3  L3  L3  3.1

Figure 13: Three reagents and a Reagent1*Enzyme interaction. [Actual by predicted plot: P=0.0143, RSq=0.97, RMSE=0.217.]
Summary of Fit: RSquare 0.967811; RSquare Adj 0.887338; Root Mean Square Error 0.21696; Mean of Response 1.993333; Observations (or Sum Wgts) 15.
Analysis of Variance: Model: DF 10, Sum of Squares 5.6610476, Mean Square 0.566105, F Ratio 12.0265, Prob > F 0.0143*. Error: DF 4, Sum of Squares 0.1882857, Mean Square 0.047071. C. Total: DF 14, Sum of Squares 5.8493333.

It appears from the sorted parameter estimates that what the chemists had long suspected was true (i.e., the importance of Reagent 1, the enzyme, and the reaction temperature to the output). See Figure 7. This is also seen on the normal plot, as these three parameters are flagged (Figure 8).
The interaction profiles show evidence of an (expected) putative interaction between the enzyme and reaction temperature, but this did not appear significant from the parameter estimates.
Use of such tools as prediction profilers will allow the optimization of the process, which in this case, as seen by the position of the dotted red lines, is accomplished by maximizing the values of Reagent 1, enzyme, and temperature. Reagent 2 does not seem important in this respect (flat slope) (Figure 9).
The design space is easily visualized on the surface plot, which corroborates the results of the profiler (Figure 10).
When the figure is rotated, the surface curvature is seen. It is apparent from the absence of lack of fit that the inclusion of interaction and quadratic terms in the model was justified (Figure 11).
Although there is good evidence that the model is adequate, the low R2 suggests that input values may need to change, or more likely, factors may need to be added. There are many ways to model this system. A custom design may simplify the system.

Custom Designing
The chemists are comfortable with setting the temperature at a level that was determined as optimal in both theoretic calculations and previous batch experience. They now wish to zero in on concentrations for the two reagents and the enzyme. In most modern software with experimental design modules, it is possible to roll your own custom design to non-standard specifications. Suppose the new data set consists of the two reagents and the enzyme, with the reaction temperature and mix speed held at the pre-determined optima. The software designs a 15-run experiment, and the analyst fills in the data table output (Figure 12).
The model is then analyzed with the three reagents and a Reagent1*Enzyme interaction as seen in Figure 13.
The adjusted R2 is now ~0.89, evidencing a good fit, and the ANOVA indicates a significant model (p=0.014). In linear regression with a standard least squares fitting model, significance is a test of zero slope (i.e., a steep slope indicating good predictive


ability). The effects test confirms that Reagent1 and the enzyme are most important to the reaction (Figure 14). Chemistry may dictate that Reagent2 is also important, but any of the concentrations used in the experiment will work. The exact figure will be based on convenience and cost. Although there is no overall significance to the Reagent1*Enzyme interaction, the enzyme is necessary. Opening up the range of concentrations used would demonstrate this.

Figure 14: Effects test.
Source  Nparm  DF  Sum of Squares  F Ratio  Prob > F
Reagent1  2  2  2.5261587  26.8332  0.0048*
Reagent2  2  2  0.1583810  1.6823  0.2950
Enzyme  2  2  0.9055974  9.6194  0.0296*
Reagent1*Enzyme  4  4  0.1200593  0.6376  0.6632

Figure 15: Residual by predicted plot. [Y residual versus Y predicted.]
Figure 16: Prediction profiler. [Profiler traces for Reagent1, Reagent2, and Enzyme with desirability maximized at the L3 level of each factor.]
Figure 17: Maximal output. [Stair-step surface of predicted Y over the Reagent1 and Enzyme levels.]


Again, a quick check of data integrity is evidenced by the random pattern of the residual by predicted plot (Figure 15).
With this model, it is possible to produce an output that overlaps that of the larger CCD model, and this was accomplished in just 15 runs (Figure 16).
The response surface here is a stair-step because of the linear nature of this model with non-continuous inputs. It again illustrates that maximal output is achieved through maximal concentrations of Reagent1 and the enzyme for the ranges used in this experiment (Figure 17).

SOFTWARE
There are numerous software products available to assist the practitioner in design and analysis of their experiments. The author has had experience with the following commercial packages:
• Design Expert (www.statease.com)
• GenStat (www.vsni.co.uk)
• JMP (www.jmp.com)
• Minitab (www.minitab.com)
• MODDE (www.umetrics.com)
• STATISTICA (www.statsoft.com)
• SYSTAT (www.systat.com)
• Unscrambler (www.camo.no).

CONCLUSIONS
Modern experimental design is sometimes art as well as science. It is the objective of this column to acquaint the reader with the rudiments of the screening and response surface designs, introduce them to the nomenclature, and supplement the learning experience with real-world examples.

REFERENCES
1. S.R. Schmidt and R.G. Launsby, Understanding Industrial Designed Experiments (4th ed.), Air Academy Press, 1997.
2. G.E.P. Box, J.S. Hunter, and W.G. Hunter, Statistics for Experimenters (2nd ed.), Wiley Interscience, 2005.
3. D.C. Montgomery, Design and Analysis of Experiments (5th ed.), John Wiley, 2001.
4. JMP 9 Design of Experiments Guide, SAS Institute Inc., 2010.
5. ECHIP Reference Manual, Version 6, ECHIP Inc., 1983-1993.
6. R.H. Myers and D.C. Montgomery, Response Surface Methodology, John Wiley, 1995. JVT

Originally published in the Autumn 2011 issue of The Journal of Validation Technology


Estimation: Knowledge
Building with Probability
Distributions
David LeBlond

This column presents statistical principles useful to practitioners in compliance and validation. We hope it will become a useful resource. Barriers to the understanding and use of statistical principles include the complex mathematics, abstract ideas, relationship to real applications, required assumptions, unfamiliar terminology, and Greek symbols. This column presents statistical principles with these barriers in mind. It is our challenge to explain statistical concepts in a meaningful way so that our readers understand these important topics and are able to apply statistical concepts to work situations.
The first issue of Statistical Viewpoint (1) presented basic probability distribution concepts and Microsoft Excel tools useful in statistical calculations. The second (2) reinforced these concepts and tools through eight scientific decision-making examples. In this issue, the scientific knowledge-building process will be illustrated from a statistical estimation point of view.
Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Suggestions for future discussion topics or questions to be addressed are requested. Discussion topics and case studies demonstrating applications of statistics to problems in validation and compliance submitted by readers are also welcome. We need your help to make Statistical Viewpoint a useful resource. Please email your comments and suggestions to the coordinating editor at shaigney@advanstar.com.

KEY POINTS DISCUSSED
Good decision-making is a key element in the assurance of pharmaceutical product safety and efficacy throughout the product life cycle. Good decisions require process knowledge. Process knowledge consists of a statement of a predictive model for the process plus estimates of the underlying model parameters. Three types of estimates are identified: point, interval, and distributional estimates. Distributional estimates are most desirable. Process modeling is an iterative process that requires combining prior knowledge with new data. Prior knowledge is subjective, and knowledge experts can disagree. Modern statistics provides good tools for managing the modeling activity. This issue of Statistical Viewpoint presents three knowledge-building illustrations that highlight the central role probability distributions can play. They include estimation of a tablet defect proportion, unit dose content uniformity, and adverse event rate monitoring. The illustrations utilize probability distribution functions available in Excel, which is readily available and provides an easy method for calculations. A list of Excel distribution functions and their corresponding applications is provided in this paper and in most previous issues of Statistical Viewpoint (1, 2). These tools enable quantifiable estimation and predictions for common pharmaceutical problems.

ABOUT THE AUTHOR
David LeBlond, Ph.D., has 29 years experience in the pharmaceutical and medical diagnostics
fields. He is currently a principal research statistician supporting analytical and pharmaceutical
development at Abbott. David can be reached at David.LeBlond@abbott.com.


Figure 1: Expressing process knowledge through a probability distribution. [Two probability density curves plotted against the model parameter or prediction value: a broad curve labeled Less knowledge and a narrow curve labeled More knowledge.]

KNOWLEDGE BUILDING IS AN INTEGRAL PART OF PHARMACEUTICAL DEVELOPMENT
Good decision-making is increasingly recognized as an essential component in the validation of pharmaceutical processes during the product life cycle. ICH Q8 (3) states the following about the output of pharmaceutical development studies (italicized emphasis added):
... the applicant should demonstrate an enhanced knowledge of product performance ... This understanding can be gained by application of, for example, formal experimental designs, process analytical technology (PAT), and/or prior knowledge ... It should be recognized that the level of knowledge gained, and not the volume of data, provides the basis for science-based submissions and their regulatory evaluation.
What is meant by the terms knowledge and prior knowledge? How can knowledge (prior or otherwise) be objectively quantified? How do we track the level of knowledge gained? What role does data play in the knowledge-building process? These questions are addressed in the following paragraphs.
In the previous installments of Statistical Viewpoint (1, 2), it was suggested that a useful knowledge metric is the certainty by which we are able to predict the values of random variables that serve as key indicators of safety and efficacy. For instance, it is desirable to be able to predict the major or minor defect proportions in dosage units, the levels of active or related substances they contain, and the rate of adverse events patients will experience using the medication. A related metric for our state of knowledge about a process is our degree of certainty about the values of underlying process parameters. As an illustration, consider the following simple process model:

Y = intercept + slope*X + error

Here Y is some process outcome or attribute (e.g., process yield) that is linearly related to a variable X (e.g., processing temperature). X is often referred to as a process set point or control variable. Sometimes X is referred to as a process parameter; however, here I would like to avoid using the term parameter for X.
The error term above might represent measurement error. As part of our model we might decide that each measurement of Y will include a small error and that the error portion in all our measurements will be random and have a Gaussian (normal) distribution (1, 2) centered on zero with a standard deviation of sigma.
In statistical modeling, the intercept, slope, and sigma are referred to as model parameters. I would like to reserve the word parameters for these quantities whose values we are interested in knowing. If we are willing to think of these quantities and parameters as random variables having probability distributions, then we can take our degree of certainty about them as inversely proportional to some measure of spread of their distributions. As our estimates become increasingly reliable, the width or shape of probability distributions can be used to track the knowledge-building progress.
The idea of using probability distributions through Bayes rule to quantify knowledge is at least 250 years old (4). This idea is illustrated in Figure 1. Less process knowledge is characterized by large uncertainty in the values of underlying parameters and predicted values of process performance measures. When we have more knowledge about a process, we have more certainty about such random variables. When the underlying process mechanism is well characterized and predictable, process control can be assured. Documenting this understanding and communicating it effectively can be a supportive component of validation. Probability and statistics can help us express our knowledge-building progress in a quantitative way.

MODELS ARE REQUIRED FOR KNOWLEDGE BUILDING
Pharmaceutical product life-cycle decision-making is guided by models. A model is our (usually naive) story of how a process behaves. The following are some examples of pharmaceutical development models:


• A stability model that assumes a pseudo-zero-order chemical reaction
• A dosage uniformity model that assumes a normal distribution of levels per dosage unit
• A response prediction equation that assumes linear and interaction terms
• A test method calibration curve that assumes a proportional relationship between response and concentration
• A toxicity model that assumes a logistic relationship to dose level
• A lot acceptance sampling plan that assumes every unit in the lot has an equal probability of appearing in the sample
• A pharmacokinetic perfusion model that assumes Michaelis-Menten kinetic elimination from the liver
• A validation plan that assumes future lots and validation lots are derived from the same population
• A clinical trial that assumes that the last measurement made on patients who leave the trial early is a reliable indicator of subsequent measurements that would have been obtained, had they remained in the trial
• A pharmacovigilance signal detection strategy that assumes that adverse events follow a Poisson distribution in exposed populations.

Figure 2: Process knowledge building. [Cycle diagram: 1. Summarize prior knowledge; 2. State process model; 3. Acquire new data; 4. Combine prior knowledge with new data to obtain posterior knowledge; 5. Use posterior knowledge to make predictions of future samples; 6. Acquire confirmatory data; 7. Compare confirmatory data with prediction.]

Many of the above models are statistical in nature and can be expressed in whole or in part using probability distributions. Many common probability distributions were described in previous issues of this series (1, 2). Each distribution includes a random variable whose values follow the distribution model, and two or three additional parameters that specify the location, spread, or shape of the distribution on the scale of measurement of the random variable. The distribution encapsulates our knowledge about the value of the random variable. Of course models can also include mechanistic or empirical variables and can be very complex. Here we limit discussion to some simple probability distribution models.
If a model is accurate and the parameter values are known, it can be used to estimate or predict values of the random variable. Our ability to predict reliably is one indicator of our knowledge about the process that generates values of the random variable. The reliability of our predictions will in turn depend on our uncertainty in the underlying model parameters. As model parameter uncertainty is diminished by scientific investigation and data gathering, the quality of our predictions, and thus our process knowledge, increases. Thus we are concerned with random variables that are either process outputs (e.g., whether a tablet defect or adverse event occurs, or the specific drug content in a tablet) or underlying model parameters (e.g., defect proportion, adverse event rate, or the mean and standard deviation of unit dose levels).

THE SCIENTIFIC KNOWLEDGE-BUILDING PROCESS IS ITERATIVE
Figure 2 presents a central theme played out often in scientific endeavors, including those conducted to manage life cycles of medical therapies and diagnostics. The seven basic steps are given as follows:
• Step 1. Summarize prior knowledge. Record existing prior process knowledge about the process under study. This knowledge comes from previous data, accepted theory, experience, or expert opinion. Often this prior knowledge is highly unstructured and subjective.
• Step 2. State the process model based on prior knowledge. This requires a quantitative description of the mathematical form of the process model and its underlying parameter values. At this stage, the model and values of its underlying parameters are uncertain. We can characterize our state of knowledge about underlying model parameters or process output by the Less Knowledge distribution in Figure 1.
• Step 3. Acquire new data. To improve model certainty, specific data are gathered to test the model or obtain improved estimates of the underlying model parameters. This kind of study can be thought of as analytic (5) in that the objective is to improve our estimates of underlying model parameters.
• Step 4. Combine prior knowledge with new data to obtain posterior knowledge. Both prior information and new data are used to obtain improved estimates of underlying model


parameters. These improved estimates increase the certainty about the process mechanism and increase the reliability of our predictions of process behavior. In quality by design (QbD) applications (3), we may be concerned with defining the knowledge or inference space within which the model applies. At this stage, our knowledge may be characterized by the More Knowledge distribution in Figure 1.
• Step 5. Use posterior knowledge to make predictions of future samples. Proper scientific investigation requires that we test our model against reality. We do this by using the model to make predictions of process output. It is important to select conditions that truly challenge the model.
• Step 6. Acquire confirmatory data. We verify these model predictions by actually running the process and measuring the output. Examples of such confirmatory or enumerative (5) studies are robustness demonstrations, method/process validations and transfers, comparability studies, and clinical studies.
• Step 7. Compare confirmatory data with prediction. The level of agreement is a measure of our process knowledge. When predicted and observed output fail to agree, re-evaluate the original prior assumptions (Step 1), modify the process model appropriately, and re-investigate it.

The cycle of knowledge building continues until we have acquired a successful track record of predicting process output under a range of conditions. We strive to document and communicate a comfort level in our ability to control processes within their respective design spaces so that the final product will be safe and effective. While documentation of the confirmatory studies (steps 6 and 7) is critical, a careful description of the underlying prior assumptions (step 1), model logic (step 2), development experience (steps 3 and 4), and level of model challenge (step 5) can do much to convey the state of process knowledge to regulators.
There is no reason for process knowledge building to stop when the product is approved for use. Knowledge building can continue throughout the product life cycle, and real world manufacturing experience may provide the greatest challenge to our predictive models.

Figure 3: Types of estimates. [Schematic comparing a point estimate, an interval estimate, and a distributional estimate along a parameter value axis.]

ESTIMATION IS CENTRAL TO THE KNOWLEDGE-BUILDING PROCESS
In Step 4 of Figure 2, data is used to form improved estimates of process parameters. A wide variety of statistical methods such as regression can be employed. Whatever methodology is used, the following three general types of estimates illustrated in Figure 3 can be identified:
• Point estimate: A single number that gives an indication of the location of the parameter value or process output (X) along its scale of measurement. Three examples of point estimates are as follows:
E(X): The expected (or mean) value of X. The mean is the most common point estimate, but its value can be greatly influenced by the skewness (long tailing) of the distribution.
mode(X): The most frequent value of X. Identified as the maximum of a continuous probability density function.
median(X): The middle value of X. Identified as the 50th quantile of a continuous probability distribution function. The median is influenced less by skewness.
A point estimate alone is a single number and conveys little or nothing about the uncertainty in X. For example, a point estimate for the potency of a tablet lot might be 99.8%.
• Interval estimate: Two numbers that specify a range (or interval) that gives an indication of both location and spread of the parameter value. Two common types of interval estimates are as follows:
- Confidence interval: An interval produced by a given statistical method that is known to include the true parameter value 95% of the time. A confidence interval requires that we consider unknown parameters to be fixed quantities that cannot possess a probability distribution. The interval itself is considered a random variable whose location and size varies on repeated application of the specific statistical method. The use of a confidence


interval focuses attention on the variability associated with applying the specific statistical method to hypothetical data sets, rather than with the uncertainty in the parameter being estimated from the data at hand.
For example, a 95% interval for the potency of a tablet lot might be stated as 99.8% +/- 2.2%. If this interval were a confidence interval, we would interpret it as the result of one application of a statistical method known to produce an interval that contains the true tablet potency 95% of the time that the method is used. To properly interpret a confidence interval, we must imagine a hypothetical repetition of the experimental and interval estimation process. The confidence level is based on the properties of the statistical method when employed on appropriate types of data.
- Credible interval: An interval within which a random (uncertain) parameter value lies with 95% probability. A credible interval is based on the quantiles of the probability distribution of the parameter, which is fixed by prior assumptions or observed data. A credible interval has a simple interpretation that focuses attention directly on the uncertainty in the parameter being estimated.
Returning to our 95% interval example of 99.8% +/- 2.2%, if this interval was a credible interval we could make a more direct interpretation: the true potency of the lot in question has a 95% probability of being between 97.6 and 102.0%. We do not need to imagine a hypothetical repeated application of the statistical method used, but base our probability level on the observed data and the model we use to quantify our uncertainty. When we use non-informative prior assumptions, the credible interval we estimate is often similar or even identical to the corresponding confidence interval. As our prior assumptions become more informative, a credible interval will generally become narrower.
Both confidence and credible intervals are useful. However, because credible intervals relate more directly to our subject of process knowledge building, we will use them exclusively here.
• Distributional estimate: The vertical axis of a probability density curve is proportional to the likelihood that the parameter has the value on the horizontal scale. The following two types of distributional estimates are distinguished:
- Prior distribution: A distributional estimate based largely on prior assumptions. A prior distributional estimate of a parameter might come from step 2 in Figure 2 and resemble the Less Knowledge distribution in Figure 1.
- Posterior distribution: A distributional estimate based on both prior assumptions and data. A posterior distributional estimate of a parameter might come from step 4 in Figure 2 and resemble the More Knowledge curve in Figure 1.
If the distributional estimate is based on a well-documented probability distribution, such as the normal distribution, we can provide the distributional estimate in a simple statement such as: The prior (or posterior) estimate of tablet lot potency follows a normal distribution with a mean of 98.8% and a standard deviation of 1.1%. If we want to be more descriptive, or in cases where the distribution is not well documented, we can supply a graph of the distributional density (D) as a function of the parameter value. In fortunate cases such graphs can be produced using Excel functions. We will illustrate a number of such distributional estimates in the examples below.
Note that in going from a point estimate to an interval estimate to a distributional estimate, the amount of information contained increases. For instance, the point estimate contains no indication of the uncertainty associated with the estimate. It is common practice to round the values of point estimates to avoid conveying the impression of false certainty. In fact the process of rounding only adds uncertainty to an estimate.
A better approach is to provide an interval estimate in addition to the point estimate. Interval estimates convey an aspect of uncertainty by giving a range within which the parameter value is likely to lie. Often some confidence or probability level is associated with the estimate.

Table I: The Beta distribution used in estimating a binomial proportion, θ.
Distribution: Beta.
Excel functions:
P = BETADIST(θ, A, B, 0, 1)
D = EXP(GAMMALN(A+B) - GAMMALN(A) - GAMMALN(B)) * θ^(A-1) * (1-θ)^(B-1)
θ = BETAINV(P, A, B, 0, 1)
E(θ) = A/(A+B)
Mode(θ) = (A-1)/(A+B-2)
Median(θ) = BETAINV(0.5, A, B, 0, 1)
Distribution story: θ has a Beta distribution with parameters A and B (see reference 2). A = defect count parameter; B = acceptable count parameter. A = B = 1 provides an uninformative prior distribution (a rectangular or uniform distribution, see reference 2).
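Table I is stated in terms of Excel worksheet functions. For readers working outside Excel, the same point, interval, and distributional estimates can be computed with scipy.stats; the sketch below is an illustrative equivalent, and the prior and data values in it are made-up placeholders rather than numbers from the example that follows.

# Minimal sketch: Beta-distribution estimates of a binomial proportion,
# mirroring the Excel functions in Table I (placeholder prior and data values).
from scipy import stats

A0, B0 = 1, 1            # uninformative prior: A = B = 1, as in Table I
n, d = 50, 6             # placeholder data: d defects found in a sample of n

A, B = A0 + d, B0 + (n - d)      # posterior Beta parameters
post = stats.beta(A, B)

print(A / (A + B))               # point estimate: posterior mean E(theta)
print(post.median())             # median, the BETAINV(0.5, A, B, 0, 1) analog
print(post.ppf([0.025, 0.975]))  # 95% credible interval from the quantiles
print(post.cdf(0.3))             # probability that theta is below 0.3

The posterior update on the A, B line is the same combination of prior and new counts given in step 4 of Table II.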


However, it is by looking at the distributional estimate of the parameter in Figure 3 that we realize that an extremely high value is more likely than an extremely low value. Such information could be critical to predicting process performance at its operating extremes. Often it is our ability to state the likelihood of performance in the tails that allows a meaningful process risk assessment. A distributional estimate summarizes all the information currently known about a parameter's value. So when available, distributional estimates are most useful.

Table II: Knowledge building for a binomial proportion, θ, using the Beta distribution (Figure 2 step and use of the beta distribution functions in Excel).
1. Summarize prior knowledge: Express knowledge about θ as a beta distribution based on prior experience, data, or expert opinion. Let n0 be a prior sample size and d0 be the prior number of defects observed in the prior sample. Then take parameters of the beta distribution to be A = d0 and B = n0 - d0. In cases where little or nothing is previously known about θ, or in cases where decisions are to be made on data alone, let A = B = 1.
2. State process model: Assume the number defective in a sample of n follows a binomial model with unknown proportion θ.
3. Acquire new data: Take a random binomial sample of size n and inspect to find in this actual sample d defects.
4. Combine prior knowledge with new data to obtain posterior knowledge: Express posterior knowledge about θ as a beta distribution with parameters A = d0 + d and B = n0 + n - d0 - d. Obtain point, interval, and distributional estimates of θ as described in the text.
5. Use posterior knowledge to make predictions of future samples: The predictive posterior defect proportion = (d0 + d)/(n0 + n).
6 & 7. Acquire confirmatory data and compare it with prediction; return to step 1: Refine model if necessary. Incorporate the posterior knowledge (step 4) as the new prior knowledge.

USE OF CALIBRATED ESTIMATION METHODS IS AN IMPORTANT ASPECT OF KNOWLEDGE BUILDING
In estimating a population location, which point estimate is best: average, mode, or median? Similarly, for a point estimate of population spread, should we use standard deviation, range, or something else? There are many possible estimation methods.
Reliable methodology should be used to obtain these estimates. Such methods might involve the way we take samples, obtain our analytical measurements, or calculate our estimates from the analytical results. The idea here is that if we use reliable methods of estimation, then the parameter values we estimate should, in turn, be reliable. If we knew, for example, that a given method of estimation always produced an estimate within 10% of the true value, that would be a good measure of our state of knowledge about a parameter whenever its value is estimated by that method. This point of view places an emphasis on using reliable estimation methodology as a critical aspect of scientific knowledge building. Developing such methodology has been a major focus of the discipline of statistics. Statisticians refer to such methods, whose reliability has been well characterized, as calibrated methods. In analytical contexts calibration refers to establishing the traceability and standardization of a measurement method. As used here, calibration refers to establishing the performance characteristics of an estimation method.

PRIOR ASSUMPTIONS ARE THE FOUNDATION OF THE ITERATIVE SCIENTIFIC KNOWLEDGE-BUILDING PROCESS
Some statistical methods are calibrated to consider only the information contained in the data at hand and do not consider auxiliary prior information. Such methods are objective in that the estimated parameters make minimal prior assumptions. In particular such methods do not consider prior knowledge about the values of model parameters. It may be important to use such methods in confirmatory trials (steps 6 and 7 of Figure 2). However, it must be remembered that all estimation methods depend on assumptions of one kind or another, and it is critical to justify them. Notice a pattern in the list of pharmaceutical models previously cited. Each model makes assumptions, and those assumptions serve as the justification for estimation. So what is the justification for the modeling assumptions themselves? We base these assumptions on some prior body of knowledge. A careful examination will show that the prior body of knowledge is in turn dependent on its own set of prior assumptions. In the end, we are drawn to the conclusion that the assumptions we are willing to make play a critical role in our scientific knowledge building.
In calibrating estimation methodology, the assumption is made that the model used is the correct model. This is rarely, if ever, the case. In scientific investigations of a process, we strive to use models and make assumptions that represent the best state of our knowledge; yet we are fully aware that


these models and assumptions are always only Figure 4: Estimating a binomial proportion () using
approximations of the truth (6). beta distributions.
In addition to objectivity, consider the efficiency
of estimation methodology. For instance, while use 7

A = 1, B =1 A = 3, B =2 A = 2, B =3
of the sample median as an estimator of location 6
A = 20, B =20 A = 20, B =5 A = 5, B =20
makes minimal assumptions about measurement 5
distribution, it may require more measurements

Density
4
to obtain the same size interval estimate than
using a sample average that assumes a Gaussian 3

measurement distribution. Also, if our objective 2

in step 4 of Figure 2 is to build process knowledge, 1


then ignoring prior knowledge about a parameter
0
value makes little sense. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

Using objectively calibrated methodology alone 


is not a sufficient metric of our state of process
knowledge. We require some way of incorporating
our prior knowledge into our scientific knowledge Figure 5: Estimation of a proportion defective ().
building. This can be difficult because even
18
knowledge experts can disagree about the reliability Tom's Posterior Density
16
of various prior assumptions. It is challenging
14 Nelly's Posterior Density
to quantify and compare our prior expectations Density 12 Tom's Prior Density
and uncertainties (7), but the result is shared 10
understanding and improved decision-making. Nelly's Prior Density
8
The differences in our prior beliefs are only 6
practically important when they have large effects 4
on our estimates and predictions. Again, the 2
discipline of statistics has provided some good 0
tools that allow us to understand the influence of 0 0.1 0.2 0.3 0.4 0.5

our prior beliefs as we build knowledge through 

modeling and experimentation as in Figure 2.


EXAMPLE 1, BUILDING KNOWLEDGE
EXAMPLES ABOUT A TABLET DEFECT PROPORTION
The following examples illustrate three hypothetical A new coating process may exhibit an unacceptable
situations in which modern statistical methodology defect (blemish) proportion in the resulting tablets.
could be applied to the estimation process of Step A proportion above 0.3 (30% of tablets blemished)
4 in Figure 2. The advantages that distributional is considered unacceptable. Tom and Nelly needed
estimates offer are shown. The examples illustrate to estimate the proportion defective (proportion of
how prior knowledge about model parameters blemished tablets, ) of the new process. Tom and
can be incorporated, in a quantitative and helpful Nelly agreed that, prior to collecting data on the new
way, into the knowledge-building process. Excel process, they would express their prior knowledge
functions available for applying these approaches about with beta distributions (see Table I).
are presented. The step that Tom and Nelly took in agreeing
These examples are not presented as preferred to model their prior knowledge quantitatively
methods of analysis, but as illustrations of how using beta distributions may seem unscientific and
distributional thinking can, in simple cases, help arbitrary. However, we must remember that their
document and communicate the knowledge- objective is to learn about a new process. With
building process. The particular decision limits new processes we do not always have the luxury
described are purely fictitious and not to be of directly applicable historical data that might be
interpreted as recommended limits or industry- available for decision-making with an established
accepted practice. The statistical methods used process. Whatever experience, theory, or expert
in these examples are presented in what is hoped opinion they can bring to the table will aid in their
will be a practical way; however, the mathematical learning process. The challenge is to capture this
justification is not provided. The methodology prior knowledge in a useful way.
is not new, and interested readers can learn Tom and Nelly find that the beta distribution is
more about these distributional techniques from useful in modeling proportions for the following
introductory textbooks (8, 9, 10). reasons:

PROCESS VALIDATION Process Design 77


David LeBlond

Table III: Distributions useful in estimating a normal mean, , and standard deviation, .
Distribution Excel function Distribution Story
Inverted-Root P =1-GAMMADIST(1/ ,C,1/G,TRUE)
2
1/2 2 has a Gamma distribution with parameters C=
Gamma (IRG) and G=
=SQRT(1/GAMMAINV(1-P,C,1/G)) (see reference 2).
C= shape parameter
D =2*G^C*EXP(-G/ ^2-GAMMALN(C))/ ^(2*C+1) G= squared scale parameter

E() =EXP(GAMMALN(C-0.5)-
GAMMALN(C))*SQRT(G) , C>0.5

Mode() =SQRT(2*G/(2*C+1))

Median() =SQRT(1/GAMMAINV(0.5,C,1/G))
Normal or D or P =NORMDIST( ,L,Q,*) has a normal distribution (see reference 2) with
Gaussian L= location parameter
= NORMINV(P, L,Q) = L + Q *NORMINV(P,0,1) Q = scale parameter

E() = Mode() = Median() = L


Location- P=IF(( -T)/U<0,TDIST(-( -T)/U, R,1), 1-TDIST(( ( -T)/U has a Students-t distribution with R = degrees
Scale Stu- -T)/U,R,1)) of freedom
dents-t (LSSt) (see reference 2)
D=EXP(GAMMALN((R+1)/2)-GAMMALN(R/2))/ T = location parameter
(PI()*R)^0.5/(1+(( -T)/U)^2/R)^((R+1)/2)/U U = scale parameter

=T+U*IF(P<0.5, -TINV(2*P,R), TINV(2*(1-P),R))

E() = Mode() = Median() = T


*FALSE produces the probability density, D, of observing exactly . TRUE produces the probability, P, of observing x or less (cumulative probability
density function).

t5IFCFUBEJTUSJCVUJPONPEFMTSBOEPNWBSJBCMFT t*ODBTFTXIFSFUIFSFJTWFSZMJUUMFPSOPQSJPS
that take on values between 0 and 1, the same information about , or where they want an
range as the proportion, estimate based solely on data, they could select
t5IFCFUBEJTUSJCVUJPODBOUBLFPOBHSFBUWBSJFUZ a beta distribution that is essentially non-
of shapes and thus provides a large selection informative. In doing so their posterior estimate
of possible distributions for Tom and Nelly to will be consistent with that of other statistical
choose from estimation methods that do not take advantage
t5IFUXPQBSBNFUFSTPGUIFCFUBEJTUSJCVUJPO " of distributional reasoning.
and B in Table I, actually have an interpretation (Note: We will not detail the rationale behind
in terms of binomial sampling. To express ones the choice of priors in these examples, but similar
prior knowledge about , one can imagine a comments apply to all the priors used below.)
hypothetical prior sampling experiment in Tom and Nelly agreed to use the process given
which A is the prior number of defective tablets in Table II to document their knowledge building
and B the prior number of acceptable tablets in for . However, they disagreed as to which beta
the hypothetical prior sample of A+B tablets. distribution best represents their individual prior
Thus the beta distribution is intuitive. beliefs about .
t5PNBOE/FMMZXJMMBTTVNFUIBUEFGFDUT To select their priors, they followed step 1 of
in samples of tablets follow a binomial
Table II and used the Excel probability density
distribution (1,2). A beta prior distribution lets
function (The D= function in Table I) to graph
them combine their prior information about
some probability density function curves (plots of
(expressed as a beta distribution) with new data
D vs ) for different values of A and B. These curves
in a very simple and intuitive way (see below).
are shown in Figure 4.
t5IFJSJNQSPWFEFTUJNBUFPG (the posterior
When A=1 and B=1, the beta distribution is
distribution), that combines information from
a uniform distribution with constant density
both the beta prior and new data, will also have
between =0 and =1. This uniform distribution
a beta distribution. In this way, they establish a
expresses an ignorance (or lack of preference) for
consistent learning cycle.
the true value of . Nelly, who has little experience

78 PROCESS VALIDATION Process Design


David LeBlond

Table IV: Knowledge building for a normal mean, , and standard deviation, .
Figure 2 Step Use of distribution functions in Excel
1. Summarize A. Express knowledge about as an IRG distribution based on prior experience, data, or expert opinion. Let
prior s 0 be a prior point estimate of based on v0 prior degrees of freedom. Take the parameters of the prior IRG
distribution to be C =v0 /2 and G=v0*s0^2/2. In cases where little or nothing is previously known about , or
in cases where decisions are to be based on data alone, let v0 = -1 and choose G near 0 which essentially
results in a flat prior for s (note some Excel functions will fail when C0).

B. Express knowledge about as a normal distribution conditional on . Choose the distribution using prior
experience, data, or expert opinion. Let m0 be a prior estimate of based on a prior sample size of n0. Take
the parameters of the prior normal distribution to be L = m0 and Q = s /sqrt(n0). Since is unknown, use
Q=s0 /sqrt(n0) to plot and verify this prior distribution. In cases where little or nothing is previously known
about m, or in cases where decisions are to be based on data alone, let L = 0 and choose n0 near 0 which
essentially results in a flat prior for .
2. State process Assume data comes from a normal model with mean m and standard deviation .
model

3. Acquire Take a random normal sample of size n and calculate the sample mean, m, and sample standard deviation, .
new data
4. Combine prior Update estimates of s0, v0, and n0 to sn, vn, and nn respectively:
knowledge with vn = v0 + n, nn = n0 + n, sn = v0s02 + (n 1) s2 + n0n (mm0)nnvn
new data to
obtain posterior
knowledge Express posterior knowledge about m as a LSSt distribution with parameters R=vn, T=(n0*m0 + n*m)/nn, and
U=sn /sqrt(nn). Express posterior knowledge about as an inverted root gamma distribution with parameters
C =vn /2 and G=vn*sn^2/2. Obtain point, interval, and distributional estimates of and as described in text.

5. Use posterior The estimated distribution of future individual values will be a LSSt distribution with parameters
knowledge to R=vn, T=(n0*m0 + n*m)/nn, and U=sn*sqrt(1+1/nn).
make predic-
tions of future
samples
6&7. Acquire Take the posterior knowledge (step 4) as the new prior knowledge.
confirmatory
data and com-
pare it with pre-
diction. Return
to step 1.

with this new coating process, felt that this closely In step 2 of Table II, they discussed various
represented her subjective knowledge about the assumptions about sampling and independence and
defect proportion. This is the distribution that agreed to model the number of defects in a sample
represents a prior sample of n0 =2 containing d 0 =1 of tablets using a binomial probability distribution
prior defect. Nellys prior density distribution is with underlying defect proportion, (2). They
illustrated in Figure 5. collected an actual random sample of n=200 tablets
Tom had used the new coating process on a from the process and observed d=36 defective
previous project and wanted to incorporate his tablets (step 3 of Table II).
expert opinion about the defect proportion into Following step 4 of Table II, the A and B
the estimate of p He plotted beta distributions for parameters of the posterior distributions for Tom
a number of combinations of A and B (see Figure and Nelly are as follows:
4). Based on his expert opinion, a sample of 30 Toms posterior distribution: A beta distribution
tablets from the new process should contain about with A = d +d = 4 + 36 = 40, and B = n +n-d -d =
0 0 0

4 defective tablets. So he took the beta distribution 30+200-4-36 = 190


with A = 4 and B = 26 as his prior estimate of . Nellys posterior distribution: A beta distribution
Toms prior density distribution is illustrated in with A = d0+d = 1 + 36 = 37, and B = n0+n-d0 -d =
Figure 5. 2+200-1-36 = 165.

PROCESS VALIDATION Process Design 79


David LeBlond

Figure 6: Estimating a normal standard deviation () Both Tom and Nellys posterior density
using inverted-root gamma (IRG) distributions. distributions are plotted in Figure 5 and are
1 seen to match relatively closely. Clearly, the
0.9 v0 = 0.02, s0 = 1 knowledge contributed by the actual sample
0.8 v0 = 1, s0 = 2 of 200 has overwhelmed that contributed by
0.7
Density

v0 = 1, s0 = 10
0.6 prior assumptions. From Table I, Tom and Nelly
v0 = 10, s0 = 2
0.5
v0 = 10, s0 = 10
reported point estimates for (expected value)
0.4 A/(A+B) = 0.174 and 0.183 respectively. These
v0 = 100, s0 = 10
0.3
0.2 expected values give the predicted probability
0.1 that any given tablet sampled from the process
0 will be defective (Step 5 in Table II). In order to
0 2 4 6 8 10 12 14
make a final decision about the acceptability of

the new process, they obtained the following
95% credible intervals for : using the
Figure 7: Estimation of the standard deviation of tablet =BETAINV(P,A,B,0,1) function in Table I and
drug substance content (). setting P to either 0.025 (lower limit) or 0.975
(upper limit).
0.6
Toms 95% credible interval: 0.128 to 0.225
0.5 Dick's Prior Density Nellys 95% credible interval: 0.133 to 0.239
0.4 Jane's Prior Density
Density

Because the upper limits of both intervals were


0.3
Dick's Posterior Density below 0.3 (30% defect proportion), they concluded
0.2 Jane's Posterior Density that the new coating process was acceptable.
0.1
They recorded their knowledge-building process
in a development report that documents not only
0 the data, method of analysis (Tables I and II), and
0 3 6 9 12 15
sigma estimates of , but also their underlying prior
assumptions and beliefs (Figure 5). In this way, a
reviewer of their report acquires an intimate sense
Random Poste- Poste- Dicks Janes Data of the level of understanding and confidence about
Variable rior rior Value Value Only the process and its ability to produce tablets of
Distri- Distri- acceptable quality. The reviewer may have his or her
bution butional
own subjective prior knowledge about , but can see
Model Param-
eter from the report how the acquired data overwhelm
nn 11 11 10 such prior opinions.
sn 3.35 4.33 3.6
LSSt vn=R 12 12 10 Example 2, Building Knowledge About the
T 100.5 100.9 100.5 Uniformity of Individual Capsule Potency
U 1.01 1.31 1.14 Levels
IRG C 6 6 4.5 Drug product capsules were exhibiting unacceptable
G 67.28 112.55 58.65 levels of unit dose potency non-uniformity (relative
standard deviation greater than 6%, individual
95% Credible Interval capsules beyond 90-110%LC). A change to the
Estimate encapsulation process was being considered. Before
Ran- Distri- Table III Equa- Dicks Janes Data validating the change in production, it was decided
dom bution tion Used to Value Value Only to test capsules at pilot scale and use the results to
Vari- Model Obtain Credible estimate the underlying population distribution of
able Interval Esti- capsule potencies made by the process. Dick and
mates
Jane were the members of the team assigned the
LSSt 98.3- 98.0- 98.0-
task of estimating this distribution. They agreed to
=T+U*IF(P<0.5, 102.7 103.7 103.1
use the distributions described in Table III and the
-TINV(2*P,R),
TINV(2*(1-P),R)) associated knowledge building process of Table IV
IRG =SQRT(1/ 2.4- 3.1-7.1 2.5- to address this objective.
GAMMAINV(1- 5.5 6.6 Both Jane and Dick were willing to assume (per
P,C,1/G)) step 2 of Table IV), based on their experience with
such processes, that the capsule potencies would

80 PROCESS VALIDATION Process Design


David LeBlond

be normally distributed. So the objective required Figure 8: Estimation of mean tablet drug substance content.
them to estimate the distribution mean, , and
0.4
standard deviation, . Given and estimates, and
Dick's Prior Density
assuming no batch-to-batch variation, they could
0.3 Jane's Prior Density
then predict the potencies of future tablets made by
the process.

Density
Dick's Posterior Density
Step 1A of Table IV suggests using an inverted- 0.2
Jane's Posterior Density
root gamma (IRG) distribution to summarize their
prior knowledge and expectations about  (i.e., in 0.1
Table III, IRG functions, they will take X to be  ).
The IRG distribution has two parameters, C and G.
0
Jane and Dick examined various IRG distributions
90 95 100 105 110
by plotting the density (D) against  using the
mu
appropriate Excel function in Table III. They

obtained C and G parameter values as described


in step 1A of Table IV by selecting a prior standard
deviation, s0, and a prior degrees of freedom, v0. Figure 9: Predicting individual tablet drug substance content.
The degrees of freedom associated with a standard
deviation is related to the sample size used to 0.12
obtain that standard deviation. As v0 increases D ick 's Prediction

the confidence in our prior estimate s0 rises. The 0.1


J ane's Prediction
resulting distributions are given in Figure 6. 0.08 D ata O nly P rediction
Density

They noticed, as expected, that very low v0 (e.g., 0.02)


produces a nearly flat prior distribution. As v0 rises, 0.06
the distribution spread decreases and becomes more
0.04
bell shaped, centered near the chosen value of s 0.
By examining a number of possible distributions using 0.02
Excel, Dick and Jane arrived at their individual subjective
prior distributions. They also agreed to obtain estimates 0
based on a non-informative prior that contributes little or 85 90 95 100 105 110 115
no information relative to the data they will collect. The
respective prior v and s values were as follows:
0 0 Individual future result

Dicks IRG prior for : S 0 = 3, v0 = 2 with the prior estimate of m . Q also requires
Janes IRG prior for : S 0 = 7, v0 = 2 knowledge of . Because they did not know s, they
Data Only prior for : S 0 = 0.0001, v0 = -1. substituted their prior estimate, s 0 , to obtain their
prior distributions. So the prior distribution for
Dick and Janes prior distributions for are is conditional (or dependent) on the true value
shown in Figure 7. As illustrated in Figure 7, Jane of . This makes sense because is a component
(who had had some bad experiences with this new of measurement error. Dick and Jane recognized
encapsulation process) was less optimistic and that their prior knowledge about was inherently
felt that the true would be elevated. Dicks prior dependent on the measurement variation, , in the
distribution appeared relatively optimistic. The process. Dick and Jane graphed their priors for
low number of degrees of freedom for each, v0 =2, using the appropriate Normal distribution function,
indicate that neither Dick nor Jane had strong prior D = NORMDIST(, L, Q, FALSE) to select values
knowledge about . The choice of v0 =-1 for the non- of m0 and n 0 they felt represented their subjective
informative, data-only prior seems strange; but it is prior knowledge about . They also included non-
mathematically consistent with a prior that shows informative (Data Only) values. The respective prior
no preference for any particular value of . m and n they chose were as follows:
0 0

Applying Step 1B of Table IV, a normal


distribution will be used to express prior knowledge Dicks normal prior for : m0 =100, n 0 =1
about . This normal distribution depends on Janes normal prior for : m 0 =104, n 0 =1
the parameters L and Q, as described in Table III. Data Only prior for : m0 = 0, n 0 = 0.0001
L is simply the prior estimate of the true . Q is
a prior estimate of the standard error of estimate The small prior sample sizes, n0 , for Dick and
of and so is equal to /sqrt(n0). This requires Jane indicate that their confidence in their initial
that we provide a prior sample size, n0, associated guesses, m0 , was low. The data only prior has

PROCESS VALIDATION Process Design 81


David LeBlond

Table V: The Gamma distribution used in Janes 95% credible intervals showed the greatest
estimating a Poisson defect per unit rate, . spread and Dicks the least with the data only
Distribu- Excel function Distribution Story intervals showing intermediate spread. We note in
tion passing that the data only 95% credible intervals for
Gamma D or P =GAMMADIST( has a Gamma dis- and are identical to traditional 95% confidence
,V,1/W,*) tribution with intervals often used. However, the traditional
parameters V= and approach cannot provide distributional estimates of
=GAMMAINV(P,V,1/W) W= or .
(see reference 2). Dick and Jane used the equations for
E() = V/W V= shape parameter
density (D=...) in Table III for the LSSt and IRG
W= inverse scale pa-
mode () = (V-1)/W rameter distributions in Table III to graph their posterior
distributional estimates for and in Figures 8
median() = and 7, respectively. Jane noted that her posterior
GAMMAINV(0.5,V,1/W) distribution for in Figure 7 was wider than Dicks
and showed substantial probability of being above
6, which would be unacceptable. If Janes prior
* FALSE produces the probability density, D, of observing exactly . assumptions were accurate, there seemed some risk
TRUE produces the probability, P, of observing x or less (cumulative that batches made by the process would exhibit
probability density function).
unacceptable content uniformity.
To examine this risk further, Dick and Jane
essentially zero sample size and is thus non- followed step 5 of Table IV to obtain predictive
informative. Dick and Janes priors for are shown posterior distributional estimates of the potency
in Figure 8. levels of future capsules made by the process.
Clearly, Jane felt the process mean m was greater As indicated in Table IV, these follow an LSSt
than did Dick. Her prior distribution had greater distribution with the R, T, and U parameters
spread because her prior estimate of , s 0 , was calculated as shown. For comparison, the predictive
larger than Dicks. The low sample size for each, posterior distribution for the data only, non-
m0 =1, indicated that neither Dick nor Jane had informative, assumption was also obtained. The
strong prior knowledge about . three distributions are given in Figure 9.
A sample of n=10 capsules was tested and the Dicks distributional estimate showed the least
observed sample mean, m=100.6 and sample spread, Janes the greatest, and the data only
standard deviation, s=3.6, were calculated (Step 3 estimate showed intermediate spread. Janes prior
of Table IV). These results, along with the above assumptions led to a substantial number of capsules
prior estimates (s 0 , v0 , m0 , and n 0) were used in above 110% of target potency. Jane predicted the
step 4 of Table IV to obtain improved (posterior) percentage of capsules she would expect to be
distributional estimates of and . A location- beyond 90-110%LC using the following Excel
scale Students t (LSSt) and IRG distributions were formulas from the LSSt distribution in Table III:
used to model the posterior estimates of and
, respectively. The calculated parameters of these Probability that an individual capsule potency is
posterior distributions were as follows: < 90%LC =
The resulting posterior distributions were IF( (90-T)/U<0,TDIST(-(90-T)/U,R,1),1-
plotted using the appropriate density functions TDIST((90-T)/U,R,1) ) = 0.017
(D=...) in Table III and are shown in Figures 8 and
7, respectively. Point estimates of and were Probability that an individual capsule potency
obtained from the formulas for expected values of is >110%LC = 1-IF( (110-T)/U<0,TDIST(-(110-T)/
the respective distributions: U,R,1),1-TDIST((110-T)/U,R,1) ) = 0.033
Thus Jane predicted that 5% of capsules would be
Dicks point estimates: E() = T = 100.5, E() = 3.6 beyond the desired range of 90 to 110%LC, which
Janes point estimates: E() = T = 100.9, E() = 4.6 would be unacceptable.
Data Only point estimates: E() = T = 100.5, E() Dick and Jane were disturbed by the fact that
= 4.0 different prior assumptions led to differing
The 95% credible interval estimates of and were posterior distributional estimates for (see
obtained by substituting, into the proper equation in Figure 7) and also to different predictive posterior
Table III, either P=0.025 (lower limit) or P=0.975 (upper distributions of individual capsule potency levels
limit) plus the appropriate posterior distributional (see Figure 9). They felt that the information
parameters (R, T, U, C, and G) given above. provided by the 10 capsules they tested was

82 PROCESS VALIDATION Process Design


David LeBlond

insufficient relative to their individual subjective Figure 10: Estimating a Poisson event rate () using
prior opinions. Therefore, they decided it would gamma distributions.
be prudent to test another random sample of 1
capsules before making any decision about the new 0.9
0.8 n0=2, r0=2
encapsulation process.
0.7 n0=2, r0=4

Density
EXAMPLE 3, BUILDING KNOWLEDGE 0.6 n0=10, r0=2
ABOUT AN ADVERSE EVENT RATE 0.5 n0=10, r0=4
0.4
Alina and Craig were epidemiologists initiating
0.3
a new pharmaco-vigilance surveillance program
0.2
on spontaneously-reported, quarterly adverse
0.1
event rates. If they discovered strong evidence of a
0
drug-adverse event combination rate greater than
0 2 4 6 8
two events per quarter, they would perform an
investigation. They defined strong evidence as a
rate with a 95% probability of being greater than
two events per quarter. Figure 11: Estimation of an adverse event rate ()
Prior to examining any quarterly data, they felt
that their subjective prior knowledge about the 0 .8
Poisson event rate, , should be expressed using the 0 .7 Alina's Prior
gamma distribution described in Table V. 0 .6 Craig's Prior
Following step 1 in Table VI, Alina and Craig Density
0 .5 Alina's Poseterior
expressed their prior expectations about by Craig's Posterior
0 .4
choosing n 0 (prior number of quarters examined)
and r0 (prior guess of the true rate ). To familiarize 0 .3

themselves with the shapes of the gamma 0 .2


distribution, they tried a few values of n 0 and r0 as 0 .1
shown in Figure 10. 0
They noted how the distributions moved to the
0 1 2 3 4 5 6 7 8
right as r0 was increased and that the spread of the
distributions decreased as n 0 increased, as would be
expected.
Alina felt that it would be wise to choose Using Alinas Using Craigs infor-
a non-informative prior distribution for uninformative mative prior
because, a priori, she did not want to prejudice prior
the estimation with her prior expectations. V =n0r0+d 27.0 32.3
Consequently, she chose n 0 =0.001 (no prior W = n0+n 8.0 10.5
quarters examined) with an initial prior guess E( ) = V/W 3.4 3.1
of of r0 =1000 events per quarter. Thus her
expected number of defects in this prior sample
was V = n 0 r0 = 1 with W = n 0 near zero. She used and below the decision limit of two events per
the Excel gamma density function in Table V, quarter.
D=GAMMADIST(r ,1, 1/0.001,FALSE), to graph Rather than choose a strategy with a single prior,
her prior distribution of which appears as a flat, they agreed to estimate from data using both
non-informative prior in Figure 11. priors. If either estimate showed strong evidence
Craig felt, based on his experience, that of being greater than two events per quarter, they
values above 6 are very unlikely and he wanted to would take that as a signal to be investigated. They
incorporate this knowledge into the estimation. fixed these two priors as part of their surveillance
Consequently he felt comfortable with taking protocol for any future drug-adverse event
n 0 =2.5 prior quarters examined and r0 =2.5 combination they studied.
prior events per quarter. This corresponded to They were ready to assume (step 2, Table VI) that
V=2.5*2.5 = 6.25 prior expected events in the counts of quarterly adverse events follow a Poisson
W = n 0 = 2.5 prior quarters. His prior density, distribution (1,2) with a mean adverse event rate
D=GAMMADIST( ,6.25, 1/ 2.5,FALSE), is also (number per quarter), . Now they were willing
graphed in Figure 11. His prior gives what he felt to begin their surveillance of drugadverse event
was a proper weighting of prior probability above combinations.

PROCESS VALIDATION Process Design 83


David LeBlond

Table VI: Knowledge building for a Poisson defects per unit rate, , using the
Gamma distribution.
Figure 2 Step Use of the Gamma distribution functions in Excel
1. Summarize prior knowl- Express knowledge about as a gamma distribution based on prior experience, data,
edge or expert opinion. Let n be the prior number of units of material examined and d be the
0 0

prior number of defects observed in the n units. Then take parameters of the gamma
0

distribution to be V=n r and W=n . In cases where little or nothing is previously known
0 0 0

about , or in cases where decisions are to be made on data alone, let V=1 and choose
W near 0.
2. State process model Assume defects are distributed randomly throughout the material according to a Poisson model
with unknown rate parameter, .
3. Acquire new data Take a random sample of n units of material and observe d defects in these n units.
4. Combine prior knowl- Express posterior knowledge about as a gamma distribution with parameters V = n0r0+d and
edge with new data to ob- W=n0+n. Obtain point, interval, and distributional estimates of as described in the text.
tain posterior knowledge
5. Use posterior knowl- The predictive posterior defect rate per unit is = (n0r0+d)/(n0+n).
edge to make predictions
of future samples
6&7. Acquire confirma- Refine model if necessary. Incorporate the posterior knowledge (step 4) as the new prior
tory data and compare knowledge.
it with prediction. Re-
turn to step 1.

After some examination (step 3, Table VI), they SUMMARY AND CONCLUSIONS
discovered one particular drug for which d=26 Process knowledge building and estimation
headaches were reported over a two-year period are important components of validation and
(n=8 quarters). Using the equations for V and W in compliance. In this column, a methodology by
step 4 of Table VI, they obtained the following point which estimation of random variables is guided by
estimates for : their probability distribution models in an overall
Alinas point estimate (3.4 events/quarter) knowledge-building process was illustrated. These
was a little higher than Craigs (3.1 events/ three simple examples show how subjective prior
quarter). This was consistent with Alinas use knowledge can be quantitatively combined with
of an uninformative prior that gives substantial knowledge contained in data in a way that facilitates
probability to high rates. The posterior communication and improved decision making.
distributional estimates were obtained using The provided Excel functions allow decision makers
the gamma density function from Table V, D and knowledge experts to participate and become
=GAMMADIST( ,V,1/W,FALSE), and these are familiar with this approach to the knowledge-
shown in Figure 11. Despite the very different building process. It is hoped that readers will find
appearance of the prior distributions, the posterior these examples instructive and useful.
distributions appeared similar. Craigs distribution In the three examples illustrated, the
showed less spread than Alinas because of the mathematical operations are relatively simple
additional information contributed by Craigs and can be performed by someone familiar with
informative prior. using Excel statistical functions. For more complex
To estimate the 95% one-sided lower credible models, analytical expressions for the various
bound of , they used the quantile equation probability distribution functions may not be
from Table V, = GAMMAINV(0.05, V, 1/W), and available in closed form and computer simulation
obtained the following: using a package such as WinBUGS (11) is required.
Alinas one-sided 95% credible lower bound on : In such cases, or in any case where aspects of
2.4 events per quarter statistical modeling or use of prior knowledge
Craigs one-sided 95% credible lower bound on : are unclear, the advice of a trained statistician is
2.2 events per quarter. recommended.
Because the lower bound was above
two events per quarter, they launched an ACKNOWLEDGMENT
investigation of this particular drug-adverse Thanks to Diane Wolden and Paul Pluta for their
event combination. careful reading of this text and for their many

84 PROCESS VALIDATION Process Design


David LeBlond

Mode: A point estimate of a random variable that is


suggestions and corrections that have made it more
the value at which its probability density is
readable than otherwise possible. maximized.
Model: A description of a data generating process
GLOSSARY that includes parameters (and possibly
Bayes rule: A process for combining the information other variables) whose values determine the
in a data set with relevant prior information distribution of data produced.
(theory, past data, expert opinion and
Point estimate: An estimate of a random variable
knowledge) to obtain posterior information.
consisting of a single number that gives some
Prior and posterior information are expressed
idea of the location of the value of the random
in the form of prior and posterior probability
variable along its measurement scale.
distributions, respectively, of the underlying
Prior distribution: A subjective distributional estimate
physical parameters, or of predictive posterior
of a random variable, obtained prior to any
distributions of future data.
data collection, that consists of a probability
Calibrated estimation method: An estimation method
distribution.
whose reliability and accuracy are known from
Posterior distribution: A distributional estimate of a
theory or computer simulation.
random variable that updates information from
Conditional probability: The probability of an event (say
a prior distribution with new information from
A) occurring, given that some other relevant
data using Bayes rule.
event (say B) has occurred, symbolized P(A|B).
Predictive posterior distribution: A distributional
For example, let A = house is on fire and B =
estimate (or prediction) of a random variable
smoke alarm is sounding.
(typically future data) that combines posterior
Confidence interval: A random interval estimate of a
knowledge about model parameters with
(conceptually) fixed quantity which is obtained
variability in data generated by the model.
by a estimation method calibrated such that
the interval contains the fixed quantity with a
certain probability (the confidence level). REFERENCES
Credible interval: An interval estimate of a random 1.LeBlond, D., Data, Variation, Uncertainty, and Probability
variable, based on its probability distribution, Distributions, Journal of GxP Compliance, Vol. 12, No. 3, pp
which contains its value with a certain 30-41, 2008.
probability (the credible probability level). 2.LeBlond, D., Using Probability Distributions to Make
Data: Measured random variable values, assumed to be Decisions, Journal of Validation Technology, Spring 2008,
generated by some hypothetical model, which pp 2 14, 2008.
contain information about the parameters of
3.International Conference on Harmonization, ICH
that model.
Harmonised Tripartite Guideline on Pharmaceutical
Distributional estimate: An estimate of a random
Development Q8, Current Step 4 version, November 10, 2005.
variable that is of a probability distribution that
4.Price, R., (1763) An essay towards solving a problem in the
summarizes complete knowledge about the its
possible values. doctrine of chances, Dale, A Most Honourable Remembrance:
The Life and Work of Thomas Bayes, Springer, New York, 2003.
Estimation method: A statistical procedure that uses
data to produce estimates of quantities. 5.Deming, W. Edwards, Some Theory of Sampling, Dover
Publications, 1966.
Expected value: A point estimate of a random variable that is
the weighted mean of its possible values, with the 6.Essentially, all models are wrong, but some are useful,
weight being the probability density at each value. quoted from Box, George E. P.; Norman R. Draper, Empirical
Interval estimate: An estimate of a random variable Model-Building and Response Surfaces, p. 424, Wiley, 1987.
consisting of a range (or interval) of values 7.To endure uncertainty is difficult, but so are most of
that provides a measure of both location and the other great virtues, quoted from Bertrand Russell,
spread. Often the range is associated with some Philosophy for Laymen, Unpopular Essays, George Allen and
probability or confidence level. Unwin, London, 1950.
Knowledge: As used here, knowledge about a random 8.Box, G. and Tiao, G., Bayesian Inference in Statistical Analysis,
variable is inversely related to the spread of its Addison-Wesley Pub. Co., Reading, MA, 1973.
probability distribution. 9.Gelman, A., Carlin, J., Stern, H., and Rubin, D., Bayesian Data
Median: A point estimate of a random variable that Analysis, 2nd Edition, Chapman and Hall/ CRC, New York, 2004.
is the 50th percentile (or 0.5 quantile) of its 10. Bolstad, W., Introduction to Bayesian Statistics, 2nd Edition,
probability distribution. John Wiley & Sons, Hoboken, New Jersey, 2007.
11. Cowles, M., Review of WinBUGS 1.4, American
Statistician 58(4), 330-336, 2004. JVT

Originally published in the Summer 2008 issue of The Journal of Validation Technology

PROCESS VALIDATION Process Design 85


David LeBlond

Understanding Hypothesis
Testing Using Probability
Distributions
David LeBlond

Statistical Viewpoint addresses principles of t"1WBMVFJTUIFQSPCBCJMJUZPGPCTFSWJOHBSFTVMU


statistics useful to practitioners in compliance and as extreme or more extreme than that observed,
validation. We intend to present these concepts in a assuming the null hypothesis is true
meaningful way so as to enable their application in t5IF5ZQF*FSSPSSBUFJTUIFQSPCBCJMJUZPG
daily work situations. incorrectly rejecting the null hypothesis
Reader comments, questions, and suggestions are t5IF5ZQF**FSSPSSBUFJTUIFQSPCBCJMJUZPG
needed to help us fulfill our objective for this column. incorrectly failing to reject the null hypothesis
Suggestions for future discussion topics or questions t5IF#BZFTGBDUPSJTBNFBTVSFPGFWJEFODFJOGBWPS
to be addressed are invited. Readers are also invited of the null hypothesis contained in the data
to participate and contribute manuscripts for this t1WBMVFFTUJNBUJPOJTCBTFEPOVOPCTFSWFESFTVMUT
column. Case studies sharing regulatory strategies more extreme than those observed, so it may over-
are most welcome. Please contact coordinating state the evidence against the null hypothesis
editor Susan Haigney at shaigney@advanstar.com t5IF1WBMVFGSPNBQPJOUOVMMIZQPUIFTJT PSUXP
with comments, suggestions, or manuscripts for sided test of equality, is very difficult to interpret. A
publication. confidence or credible interval should be provided
in addition to the P-value.
KEY POINTS t"OBMZTJTPGWBSJBODFJTB'JTIFSJBOIZQPUIFTJTUFTU
The following key points are discussed: for the equality of means of two or more groups
t4DJFOUJGJDJOGFSFODFSFRVJSFTCPUIEFEVDUJWFBOE t5IF#BZFTJBOBQQSPBDIPGGFSTBOJOTJHIUGVM
inductive inference re-interpretation of the analysis of variance in
t5ISFFNBJOBQQSPBDIFTUPJOEVDUJWFJOGFSFODFBSF terms of the joint posterior distribution of the
used group means.
t'JTIFSJBOJOEVDUJPOVTFTUIF1WBMVFBTBNFBTVSF
of evidence against the null hypothesis INTRODUCTION
t/FZNBO1FBSTPOJOEVDUJPODPOUSPMTUIFMPOHSVO The first issue of Statistical Viewpoint (1) presented
decision risk over repeated experiments basic probability distribution concepts and Microsoft
t#BZFTJBOJOEVDUJPOPCUBJOTEJSFDUQSPCBCJMJTUJD Excel tools useful in statistical calculations. The
measures of evidence from the posterior second (2) reinforced these concepts and tools
distribution through eight scientific decision-making examples.

[ ABOUT THE AUTHOR


For more Author
information, David LeBlond, Ph.D., has 29 years experience in the pharmaceutical and medical diagnostics
go to fields. He is currently a principal research statistician supporting analytical and pharmaceutical
gxpandjvt.com/bios development at Abbott. David can be reached at David.LeBlond@abbott.com.

86 PROCESS VALIDATION Process Design


David LeBlond

The third and fourth (3, 4) illustrated how probability THE PROCESS OF SCIENTIFIC INFERENCE
distributions aid in process knowledge building. This Let us start by defining several important terms as
issue shows how probability distributions are central follows (Note that these and additional terms are
to the understanding of hypothesis testing. Some of defined in the Glossary section at the end of this
the concepts of probability distributions introduced in article):
the first four issues of this column will be helpful in tParameter: In statistics, a parameter is a quantity
understanding what follows here. of interest whose true value is to be estimated.
Product, process, or method development can be Generally a parameter is some underlying variable
viewed as a series of decisions based on knowledge associated with a physical, chemical, or statistical
building: What type of packaging should be employed? model. When a quantity is described here as
What is the optimum granulation time? Can a true or underlying the quantity discussed is a
proportional model be used for calibration? The tools parameter.
we use to make such decisions are many. We rely on tHypothesis: A provisional statement about the
prior theory and expert knowledge to conceptualize value of a model parameter or parameters whose
the underlying mechanisms, select prototypes for truth can be tested by experiment.
testing, design experiments, and prepare our minds tNull hypothesis (H0): A plausible hypothesis
to interpret experimental results. We use dependable that is presumed sufficient, given prior knowledge,
measuring systems to acquire new data. Often, when until experimental evidence in the form of a
the results of our experimental trials are unclear (and hypothesis test indicates otherwise.
even sometimes when they are obvious), we employ tAlternative hypothesis (Ha): A hypothesis
statistical and probabilistic methods to guide us in our considered as an alternative to the null hypothesis,
decision making. though possibly more complicated or less likely
We humans are reasonably good at exploring our given prior knowledge.
world, finding explanations for things, and making tInference: The act of drawing a conclusion
predictions from our theories. Unfortunately, while we regarding some hypothesis based on facts or data.
all have a sense of rational intuition that (for the most tInductive inference: The act of drawing a
part) serves us well, the process we use (or should use) conclusion about some hypothesis based primarily
to build understanding and make optimal decisions on data.
from data has been the subject of heated debate by tDeductive inference: The act of drawing a
philosophers, scientists, and mathematicians for conclusion about some hypothesis based entirely
centuries (reference 5 gives a nice, readable overview; on careful definitions, axioms, and logical
see references 6 and 7 for more details). While the reasoning.
debate shows no sign of concluding in our own time,
three noteworthy approaches have emerged. Here We can identify the following four types of activities
we discuss some history and key concepts of each in the decision-making process which are illustrated in
approach and illustrate the central role probability Figure 1.
distributions play with two simple examples.
State Hypotheses About True Mean
EXAMPLE 1: TABLET POTENCY State one or more hypotheses about the underlying
Consider the case of a development team concerned true mean potency parameter. In Figure 1, three
with the true mean potency, averaged over batches, possible hypotheses (true process mean potency = 92,
produced by a tablet manufacturing process. In this 96, and 102) are illustrated. Such hypotheses are called
case, the tablet label claim (LC) and target for the point hypotheses because they specify a single fixed
manufacturing process is 100%LC. Individual batches value of an underlying parameter. Useful hypotheses
may have a mean potency that deviates slightly from often specify a range of values and are referred to as
100%, but batch means <90%LC are unacceptable. composite hypotheses. Note the following definitions:
While the team believes the process is adequate, tComposite hypothesis: A statement that gives
their objective is to provide evidence that the process a range of possible values to a model parameter.
produces acceptable batches. How can the team For example, Ha: true mean > 0 is a composite
validate their belief that the process mean potency hypothesis.
is acceptable? We will use this example to illustrate tPoint (simple) hypothesis: A statement that
the three difference systems of scientific inference for a model parameter is equal to a single specific
doing this. value. For example, H0: true mean = 0 is a simple
hypothesis.

PROCESS VALIDATION Process Design 87


David LeBlond

Figure 1: Scientific inference applied to the measurements will be normally distributed about
mean potency of a tablet manufacturing the true process mean potency. Thus they have a
process. mechanistic/probabilistic model to predict the likely
range of measured batch mean potency values to
1. State hypotheses about true mean
expect. This kind of model is known as a likelihood
2. Make H: 92 H: 96 H: 102 model, defined as follows:
deductive tLikelihood model: A description of a data
Inferences generating process that includes parameters (and
possibly other variables) whose values determine
4. Make the distribution of data produced. Specifically, the
80.5 89.8 96.2 102.7 108.3
inductive likelihood is
inferences
3. Obtain a sample estimate of mean likelihood = Probability of the observed data
if the hypothesis is true. [Equation 1]

For example 1, if low potency is a concern, the Note that the predicted range of the observed data
team may be interested in testing a composite null depends on the hypothesized true mean potency. The
hypothesis such as range will be different for H0 and Ha, with a mean
process potency of 96%LC being considered borderline
H0: true mean = or > 96%LC acceptable. The act of predicting (or simulating)
future data from such a likelihood model is purely
against a composite alternative hypothesis such as deductive. Such predictions are always true as long as
the underlying model and hypothesized value of the
Ha: true mean < 96%LC. underlying parameter are true.

The value of 96%LC might be considered the lowest Obtain Sample Estimate Of Mean
process mean potency consistent with an acceptable The team obtains potency measurements for 10
process. That is, if the team guesses that the true batches made using the process by testing composite
process standard deviation is about 2%LC, then 96% samples. This is the experimental part of the decision-
would be safely 3-sigma above the unacceptable making process. The measured batch potencies
lower limit of 90%LC. constitute raw data. For inferences a summary of
There is an asymmetry to the hypotheses such that the data will be sufficient. In the present case, the
the null hypothesis, H0, is considered a priori most observed mean of 93%LC and standard deviation of
likely, requiring the fewest assumptions (i.e., the 5%LC were obtained. The team noted that 93%LC is
process is performing acceptably). Following Ockhams below 96%LC. But is it far enough below 96%LC to
Razor (8), the simplest hypothesis is often the default reject H0?
H0. The hypothesis chosen as H0 is usually the one
that requires the lower burden of proof in the minds of Make Inductive Inferences
decision makers. From Figure 1 we see that induction is the opposite
If the team believes the process is adequate, the of deduction: On the basis of the data, we evaluate
alternative hypothesis, Ha, is considered less likely which hypothesis (H0 or Ha) is most likely. When
than H0 and would require postulation of some we reason inductively, we reflect back from the
special cause, defined as follows: observations to some underlying truth about
tSpecial cause: When the cause for variation nature, in this case the true process mean potency.
in data or statistics derived from data can be Unlike deduction, even if our data are valid, there
identified and controlled, it is referred to as is no guarantee that our conclusions about nature
a special cause. When the cause cannot be are correct. What we hope to do is make optimal
identified, it is regarded as random noise and scientific decisions and/or acquire evidence in
referred to as a common cause. favor of one or more of the hypotheses we have
considered, with some appreciation for the decision
Still the team needs supportive evidence because, risks.
if the true process mean is < 96%LC, the process may Given the larger than expected standard
result in subpotent or failing batches. deviation estimate (5 instead of the expected
2%LC), could the value of 93%LC have been due to
Make Deductive Inferences random variation? What inductive inference can the
The team believes that batch mean potency team make about the true process mean potency?

88 PROCESS VALIDATION Process Design


David LeBlond

To help the team with this decision we must t3FEVDJOHSBXEBUBUPTPNFEFDJTJPOTUBUJTUJD


review some background on methods of inductive whose sampling distribution is known from the
inference. likelihood, Equation 1
t6TJOHBi1WBMVFwGPSEFDJEJOHXIFUIFSBO
THREE SYSTEMS OF INDUCTIVE observed set of data deviates from a hypothesized
INFERENCE probability distribution more than would be
In the following sections we describe the three expected from random error. P-value, is defined
systems of inductive inference most commonly as: The probability of obtaining a result at least
employed today. Each description opens with as extreme as the one that was actually observed,
a brief historical perspective followed by an given that the null hypothesis is true. The fact that
application of the methodology to our Example 1. p-values are based on this assumption is crucial to
their correct interpretation.
Fisherian Induction
The term Fisherian seems appropriate because it Fisher offered his opinion about the proper P-value
was R. A. Fisher who described the approach with criterion for decision making (11, page 80):
the greatest clarity and laid its statistical foundations.
Before 1900, inductive inference from data was We shall not often be astray if we draw a
informal. The discipline of statistics, as we know it conventional line at 0.05.
today, was in its infancy. While many probability
distributions and models were known, workers Over 80 years later his conventional line is widely
typically summarized their data using tables and used as a criterion for approval of new pharmaceutical
graphs and made visual comparisons with theoretical products.
predictions. In 1900 the British mathematician Karl
Pearson described what is now called the Chi-square In our tablet-manufacturing example, Fisherian
test (9). This innovation was followed in 1908 with a induction would have us summarize our data as a
t-test for means by an Irish brewer, William Gosset, t-statistic, which can be done by using Excel syntax as
better known to us as Student (10), and in 1925 with follows:
ANOVA and an associated F-test for comparing
groups of means by the English geneticist and t statistic = sqrt(Sample Size)*(Observed Mean
mathematician Ronald Fisher (11). The F-distribution H0)/(Standard Deviation)
was so named in his honor (12).
The following are definitions of some terms that are = sqrt(10)*(93 - 96)/5
central to the Fisherian approach:
tStatistic: A summary value (such as the mean or = -1.8974 [Equation 2]
standard deviation) that is calculated from data.
A statistic is often used because it provides a good If the team were to repeat their experiment using 10
estimate of a parameter of interest different (independent) batches, they would of course
tSampling distribution: The distribution of data not get the same t-statistic because of sampling and
or some decision statistic calculated from data measurement variation. Conceptually, if they repeated
tt-statistic: The decision statistic used in Students the experiment many times, and if the true value of
t-test consisting of the ratio of a difference between the process mean was equal to 96%LC, the sampling
an observed and hypothesized mean divided by distribution of the t-statistics would be the probability
its estimated standard error distribution given in Figure 2. This distribution is known
tF-statistic: The decision statistic used in Fishers as the Students t-distribution.
analysis of variance hypothesis test consisting of Notice that the observed t-statistic (-1.8974) is
the ratio of two independent observed variances relatively far to the left side of the t-distribution. This
calculated from normally distributed data is because the observed mean of 93%LC is somewhat
tAnalysis of variance (ANOVA): A hypothesis below the hypothetical limit of 96%LC. Does this
test that uses the F-statistic to detect differences mean that the team should reject H0? Fisherian
among the true means of data from two or more induction suggests that they should reject H0 (in favor
groups. of Ha) if it is unlikely to obtain such a t-statistic or
one even more extreme by random chance alone. In
The Chi-square, t-test, and ANOVA hypothesis tests this case even more extreme would include all those
(and many others developed subsequently) rely on the values equal to or less than -1.8974. The probability
following ideas: of observing such extreme values by chance alone is

PROCESS VALIDATION Process Design 89


David LeBlond

Figure 2: Fisherian inductive inference model. of the sample size. If our team had tested 1,000
instead of only 10 batches, it is likely that the
Observed value = 1.8974 t-statistic would have been significant even if
the observed mean had been only slightly below
Probability density

Sampling 96%LC. Thus it is always wise to supplement a


distribution hypothesis test with a confidence interval for
for H0 the mean potency (see references 3 and 4 for a
discussion of confidence intervals).
t#ZJUTWFSZOBUVSF UIF1WBMVFJOEFYJODMVEFTOPU
P-value = 0.045
only the observed t-statistic value, -1.8974, but
0 also all those t-statistic values of lesser value that
Observed statistic (t) were not observed. In a sense, we are rejecting
H0, because it has failed to predict low t-statistic
values that were not in fact observed. The P-value
is similar to the index often used to classify the
relative performance of students: the observed
equal to the area under the distribution curve to the t-statistic is in the lower 5% of its class. Within
left of -1.8974. This probability is known as the P-value that category there are many poorer performers
and we can obtain it easily using the Excel cumulative with whom we are not concerned, but the P-value
distribution function as follows: disregards this information.
t5IFiDPOWFOUJPOBMMJOFwEFDJTJPOQPJOUGPSB
P-value=TDIST(-observed t-statistic, sample size-1,1) P-value of 0.05 may not be appropriate in all cases.
=TDIST(1.8974, 9, 1) The P-value per se does not take into account the
= 0.045. consequences of making an incorrect judgment
Thus, such extreme (or even more extreme) values of the t-statistic would occur by chance alone on average less than 1 time in 20 repeats of this experiment, or less than 5% of the time. If we use Fisher's conventional line, the team should reject H0 and conclude that the true process mean is below 96%LC and thus unacceptable. The P-value is also called the significance level of the hypothesis test, and when it is low (say < 0.05) it is considered a measure of evidence against the null hypothesis, defined as follows:
• Measure of evidence: In scientific studies, hypothesis testing is used to build evidence for or against various hypotheses. The P-value and Bayes factor (see below) are examples of measures of evidence in Fisherian and Bayesian induction, respectively.
While it offers an objective criterion, the P-value may not be an optimal measure of evidence of the validity (or not) of the null hypothesis (13), and it must be interpreted with care (14). The following cautions apply to the Fisherian perspective:
• As with all statistical procedures, the methodology is only applicable as long as all assumptions of the likelihood model (the data generation model) apply. For instance, in our example we assume normality and independence of the measurements.
• Notice in the equation for the t-statistic that its magnitude depends directly on the square root of the sample size. If our team had tested 1,000 instead of only 10 batches, it is likely that the t-statistic would have been significant even if the observed mean had been only slightly below 96%LC. Thus it is always wise to supplement a hypothesis test with a confidence interval for the mean potency (see references 3 and 4 for a discussion of confidence intervals).
• By its very nature, the P-value index includes not only the observed t-statistic value, -1.8974, but also all those t-statistic values of lesser value that were not observed. In a sense, we are rejecting H0 because it has failed to predict low t-statistic values that were not in fact observed. The P-value is similar to the index often used to classify the relative performance of students: the observed t-statistic is in the lower 5% of its class. Within that category there are many poorer performers with whom we are not concerned, but the P-value disregards this information.
• The "conventional line" decision point for a P-value of 0.05 may not be appropriate in all cases. The P-value per se does not take into account the consequences of making an incorrect judgment concerning H0. Further, it is difficult to integrate the P-value index with measures of decision error consequences.
• Most importantly, the P-value is neither of the following:
a. The probability that the initial experimental result will repeat, or
b. The probability that H0 is true.
To obtain these probabilities we need to use one of the other two systems of induction.

Neyman-Pearson Induction
Between 1927 and 1933, two of Fisher's contemporaries extended his groundbreaking ideas about hypothesis testing. Egon Pearson was the son of Karl Pearson (the developer of the Chi-square test) and a colleague of Fisher in London. Jerzy Neyman was a Polish mathematician in Warsaw who, among other things, developed the idea of confidence intervals. Their collaboration (15) led to a more general concept of inductive inference. They developed an extremely useful theory of optimal testing. Some key aspects of their ideas are shown in Figure 3.
As with Fisherian induction, Neyman-Pearson induction recognizes a decision statistic having a known sampling distribution. The range of values of the decision statistic regarded as unlikely (if H0 is true) is called the rejection region. This region is in the tail or tails of the sampling distribution and corresponds to some fixed tail probability (e.g., 0.05) called the Type I error. Type I error is defined as follows:

• Type I error: A decision error that results in falsely rejecting the null hypothesis when in fact it is true.

Figure 3: Neyman-Pearson inductive inference model. [The figure shows the sampling distributions of the t statistic for H0 and for Ha (probability density versus the t statistic), the decision point, and the corresponding Type I and Type II error areas.]

They envisioned decision makers using this decision statistic for all future hypothesis tests. In this way, all future hypothesis tests would have a fixed probability (e.g., 0.05) of incorrectly rejecting H0. Different Type I errors could of course be chosen for different situations, but the important point was to ensure that the rate of incorrectly rejecting H0 would be understood for each decision.
For instance, in the current example, our team may agree that a Type I error rate of 0.05 (i.e., one error in 20 hypothesis tests) is appropriate in their situation. Using the TINV Excel function, this error rate corresponds to the following fixed t-statistic:

Fixed t-statistic = -TINV(2*(Type I error rate), Sample size - 1)
                  = -TINV(2*0.05, 10-1)
                  = -1.8331.

According to the Neyman-Pearson scheme, any observed t-statistic less than -1.8331 would result in a rejection of H0. For our tablet manufacturing example we would reject H0 because -1.8974 < -1.8331. We should note here that the calculation of the t-statistic in the equation would be modified if larger (rather than smaller) potencies, or if both larger and smaller potencies, were considered unacceptable.
In comparing the Neyman-Pearson paradigm (Figure 3) with that of Fisher (Figure 2), notice that an additional distribution (Ha) is added. Neyman and Pearson recognized the need to consider the sampling distribution of the statistic (the t-statistic in the present example) both at a specific H0 (such as when the true mean potency equals 96%LC) and at a specific Ha (representing some arbitrary true mean potency that might be considered unacceptable). The probability of incorrectly accepting H0 when in fact Ha is true is called a Type II error, defined as follows:
• Type II error: A decision error that results in failing to reject the null hypothesis when in fact it is false.
Of course this Type II error will depend on the specific Ha being considered. One can make a plot of Type II error as a function of the value of the parameter (e.g., the true mean potency) associated with Ha. Such a plot is called a Power Curve or an Operating Characteristic curve for the hypothesis test, defined as follows:
• Power (or operating characteristic) curve: Power is equal to 1 minus the Type II error rate. The power curve of a hypothesis test is a plot of the Power versus the true value of the underlying parameter of interest.
The calculation of such power curves is an important part of experimental planning; however, it involves the use of probability distribution functions (such as the non-central t distribution) that are not available in Excel.
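Although Excel lacks the non-central t distribution, a power curve for this one-sided test can be sketched with SciPy. The sketch below is illustrative only and is not part of the original Excel workflow; the assumed true standard deviation (5%LC) and the grid of candidate true mean potencies are assumptions chosen for the illustration, not values stated in the article.

import numpy as np
from scipy import stats

n, alpha, h0_mean = 10, 0.05, 96.0
sigma = 5.0                                         # assumed true standard deviation, %LC
t_crit = stats.t.ppf(alpha, df=n - 1)               # fixed t-statistic, about -1.8331
true_means = np.linspace(90, 96, 13)                # candidate true mean potencies, %LC
nc = np.sqrt(n) * (true_means - h0_mean) / sigma    # noncentrality parameter at each true mean
power = stats.nct.cdf(t_crit, df=n - 1, nc=nc)      # probability of rejecting H0 at each true mean
for m, p in zip(true_means, power):
    print(f"true mean {m:5.1f} %LC   power {p:.3f}")

A useful check of such a calculation is that at a true mean of exactly 96%LC the computed power reduces to the Type I error rate of 0.05.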
One can picture these decision risks as a 2x2 table such as in Figure 4. In adopting a Neyman-Pearson paradigm, the team has moved from the objective of developing evidence with respect to their specific experiment to the objective of using a methodology with assured decision risks. The Neyman-Pearson approach does not concern itself with whether or not the true mean process potency is below or above 96%LC. It only assures the team that the many decisions they will make over their careers to reject H0s will be incorrect only 1 time in 20 (i.e., a probability of 0.05). Examples of practical situations where this point of view is appropriate are listed as follows:
• Control charting. For monitoring critical quality measures of a process or method, it may be useful to know the probability of incorrectly identifying an out-of-control situation (Type I error) or of failing to detect a condition that is unacceptable (Type II error).
• Diagnostics screening. A diagnostic test is analogous to a hypothesis test. The sensitivity (1 - Type II error rate) and specificity (1 - Type I error rate) of a diagnostic test are key measures that determine the medical value of the reported test result when the prevalence of disease in the tested population is known.
• Validation and product acceptance testing. For judging production costs and allocating resources, it may be desirable to fix the manufacturer's risk (Type I error, risk of incorrectly failing an acceptable batch) or consumer's risk (Type II error, risk of incorrectly passing a batch of some defined level of unacceptability).

• New drug application regulatory acceptance. For maintaining standards of risk, a regulatory agency may find it desirable to require all studies of a given type (e.g., clinical trials, shelf-life estimation, bio-equivalence tests) to maintain Type I error benchmarks. Requirements with respect to Type II error can assure that sample sizes are adequate (e.g., for safety studies).

Figure 4: Neyman-Pearson hypothesis testing error types. [The figure is a 2x2 table. Columns give the true state of nature (H0 or Ha); rows give the conclusion from the experiment (H0 or Ha). Concluding H0 when H0 is true is no error; concluding H0 when Ha is true is a Type II error; concluding Ha when H0 is true is a Type I error; concluding Ha when Ha is true is no error. Footnote: choosing the wrong H0 or Ha to study is sometimes called a Type III error.]

The Neyman-Pearson approach permeates much of today's scientific decision making. It represents a high watermark in terms of objectivity and consistency in inductive inference. When we use hypothesis testing methodology whose Type I and II error risks are known, we say that we are using a calibrated method, and this can have many advantages. A calibrated hypothesis test is defined as follows:
• Calibrated hypothesis test: A hypothesis test method whose Type I error, on repeated use, is known from theory or computer simulation.
However, the Neyman-Pearson approach may not be the appropriate paradigm for all situations. Some important considerations are listed as follows:
• While the approach does fix decision error rates over a series of experiments, it does not by itself provide a measure of evidence concerning H0 or Ha in any specific experiment.
• In most actual studies there will be multiple hypothesis tests, which may or may not be independent. We refer to groups of hypothesis tests that are associated with the same decision as a family. Thus we must consider both the family-wise error rates as well as the individual test rates obtained from Neyman-Pearson methodology. These family-wise rates suffer from a condition called multiplicity in that they can be difficult to predict.
• The Neyman-Pearson Type I error rate must not be confused with the Fisherian P-value. The Type I error rate is the rate of falsely rejecting H0 over many experiments. The P-value is a measure of evidence (albeit imperfect) against H0.
• Rigid adherence to a Type I error rate leads to conceptual problems. Type I error rates of 0.0499 and 0.0501 are very close in any practical situation, yet they could lead to very different decisions.
• If the decision statistic does not fall in the rejection zone, the Neyman-Pearson formulation recommends that H0 be accepted. However, from a scientific point of view it is more appropriate to fail to reject H0. A larger experiment would have a larger rejection zone that might include the observed result.
• As with the Fisherian system, the Neyman-Pearson system relies solely on the likelihood (probabilistic model of data generation) for both deductive and inductive inferences. However, developing and building evidence for mechanistic, predictive models often requires strong theory and experience. Finding and using such models as a part of risk management control strategies is the key to regulatory initiatives such as quality by design (QbD) (16). To incorporate such prior knowledge in a quantitative way, we must use Bayesian induction, which grew out of an earlier age.

Bayesian Induction
In 1739, the Scottish empiricist philosopher David Hume posed the following problem in inductive inference (17): "'tis only probable that the sun will rise tomorrow… we have no further assurance of this fact than what experience affords us."
Knowing the underlying probabilities of events was critical to the active 18th-century insurance and finance industries, whose profits depended on accurate inductive inferences from available experience and theory. It was also a problem of some interest to the liberal theologians of that time. In 1763, the problem was addressed quantitatively for the first time by two non-conformist ministers, Thomas Bayes and Richard Price (18). Their solution, Bayes rule, is to probability theory what the Pythagorean theorem is to geometry. The following definitions apply:
• Bayes rule: A process for combining the information in a data set with relevant prior information (e.g., theory, past data, expert opinion, and knowledge) to obtain posterior information. Prior and posterior information are expressed in the form of prior and posterior probability distributions, respectively, of the underlying physical parameters, or of predictive posterior distributions of future data.

• Prior distribution: A subjective distributional estimate of a random variable, obtained prior to any data collection, which consists of a probability distribution.
• Posterior distribution: A distributional estimate of a random variable that updates information from a prior distribution with new information from data using Bayes rule.
• Bayesian induction: A process for inductive inference in which the P-value is replaced with the posterior probability that the null hypothesis is true. In Bayesian induction, the respective prior distributions and data models (likelihoods) constitute the null and alternative hypotheses. In addition, one must specify the prior probability (or odds) that the null hypothesis is true.
Two centuries later, Harold Jeffreys, a British astronomer, greatly extended the utility of Bayes rule (19). In terms of hypothesis testing, it may be summarized as follows:

Probability that the hypothesis is true, given observed data
= K*(Probability of the observed data if the hypothesis is true)
x (Prior probability that the hypothesis is true), [Equation 3]

and by reference to Equation 1 we see that

Probability that the hypothesis is true, given observed data
= K*(Likelihood) x (Prior probability that the hypothesis is true). [Equation 4]

Thus we see that evidence for (or against) a given hypothesis may be obtained directly from probability theory, as long as we can supply the following:
• The likelihood, which is available for most practical problems. It is the same likelihood required for Fisherian and Neyman-Pearson approaches. However, in the Bayesian approach we must also have the following:
• The value K in Equation 4, which sometimes requires computing technology and numerical methods that have only recently become available. In many common cases, however, such as those we consider here, K can be easily evaluated in Excel.
• The prior probability of the hypothesis. This prior probability should be based on existing theory and expert knowledge. It can be problematic because experts will differ in their prior beliefs. On the other hand, this Bayesian approach provides a quantitative tool to gauge the effects of different prior opinions on a final conclusion. If prior knowledge is lacking, it is logical to assign equal probability to each of the hypotheses under consideration (e.g., 0.5 to H0 and 0.5 to Ha).
For our tablet-manufacturing example, it is straightforward to apply a Bayesian approach. We have shown previously how to obtain the prior and posterior distributions of a normal mean (see reference 3, Table IV, and reference 20, pp. 78-80). As illustrated in Figure 5, the prior (or posterior) probabilities of H0 and Ha are simply the areas under the prior (or posterior) distribution of the mean over the respective ranges of the mean (in the present example, below and above 96%LC for Ha and H0, respectively).
Let's calculate the probability of truth of H0 and Ha before and after the data are examined by the team. This particular example is illustrated in Figure 6.
Before examining data: All knowledge about the true process mean comes from the prior distribution. The team used a noninformative prior for both the mean and standard deviation. In Figure 6, the prior distribution for the mean is essentially flat and indistinguishable from the horizontal axis. This prior distribution was so broad that about half the probability density (i.e., area under the prior distribution of the mean) lies below 96%LC and half above. Thus the prior probabilities of H0 and Ha were each very close to 0.5. We can use shorthand to state this:

PriorProbH0 = 0.5 and PriorProbHa = 0.5.

While the team actually felt that H0 was more likely, they used this noninformative prior to provide a more objective test.
After examining data: All knowledge about the true process mean comes from the posterior distribution. Notice in Figure 6 that the area under the distribution to the right of 96%LC (i.e., the H0 range) is 0.045 while that to the left of 96%LC (i.e., the Ha range) is 0.955, so that

PostProbH0 = 0.045 and PostProbHa = 0.955.

Thus the team can be 95.5% confident (in a true probabilistic sense) that the null hypothesis, H0, is false.
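Under the noninformative prior used here, the posterior distribution of the mean is a scaled and shifted t distribution centered at the sample mean, so PostProbH0 can be computed from the observed t-statistic alone. The sketch below illustrates that shortcut using SciPy rather than Excel; it is not the authors' original calculation, and its only inputs are the observed t-statistic (-1.8974) and n = 10 from Example 1.

from scipy import stats

t_obs, n = -1.8974, 10
# With a noninformative prior, P(true mean >= 96%LC | data) equals the upper
# tail area of a t(n-1) distribution evaluated at -t_obs.
post_prob_h0 = stats.t.sf(-t_obs, df=n - 1)
post_prob_ha = 1 - post_prob_h0
print(round(post_prob_h0, 3), round(post_prob_ha, 3))   # approximately 0.045 and 0.955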
It is also useful to consider such probabilities in terms of odds, as illustrated in Figure 7. Odds is defined as follows:
• Odds: The ratio of success to failure in probability calculations. In the case of hypothesis testing where only H0 or Ha are possible (but not both), if the probability of truth of H0 is ProbH0, then the odds of H0 equals ProbH0/(1-ProbH0).

Figure 5: Bayesian induction inference model. [The figure shows a prior or posterior distribution of the parameter; the areas under the curve over the Ha and H0 ranges give the probability that Ha is true and the probability that H0 is true.]

Figure 6: Visualizing a one-sided hypothesis test for a normal mean using its posterior distribution. [The figure shows the essentially flat prior and the posterior distribution of the mean over the range 86 to 100%LC. The posterior area to the left of 96%LC (probability that Ha is true) is 0.955 and the area to the right (probability that H0 is true) is 0.045.]

Before data are examined, the prior odds of H0 are defined as

Prior Odds of H0 = PriorProbH0/PriorProbHa = 0.5/0.5 = 1/1.

So the prior odds that H0 is true are 1 to 1. After information from data has been incorporated, the posterior odds of H0 are

Posterior Odds of H0 = PostProbH0/PostProbHa = 0.045/0.955 = 45/955.

So the posterior odds that H0 is true are 45 to 955. This reduction in the odds of H0 from 1/1 to 45/955 is due to the evidence about H0 contributed by the data. As shown in Figure 7, we can form a measure of evidence by taking the odds ratio as follows:

B = (Posterior Odds of H0)/(Prior Odds of H0) = (45/955)/(1/1) = 45/955 = 0.0452.

B is referred to as the Bayes factor (21, 22). Bayes factor is defined as follows:
• Bayes factor (B): In Bayesian induction, the Bayes factor is the ratio of the posterior odds that H0 is true to its prior odds. A B value of 1/10 means that Ha is supported by the data 10 times as much as H0. Because the Bayes factor is normalized by the prior odds, it is a measure of evidence that primarily reflects the observed data.

Figure 7: The Bayes factor (B) as a measure of evidence for H0. [The figure shows the prior distribution of the mean, with areas ProbHa and ProbH0 over the Ha and H0 ranges; after data are added, the posterior distribution with its own areas ProbHa and ProbH0; B is defined as the posterior odds ProbH0/ProbHa divided by the prior odds ProbH0/ProbHa.]

The Bayes factor is a measure of evidence supplied by data in a hypothesis testing situation. Like the P-value, various decision levels have been proposed (see reference 19, page 432).
Notice something important here: the Bayesian PostProbH0 and the Fisherian P-value for our manufacturing example are both equal to 0.045. This seems remarkable considering that these indices are not measuring the same thing. The P-value is the probability of observing data at least as extreme as was observed, while the PostProbH0 is the probability that H0 is true. However, it can be shown that in many common situations (e.g., one-sided hypothesis tests involving normally distributed data) the P-value will be equal to the PostProbH0 when an appropriate noninformative prior is used to calculate PostProbH0 (see examples in references 20 and 22).

PROBLEMS WITH THE TWO-SIDED HYPOTHESIS TEST FOR EQUALITY
In Example 1, the hypothesis tested by our team is one sided because the ranges of the mean for Ha and H0 were each completely on one side or the other of the hypothesized value, 96%LC. A one-sided test is defined as follows:
• One-sided test: A null hypothesis stated in such a way that observed values of the decision statistic on one side (either large or small but not both) constitute evidence against it.
Our example uses composite hypotheses because Ha and H0 both consist of ranges rather than single points. It is also common to consider a two-sided situation in which the null hypothesis, H0, consists of a single point. Say, for instance, our team obtained the following data from their testing of n=10 batches:

Mean of 10 measured batch potencies = 95%LC, and
Sample standard deviation = 5%LC.

They might have considered testing a point-null hypothesis such as

H0: true mean = 100%LC

against an alternative hypothesis such as

Ha: true mean is not = 100%LC.

A Fisherian test of this point-null H0 is easily executed in Excel as follows:

t-value = SQRT(10)*ABS(95-100)/5 = 3.16
P-value = TDIST(3.16, 10-1, 2) = 0.012,

which would lead to rejection of H0 if the conventional line of 0.05 is employed.
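The same two-sided calculation can be checked outside Excel; the short sketch below simply re-expresses the SQRT/ABS and TDIST formulas above in Python with SciPy.

import math
from scipy import stats

n, xbar, s, mu0 = 10, 95.0, 5.0, 100.0
t_value = math.sqrt(n) * abs(xbar - mu0) / s     # about 3.16, as in the Excel formula
p_value = 2 * stats.t.sf(t_value, df=n - 1)      # two-sided P-value, about 0.012
print(t_value, p_value)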
Testing of point-null hypotheses such as this is very common. Unfortunately, as shown below, this is almost never a realistic or meaningful test.
Most would agree that there is little practical difference between a process mean potency of 99.999%LC and 100.001%LC. Yet it can be shown that if the sample size is large enough, H0 will be rejected with high probability, even if the true deviation from 100.000%LC is only 0.001%LC. This sensitivity to sample size is well known to statisticians because decision statistics, such as the mean, become very precise (small standard errors), but not necessarily more accurate, when sample size is increased. An essentially correct hypothesis can be rejected when the summary statistics are too precise. This type of counter-intuitive behavior in hypothesis testing is often due to an incorrect statement of the problem, known as a Type III error (see the Figure 4 footnote). Type III error is defined as follows:
• Type III error: A decision error that results in choosing the incorrect null or alternative hypothesis for use in a hypothesis test.
Rather than consider a point H0, it may be more appropriate to specify a small interval for H0. Type III errors are common in hypothesis tests for normality. In very large samples, normality is almost always rejected despite the fact that a histogram agrees visually with the fitted normal curve. Sometimes the rejection is caused by minor imperfections in the data, such as rounding, that are not material to the objectives of the hypothesis test.
One advantage of the Bayesian approach is that it forces one to think carefully about the correct formulation of a hypothesis testing problem. From the Bayesian hypothesis testing perspective, the prior distributions for H0 and Ha are the hypotheses being tested. An appropriate Bayesian approach to test this point-null H0 is given by Schervish (23, example 4.22, pp 224-5). Under the Bayesian paradigm, we require prior distributions for the mean and standard deviation for both H0 and Ha. This is because, in general, neither the true mean nor the true standard deviation parameters are known. Instead of a single parameter we have two parameters to consider, and these will have joint prior and posterior probability distributions. Joint probability distribution is defined as follows:
• Joint probability distribution: A probability distribution in which the probability density or mass depends on the values of two (bivariate) or more (multivariate) parameters simultaneously. A bivariate probability distribution can be visualized as a surface mesh or contour plot.
The H0 prior for the mean and standard deviation is shown in Figure 8. The prior for the mean (top panel) is a single spike at 100%LC, consistent with the point H0. The prior distribution for the standard deviation is a mildly informative IRG distribution with C=1 and G=50 (3, Table IV) and is shown in the lower panel of Figure 8.
The Ha prior for the mean and standard deviation is shown in Figure 9. Because under Ha the mean is permitted to vary, the joint distribution of mean and variance is displayed as a surface mesh plot. This is a joint LSSt-IRG distribution with R=1, T=100, U=7.07, C=0.5, and G=50 (3, Table IV).
This same Ha prior is displayed in Figure 10 as a contour plot. From this it is easier to see the two-dimensional shape and range that is identified as the alternate hypothesis.

Figure 8: Point-null hypothesis visualized as probability distributions for the mean and standard deviation. [The top panel, "Prior null hypothesis for mean," shows the probability mass concentrated in a spike at 100%LC over a mean axis of 80 to 120. The bottom panel, "Prior null hypothesis for standard deviation," shows the prior probability density over a standard deviation axis of 2 to 22.]

Figure 9: Alternative hypothesis visualized as a surface mesh plot of the joint probability for the mean and standard deviation. [The mesh plot shows the joint prior probability density over a mean axis of 80 to 120 and a standard deviation axis of 2 to 18.]

Figure 10: Alternative hypothesis visualized as a contour plot of the prior joint probability distribution for the mean and standard deviation. [The contour plot shows the same joint prior density over a mean axis of 80 to 120 and a standard deviation axis of 2 to 22, with contour levels from 0-0.001 up to 0.005-0.006.]

Unlike the one-sided case, the point-null situation requires that we also specify our prior beliefs about the truth of H0 and Ha. If the team had no prior preference for either H0 or Ha, they would assign equal prior probabilities to each. We can express that using shorthand notation as

PriorProbH0 = 0.5 and PriorProbHa = 0.5.

The calculation of the Bayes factor for this test can easily be done in Excel (23, equation 4.23), and a value of

B = 0.3495

is obtained. Given this information, we can invert the equation for B given in Figure 7 to solve for the posterior probability of H0. Noting that PostProbHa = 1 - PostProbH0, we find that

PostProbH0 = 0.259.
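The inversion from the Bayes factor to PostProbH0 is a one-line calculation. The sketch below assumes, as stated above, equal prior probabilities for H0 and Ha (prior odds of 1); it is offered only to make the arithmetic explicit.

B = 0.3495                                     # Bayes factor from the Excel calculation
prior_odds = 0.5 / 0.5                         # equal prior probabilities for H0 and Ha
posterior_odds = B * prior_odds                # posterior odds of H0
post_prob_h0 = posterior_odds / (1 + posterior_odds)
print(round(post_prob_h0, 3))                  # approximately 0.259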
So the team would conclude that there is a 25.9% probability that H0 is true and likely would not reject it. This is a troubling result because the Fisherian point-null test above yielded a P-value of 0.012, clearly suggesting rejection of H0. However, as noted above, the P-value groups the actual observation obtained with much more extreme observations that were not actually obtained. In the case of point-null hypothesis testing, Bayesian and Fisherian conclusions rarely agree (24, see p 151, table 4.2). When two sound methodologies lead to different conclusions we must wonder whether we have misidentified the problem (i.e., our friend, the Type III error again?). It is best to consider carefully what we mean by "not equal." We do this in the following section.
When applicable, the Bayesian approach to hypothesis testing gives answers in terms of probability statements about the parameters and the hypotheses themselves, which are impossible with the Fisherian and Neyman-Pearson approaches. This is very advantageous for risk analysis. Bayesian inductive inference is also useful in data-mining applications. As an example, the US Food and Drug Administration's Center for Drug Evaluation and Research now employs a Bayesian screening algorithm as part of their internal drug safety surveillance program (25). However, the Bayesian approach can be more demanding for the following reasons:
• While many Bayesian problems can be solved in Excel, more complicated situations may require advanced computing packages such as WinBUGS (26). Calculation of the K in Equation 4 or of the Bayes factor can sometimes be challenging (27).
• Bayesian approaches require specification of prior distributions and prior probabilities of the hypotheses under study. While noninformative priors may be used for objectivity, this may result in a loss of information as well as a lost opportunity to debate and thereby take advantage of the prior knowledge of different members of a project team. It is always critical to understand any effect that the prior may have on conclusions.
• When used for confirmatory or demonstration studies such as clinical trials, validations, quality control, or data mining, Bayesian hypothesis testing methods must be calibrated (usually by computer simulation) so that the Type I and II error risks are known.
A good, readable, basic introductory textbook for readers interested in the theory and methodology of Bayesian hypothesis testing is Bolstad (28).

THE TOST HYPOTHESIS TEST FOR EQUIVALENCY
Sometimes, as with method transfers or process changes in existing products or validation of new products, the objective may be to establish evidence for equivalency, rather than equality, with the usual burden of proof reversed. It is best to define a range for the mean difference, say L to H, indicative of equivalence. Then the null hypothesis is

H0: true mean difference < L or true mean difference > H

against the alternative hypothesis

Ha: L < true mean difference < H.

In this way we treat nonequivalence as the default state of nature and require the data to provide evidence that allows us to reject H0. A two one-sided testing (TOST) procedure (29) is based on requiring a confidence or credible interval to be completely contained within the range of Ha in order to reject H0. TOST is defined as follows:
• Two one-sided hypothesis test (TOST): A hypothesis test for equivalency that consists of two one-sided tests conducted at the high and low range of equivalence, each of which must be rejected at the Type I error rate in order to reject the null hypothesis of non-equivalence.
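As an illustration of the TOST mechanics (not an example from the article), the sketch below tests a hypothetical mean difference against assumed equivalence limits of L = -2 and H = +2. Every number here is an assumption chosen only to show the two one-sided tests and the equivalent confidence-interval check.

import math
from scipy import stats

d, se, df = 0.8, 0.6, 18          # hypothetical mean difference, its standard error, degrees of freedom
L, H, alpha = -2.0, 2.0, 0.05     # assumed equivalence range and Type I error rate

p_low = stats.t.sf((d - L) / se, df)     # one-sided test of H0: difference <= L
p_high = stats.t.cdf((d - H) / se, df)   # one-sided test of H0: difference >= H
reject_h0 = (p_low < alpha) and (p_high < alpha)

# Equivalent check: the (1 - 2*alpha) confidence interval must lie entirely inside (L, H).
t_crit = stats.t.ppf(1 - alpha, df)
ci = (d - t_crit * se, d + t_crit * se)
print(reject_h0, ci)

Both one-sided P-values must fall below the Type I error rate, which is the same condition as the 90% confidence interval lying wholly within the equivalence range.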
EXAMPLE 2: A TEST THAT THE MEANS OF THREE GROUPS ARE EQUAL
The ANOVA procedure developed by Fisher is a procedure for testing the null hypothesis of equality for group means. Fisher's ANOVA calculation procedure is illustrated elsewhere in this issue with an example (30) in which there are three groups (A, B, and C). We will illustrate the Bayesian approach to ANOVA here.
Zellner (31, pp 306-9) shows how to perform multiple regressions from a Bayesian perspective. His procedure can easily be adapted to the simple ANOVA case, and a Bayes factor for the ANOVA can be calculated in Excel. However, as with the point-null hypothesis, we will find that the conventional P-value tends to overstate the case for rejection of H0. Again, it is wise to ask if the ANOVA hypothesis test correctly frames the real questions we want to ask. We will assume here that the null hypothesis of equality

H0: meanA = meanB = meanC

is appropriate. The corresponding alternative hypothesis is

Ha: one or more of the three means are not equal.

An ANOVA F-test is often used when the real objective is to make comparisons among the group means. When an appropriate noninformative prior (3, 4) is used, the Bayesian approach to group means comparison agrees exactly with the standard ANOVA F-test (32, pp 123-43). But the Bayesian approach offers a useful advantage.

Figure 11: Visualizing the ANOVA null hypothesis (H0) relative to the posterior distribution of the group deltas. [The figure is a contour plot of the joint posterior distribution of delta_A (vertical axis) and delta_B (horizontal axis), each running from -0.5 to 0.5. The contour levels are critical F values (from 0-2 up to 30-32). The H0 point (delta_A = delta_B = 0) and the posterior mode are marked.]

The usual Fisherian paradigm regards the true group means as fixed entities. So one can only make comparisons with respect to the measured statistics, using such things as confidence intervals. However, the Bayesian paradigm regards the true group means as having a joint posterior distribution. It is possible to calculate and plot the joint posterior distribution using Excel and to visualize the single point within this distribution that corresponds to all the means being equal (i.e., the H0). Such a graph for the example (30) is shown in Figure 11, but it requires some explanation.
First, the axes of the plot are not the true group means, but the deviations (deltas) of these true means from their true grand mean, M, where

M = (mean_A + mean_B + mean_C)/3,

or

mean_C - M = -(mean_A - M) - (mean_B - M),

and, denoting the differences of the true group means from their true grand mean with a delta, we have

delta_C = -delta_A - delta_B.

Because delta_C can always be obtained knowing delta_A and delta_B, it is not an independent random variable. Consequently, we only need to concern ourselves with two of the three deltas, say delta_A and delta_B. With three groups, we can actually visualize the joint distribution as a two-dimensional contour plot.
Second, the point in Figure 11 corresponding to H0 (true means all equal) is the point delta_A = delta_B = 0, because only under this condition can all three group means be equal.
Third, the contour lines form ellipses about the joint distribution mode (the point delta_A = -0.22 and delta_B = -0.26, indicated as a red dot in Figure 11). (Recall that the mode is the point of a probability distribution having the maximum probability density.) They do not correspond to a probability density, as would be true for an actual distribution such as that in Figure 10. Instead, the contour lines are critical F values. In Excel, the probability that the true value of delta_A, delta_B is beyond a given contour ellipse corresponding to F is FDIST(F, 3-1, 15-3). By analogy with a univariate distribution, the further any point is from the distribution mode, the larger the F value and the less likely that point is as a candidate for the true delta_A, delta_B.
Finally, the contour line in Figure 11 that equals the critical F value of 3.885, obtained in Excel as FINV(0.05, 3-1, 15-3) (30), forms a 95% credible ellipse. This is a bivariate analogue of a univariate 95% credible interval (3). Notice in Figure 11 that H0 corresponds to an F value of 6.142 and is, therefore, beyond the 95% credible ellipse. As with the traditional ANOVA calculation, we reject H0.
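The critical value and the comparison at the H0 point can also be reproduced outside Excel. The sketch below is simply an alternative to the FINV/FDIST calls, reusing the F value of 6.142 at the H0 point quoted above.

from scipy import stats

df_num, df_den = 3 - 1, 15 - 3
f_crit = stats.f.ppf(0.95, df_num, df_den)   # about 3.885, matching FINV(0.05, 2, 12)
f_at_h0 = 6.142                              # F value at delta_A = delta_B = 0, from the text
print(round(f_crit, 3), f_at_h0 > f_crit)    # True: H0 lies beyond the 95% credible ellipse
# The tail probability beyond any contour, as with Excel's FDIST, is stats.f.sf(F, df_num, df_den).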
By restating the hypotheses in terms of joint posterior distributions that can be displayed visually, the Bayesian perspective offers deeper insight into the mechanics and interpretation of ANOVA.

SUMMARY
We have seen that scientific inference requires both deductive and inductive inferences. The three main approaches to inductive inference are: Fisherian, which uses the P-value as a measure of evidence against the null hypothesis; Neyman-Pearson, which controls the long-run decision risk over repeated experiments; and Bayesian, which obtains direct probabilistic measures of evidence from the posterior distribution.
Understanding of the P-value as the probability of observing a result as extreme or more extreme than that observed, assuming the null hypothesis is true, is critical to its proper interpretation. The P-value is based on unobserved results more extreme than those observed, so it may overstate the evidence against the null hypothesis. The P-value from a point-null hypothesis, or two-sided test of equality, is very difficult to interpret. A confidence or credible interval should be provided in addition to the P-value.
The Bayes factor, like the P-value, is a measure of evidence in favor of the null hypothesis contained in the data. It is based on an odds ratio and permits direct probability statements about the hypothesis tests being considered.
The Type II error rate is the probability of incorrectly failing to reject the null hypothesis when in fact the alternative hypothesis is true. The Type II error rate is related to Power and allows the construction of operating characteristic curves.
The analysis of variance is a Fisherian point-null hypothesis test for the equality of means of two or more groups. The Bayesian approach offers an insightful reinterpretation of the analysis of variance in terms of the joint posterior distribution of the group means.
The three approaches to hypothesis testing represent major advances in the way we use data to make inductive inferences about the products, processes, and test methods we develop in regulated industries. These approaches, and others not discussed here, have done much to improve our decision-making. They help us build evidence about underlying mechanisms and provide us a common language for objective communication of the evidence supporting our conclusions.
Still, humility and caution are in order. Despite these impressive methodologies, no single, coherent, generally accepted approach to inductive inference has yet emerged in our time. All experimenters know that Nature does not divulge her secrets easily. For the present, it is perhaps unwise to apply any induction methodology by rote without first carefully considering whether it is consistent with our objective and problem.

ACKNOWLEDGMENT
The text before you would be poorer if not for the help of others. I am most sincerely grateful to Paul Pluta for ideas, encouragement, and expert feedback; to Diane Wolden, who tirelessly kept the reader in mind; and to Susan Haigney for expertly laying the product to print.

GLOSSARY

Alternative hypothesis: A hypothesis considered as an alternative to the null hypothesis, though possibly more complicated or less likely.
ANOVA (analysis of variance): A hypothesis test, described by Fisher, that uses the F-statistic to detect differences among the true means of data from two or more groups.
Bayes factor (B): In Bayesian induction, the Bayes factor is the ratio of the posterior odds that H0 is true to its prior odds. A B value of 1/10 means that Ha is supported 10 times as much as H0. Since the Bayes factor is normalized by the prior odds, it is a measure of evidence that primarily reflects the observed data.
Bayes rule: A process for combining the information in a data set with relevant prior information (theory, past data, expert opinion, and knowledge) to obtain posterior information. Prior and posterior information are expressed in the form of prior and posterior probability distributions, respectively, of the underlying physical parameters, or of predictive posterior distributions of future data.
Bayesian induction: A process for inductive inference in which the P-value is replaced with the posterior probability that the null hypothesis is true. In Bayesian induction, the respective prior distributions and data models (likelihoods) constitute the null and alternative hypotheses. In addition, one must specify the prior probability (or odds) that the null hypothesis is true.
Calibrated hypothesis test: A hypothesis test method whose Type I error, on repeated use, is known from theory or computer simulation.
Composite hypothesis: A statement that gives a range of possible values to a model parameter. For example, Ha: true mean > 0 is a composite hypothesis.
Confidence interval: A random interval estimate of a (conceptually) fixed quantity, which is obtained by an estimation method calibrated such that the interval contains the fixed quantity with a certain probability (the confidence level).
Control chart: A time-ordered plot of observed data values or statistics that is used as part of a process control program. Various hypothesis tests are employed with control charts to detect the presence of trends or unusual values.
Credible interval: An interval estimate of a random variable, based on its probability distribution, which contains its value with a certain probability (the credible probability level).
Data: Measured random variable values, assumed to be generated by some hypothetical likelihood model, which contain information about the parameters of that model.
Deduction: The act of drawing a conclusion about some hypothesis based entirely on careful definitions, axioms, and logical reasoning.
F-statistic: The decision statistic used in Fisher's analysis of variance hypothesis test, consisting of the ratio of two independent observed variances calculated from normally distributed data.
Fisherian induction: A process for inductive inference, described most clearly by Ronald Fisher, that uses the P-value as a criterion for rejecting a hypothesis.
Hypothesis: A provisional statement about the value of a model parameter or parameters whose truth can be tested by experiment.
Induction: The act of drawing a conclusion about some hypothesis based primarily on limited data.
Inference: The act of drawing a conclusion regarding some hypothesis based on facts or data.
Joint probability distribution: A probability distribution in which the probability density or mass depends on the values of two (bivariate) or more (multivariate) parameters simultaneously. A bivariate probability distribution can be visualized as a surface mesh or contour plot.
Likelihood model: A description of a data generating process that includes parameters (and possibly other variables) whose values determine the distribution of data produced.
Measures of evidence: In scientific studies, hypothesis testing is used to build evidence for or against various hypotheses. The P-value and Bayes factor are examples of measures of evidence in Fisherian and Bayesian induction, respectively.
Mode: A point estimate of a random variable that is the value at which its probability density is maximized.
Multiplicity: When multiple hypothesis tests (e.g., control chart rules) are applied to different aspects (e.g., trending patterns) of a data set, the overall false alarm rate of any one test failing may be greater than that of any single test when applied alone. This statistical phenomenon is referred to as multiplicity.
Neyman-Pearson induction: A methodology for inductive inference, developed by Jerzy Neyman and Egon Pearson, that considers both a null and an alternative hypothesis. The null hypothesis is rejected in favor of the alternative hypothesis if the observed value of some statistic lies in its rejection region. The statistic and the associated rejection region are identified from statistical theory and are chosen to provide desired Type I or II decision error rates over repeated applications of the methodology.
Null hypothesis (H0): A plausible hypothesis that is presumed sufficient to explain a set of data unless statistical evidence in the form of a hypothesis test indicates otherwise.
Ockham's Razor: The doctrine of parsimony that advocates provisionally adopting the simplest possible explanation for observed data.
Odds: The ratio of success to failure in probability calculations. In the case of hypothesis testing where only H0 or Ha are possible (but not both), if the probability of truth of H0 is ProbH0, then the odds of H0 equals ProbH0/(1-ProbH0).
One-sided test: A null hypothesis stated in such a way that observed values of the decision statistic on one side (either large or small but not both) constitute evidence against it.
P-value: The probability of obtaining a result at least as extreme as the one that was actually observed, given that the null hypothesis is true. The fact that P-values are based on this assumption is crucial to their correct interpretation.
Parameter: In statistics, a parameter is a quantity of interest whose true value is to be estimated. Generally a parameter is some underlying variable associated with a physical, chemical, or statistical model.
Point (simple) hypothesis: A statement that a model parameter is equal to a single specific value. For example, H0: true mean = 0 is a simple hypothesis.
Posterior distribution: A distributional estimate of a random variable that updates information from a prior distribution with new information from data using Bayes rule.
Power (or operating characteristic) curve: Power is equal to 1 minus the Type II error rate. The power curve of a hypothesis test is a plot of the Power versus the true value of the underlying parameter of interest.
Prior distribution: A subjective distributional estimate of a random variable, obtained prior to any data collection, which consists of a probability distribution.
Probability density contour plot: A rendering of a bivariate distribution in which the distribution appears in the bivariate parameter space as contours of equal probability density.
Probability density surface mesh plot: A three-dimensional analogue of a two-dimensional probability density plot in which there are two, rather than only one, model parameters. The bivariate distribution therefore appears as a surface rather than as a curve.
Sampling distribution: The distribution of data or some summary statistic calculated from data.
Special cause: When the cause for variation in data, or statistics derived from data, can be identified and controlled, it is referred to as a special cause. When the cause cannot be identified, it is regarded as random noise and referred to as a common cause.
Statistic: A summary value (such as the mean or standard deviation) that is calculated from data. A statistic is often used because it provides a good estimate of a parameter of interest.
t-statistic: The decision statistic used in Student's t-test, consisting of the ratio of a difference between an observed and hypothesized mean divided by its estimated standard error.
Two-sided test: A null hypothesis stated in such a way that either large or small observed values of the decision statistic constitute evidence against it.
Two one-sided hypothesis test (TOST): A hypothesis test for equivalency that consists of two one-sided tests conducted at the high and low range of equivalence, each of which must be rejected at the Type I error rate in order to reject the null hypothesis of non-equivalence.
Type I error: A decision error that results in falsely rejecting the null hypothesis when in fact it is true. It is sometimes referred to as the alpha-risk or manufacturer's risk.
Type II error: A decision error that results in failing to reject the null hypothesis when in fact it is false. It is sometimes referred to as the beta-risk or consumer's risk.
Type III error: A decision error that results in choosing the incorrect null or alternative hypothesis for use in a hypothesis test.

REFERENCES
1. LeBlond, D, "Data, Variation, Uncertainty, and Probability Distributions," Journal of GXP Compliance, Vol. 12, No. 3, pp 30-41, 2008.
2. LeBlond, D, "Using Probability Distributions to Make Decisions," Journal of Validation Technology, Spring 2008, pp 2-14, 2008.
3. LeBlond, D, "Estimation: Knowledge Building with Probability Distributions," Journal of GXP Compliance, Vol. 12 (4), 42-59, 2008. See Journal of Validation Technology, Vol. 14, No. 5, 2008 for correction to Table IV.

4. LeBlond, D, "Estimation: Knowledge Building with Probability Distributions - Reader Q&A," Journal of Validation Technology, Vol. 14(5), 50-64, 2008.
5. Marden, J, "Hypothesis Testing: From p Values to Bayes Factors," Journal of the American Statistical Association, 95(452), 1316-1320, 2000.
6. Stigler, S, The History of Statistics: The Measurement of Uncertainty Before 1900, Belknap Press, Cambridge, 1986.
7. Daston, L, Classical Probability in the Enlightenment, Princeton University Press, Princeton, 1988.
8. William of Ockham, of the 14th century, advocated adopting the simplest possible explanation for physical phenomena; see Jeffreys, H (1961), Theory of Probability, 3rd edition, Oxford University Press, NY, page 342.
9. Pearson, K, "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling," Philosophical Magazine, 50, 157-175, 1900.
10. Gosset, W, aka Student, "The probable error of a mean," Biometrika, VI (1), 1-25, 1908.
11. Fisher, R, Statistical Methods for Research Workers, Oliver & Boyd, Edinburgh, 1925.
12. Snedecor, G and Cochran, W, Statistical Methods, 6th edition, Iowa State University Press, Ames, page 98, 1967.
13. Goodman, S, "Toward Evidence-Based Medical Statistics. 1. The P Value Fallacy," Annals of Internal Medicine, 130, 995-1004, 1999.
14. Gibbons, J and Pratt, J, "P-values: Interpretation and Methodology," The American Statistician, 29(1), 20-25, 1975.
15. Neyman, J and Pearson, E, "On the Problem of the Most Efficient Tests of Statistical Hypotheses," Philosophical Transactions of the Royal Society, Series A, Volume 231, 289-337, 1933.
16. International Conference on Harmonisation, ICH Harmonised Tripartite Guideline on Pharmaceutical Development Q8, Current Step 4 version, dated 10 November 2005.
17. Dale, A, A History of Inverse Probability, Springer, New York, page 545, note 38, 1999.
18. Price, R (1763), "An essay towards solving a problem in the doctrine of chances," in Dale, A, Most Honourable Remembrance: The Life and Work of Thomas Bayes, Springer, New York, 2003.
19. Jeffreys, H, Theory of Probability, 3rd edition, Oxford University Press, Cambridge, 1961.
20. Gelman, A, Carlin, J, Stern, H, and Rubin, D, Bayesian Data Analysis, 2nd Edition, Chapman and Hall/CRC, New York, 2004.
21. Goodman, S, "Toward Evidence-Based Medical Statistics. 2. The Bayes Factor," Annals of Internal Medicine, 130, 1005-1013, 1999.
22. Casella, G and Berger, R, "Reconciling Bayesian and Frequentist Evidence in the One-Sided Testing Problem," Journal of the American Statistical Association, 82(397), 106-111, 1987.
23. Schervish, M, Theory of Statistics, Springer-Verlag, New York, 1995.
24. Berger, J, Statistical Decision Theory and Bayesian Analysis, 2nd edition, Springer-Verlag, New York, 1985.
25. Lincoln Technologies, "WebVDME in Production at the FDA," WebVDME News, Volume 2(2), page 1, 2005. Available at http://www.lincolntechnologies.com.
26. Cowles, M, "Review of WinBUGS 1.4," The American Statistician, 58(4), 330-336, 2004.
27. Kass, R and Raftery, A, "Bayes Factors," Journal of the American Statistical Association, 90(430), 773-795, 1995.
28. Bolstad, W, Introduction to Bayesian Statistics, 2nd Edition, John Wiley & Sons, Hoboken, New Jersey, 2007.
29. Schuirmann, D, "On Hypothesis Testing to Determine if the Mean of a Normal Distribution is Contained in a Known Interval," Biometrics, 37, 617, 1981.
30. Vijayvargiya, A, "One Way Analysis of Variance," Journal of Validation Technology, Vol. 15, No. 1, 2009.
31. Zellner, A, An Introduction to Bayesian Inference in Econometrics, John Wiley, New York, 1971.
32. Box, G and Tiao, G, Bayesian Inference in Statistical Analysis, Addison-Wesley Pub. Co., Reading, MA, 1973.

Originally published in the Winter 2009 issue of The Journal of Validation Technology

PQ=Confirmation
Paul L. Pluta

PQ Forum provides a forum for validation practitioners to share information about stage 2 process qualification (PQ) in the validation lifecycle. Information about supporting activities such as equipment and analytical validation is shared. The information provided is intended to be helpful and practical so as to enable application in actual work situations. Our objective: Useful information.
Comments from readers are needed to help us fulfill our objective for this column. Suggestions for future discussion topics are invited. Please contact column coordinator Paul Pluta at paul.pluta@comcast.net or journal coordinating editor Susan Haigney at shaigney@advanstar.com with comments, suggestions, or topics for discussion.

KEY POINTS
The following key points are discussed:
• Validation performance qualification is expected to be confirmatory (i.e., the validation is expected to confirm the design, development, and other support work associated with the respective validation).
• Personnel involved in validation may not know or understand current validation expectations.
• Validation documentation may also be substandard due to incomplete development work.
• A strategy to change erroneous understanding of validation and define current validation expectations is proposed.
• The site validation policy should clearly state the site strategy and approach to validation and qualification. Appropriate personnel should be trained on this policy.
• If there is not complete confidence in the future success of the validation PQ, protocols must not be approved and validation should not be initiated.
• The site validation approval committee (VAC) should have significant input into development of the site policy and must then uphold the stated validation standards and expectations.
• The VAC should function as a surrogate regulatory auditor when reviewing and approving validation documents.

INTRODUCTION
Validation has evolved over the years; what was common and accepted practice in validation performance qualification (PQ) years ago is not acceptable today. The November 2008 FDA process validation draft guidance (1) clearly states the current expectation for PQ as follows: "Stage 2 - Process Qualification: During this stage, the process design is confirmed as being capable of reproducible commercial manufacture."
The draft guidance also states the following: "During the process qualification stage of process validation, the process design is confirmed as being capable of reproducible commercial manufacturing. This stage has two elements: [1] design of the facility and qualification of the equipment and utilities, and [2] performance qualification (PQ). During this stage, CGMP-compliant procedures must be followed, and successful completion of this stage is necessary before commercial distribution."

ABOUT THE AUTHOR
Paul L. Pluta, Ph.D., is a pharmaceutical scientist with extensive industrial development, manufacturing, and management experience. Dr. Pluta is also Adjunct Associate Professor at the University of Illinois, Chicago College of Pharmacy. He is also editor-in-chief of the Journal of Validation Technology and the Journal of GXP Compliance. Dr. Pluta has written several chapters and edited Cleaning and Cleaning Validation, Volume 1, Basics, Expectations, and Principles, published by PDA and DHI Publishing. He may be contacted by e-mail at paul.pluta@comcast.net. For more author information, go to gxpandjvt.com/bios.

The guidance further states: "A successful PQ will confirm the process design and demonstrate that the commercial manufacturing process performs as expected," "The level of monitoring and testing should be sufficient to confirm uniform product quality throughout the batch during processing," and "After establishing and confirming the process, manufacturers must maintain the process in a state of control over the life of the process, even as materials, equipment, production environment, personnel, and manufacturing procedures change."
The key word that is restated in these sections of the guidance is "confirm."
There is clear and repeated evidence that the US Food and Drug Administration expects manufacturers to thoroughly understand the process and confirm its acceptability in the PQ. The PQ is expected to confirm the design, development, and other preliminary work associated with the validation. The confirmatory PQ is a decision point that heavily contributes to the judgment to transition responsibility for the process from development to manufacturing for routine use. The FDA guidance (1) states, "Success at this (PQ) stage signals an important milestone in the product lifecycle and needs to be completed before the manufacturer commences commercial distribution of the drug product."
A successfully completed PQ is a strong statement that development of the subject operation has been completed and is ready for manufacture of commercial product. The most effective way to demonstrate readiness for routine use is through a successfully executed PQ, problem-free and mistake-free, that clearly shows that the design and development objective has been achieved.
The FDA guidance was written specifically for manufacturing processes. However, validation thought leaders are now applying the validation lifecycle approach to equipment, facilities, utilities, and other applications (i.e., any item or system to be validated). The PQ for each of these items should confirm acceptable performance. When the PQ for any of these items is successfully completed, responsibility for the object of the PQ (i.e., processes, equipment, computer systems, etc.) is transferred for routine use.

Why Don't We Confirm?
There are several reasons why the validation PQ is not always confirmatory. The primary reason is that people involved in the work of validation (i.e., research and development [R&D] scientists, engineers, and other technical people) are not aware of current regulatory expectations. Validation is performed in different areas on different things related to manufacturing (e.g., manufacturing processes, cleaning processes, analytical methods, equipment, HVAC systems, water systems, computer systems, and so on). Each of these is performed by people with different expertise, knowledge, and experience. These people have different levels of awareness of regulations and regulatory expectations for validation. Other factors may also influence the validation approach. Project timelines may cause an accelerated initiation of validation. Commercial demand requirements may also cause validation to be prematurely initiated. New active pharmaceutical ingredients (APIs) or other materials may not be available to perform experimental runs prior to validation. Costs may be prohibitive, especially when used material will be destroyed and not commercially distributed.
All personnel involved in validation must clearly know that validation is intended to confirm that the process, equipment, facilities, etc. are well understood, under good control, and can be expected to perform reliably and consistently throughout their respective lifecycle.
Again, the key message: Validation is confirmation.
This discussion addresses the following:
• Obsolete attitudes and beliefs. Some specific examples of misunderstanding validation, obsolete attitudes, and beliefs about validation are described.
• Validation equals confirmation. A successful PQ confirms that the subject of the validation can be expected to reliably perform.
• Defining current validation expectations. A strategy to change erroneous understanding of validation and clearly define current expectations is presented. Topics addressed include validation policy, training, experimental work, management commitment, and associated topics.

OBSOLETE ATTITUDES AND BELIEFS
Validation managers that attended the IVT Validation Week Europe conference in Dublin, Ireland, in March 2010 discussed obstacles they face in their jobs. The first subject to be raised by these managers was the general lack of understanding of current validation expectations by people at their respective manufacturing sites. People do not understand that validation should confirm process design and development. They learned validation at different times over the last 20 to 30 years and have not stayed current with regulatory expectations and industry practice.

PROCESS VALIDATION Process Design 103


Paul L. Pluta

Incomplete Preparation For Validation
Several managers at the conference complained that R&D, engineering, or technical people never totally complete a project before initiating validation. These people often do not understand that validation should confirm their technical work. The following statements describe their attitudes and beliefs:

• Validation is the final step in the development process
• We may have to do final process optimization in validation
• We will determine the final operating parameters in validation
• The project should be completely finished before we start validation, but if we see something to improve during validation, we will do it during the validation.

These statements demonstrate a fundamental misunderstanding of validation expectations. Development work must be completed before validation is initiated. Process parameters must be completely defined and clearly stated in manufacturing process directions. Processes using finalized defined parameters are then confirmed in validation.

Documentation Problems
Several managers also commented about validation documentation problems caused by incomplete technical work. For example, statements included the following:

• If there are major process problems during validation, we explain them in the protocol. If the explanation is good, then the process is validated.
• It is OK to make mistakes in paperwork because we will explain them in the write-up.
• As long as you sign and date changes in the protocol, QA will approve them.

Validation documents are fundamental documents frequently requested by regulatory auditors. Regulatory expectations for validation documents are high. Documents should demonstrate that design and development of the process, equipment, or system being validated has been completely finished. A successful validation then confirms that the process, equipment, etc., are well understood, appropriately controlled, and able to be used to reliably manufacture commercial product. Validation documents with process problems, errors, corrections, and other mistakes suggest the opposite, and the more problems seen in the documentation, the worse the impression. When one validation manager complained to engineering management about the quality of validation documentation, she was told that FDA expects to see mistakes and failures in validation. Mistakes show that we really performed the validation, and didn't "dry-lab" (falsify) the validation.

Validation professionals must persist in changing the obsolete attitudes described above and help their colleagues to know current requirements and expectations.

VALIDATION EQUALS CONFIRMATION
Validation performance is expected to be confirmatory (i.e., process, equipment, etc. requirements are expected to be confirmed in validation). There must be a good understanding of whatever system is being validated. Testing and results should demonstrate acceptable system performance. The PQ is expected to confirm the design, development, and other preliminary work associated with the respective validation. There must be an expectation of successful validation based on technical knowledge and experience obtained before validation is initiated. If necessary, development work, research studies, and even scale-up runs may need to be conducted prior to initiating the validation protocol.

When the validation protocol is written, we must know what to test and how to do the testing. Sampling must be clearly defined, appropriate for use, and justified. Acceptance criteria must be rational and justified. There should be complete confidence that validation tests will pass acceptance criteria. We must not be learning the performance of our equipment, or determining the limits of process capability, in validation.

We must not be testing something for the first time when we perform validation. If we need to do something for the first time, we should do it in development or in an engineering study, and then confirm its acceptability in validation. We must not add a new requirement or specification to the validation that has not been previously evaluated. Laboratories must not be using a new test method that has not been previously validated. We must not be trying software for the first time in validation.

To summarize again: validation is confirmation of what is already known.

Validation Should Be Boring
Consultant Michael Anisfeld, in his program Fundamentals and Essentials of Validation (2), teaches that validation should be "boring," meaning that there should be no surprises in the execution of a validation protocol and that the execution of the protocol must yield expected successful results.

Those responsible for validation should have no anxiety regarding validation performance. They should be simply confirming what they already know: that all testing in the validation will meet acceptance criteria. Professionals responsible for processes, equipment, or systems being validated, as well as the personnel conducting the validations, must approach their work with a confident expectation of complete success. They should not hope the validation will pass. They should be bored waiting for expected results.

DEFINING CURRENT VALIDATION EXPECTATIONS
Validation managers at the Dublin meeting discussed key elements of a strategy to change erroneous or obsolete understanding of validation and clearly define current validation expectations. These included validation policy, training, validation approval committee (VAC) responsibilities, experimental trials ahead of actual PQ runs, personal and organizational reputations, and senior management support.

Validation Policy
The site validation policy should clearly state the site strategy and approach to validation and qualification. These expectations should require that validation protocols contain requirements that are based on scientific and technical work done in advance of validation. Design and development work must be completed before validation is initiated. The validation should then confirm the design and development work. No optimization, fine-tuning, or other development work is allowed to be done in validation. The FDA 2008 draft process validation guidance (1) describes this approach as part of the lifecycle approach to validation.

Validation Training
After the policy is written and approved, training of appropriate associated personnel must be conducted. These include manufacturing, quality assurance, technical support, engineering, and other groups. Personnel involved in support work that is done preliminary to validation are key participants in this training. They must clearly understand that their work must be completed before starting validation, and that validation must confirm their work. This training will be especially useful to personnel who learned validation many years ago but have not remained current with new developments or regulatory expectations. In addition to the aforementioned site personnel, their respective management must be part of this training and support compliance with the policy.

Validation Approval Committee
The site VAC has an important role in the site validation program. The VAC should have significant input into development of the site policy. They have responsibility for approving validation protocols, results packages, and other associated documents. These documents must be in compliance with the site policy. Standards upheld by the site VAC send a strong message to those who are responsible for development work and writing protocols. Successful validation that confirms design and development work must be an expectation that is demanded by the VAC.

Pre-Work, Experimental Studies, And Engineering Studies
Personnel initiating validation must be completely confident of the future success of their protocols. When personnel submitting protocols are not highly confident of future success, protocols must not be approved and validation should not be initiated. Additional development work may be required to address unanswered questions. Laboratory experimental studies, pilot-scale work, or even full-scale trials may be warranted depending on the information available or the risk of failure.

Personal And Organizational Reputations
Site personnel, their respective functional organizations, and site management must support and be committed to confirmation in PQ validation and its ramifications: complete and technically based protocols and impeccable documentation. Without commitments from all the aforementioned individuals and groups, the validation program will not succeed. Good technically based validation documentation provides a strong statement about the organization and its management.

Good validation documentation also conveys a message about personnel in the organization. One validation manager suggested that authors who had written substandard validation documents be asked to evaluate the document from a reader's perspective: what impression would such substandard documents convey to an auditor? What judgments would be made about the author's competency? How significantly would the author's reputation be damaged? Substandard documents convey an unwritten message.

Validation documentation also reflects on the site validation approval committee. The integrity and competency of the site VAC is also in question when a substandard document is approved. The VAC should serve as a surrogate regulatory auditor when reviewing validation documents and must require that substandard documents be corrected or rewritten before being approved. Again, substandard documents convey an unwritten message.

Senior Management Support
Senior management support of policies that affect multiple areas in the organization is vital to any cross-functional policy. Topics discussed above may also affect costs and timelines. However, failures in validation are often even more costly, require extensive time and effort for correction, and are an embarrassment to the organization and all personnel involved. Without site management support, these problems will not be corrected.

CONCLUSIONS
Validation performance is expected to be confirmatory. The PQ is expected to confirm the design, development, and other preliminary work associated with the respective validation.

Personnel may not understand current validation expectations. R&D or technical people may not understand that development work should be completed before validation is initiated. This often leads to substandard documentation with process problems, errors, corrections, and other mistakes.

The validation PQ is expected to be confirmatory. There must be a good understanding of whatever system is being validated. There must be an expectation of success in validation based on technical knowledge and experience obtained before validation is initiated. New techniques, sampling, testing, and associated activities must have been tried before initiating validation. New acceptance criteria should not be tried without prior evaluation. New software should not be tried for the first time in validation. Validation is confirmation of what we already know. Validation should be boring, meaning that there should be no surprises in the execution of a validation protocol.

A strategy for changing erroneous or obsolete understanding of validation and clearly defining current validation expectations is proposed. This includes a validation policy that clearly requires validation to confirm design and development work. No optimization, fine-tuning, or other development work is allowed in validation. If there is not complete confidence in the future success of the validation PQ, protocols must not be approved and validation should not be initiated. Validation training of appropriate personnel and their management is recommended. The site validation approval committee should function as a regulatory auditor to uphold site validation standards. Substandard validation documents convey a negative impression of the site, the document author, and the VAC to regulatory auditors and other reviewers.

In brief, validation PQ equals confirmation.

REFERENCES
1. FDA, Process Validation: General Principles and Practices, Draft Guidance for Industry, November 2008.
2. Anisfeld, Michael, Fundamentals and Essentials of Validation, www.Globepharm.org, Deerfield, IL, USA.

ARTICLE ACRONYM LISTING
APIs: Active Pharmaceutical Ingredients
FDA: US Food and Drug Administration
PQ: Process Qualification
R&D: Research and Development
VAC: Validation Approval Committee

Originally published in the Autumn 2010 issue of The Journal of Validation Technology
