The initial portion of this paper is reprinted from THE JOURNAL OF CANADIAN PETROLEUM TECHNOLOGY, October-December, 1966, Montreal. The addendum was prepared for presentation with the paper at the SPE Fourth Annual Eastern Regional Meeting to be held in Pittsburgh, Pa., Nov. 2-3, 1967. Permission to publish any portion of this paper must be secured from the author.
ABSTRACT
The purpose of this paper is to emphasize that the
taking, assembling and averaging of the basic data for
further use in an analysis of a reservoir requires a profound knowledge of many of the specialized disciplines of
modern geology and petroleum engineering.
The various aspects of "Static Data," as presented in
this paper, analyze that problem and specifically discuss
the following major subheadings: (A) Porosity, (B) Permeability, (C) Facies Mapping, (D) Connate Water, (E)
Relative Permeabilities and Capillary Pressures, and (F)
Method of Least Squares.
This paper is essentially a general and philosophical discussion of the necessity of obtaining quality data and of the need for an integrated approach in the detailed analysis of any reservoir.
INTRODUCTION AND OBJECTIVES
Because we are primarily interested in the quantitative evaluations that will define the rock volume, porosity and water saturation of a potential reservoir, we tend to think of such a reservoir as a static, homogeneous, interconnected single rock volume. It is the intent of this dissertation to suggest that we should instead have a concept of a geologic unit that is porous, heterogeneously homogeneous, and consistently changing with time. As information comes in continuously, the three-dimensional picture of the reservoir in the mind of a production geologist and of a reservoir engineer should represent a flowing process which might change with each new piece of information.
Within a single geological unit, we can then expect that the horizontal and vertical differences are important, as well as the fact that the beds lying above and below the particular reservoir may have lateral ...
Porosity
As few reservoirs are cored completely, while normally all wells are logged, the logs are generally used to fill in the missing knowledge. Porosity values derived from core analysis are used as a standard, and it is usually implicitly assumed that the quantitative changes in rock porosity and permeability that occur from the time the core is removed from the reservoir until the measurements are made are insignificant. It is thus accepted that the laboratory data express quantitatively the in-situ conditions (3).
One method of correlating the core analysis porosity values with the porosity-type logs, such as the sonic, formation density, neutron logs and the microlog, can be accomplished as follows (a numerical sketch follows the Method (B) table below):
1. Convert the core analysis porosity values into a running average that is consistent with the interval of investigation of the petrophysical log to be calibrated (1' sonic, 3' sonic, neutron or formation density); i.e., average the 1.0-ft core-analysis values over each log's interval of investigation.
METHOD (A): NATURAL (Fig. 1A)

Core Porosity, % (1.0-ft samples)    Sonic Log Reading, μsec/ft
 9.5                                  55.0
11.4                                  58.0
10.1                                  58.4
10.2                                  57.7
11.5                                  61.0
10.3                                  61.6
 8.8                                  59.0
 7.9                                  57.0
 8.5                                  55.5

METHOD (B): FORCED FIT (Fig. 1B)

Core Porosity, % (1.0-ft samples)    Sonic Log Reading, μsec/ft
 7.9                                  55.0
 8.5                                  55.5
 8.8                                  57.0
 9.5                                  57.7
10.1                                  58.0
10.2                                  58.4
10.3                                  59.0
11.4                                  61.0
11.5                                  61.6
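The correlation of step 1 and the two pairing schemes can be sketched numerically; the following is a minimal illustration, assuming Python with numpy and using the tabulated values above (the running-average step appears only as a comment, since the original averaging intervals are not preserved here):

```python
import numpy as np

# Core-analysis porosities (%) and 1-ft sonic readings (microsec/ft),
# in depth order, from the Method (A) table above.
porosity = np.array([9.5, 11.4, 10.1, 10.2, 11.5, 10.3, 8.8, 7.9, 8.5])
sonic = np.array([55.0, 58.0, 58.4, 57.7, 61.0, 61.6, 59.0, 57.0, 55.5])

# Step 1 for a 3' sonic would smooth the 1.0-ft core values, e.g.:
# porosity_3ft = np.convolve(porosity, np.ones(3) / 3.0, mode="valid")

# Method (A), "natural": pair the values foot by foot, as logged.
# Method (B), "forced fit": sort each variable independently and pair
# by rank, forcing a monotone porosity/transit-time relation.
pairings = {
    "natural": (porosity, sonic),
    "forced fit": (np.sort(porosity), np.sort(sonic)),
}

# A straight-line fit through each pairing gives a calibration of the
# kind plotted in Figure 1.
for name, (phi, dt) in pairings.items():
    slope, intercept = np.polyfit(phi, dt, 1)
    print(f"{name}: transit time = {slope:.2f} * porosity + {intercept:.1f}")
```

The forced fit, by construction, yields the tighter-looking calibration line of Figure 1B.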
... the presence of a filter-cake build-up; filter-cake build-up therefore suggests permeability, which is the other necessary property of net pay.
Well and logging conditions and the desired type of geological-engineering information determine which combination of logs should be run. Although log estimates of pay and porosity values are routinely made, such estimates are evaluated from information recorded by a device which is usually averaging over several feet. Log evaluations are often quite reliable (21), and this is particularly true when dealing with gross values; however, the net values obtained from logs should always be considered qualitative until corroboration is obtained by independent means. Particular attention should be given to the contact logs in determining the gross pay section; those portions of the gross thickness which, on the basis of resistivity, radioactivity (lithology), transit time, neutron count, the absence of mud filter cake, etc., appear to be non-pay should be eliminated.
In cases where well conditions permit, and the desire for information warrants the costs, a core of
the hydrocarbon-bearing zone is recovered for inspection and laboratory testing. It is not usual to have a
core on every well in a pool, and frequently, at best,
only a portion of the zone has been cored, the remainder having been drilled through. In all cases,
the laboratory tests reveal the behaviour of the core
under the conditions set in the laboratory, and these
conditions seldom duplicate in-situ conditions in the
reservoir.
In routine analyses, those sections of core which are
not obviously tight are inspected and a few selected
samples are often subjectively assumed to represent
the whole. These samples (often only small plugs)
are then extracted and dried, thus removing the fluid
content of the rock, possibly also some solid fines and
the water of crystallization. Measurements of porosity are ordinarily conducted in a Boyle's Law porosimeter with air. A combination of factors results in a porosity value which often inclines toward optimism (3), some of the obvious causes being the removal of solids from the core, the removal of crystallization water from the core and the effect of overburden pressure on porosity; eventually, alterations of the in-situ rock properties during the coring operations must also be considered. In carbonates, furthermore, a clear distinction should always be made between the effective and the total porosity measurements.
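To make the Boyle's-Law measurement concrete, here is a small sketch with hypothetical volumes and pressures (not the procedure of any particular instrument): a reference cell charged with air expands into the sample chamber, and the grain volume follows from the isothermal ideal-gas balance.

```python
def boyle_law_porosity(bulk_volume_cc, p1, v_ref_cc, p2, v_chamber_cc):
    """Fractional porosity from one gas-expansion measurement.

    The charged reference cell (p1, v_ref_cc) expands into the sample
    chamber; at equilibrium pressure p2 the gas fills all space not
    occupied by grains, so p1 * V_ref = p2 * (V_ref + V_chamber - V_grain).
    """
    v_grain = v_ref_cc + v_chamber_cc - (p1 * v_ref_cc) / p2
    return (bulk_volume_cc - v_grain) / bulk_volume_cc

# Hypothetical example: a 15.0-cc plug; 100-psia charge in a 20.0-cc
# reference cell equilibrating at 88.9 psia in a 16.0-cc chamber.
print(round(boyle_law_porosity(15.0, 100.0, 20.0, 88.9, 16.0), 3))  # ~0.10
```

Note that this yields an interconnected pore volume at zero net overburden stress, which is one of the reasons the routine value inclines toward optimism.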
Permeability
To understand the averaging of the permeability data entering into engineering calculations, some familiarity with the process of deposition is mandatory. Petroleum reservoirs may be considered to be composed of aggregates of sedimentary particles of like or similar origin which have been brought together individually under the action of a driving force and which may or may not have been sorted into a size order. The argillaceous content, if any, of the rock decreases with the degree of sorting, and there is a functional form to the particle size distribution (1, 2). The amount of clay and the particle size at a point within the reservoir are a function of these properties at an adjacent point.
The framework of particles may be cemented together or chemically altered by ion replacement at deposition or subsequently, thus adding a secondary porosity system to that pore order which existed at deposition. There may be a sharp line of demarcation between pay and non-pay, or a gradual transition reflecting the rate of change of deposition from point to point - laterally if deposited at the same geological time, or vertically if deposited at different geological times.
The deposition process, which has been oversimplified above, may be subject to periodic forces, allowing muds, or non-pay material, to be interbedded within the reservoir deposit. The presence of organic deposits generally precludes this type of interbedding, as a widespread change in conditions of deposition usually kills the organic growth. Frequently, periodic changes may be quite localized, resulting in the transgressing and regressing of a single organic bed to yield two or more vertically separated beds.
It is the task of the reservoir analyst, using an integrated approach, to define that portion of the reservoir rock which contains the producible, active, moveable hydrocarbons and, based upon laboratory tests
on a few samples of the reservoir rock and their fluid
contents, to evaluate the amounts and economic rates
of fluid recovery from the entire reservoir as a function of real time. It should be noted that data in the
interwell area are always incomplete and, as a result,
the reservoir parameters are always interpretative
and inter- or extrapolated. Unfortunately, it is less
well understood that the quantitative data available
on the individual wells are usually also incomplete
and always highly subjective.
The permeability measurement in routine analysis
is usually an air value which is therefore subject to a
slippage or Klinkenberg (4) correction and possibly
even to a turbulence correction (5). Air permeability
in sands is usually greater than the "unknown true"
permeability to hydrocarbons by a factor related to
the absolute permeability level. At a permeability range of 100 md or less, the air permeability may be two to many times as large as the "unknown true" value (6) which governs the movement of reservoir fluids; i.e., oil, gas, water.
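The slippage correction itself is straightforward to sketch: measured air permeabilities at several mean flowing pressures follow k_air = k_inf (1 + b / p_mean), so a straight-line fit of k_air against 1/p_mean extrapolates to the slip-free value. A minimal illustration, assuming numpy and hypothetical measurements:

```python
import numpy as np

p_mean = np.array([2.0, 4.0, 8.0, 16.0])    # mean flowing pressure, atm
k_air = np.array([14.0, 12.0, 11.0, 10.5])  # measured air permeability, md

# Klinkenberg plot: k_air is linear in 1/p_mean; the intercept at
# 1/p_mean = 0 (infinite pressure) is the slip-free permeability.
slope, k_inf = np.polyfit(1.0 / p_mean, k_air, 1)
print(f"k_inf = {k_inf:.1f} md, b = {slope / k_inf:.2f} atm")
```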
Limestones and dolomites usually show little correlation between the amount of correction and the permeability level. This might be expected because of the
lack of a Kozeny-type (7) relationship between porosity and permeability. According to Kozeny, for a
sand of a given porosity level within fixed limits, the
permeability may often be predicted reasonably well.
For a limestone or dolomite, any eventual relationship
between porosity and permeability is much weaker or
usually non-existent, so that, for a given porosity
level, the air permeability as reported in the laboratory may be almost anything.
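For reference, one common statement of the Kozeny-type relation cited in (7) - given here in the Kozeny-Carman form, since the exact variant the author intended is not preserved - is

\[
k \;=\; \frac{\phi^{3}}{c \, S_{v}^{2} \, (1-\phi)^{2}} ,
\]

where k is the permeability, \(\phi\) the fractional porosity, \(S_{v}\) the specific surface per unit volume of solids, and c the Kozeny constant (about 5 for many unconsolidated sands). With \(S_{v}\) and c roughly fixed within a sand, the porosity alone brackets the permeability level; it is precisely this link that is weak or absent in limestones and dolomites.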
A very permeable section, such as may occur in
limestone or dolomite as fractures, fissures or vugs,
may be beyond the evaluation capacity of the test
equipment; in any case, such sections are not amenable
to representative measurements.
The transverse permeability (8) (Figure 2) of a
core is commercially measured using a geometrical
shape factor. The permeability at 90 degrees to the
first measurement is subsequently taken, with the
high value reported as Kmax and the low value recorded
as K90. No other effort is usually made to orient the core beyond the above-mentioned, doubtfully reliable ...

Facies Mapping

... The vertical evaluation of discrete units within a specific stratigraphic interval can be conveniently made by calculating and mapping their respective centers of gravity (Figure 6d). As it is usually also desirable to evaluate quantitatively the spread of the units whose vertical distribution is being studied, standard statistical methods, such as the variance and the standard deviation, become useful tools. For example, the standard deviation defines quantitatively the spread relative to the center of gravity discussed above. Combined
center-of-gravity and standard-deviation maps can
then be constructed to show the variations within the
stratigraphic unit studied.
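A minimal sketch of these two per-well statistics, assuming numpy and reusing the worked numbers of Figure 6d (three porous units, 10, 25 and 30 ft thick, centered 25, 70 and 110 ft below the top of the interval):

```python
import numpy as np

midpoint_ft = np.array([25.0, 70.0, 110.0])   # unit midpoints below top
thickness_ft = np.array([10.0, 25.0, 30.0])   # unit thicknesses (weights)

# Thickness-weighted center of gravity of the porous units.
c_of_g = np.average(midpoint_ft, weights=thickness_ft)

# Thickness-weighted spread about that center (variance, std deviation).
variance = np.average((midpoint_ft - c_of_g) ** 2, weights=thickness_ft)
std_dev = np.sqrt(variance)

print(f"center of gravity = {c_of_g:.0f} ft, standard deviation = {std_dev:.0f} ft")
```

Repeating the calculation well by well supplies the values contoured on the combined center-of-gravity and standard-deviation maps.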
Other methods found useful in modern geological-stratigraphic-engineering studies include Trend Surface Analysis and Factor Analysis.
The principle of trend surface analysis is to divide
observed data into "regional" and "residual" data.
The purpose is to find, to map and eventually to investigate the evidence of local structures or significant data anomalies which may be obscured by variations in regional trends. The regional trend is assumed to be represented by smooth data; therefore,
a plane, quadratic or higher order surface is developed
which most closely fits the observed data. The residuals, which are the positive and negative variations
of observed data from the calculated smoothed surface, are then examined for significance.
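A minimal sketch of a first-order (plane) trend surface, assuming numpy and hypothetical well data; higher-order surfaces simply add columns such as x², xy and y² to the design matrix:

```python
import numpy as np

# Well coordinates and the mapped variable observed there (hypothetical).
x = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
z = np.array([10.0, 11.0, 12.5, 9.0, 10.2, 11.0])

# Least-squares plane z = a + b*x + c*y represents the regional trend.
A = np.column_stack([np.ones_like(x), x, y])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

# Residuals (observed minus trend) are then mapped and examined for
# local structures or significant anomalies.
residuals = z - A @ coeffs
print("plane coefficients:", np.round(coeffs, 3))
print("residuals:", np.round(residuals, 2))
```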
Factor analysis is basically a computer method of
sorting data. The method identifies variables which
increase or decrease together or which vary inversely
with each other. It is a type of statistical analysis
(multiple correlation analysis) which can be used to
investigate, for example, the quantitative observations
made by a geologist or a stratigrapher in describing
a series of rock samples. These observations are sorted,
arranged and analyzed to determine whether the va-'
riations represented can be accounted for adequately
by a number of basic categories (facies) smaller than
that with which each observation started. Thus, data
composed of a large number of variables in the first'
instance may be explained in terms of a smaller num-,
ber of reference variables.
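As an illustrative stand-in for this sorting idea, the following sketch applies principal-component analysis (a close relative of the factor analysis described; the data are hypothetical) to standardized sample descriptions:

```python
import numpy as np

# Rows = rock samples, columns = observed variables (all hypothetical).
data = np.array([[3.0, 1.0, 0.2, 5.0],
                 [2.8, 1.1, 0.3, 4.8],
                 [0.5, 4.0, 2.9, 1.0],
                 [0.6, 3.8, 3.1, 1.2]])

# Standardize, then find the principal directions of joint variation.
z = (data - data.mean(axis=0)) / data.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))

explained = eigvals[::-1] / eigvals.sum()   # largest first
print("fraction of variance per factor:", np.round(explained, 2))
```

If the first one or two fractions dominate, the many observed variables can indeed be summarized by that smaller number of reference variables (facies).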
The above procedures of mathematical, graphical
and computer applications of the integrated analysis
of a reservoir are derived from notes on a facies-mapping course offered by Mr. John Rivette, a Calgary-based consulting geologist.
Connate Water
The water content at a specific point within the
reservoir is related to the structural position of that
point, to the pore geometry and to the wettability of
the rock (17). As the wettability of the rock at reservoir conditions is difficult to determine by laboratory measurements of the wettability of a core (18), due to possible changes in conditions which affect the characteristics of the rock, it is usually most acceptable to establish the connate water saturation by direct measurements. Because of the usual indisputability of direct measurements, the preferred method of determining the connate water saturation of rock is thus through the analysis of cores cut with non-water-base muds (21). Unfortunately, it is too often forgotten that, even if coring with oil-base mud, the amount of connate water subsequently determined is greatly in- ...
Relative Permeabilities and Capillary Pressures

... \[
J(S_w) \;=\; \frac{P_c}{\sigma \cos \theta}\,\sqrt{\frac{k}{\phi}}
\]

where: P_c = capillary pressure (dynes/cm²), σ = interfacial tension (dynes/cm), k = permeability (cm²), φ = fractional porosity, and θ = contact angle.
Another method of averaging capillary pressure data is the statistical approach suggested by Guthrie et al. (22). This last method is described in some detail on page 158 of "Petroleum Reservoir Engineering" by Amyx, Bass and Whiting.
However, as in the case of averaging the relative permeability ratios, if the pore-size distribution is markedly different among the various samples, neither the above normalizing methods nor the statistical approach will satisfactorily solve the scattering problem. In such a case, it may be mandatory to perform all the individual calculations for each different bed separately and then attempt to combine the final results numerically.
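As a small sketch of the dimensionless normalization behind the correlation above (consistent c.g.s. units assumed; the sample values are hypothetical):

```python
import math

def leverett_j(pc, sigma, cos_theta, k, phi):
    """J(Sw) = (Pc / (sigma * cos theta)) * sqrt(k / phi)."""
    return pc / (sigma * cos_theta) * math.sqrt(k / phi)

# Hypothetical sample: Pc = 5.0e4 dynes/cm^2, sigma = 72 dynes/cm,
# cos theta = 1.0, k = 1.0e-9 cm^2 (about 100 md), phi = 0.20.
print(f"J = {leverett_j(5.0e4, 72.0, 1.0, 1.0e-9, 0.20):.3f}")
```

Converting each sample's capillary-pressure curve to J(S_w) removes the first-order effect of its individual k and φ, so curves from different samples can be averaged on one plot - provided, as noted, that the pore-size distributions are similar.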
Method of Least Squares

... the after-effects of all these errors. There are available methods to guard against or to recognize mistakes (large ones, at least); however, these procedures, each of which might be different depending on the nature of the problem itself, are not within the scope of this discussion (for example, Quality Control Charts). A systematic error can be predicted logically - in which case it hardly deserves the name of error.
Modifying either the mathematical-physical logic or
the procedures of measurement, or compensating for
the error can remedy the situation. This process of
elimination leaves us finally with the 'random' errors
only.
A method often practiced is to measure the same physical quantity repeatedly, either on the same object or on similar objects if they represent physical quantities of the elements of some collective. Thus, there accrues a surplus of data which are afflicted with errors and which are most likely incompatible. Such a surplus of "erroneous" data necessitates some practical means of finding the best answer. To achieve that practical goal, Legendre suggested in 1806 the 'method of least squares,' in which he proposed to minimize the sum of the squares of such errors from the unknown "true" value. With this he established a mathematically sound standard procedure.
C. F. Gauss had used the same method earlier (1795), and he managed to establish a logical foundation for the method using, and thus contributing to, the new concept of probability (Laplace, 1811). Indeed, he
developed, using a few quite reasonable basic assumptions, the Gaussian error distribution function for
the study of these random errors. Since those early
attempts at studies of random errors, more powerful
tools of probability theory have been applied successfully to the problem and better results have been obtained; for example, interval estimation instead of
the point estimation, correlation theory, etc. When
searching for numerical values of some functional
relationship between one or more independent variables and some dependent variables from a surplus of
'observation' points, one may thus justifiably apply
the method of least squares. However, if applied, we must always realize and keep in mind that this method does not, and cannot, serve as a substitute for creative
thinking, because it cannot in itself define, or even
search for, the most logical type of functional relationship among the several variables and among the
many available and possible solutions. However, having selected, based on applied physical logic, some
type (class) of functional relationship, the method
of least squares will obtain the 'most probable' one,
from all the classes (types) studied while it systematically considers all the given observation points.
Due to the internal logic of the actual evaluation
process of the method of least squares, polynomials
of the nth degree are most suitable for the variable
classes of functions to be studied; this leads then to n + 1 simultaneous linear equations (the normal equations), for which we have standard methods of solution. The classes of transcendental functions are usually less desirable, because simultaneous non-linear equations need to be solved (which is possible using digital computers, nevertheless).
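A minimal sketch of such a polynomial fit through the normal equations, assuming numpy and a hypothetical surplus of observation points:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0])   # surplus of noisy data
n = 1                                            # degree chosen by the analyst

# Design matrix with columns x**n, ..., x**0; minimizing the sum of
# squared errors gives the normal equations (A^T A) c = A^T y.
A = np.vander(x, n + 1)
coeffs = np.linalg.solve(A.T @ A, A.T @ y)

print("coefficients (highest power first):", np.round(coeffs, 3))
print("sum of squared errors:", round(float(np.sum((A @ coeffs - y) ** 2)), 3))
```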
Having then decided on the method of least squares as
a tool to be applied to smooth the randomness of our
errors, we are somewhat committed to accept the sum of the squares of the errors as a measure of the 'non-fit' of the predetermined function. In the statistical
interpretation, this leads actually to the probabilistic
concept of variance (or some estimate of the variance). In fact, the method of least squares will
select, from the types (classes) of functions studied,
the one for which the estimate of that variance is
a minimum. With all the above limitations and physical logic in mind, we have to agree that it is folly
to attempt to use the same criterion which was used
to study the randomness from some average to select
a class (type) of function. Thus, while investigating
the randomness, it is important to understand that
random error pertains only to the operation of selecting a sample or making a measurement and that it
does not judge at all the sample or the measurement
itself.
Despite this limitation, there are available procedures which help in the selection of the best class
(type) of polynomial to be used in a study of almost
any problem on hand. For example, if the prime interest is to write some analytical form of functional
relationship amongst several variables, then one might
compare the estimates of the variances for several
different functions and select the representation for
which that estimate is the smallest. However, attention is called to a strongly growing branch of mathematics called applied statistics, which determines more correctly the methods of solution in this general area, and any analyst working with randomly scattered data should be aware of it and should be at least partly conversant with the general aspects of its use. As a further limitation of the above-discussed general method of least squares, this technique presumes that only the dependent variable is afflicted with random errors - a technically unrealistic and often unfounded supposition. Methods are now available, however, which permit the fitting of selected classes (types) of functions when all variables are afflicted with random errors; that is, a least-squares fit which minimizes the sum of the squared perpendicular distances of the individual points from the 'best fit' curve; this is similar to 'eye-balling' data in order to fit the best curve.
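A sketch of such a perpendicular-distance ('total' least-squares) line fit, obtained here via the singular value decomposition and assuming numpy (the data are hypothetical):

```python
import numpy as np

x = np.array([0.0, 1.1, 1.9, 3.2, 3.9])
y = np.array([0.2, 0.9, 2.1, 2.8, 4.1])

# Center the points; the best-fit line passes through the centroid along
# the leading singular direction, which minimizes the sum of squared
# perpendicular distances.
pts = np.column_stack([x, y])
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
direction = vt[0]

slope = direction[1] / direction[0]
intercept = centroid[1] - slope * centroid[0]
print(f"orthogonal fit: y = {slope:.3f} x + {intercept:.3f}")
```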
From the above brief discussion on the application
and use (or misuse) of the least squares method to
average geological-engineering data, it is concluded
that it is only a method which quantitatively investigates the randomness of the data, i.e., their 'spread'
from some average. It should always be recognized that, although the random errors may be small, the data and/or the solution might still be systematically wrong. Thus, 'random' describes only a method of obtaining and/or averaging data rather than some resulting property of the data discoverable after the observance of the sample.
RESUME ON STATIC DATA
ACKNOWLEDGMENTS
Thanks are given to the management of the Hudson's Bay Oil and Gas Company Limited, specifically
to the Exploration and to the Production Department, for their permission to spend time on this paper. Thanks are due also to Messrs. G. Meisner, H.
D. Bagley, Dr. W. Nader and J. Rivette for their
contribution and help in preparing certain portions
of this presentation.
BIBLIOGRAPHIC REFERENCES
Static Data
(1) Krumbein, W. C., et al.: "Permeability as a Function of the Size Parameters of Unconsolidated Sand," Trans. AIME, 1943.
Law, J.: "Discussion on Average Permeabilities of Heterogeneous Oil Sands," Trans. AIME, 1944.
Hatch, Rastall & Black: "The Petrology of the Sedimentary Rocks," Thomas Murby & Co.
(2) Kaufman, G. M.: "Statistical Decision and Related Techniques in Oil and Gas Exploration," Prentice-Hall.
(3) Earlougher, R. C., et al.: "A Pore-Volume Adjustment Factor to Improve Calculations of Oil Recovery Based on Floodpot Data," API D&PP, 1962.
(4) Klinkenberg, L. J.: "The Permeability of Porous Media to Liquids & Gases," API D&PP, 1941.
(5) Katz, D. L., et al.: "Handbook of Natural Gas Engineering," McGraw-Hill.
Ramey, H. J., Jr.: "Non-Darcy Flow and Wellbore
Storage Effects in Pressure Build-up and Drawdown
of Gas Wells," JPT, Feb. 1965.
Rowan, G., et al.: "An Approximate Method for
Non-Darcy Radial Gas Flow," SPEJ, June, 1964.
Dranchuk, P. M., et al.: "The Interpretation of Permeability Measurements," The Journal of Canadian Petroleum Technology, Vol. 4, No. 3, 130-133, 1965.
(6) Muskat, M.: "Physical Principles of Oil Production," Pg. 138, Table 2, McGraw-Hill, 1949.
(7) Pirson, S. J.: "Elements of Oil Reservoir Engineering," First Edition, Pg. 147, McGraw-Hill.
(8) Collins, R. E.: "Determination of the Transverse
Permeabilities of Large Core Samples from Petroleum Reservoirs," Journal of Applied Physics, June,
1952.
(9) Greenkorn, R. A., et al.: "Directional Permeability
of Heterogeneous Anisotropic Porous Media," SPEJ,
June, 1964.
(10) Bulnes, A. C.: "An Application of Statistical Methods to Core Analysis Data of Dolomitic Limestone," JPT, May, 1946.
(11) Testerman, J. D.: "A Statistical Reservoir Zonation
Technique," Trans. AIME, 1962.
(12) Douglas & Rachford: "On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables," Trans. Am. Math. Soc. (1956), 82, 421-439.
Irby, T. L., et al.: "Application of Computer Technology to Reservoir Studies," The Journal of Canadian Petroleum Technology, Vol. 3, No. 3, 130-135, 1964.
(13) McCarty, D. G., et al.: "The Use of High Speed
Computers for Predicting Flood-Out Patterns,"
Trans. AIME, 1958.
Beckenbach, E. F.: "Modern Mathematics for Engineers," Second Series, McGraw-Hill.
Quon, D., et al.: "A Stable, Explicit, Computationally Efficient Method for Solving Two-Dimensional
Mathematical Models of Petroleum Reservoirs,"
The Journal of Canadian Petroleum Technology,
Vol. 4, No.2, 53-58, 1965.
(14) Stiles, W. E.: "Use of Permeability Distribution in
Waterflood Calculations," Petroleum Transactions
Reprint Series No. 2, Waterflooding Series.
Dykstra, H., & Parsons, R. L.: "The Prediction of
Oil Recovery by Waterflood," Secondary Recovery
of Oil in the United States, Second Edition, API.
Suder, F. E., & Calhoun, J. C., Jr.: "Waterflooding
Calculations," AP.I D&PP, 1949.
Muskat, M.: "The Effect of Permeability Stratification in Complete Water-Drive Systems," Trans.
AIME, 1950.
Muskat, M.: "The Effect of Permeability Stratification on Cycling Operations," Trans. AIME, 1949.
Warren, J. E., et al.: "Prediction of Waterflood Behaviour in a Stratified System," SPEJ, June, 1964.
Hiatt, W. N.: "Injected-Fluid Coverage of MultiWell Reservoirs with Permeability Stratification,"
API D&PP, 1958.
Jacquard, P., & Jain, C.: "Permeability Distribution
from Field Pressure Data," SPE Journal, Dec., 1965.
(15) Rapaport, L. A.: "Laboratory Studies of Five-Spot
Waterflood Performance," Trans. AIME, 1958.
Prats, M.: "Prediction of Injection Rate and Production History for Multifluid Five-Spot Flood,"
Trans. AIME, 1959.
(22) Guthrie, R. K., et al.: "The Use of Multiple Correlation Analyses for Interpreting Petroleum Engineering Data," (unpublished), Spring API Meeting, New Orleans, La., March, 1955.
Buckles, R. S.: "Correlating and Averaging Connate Water Saturation Data," The Journal of Canadian Petroleum Technology, Vol. 4, No.1, 42-52,
1965.
(23) Guerrero, E. T., & Earlougher, R. C.: "Analysis
and Comparison of Five Methods to Predict Waterflood Reserves and Performance," API D&PP,
1961.
Arps, J. J.: "Analysis of Decline Curves," JPT,
Sept., 1944.
Lefkovits, H. C.: "Application of Decline Curves to
Gravity - Drainage Reservoirs in the Stripper Stage,"
Petroleum Transactions, Reprint Series No.3, Oil
& Gas Property Evaluation & Reserve Estimates.
Hovanessian, S. A.: "Waterflood Calculations for
Multiple Sets of Producing Wells," Trans. AIME,
1960.
A few other additional useful references on the interpretation and use of Static Data are:
Law, J., et al.: "A Statistical Approach to Core-Analysis Interpretation," API D&PP, 1946.
Stoian, E.: "Fundamentals and Applications of the
Monte Carlo Method," The Journal of Canadian
Petroleum Technology, Vol. 4, No.3, 120-129, 1965.
Warren, J. E.: "The Performance of Heterogeneous
Reservoirs," SPE paper 964, presented at the 39th
Annual Meeting of SPE in Houston, Texas, October, 1964.
Kruger, W. D.: "Determining Areal Permeability
Distribution by Calculations," JPT, July, 1961.
Dupuy, M., & Pottier, J.: "Application of Statistical Methods to Detailed Studies of Reservoirs,"
Revue de l'Institut Français du Pétrole, 1963.
[Figure 1 plots: Sonic Log Reading, μsec/ft (54 to 62), versus Core Porosity, % (8 to 12), for Method (A) and Method (B).]
Figure 1. - Correlation of the Sonic Log Reading with the Core Analysis Porosities.
[Figure 2 sketches: transverse permeability measurement geometry, the Kmax and K90 directions, and the Forchheimer (non-Darcy) pressure-rate relation.]
Figure 2. - Permeability Measurements.
[Figures 3 and 4: relative-frequency distributions of permeability plotted on cumulative-probability scales (to 99.9 per cent), with a weighted mean M = Σ qᵢxᵢ / Σ qᵢ indicated; an annotation notes that each value is dependent upon what has already happened, i.e., dependent on adjacent values.]
Figure 4b. - Normal distribution (mode indicated).
Definitions:
1. Average: A single number typifying or representing a set of numbers of which it is a function. It is a single number computed such that it is not less than the smallest nor larger than the largest of the set of numbers.
Arithmetic Average: or arithmetic mean, is the sum of n numbers divided by their number, n.
Harmonic Average: is the reciprocal of the arithmetic average of the reciprocals of the numbers.
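In symbols (the standard definitions, restated for completeness), for n numbers x_1, ..., x_n:

\[
\bar{x}_{\text{arith}} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i ,
\qquad
\bar{x}_{\text{harm}} \;=\; \frac{n}{\sum_{i=1}^{n} 1/x_i} .
\]

The harmonic average weights the small values heavily, which is why it arises when beds of different permeability are crossed in series.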
[Figure 6a. - Isolith map.
Figure 6b. - Facies map: clastic ratio more than one; sand/shale ratio more than two; end members 100% sand, 100% shale, 100% carbonates; descriptive scales fine/medium/coarse and rare/common/abundant; environments reefal, basinal, lagunal; lithologies sandstone, shale, non-clastic, limestone, dolomite, anhydrite.
Figure 6d. - Relative center of gravity example: intervals 10, 25 and 30 ft thick, centered 25, 70 and 110 ft below the top of the unit, give (25 × 10 + 70 × 25 + 110 × 30) / (10 + 25 + 30) = 5,300 / 65 = 82 ft.]
ADDENDUM
Variation exists in any repetitive process because "there is no identity of successive events." There are controlled experiments which, even if carried out repeatedly with the utmost care to keep the conditions constant, yield varying results, as the example below will show:
The least significant difference for the data of Table 4 (below), d_s (R_1 + R_2) = 55.5 millimeters, is greater than the observed (measured) difference of 42.1 millimeters, which suggests that at the 95% confidence level the difference is not significant.
This particular example suggests how to guard against looking for an assignable cause when the differences are no greater than the expected results of random chance.
TABLES 1 AND 2 - REPEATED MEASUREMENTS OF HEIGHT OF POUR
Twenty-four readings taken in groups of three; Operator 1 took the odd-numbered readings and Operator 2 the even-numbered ones. T = group total, R = group range (largest minus smallest reading in the group).

Group (readings)   Operator   T, inches   R, inches   T, mm     R, mm
A (1, 3, 5)        1          4.7         1.1         164.3      3.1
B (2, 4, 6)        2          5.8         0.3         172.1      4.8
C (7, 9, 11)       1          5.2         0.1         181.8     19.2
D (8, 10, 12)      2          5.5         0.1         153.7     28.6
E (13, 15, 17)     1          5.9         0.8         157.1      7.2
F (14, 16, 18)     2          4.7         0.5         181.2      9.2
G (19, 21, 23)     1          4.5         0.6         150.7     19.2
H (20, 22, 24)     2          6.2         0.6         192.8     18.9

Grand totals:                 42.5        4.1         1,353.7   110.2
Means per group:              5.3125      0.5125      169.2125  13.775
(42.5/8)     (4.1/8)     (1,353.7/8)  (110.2/8)
TABLE 3 - SIGNIFICANCE TEST
The difference of two group totals, |T_1 - T_2|, is compared with d_s (R_1 + R_2).

N       95%     99%
2       2.32    5.56
3       1.46    2.57
4       1.29    2.10
5       1.23    1.93
6       1.22    1.86
7       1.21    1.84
8       1.22    1.83
9       1.23    1.84
10      1.25    1.85

Note: 95% level, N_1 = N_2 = 3; d_s = 1.46.
TABLE 4 - HEIGHT OF POUR

               Inches      Millimeters
H              6.20        192.8
G              4.50        150.7
Difference:    1.70         42.1
Ranges (R):    0.6, 0.6    18.9, 19.2
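The range test of Tables 3 and 4 reduces to a few lines; the following sketch assumes the tabulated factor d_s (1.46 at the 95% level for N_1 = N_2 = 3):

```python
def range_test(total_1, range_1, total_2, range_2, d_s=1.46):
    """True when |T1 - T2| exceeds the least significant difference
    d_s * (R1 + R2), i.e., when the difference is significant."""
    return abs(total_1 - total_2) > d_s * (range_1 + range_2)

# Groups H and G in millimeters (Table 4): totals 192.8 and 150.7,
# ranges 18.9 and 19.2.
print(range_test(192.8, 18.9, 150.7, 19.2))   # False - not significant
```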