Chapter 1 of Prof. Daniel's Book

© All Rights Reserved


APPENDIX A

A.1. Introduction

Many of the problems we face daily as industrial engineers have elements of

risk, uncertainty, or variability associated with them. For example, we cannot always predict what the demand will be for a particular inventory item. We cannot always be sure just how many people will shop at a grocery store and desire to check out during a particular hour. We cannot always be sure of the variation in quality of raw materials from one of our suppliers. Although we cannot claim to be prophets who can accurately predict such results, we as industrial engineers are educated in the use of applied statistics to make intelligent engineering decisions despite our lack of complete knowledge about future events.

The material presented in this appendix is meant to introduce you to some of the basic laws of chance. Although the treatment is elementary, it is sufficient to allow you to grasp the material on quality control, project management (PERT), and probabilistic models. Further grounding in probability theory and statistics will come as more specialized courses on these and other topics are studied. Since this text is written for students in the applied professions, we shall avoid rigorous mathematical and deeply philosophical treatments in favor of workable definitions and explanations.

In the customary manner, we shall define an experiment as any process of observation. Almost all probabilistic experiments have more than one possible outcome; otherwise, they would be trivial from the viewpoint of probability theory. The number of outcomes of an experiment may be finite or infinite. For example, when we inspect a capacitor or diode, the outcomes are typically finite (either good or bad), but the exact theoretical resistance measurement of a piece of wire could take on an infinite number of values.

The sample space of an experiment is the set of all possible outcomes pertinent

to the experiment. Depending on our interest, we may find that there are several

different ways of expressing the outcome. As an example, suppose that we

select two electric light bulbs and test them in order to determine whether or not

they light properly. If we are trying to distinguish completely all outcomes, we

might list the following set:


Bulb 1 Bulb 2

Light Light

Light Not light

Not light Light

Not light Not light

We note that there are four possible distinct outcomes to this

experiment.

It is just as valid for us to be interested only in the number of bulbs (out

of the two selected) that light, not in the specific ones that light. With this

interest, we have the following outcomes:

0

1

2

We note that there are only three outcomes to the experiment now. The

alert reader may observe that the outcome of one bulb lighting consists of two

outcomes from the first set: (light, not light) and (not light, light).

Any definition of an experiment should reflect distinguishable outcomes

according to the interest of the investigator. Many writers refer to the most

refined and detailed set of mutually exclusive (no two outcomes can occur

simultaneously) and exhaustive outcomes as the sample space. No matter how

we define the sample space, we often have an interest in various subsets of the

total set of outcomes. This interest will later lead to our definition of an event.

A.2.2 Probability

Let us consider a finite sample space and only a finite number of repetitions of

an experiment of a defined type. As an example, we define our experiment to be

the inspection of a reel of magnetic tape produced by a certain process during

the year. Each repetition of this experiment will consist of the examination of a

reel of tape (always different), the possible outcomes being accept, rework, and

reject. The number of repetitions corresponds to the number of reels of tape,

and the possible outcomes number only three; thus, both the number of

repetitions of the experiment and the number of outcomes are finite.

Returning to our general finite case, we let

n_i = number of repetitions of the experiment that will result in outcome i

N = total number of repetitions

It follows that the ratio n_i/N is the fraction of the total number of repetitions favorable to outcome i (the frequency ratio of outcome i). We shall now define probability with respect to this finite case as follows: the probability p(x_i) associated with outcome i is the frequency ratio n_i/N; that is,

    p(x_i) = n_i / N    (i = 1, 2, ..., k)    (A.1)

From this definition, the reader should easily be able to verify the following characteristics of our probability function:

    p(x_i) ≥ 0    (i = 1, 2, ..., k)    (A.2)

and

    Σ_{i=1}^{k} p(x_i) = 1    (A.3)

Returning to our magnetic tape example, suppose that 100 reels are to be

inspected and that 50 are accepted, 30 are to be reworked, and 20 are rejected.

Figure A.1 is one way of portraying this probability function geometrically. The

properties expressed by Equations A.2 and A.3 are seen to be true.

[Figure A.1: Discrete probability function p(x) for the tape example, with p(0) = 0.2 (reject), p(1) = 0.3 (rework), and p(2) = 0.5 (accept).]
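As a quick check, the frequency-ratio definition and the properties of Equations A.2 and A.3 can be verified in a few lines of Python (a sketch using the inspection counts from the example):

```python
# Frequency-ratio probabilities (Equation A.1) for the magnetic-tape example:
# 100 reels inspected; 50 accepted, 30 reworked, 20 rejected.
counts = {"accept": 50, "rework": 30, "reject": 20}
N = sum(counts.values())  # total number of repetitions

# p(x_i) = n_i / N for each outcome i
p = {outcome: n / N for outcome, n in counts.items()}

# Equation A.2: each probability is non-negative.
assert all(prob >= 0 for prob in p.values())
# Equation A.3: the probabilities sum to 1.
assert abs(sum(p.values()) - 1.0) < 1e-12

print(p)  # {'accept': 0.5, 'rework': 0.3, 'reject': 0.2}
```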

In order to extend our ideas concerning probability, let us now consider

the possibility that the number of repetitions of an experiment approaches

infinity; whether or not the number of outcomes approaches infinity is

immaterial. Our definition of probability will now be extended as follows,

maintaining the same notation:

    p(x_i) = lim_{N→∞} n_i / N    (i = 1, 2, ..., k)    (A.4)

The properties expressed by Equations A.2 and A.3 are still valid; with

the extension that k may also be infinite. Returning to our magnetic-tape

example, we might conceive of all the reels of tape that can ever be produced

(consider as infinite) and the outcomes might be the number (perhaps infinite)

of "bad" spots on a reel of tape. This example can now be considered as

consisting of an infinite number of outcomes and an infinite number of

repetitions of the experiment. The point of this extension is to use our

traditional approach to a limit whenever the number of repetitions approaches

infinity.

The most difficult case for us to handle conceptually involves an infinite number of possible outcomes that we cannot count. Even when we talked about the number of bad spots on a reel of tape (perhaps infinite), at least we could "count" or identify them with the counting numbers, e.g., 1, 2, 3, .... That is, the outcomes were discrete. Continuous outcomes, however, are not countable. For

example, continuous outcomes may be the exact air pressure in a tire, the amount of rain in May, or the capacitance of a capacitor. Since we cannot count these outcomes, we normally talk about subintervals of outcomes (e.g., 10-15, 15-20, 20-25, and so on, pounds per square inch of tire pressure). These

subintervals are countable, and we can associate probabilities with them: For

example, the probability that the outcome x lies in the range from a to b is

written as follows:

    p(x) = p(a ≤ x ≤ b),  for a < b    (A.5)

It is apparent that interval probabilities obtained for a continuous random

variable are approximate because we are never absolutely sure about the

specific value of the outcome in the interval to be associated with the

probability.

For a continuous random variable, we use a probability density function f(x), as illustrated in Figure A.2, instead of having distinct probabilities or weights, p(x), at points, as in Figure A.1. Our new function f(x) has the following properties:

    f(x) ≥ 0    (A.6)

    ∫_{-∞}^{∞} f(x) dx = 1    (A.7)

and

    p(a ≤ x ≤ b) = ∫_a^b f(x) dx,  for a < b    (A.8)

The reader should note that Equations A.6 and A.7 present the same

basic properties for continuous distributions as those given in Equations A.2

and A.3 did for discrete distributions. Equation A.8 is simply another way of stating Equation A.5.
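The area interpretation in Equations A.6 through A.8 can be illustrated numerically. The density f(x) = 2x on [0, 1] below is an assumed example (it is not from the text), and the integrals are approximated by a midpoint Riemann sum:

```python
# A sketch of Equations A.6-A.8 with the assumed density f(x) = 2x on [0, 1].
def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def area(g, a, b, steps=100_000):
    # Midpoint Riemann sum approximating the integral of g from a to b.
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# Equation A.7: the total area under the density is 1.
assert abs(area(f, 0.0, 1.0) - 1.0) < 1e-6

# Equation A.8: p(0.25 <= x <= 0.5) is the area between those limits,
# which for this density is 0.5**2 - 0.25**2 = 0.1875.
print(round(area(f, 0.25, 0.5), 4))  # 0.1875
```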

In reflecting on our development of probability, we can say that a

number, called probability, is associated with each point of a discrete (finite or

infinite) sample space. For a continuous sample space (always infinite), we

developed a way of representing probability as the area under a curve between

limits, such area being computed through the use of the calculus. The curve

itself is a probability density function.

A.2.3 Events

An event is a subset of the sample space of an experiment. It may consist of

none of the outcomes (void), some of the outcomes, or all of the outcomes.

Returning to our light bulb example, let us consider the event that (of the bulbs

tested) exactly one bulb lights. Regardless of how we defined our sample space

above, the event that one bulb lights represents a subset of some of the

outcomes of the sample space.

Two events are said to be mutually exclusive if the occurrence of one

excludes the occurrence of the other; that is, they do not possess any points in

common from the sample space. For example, it is clear that the event of

neither bulb lighting and the event of both bulbs lighting are mutually exclusive

for that sample space. It should be apparent that the mutually exclusive property

of events must be considered with respect to the sample space.

As a further example linking sample space and events, a group of 100

items are taken for inspection. We are interested in the number of defectives in

this group. If 10 or fewer defectives are found, we shall conclude that the

manufacturing process from which the items were taken is operating

satisfactorily. The sample space for our inspection would include 101 different

possible outcomes: that 0, 1, 2, ..., 99, or all 100 items are defective. The event that we conclude that our manufacturing process is operating satisfactorily is the subset of those outcomes in which 10 or fewer items are found to be defective.

An event may consist of any combination of possible outcomes of an

experiment. It is up to the experimenter to define events that are meaningful to

him in his experiment. Let us consider the experiment of drawing one ball from

a box containing ten balls, numbered 1, 2, ..., 10. The balls numbered 1 through

5 are black; those numbered 6 through 10 are white. The probability of drawing

any particular one of the ten balls on any performance of the experiment is 0.1.

We can define many events relating to the experiment of drawing one ball from

the box as follows:

E1 is the event of drawing an even-numbered ball.

E2 is the event of drawing a black ball.

E3 is the event of drawing an even-numbered black ball.

E4 is the event of drawing a ball larger than 6.

E5 is the event of drawing a ball less than or equal to 4.

Many other events could be defined for this experiment, but we shall

discuss only these five events and the way in which each event could occur.

There are five outcomes by which event E1 can be realized. We shall say that five of the ten possible outcomes of the experiment are favorable to E1. Since each outcome is equally likely to occur in this example, the probability of event E1 is 0.5. Notationally, we have

    p(E1) = 0.5

By similar reasoning, we see that the probabilities of the remaining events are

    p(E2) = 0.5
    p(E3) = 0.2
    p(E4) = 0.4
    p(E5) = 0.4

In each case, the probability of an event E is the sum of the probabilities of the outcomes of which E is comprised. We must remember, of course, that in

the case of a continuous distribution, always with an infinite number of

outcomes, an event consists of one or more intervals of outcomes; therefore, the

probability of such an event is the sum of the areas under the density curve

corresponding to the intervals of outcomes comprising the event.
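As a sketch, the five ball-drawing events can be represented as subsets of the sample space, with each event probability obtained by summing the probabilities of its outcomes:

```python
# The ball-drawing example: balls 1-10, each drawn with probability 0.1;
# balls 1-5 are black, 6-10 are white.
sample_space = range(1, 11)
p_outcome = {ball: 0.1 for ball in sample_space}

def p(event):
    # An event's probability is the sum of its outcome probabilities.
    return sum(p_outcome[ball] for ball in event)

E1 = {b for b in sample_space if b % 2 == 0}  # even-numbered ball
E2 = {b for b in sample_space if b <= 5}      # black ball
E3 = E1 & E2                                  # even-numbered black ball
E4 = {b for b in sample_space if b > 6}       # ball numbered larger than 6
E5 = {b for b in sample_space if b <= 4}      # ball less than or equal to 4

print([round(p(E), 1) for E in (E1, E2, E3, E4, E5)])  # [0.5, 0.5, 0.2, 0.4, 0.4]
```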


Two or more events are said to be equally likely if each event has the same probability of occurrence. Two events are said to be independent if the occurrence of one event in no way affects, or is affected by, the occurrence of the other event. The

intersection of the events A and B (denoted by AB) consists of all the sample

space points corresponding to both A and B. The union of events A and B

(denoted by A + B) consists of all the sample space points corresponding to

either A or B (or both).

We express the multiplication theorem for events A and B as follows:

    p(AB) = p(A/B) p(B)    (A.9)

where p(A/B) is read "the probability that A occurs given that B has occurred." If events A and B are independent, then p(A/B) equals p(A) and Equation A.9 can be written in the following form:

    p(AB) = p(A) p(B)    (A.10)

The conditional probability of the event A, given that event B has occurred, comes from Equation A.9 and is given as follows:

    p(A/B) = p(AB) / p(B)    (A.11)

The addition theorem for events A and B is

    p(A + B) = p(A) + p(B) - p(AB)    (A.12)

If A and B are mutually exclusive, then p(AB) = 0, and we can simplify Equation A.12 as follows:

    p(A + B) = p(A) + p(B)    (A.13)

These rules must be extended whenever more than two events are

involved. As an example, imagine a subsystem comprised of two main

components, A and B. They are produced independently, and the defective

proportions of A and B are, respectively, .2 and .3. The probability of the event

that both A and B are defective in a subsystem is given by the multiplication

theorem in Equation A.10 and equals (.2)(.3) = .06. The probability that either component A or B (or both) is bad, resulting in a defective subsystem, is given by Equation A.12 and equals .2 + .3 - .06 = .44.
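The subsystem arithmetic follows directly from Equations A.10 and A.12:

```python
# Independent components A and B with defective proportions .2 and .3.
p_A, p_B = 0.2, 0.3

# Equation A.10 (independent events): both components defective.
p_both = p_A * p_B

# Equation A.12 (addition theorem): at least one component defective.
p_either = p_A + p_B - p_both

print(round(p_both, 2), round(p_either, 2))  # 0.06 0.44
```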

A.3 Combinations

In considering problems involving finite sample spaces of equally likely outcomes, we are often tempted to count the frequencies of interest directly. This counting quickly becomes tedious as the numbers involved grow. Therefore, we most often use a formula to determine numbers of combinations. The number of combinations of n things taken k at a time is expressed by the following notation and formula:

    C(n, k) = n! / (k!(n - k)!)    (A.14)

For example, we may want to know the number of ways in which two

defective items can be observed in a sample of four parts. If (1, 2) indicates that

the defectives were the first and second items observed, we can extend this

notation to show the other possible ways the inspector might have encountered

the defectives as (1,3), (1,4), (2,3), (2,4), and (3,4). In all, we count six ways.

Using Equation (A.14), we can solve this problem by using n = 4 (sample size)

and k = 2 (number of defectives) as:

    C(4, 2) = 4!/(2! 2!) = (4 × 3 × 2 × 1)/((2 × 1)(2 × 1)) = 6
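Equation A.14 is available directly in Python's standard library as `math.comb`; the sketch below checks it against an explicit enumeration of the six ways:

```python
from itertools import combinations
from math import comb

# Enumerate the positions at which 2 defectives can appear among 4 parts.
ways = list(combinations([1, 2, 3, 4], 2))
print(ways)        # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]

# Equation A.14: C(4, 2) = 4!/(2! 2!) = 6.
print(comb(4, 2))  # 6
assert len(ways) == comb(4, 2)
```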

A random variable is a numerically valued variable defined on a sample space. For each point of the sample space, the random variable is assigned a value. This definition should be considered in the most general terms: the random variable may be positive or negative; it may have the same value at different points of the sample space; it may be either discrete or continuous; and so forth. In other words, we may treat a random variable as we have treated other mathematical variables. For our light bulb example with four outcomes in the sample space, we might define the random variable x as follows:

x = 0 for the sample point (0, 0)

x = 1 for the sample point (1, 0)

x = 2 for the sample point (0, 1)

x = 3 for the sample point (1, 1)

Or we might decide that the random variable x should be the number of bulbs that light of the two tested; thus, for this sample space, x would be defined as follows:

x = 0 for (0, 0)

x = 1 for (1,0) and (0 ,1)

x = 2 for (1, 1)

In considering our magnetic tape example we might define the random

variable to be the number of "bad" spots on a reel of tape. Theoretically, the set

of values that such a random variable would have consists of zero and the

natural numbers.

The above examples illustrate discrete random variables. Many measurements, however, define a continuous random variable. Measurements of temperature, speed, voltage, amperage, and so on, give rise to continuous random variables.

The word random in random variable means that the variable will

assume its values in a chance manner. As long as the values of the variables are

equally likely to occur, then our concept of randomness seems sensible. We

should recall, however, that different values of the random variable may have

different probabilities of occurring. This happens when the events that

correspond to the values of the random variable are not equally likely.

In practice, it is important that we choose our random variable carefully.

Of course, we desire the random variable that correctly expresses the item of

interest. This choice is left to the investigator. Consider a problem concerned

with machine breakdowns. Our random variable might be the number of breakdowns per

day, the time between breakdowns, the number of simultaneous breakdowns,

the time required to repair breakdowns, or a host of other possibilities. Every problem must be individually analyzed, and a pertinent random variable must be chosen and defined.

The importance of understanding random variables cannot be overemphasized. The result of any experiment is a random variable unless that

result can be predicted with certainty. Some examples of experiments whose

outcomes are random variables are as follows:

Number of defective items observed in a sample (discrete);

Average dimension of items sampled and measured (continuous);

Rain or no rain on any given day (discrete);

Amount of rainfall on any given day (continuous);

Launch or abort on any attempted missile shot (discrete);

Success or failure on a missile launch (discrete);

Accuracy of a missile launch (closeness to intended target)

(continuous);

The temperature in a petroleum refining process (continuous);

The number of whole barrels of gasoline produced by a petroleum

refining process in a given time period (discrete); and

The voltmeter reading (assuming no instrument error) of a

residential electrical circuit (continuous).

The examples above are random variables because the result of any particular performance of each experiment cannot be predicted with

certainty.

Associated with every random variable is a probability distribution. As a matter

of fact, we have already encountered two probability distributions, in Figures A.1 and A.2. A probability distribution is a distribution, or portrayal, of the probabilities of occurrence of each value (or

interval of values) of the random variable. The probability distribution that we

have already encountered in Figure A.1 is "discrete" because the particular

random variable is discrete (accept, reject, rework). The distribution shown in

Figure A.2 is obviously continuous. The presentation in this section will include

both the discrete and continuous cases. We should always bear in mind that the

nature of the random variable determines whether a distribution is discrete or

continuous.

In addition to the mathematical representation of a distribution, we are

often interested in measures that further describe the properties of a universe

under study. Two very popular characteristics of a distribution are the mean and

variance of the random variable. The mean is a measure of central tendency and represents somewhat of an average value. The variance is a measure of dispersion, or distribution spread. The mean is often represented as μ and the variance as σ². We often speak of the standard deviation, which is the positive square root of the variance. The standard deviation is represented as sigma (σ).

A discrete probability distribution is a function p(x) of the discrete random

variable x yielding the probability p(xi) that x will assume the value xi . Such a

function is often called the frequency function of a discrete random variable.

The defining characteristics of such a function are given by the following mathematical statements:

    p(x_i) ≥ 0    (A.15)

and

    Σ_{i=1}^{k} p(x_i) = 1    (A.16)

Of course, Equations A.15 and A.16 are repeats of Equations A.2 and A.3.

Much of our work with distributions involves cumulative probabilities, that is, the probability that the random variable will assume one of a set of possible values. The discrete cumulative distribution is defined as follows:

    F(a) = p(x ≤ a) = Σ_{x_i ≤ a} p(x_i)    (A.17)

F(a) is the probability that x will have any value less than or equal to a. A useful result of Equation A.17 is that

    p(x_i) = F(x_i) - F(x_{i-1})    (A.18)

Thus, the discrete probability distribution and the discrete cumulative distribution are equivalent, since either can be obtained from the other. Figure A.3 is the discrete cumulative distribution corresponding to Figure A.1. Note that, according to our definition, it is a step function.

[Step plot of F(x), rising from 0.2 at x = 0 to 0.5 at x = 1 and 1.0 at x = 2.]

Figure A.3: Discrete Cumulative Distribution.
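The step function of Figure A.3 can be built by accumulating the probabilities of Figure A.1 as in Equation A.17, and Equation A.18 recovers the individual probabilities:

```python
# Discrete distribution from the tape example: 0 = reject, 1 = rework, 2 = accept.
p = {0: 0.2, 1: 0.3, 2: 0.5}

# Equation A.17: F(a) is the running sum of p(x_i) for x_i <= a.
F = {}
running = 0.0
for x in sorted(p):
    running += p[x]
    F[x] = running

print(F)  # {0: 0.2, 1: 0.5, 2: 1.0}

# Equation A.18: p(x_i) = F(x_i) - F(x_{i-1}).
assert abs((F[1] - F[0]) - p[1]) < 1e-12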

A number of standard theoretical distributions closely approximate many naturally occurring distributions. The next three

sections will briefly present some of these distributions.

Suppose that we consider n independent experiments, each of which has only

two possible outcomes. For example, inspecting and classifying n items as good

or bad meets our description. If our random variable is the number of

occurrences of a particular outcome, such as bad items, we can see that the

possible values of the random variable are 0, 1, ..., n.

For convenience in discussion, let us agree to refer to the occurrence of

a particular outcome (e.g., bad items) as a "success" and the alternative

outcome as a "failure." Now, by considering n independent and identical "two

outcome" experiments, with a "success" having the probability p and a "failure"

having the probability q = 1 - p, we can explore the probability of "x successes,"

where x ≤ n. First, the number of ways we can obtain x successes is the number of combinations of n things taken x at a time, that is,

    C(n, x) = n! / (x!(n - x)!)

By recalling our multiplication rule for independent events, we note that

each such combination has a probability of p^x q^(n-x) of occurring. Thus, the

probability of x successes in n independent and identical experiments may be

written as follows:

    p(x; n, p) = C(n, x) p^x q^(n-x)    (A.19)

where x = 0, 1, ..., n.

Equation A.19 is known as the binomial probability distribution. The cumulative binomial probability distribution is as follows:

    F(a; n, p) = Σ_{x=0}^{a} p(x; n, p)    (A.20)

for a = 0, 1, ..., n.

Also, the mean and variance of the binomial are given by the following:

    μ = np    (A.21)

    σ² = npq    (A.22)

As an example of the binomial distribution, assume that we randomly select six

transistors from a production line. Past data have indicated that 10% of the

transistors inspected are found defective. We can determine the probability of finding exactly two defective transistors by using Equation A.19 as


    p(2; 6, .1) = C(6, 2) (0.1)²(0.9)⁴ ≈ .098

Similarly, using Equation A.20, we can find the probability of detecting two or

fewer defectives as

    F(2; 6, 0.1) = Σ_{x=0}^{2} p(x; 6, 0.1) ≈ 0.984

The binomial density function for this example is graphed in Figure A.4.

[Figure A.4: Binomial density p(x; 6, 0.1) for x = 0, 1, ..., 6: approximately 0.532, 0.354, 0.098, 0.015, 0.001, 0.000, 0.000. Horizontal axis: Number of Defectives.]
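The transistor calculations can be reproduced from Equations A.19 and A.20:

```python
from math import comb

def binom_pmf(x, n, p):
    # Equation A.19: C(n, x) p^x q^(n-x) with q = 1 - p.
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binom_cdf(a, n, p):
    # Equation A.20: sum of the pmf for x = 0, ..., a.
    return sum(binom_pmf(x, n, p) for x in range(a + 1))

print(round(binom_pmf(2, 6, 0.1), 3))  # 0.098
print(round(binom_cdf(2, 6, 0.1), 3))  # 0.984
```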

The Poisson distribution is applicable in many situations in which some kind of event (such as a "flaw" or a "change") occurs randomly over distances, areas, or volumes. To be consistent with our earlier terminology, we shall continue to call the occurrence of an event a "success." The average rate of occurrence of the event is considered constant in a Poisson process and is denoted by λ. The probability of x successes, given a constant average occurrence rate λ, is

    p(x; λ) = λ^x e^{-λ} / x!    (A.23)


The Poisson distribution is a good approximation to the binomial when n is large and p is small. Some writers recommend this approximation if np ≤ 5 and p ≤ .1. The cumulative Poisson distribution is as follows:

    F(a; λ) = Σ_{x=0}^{a} p(x; λ)    (A.24)

For the Poisson distribution, we obtain values of the mean and variance as follows:

    μ = λ    (A.25)

    σ² = λ    (A.26)

In other words, the variance and mean are identical.

Some examples of use of the Poisson distribution are provided by the number

of imperfections on a sheet of metal, the number of diseased spots on a tree, the

number of weeds on a plot of land, and so forth. When we use the Poisson, we should remember that the random variable can assume the set of numbers 0, 1, 2, ..., which is a countably infinite discrete set.
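A sketch of Equation A.23, using an assumed rate of λ = 2 imperfections per sheet of metal (the rate is illustrative, not from the text); direct summation over a long tail also confirms Equations A.25 and A.26:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # Equation A.23: lambda^x e^{-lambda} / x!
    return lam**x * exp(-lam) / factorial(x)

# Probability of exactly 3 imperfections when lambda = 2.
print(round(poisson_pmf(3, 2.0), 4))  # 0.1804

# Equations A.25 and A.26: mean and variance both equal lambda.
mean = sum(x * poisson_pmf(x, 2.0) for x in range(60))
var = sum(x**2 * poisson_pmf(x, 2.0) for x in range(60)) - mean**2
assert abs(mean - 2.0) < 1e-9 and abs(var - 2.0) < 1e-9
```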

The uniform distribution is a very easy distribution to analyze and one of the most common. We shall define our random variable so that it can assume a finite and discrete set of values. We define the uniform distribution as follows:

    p(x; n) = 1/n    (A.27)

for x = 1, 2, ..., n. We can see that the cumulative distribution is

    F(a) = Σ_{x ≤ a} p(x; n) = k/n    (A.28)

where k is the number of values of x less than or equal to a.

The mean and variance of the uniform distribution are given as follows:

    μ = (n + 1)/2    (A.29)

and

    σ² = (n² - 1)/12    (A.30)

The distinguishing feature of the uniform distribution is that each value of the random variable has the same probability of occurrence. Despite its simplicity, the reader should recognize it as a distinct probability distribution.

As an example, suppose that four finalists have been selected in a drawing.

The finalists are numbered 1, 2, 3, and 4. Only one top prize is given. The

probability that finalist number 3, say, is chosen is given by Equation A.27 as

1/4. The probability that finalists 1, 2, or 3 are selected is given by Equation

A.28 as F (3) = 3/4. The probability of any of the four finalists being chosen is

illustrated in the uniform density function graphed in Figure A.6.

[Figure A.6: Uniform density p(x; 4) = 0.25 for x = 1, 2, 3, 4. Horizontal axis: Item Number.]
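The finalist probabilities follow from Equations A.27 and A.28:

```python
# Discrete uniform distribution with n = 4 finalists.
n = 4

def p(x):
    return 1 / n  # Equation A.27

def F(a):
    # Equation A.28: k/n, where k counts the values of x <= a.
    return sum(p(x) for x in range(1, n + 1) if x <= a)

print(p(3), F(3))  # 0.25 0.75
```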

A continuous probability distribution is a function f(x) of the continuous random variable x that possesses the following properties:

    f(x) ≥ 0    (A.31)

    ∫_{-∞}^{∞} f(x) dx = 1    (A.32)

and

    p(a ≤ x ≤ b) = ∫_a^b f(x) dx,  for a < b    (A.33)

These properties are, of course, just a repeat of Equations A.6, A.7, and A.8. Such a function is often called either a density function or a probability density function.

The continuous cumulative distribution is defined as follows:

    F(a) = ∫_{-∞}^{a} f(x) dx    (A.34)

F(a) is the probability that the random variable x will have any value less than or equal to a. If the derivative of F exists, we have the following:

    F'(x) = f(x)    (A.35)

Thus, given nice mathematical properties, the existence of either f(x) or F(x) determines the other.

We should recall that f(x) is a density type of function instead of a pure

probability function. By integration we actually obtain probability. That is, we

can specify the probability that a random variable will assume a value between

two points; however, the probability for any single point is zero.

In considering our integral definition of F(x), we may find in practice

that our continuous probability distribution f(x) is defined only over a part of

the real-number axis. In such cases, we simply extend the definition over the

total axis by assigning f(x) the value of zero elsewhere. This extended definition

may require integrations over subintervals, a valid operation mathematically.

We shall define the general form of the normal distribution of the continuous

random variable x as follows:

    f(x) = (1/(σ√(2π))) e^{-(x - μ)²/(2σ²)},  for -∞ < x < ∞    (A.36)

This distribution is one of the most interesting and useful that we study. It has a

single peak at the mean and is symmetrical about that point. If we plot an

example of a normal distribution, it will be readily apparent that it is bell-shaped, with mean μ and variance σ².

In practice, many distributions are well approximated by the normal

distribution. Some examples include bolt diameters, construction errors,

resistance of a specified type of wire, weight of a packaged material, and so on.

We can show mathematically that if a random variable is distributed normally with mean μ and variance σ², then the standardized normal variable z = (x - μ)/σ is distributed with zero mean and unit variance. Using this transformation, we can derive the standard normal distribution, which is

    f(z) = (1/√(2π)) e^{-z²/2}    (A.37)


for -∞ < z < ∞. Since we can always make the standard transformation in practice, we tabulate only this function, in Table B.2. Since the cumulative form of Equation A.37 cannot be obtained in closed form, the tables represent the results of numerical integrations.
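In code, the standard normal density of Equation A.37 can be written directly, and the cumulative values tabulated in Table B.2 can be reproduced with the standard library's error function:

```python
from math import erf, exp, pi, sqrt

def std_normal_pdf(z):
    # Equation A.37: (1/sqrt(2 pi)) e^{-z^2/2}
    return exp(-z**2 / 2) / sqrt(2 * pi)

def std_normal_cdf(z):
    # Cumulative standard normal, expressed through the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(std_normal_pdf(0.0), 4))   # 0.3989
print(round(std_normal_cdf(1.96), 3))  # 0.975
```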

One very useful continuous probability distribution is called the exponential

distribution. It is most commonly used when we are interested in the distribution of the interval (measured in minutes, for example) between successive occurrences of an event. The probability density function associated with the exponential random variable is as follows:

    f(x) = λ e^{-λx},  for x > 0    (A.38)

and f(x) = 0 for x ≤ 0. The cumulative distribution of the exponential random variable is as follows:

    F(a) = ∫_0^a λ e^{-λx} dx = 1 - e^{-λa}    (A.39)

The mean and variance of the exponential distribution are given by the following equations:

    μ = 1/λ    (A.40)

    σ² = 1/λ²    (A.41)

There is a close relationship between the continuous exponential distribution and the discrete Poisson distribution in problems involving the occurrence of events ordered in time. In the Poisson case, there are changes occurring intermittently in the process in which we are interested. The Poisson

distribution describes the number of such changes in a unit time interval. The

exponential distribution describes the time spacing between such occurrences.

Suppose that the life in hours of a certain type of tube is a random

variable having an exponential distribution with a mean of 1000 hours. What is

the probability that such a tube will last at least 1250 hours? To solve this, we

use the cumulative distribution expression in Equation A.39 as follows:

    F(1250) = 1 - e^{-(1/1000)(1250)} = 1 - e^{-1.25} ≈ .713

This is the probability that the tube will last 1250 or fewer hours; hence, the answer to our question is 1 - F(1250) = .287. The exponential distribution for

this example is illustrated in Figure A.8.
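The tube-life computation follows from Equations A.39 and A.40, with λ = 1/1000 per hour for a 1000-hour mean life:

```python
from math import exp

lam = 1 / 1000  # Equation A.40: mean = 1/lambda = 1000 hours

def F(a):
    # Equation A.39: cumulative exponential distribution.
    return 1 - exp(-lam * a)

p_fail_by_1250 = F(1250)             # lasts 1250 hours or fewer
p_survive_1250 = 1 - p_fail_by_1250  # lasts at least 1250 hours

print(round(p_fail_by_1250, 3), round(p_survive_1250, 3))  # 0.713 0.287
```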


A continuous probability distribution that has constant density over the range of

values for which the density of the random variable is nonzero is called a

rectangular distribution. The probability distribution associated with the

rectangular random variable is as follows:

    f(x) = 1/(d - c),  for c ≤ x ≤ d    (A.42)

and f(x) = 0, otherwise. The cumulative distribution of the rectangular random

variable is as follows:

    F(a) = ∫_c^a 1/(d - c) dx = (a - c)/(d - c)    (A.43)

The mean and variance of the rectangular distribution are given by the

following equations:

    μ = (c + d)/2    (A.44)

    σ² = (d - c)²/12    (A.45)

As an example, suppose that daily demand for a product is distributed rectangularly between 100 and 1100 gallons per day. Using Equation A.43, we see that the probability that no more than 700 gallons will be required is

    F(700) = (700 - 100)/(1100 - 100) = .6

The rectangular distribution for this example is graphed in Figure A. 9.

[Figure A.9: Rectangular density f(x) = 1/1000 for 100 ≤ x ≤ 1100; zero elsewhere.]
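The gasoline example reduces to Equations A.43 through A.45 with c = 100 and d = 1100:

```python
# Rectangular distribution over [100, 1100] gallons per day.
c, d = 100, 1100

def F(a):
    return (a - c) / (d - c)  # Equation A.43

mean = (c + d) / 2            # Equation A.44
var = (d - c) ** 2 / 12       # Equation A.45

print(F(700), mean, round(var))  # 0.6 600.0 83333
```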


Table A.1: Common Distribution Summary

Distribution of
random variable x   Formula                                  Parameters   Range of x          Mean       Variance
-----------------------------------------------------------------------------------------------------------------
Discrete
  Binomial          f(x) = C(n, x) p^x (1 - p)^(n-x)         n, p         x = 0, 1, ..., n    np         np(1 - p)
  Poisson           f(x) = λ^x e^{-λ} / x!                   λ            x = 0, 1, 2, ...    λ          λ
  Uniform           f(x) = 1/n                               n            x = 1, 2, ..., n    (n+1)/2    (n²-1)/12

Continuous
  Normal            f(x) = (1/(σ√(2π))) e^{-(x-μ)²/(2σ²)}    μ, σ²        -∞ < x < ∞          μ          σ²
  Exponential       f(x) = λ e^{-λx}                         λ            x > 0               1/λ        1/λ²
  Rectangular       f(x) = 1/(d - c)                         c, d         c ≤ x ≤ d           (c+d)/2    (d-c)²/12

Table A.1 summarizes the six distributions treated in this section. The formula, parameters, range of x, mean, and variance of each are presented in the table for easy reference.

We earlier introduced the distributional characteristics known as the mean and variance. In this section we shall elaborate upon those measures from the viewpoint of expected values.

A.6.1 Mean

The mean or expected value of a discrete random variable x is denoted by the letter μ and is defined as follows:

    μ = E(x) = Σ_i x_i p(x_i)    (A.46)

In a similar manner, we define the mean or expected value of a continuous random variable as follows:

    μ = E(x) = ∫_{-∞}^{∞} x f(x) dx    (A.47)

We note that the mean need not equal a value that the random variable may assume. For example, the expected value or mean of the experiment of rolling a fair die is

    μ = Σ x_i p(x_i) = 1(1/6) + 2(1/6) + ... + 6(1/6) = 3.5

The value 3.5 is clearly not a value that can be assumed on a roll of the die.
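As a quick check (not part of the original text), Equation (A.46) for the die example can be sketched in Python; exact fractions avoid rounding error:

```python
# Illustrative sketch of Equation (A.46): mu = sum of x_i * p(x_i).
from fractions import Fraction

def mean_discrete(values, probs):
    return sum(x * p for x, p in zip(values, probs))

faces = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6               # fair die: each face has p = 1/6
print(float(mean_discrete(faces, probs)))  # 3.5
```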

A.6.2 Variance

The measure of variability that we have considered is called the variance (\sigma^2), and it is defined as follows:

\sigma^2 = E\left[(x - \mu)^2\right] \qquad (A.48)

This definition holds for both discrete and continuous random variables. The reader should note that the following computational convenience holds:

\sigma^2 = E(x^2) - \mu^2 \qquad (A.49)

We can compute the variance for rolling a die in a manner similar to finding the mean. That is,

\sigma^2 = E(x^2) - \mu^2 = \sum_i x_i^2 p(x_i) - \mu^2 = 1^2\left(\tfrac{1}{6}\right) + 2^2\left(\tfrac{1}{6}\right) + \cdots + 6^2\left(\tfrac{1}{6}\right) - (3.5)^2 = 2.917
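The same die example can be run through the computational form \sigma^2 = E(x^2) - \mu^2; this Python sketch is ours, not the text's:

```python
# Illustrative sketch: variance of a fair die via sigma^2 = E(x^2) - mu^2.
from fractions import Fraction

faces = range(1, 7)
p = Fraction(1, 6)                             # fair die
mu = sum(x * p for x in faces)                 # E(x)   = 7/2
second_moment = sum(x * x * p for x in faces)  # E(x^2) = 91/6
variance = second_moment - mu ** 2             # 35/12
print(float(variance))  # 2.9166..., which the text rounds to 2.917
```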

For the reader familiar with mechanics, we can point out an interesting

analogy. The mean is analogous to the first moment about the origin, and the

variance is equivalent to the second moment about the mean.

The mean and variance are fixed for a given distribution; thus, they are parameters of the distribution. For our purposes, the use of expected values provides a convenient medium for defining the mean and variance.


A.7 Populations and Samples

Much of the work of the applied professions involves the study of only a subset

of the total items of interest, in the hope of making statistical inferences about

the total. An engineer might collect data on machine utilization for 1 month,

hoping to infer from it machine utilization information for many months or

years. An automobile manufacturer might test a small number of automobiles

and then make generalized statements about all the automobiles produced

during that model year. An inspection team might use destructive inspection on

a small percentage of items in order to infer characteristics of the total number

being produced. In order to describe this process accurately, we must clearly understand the meaning of population and sample.

A.7.1 Population

A population, in the broadest sense, is the total set of elements about which knowledge is desired. Some populations are relatively small, for example, the number of Atlas missiles; other populations are large, for example, all the electric light bulbs now in existence and to be produced in the future. Not all elements of a population have to be in existence, as the last example

indicates. The important thing to remember is that the population must be

definable.

The definition of population clearly indicates that it contains the

elements in which we have an interest. Why, then, do we not study the complete

population? The answer is simple: The population is usually too large or too

complex, or not available, or the expense of considering all of it is too high.

Any investigator would measure all the elements of his defined population if it

were not prohibitive in some manner. As a result of the impossibility or

impracticality of always considering all elements of a population, we are forced

into a consideration of a sample (or samples) from that population.

A.7.2 Sample

A sample is a subset of a population. In extreme situations, the sample may be

the complete population or it may consist of no elements at all. Of course, this

latter sample would yield no information and we shall not consider it further.

Remember that the purpose of a sample is to yield inferences about the

population from which it was taken.

The two most important features of a sample are its size and the manner

in which it was selected. Much of the study of sampling statistics concerns the

determination of these two characteristics. As expected, this determination is

based upon the specific conditions prescribing the purpose of the sample.


A sample statistic is a quantity computed from the sample measurements; it is often used to estimate a population parameter such as a mean or variance. Since samples

from a population are not identical, it is immediately apparent that sample

statistics are not always the same, that is, they vary from sample to sample.

Thus, a sample statistic is a random variable with its own frequency function.

Two important sample statistics are the sample mean and the sample

variance. The sample mean is defined as follows:

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} \qquad (A.50)

where n is the number of measurements in the sample and the x_i's are the

values of the random variable x in the sample. The sample variance is defined

as follows:

s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1} \qquad (A.51)

For computational purposes, the sample variance can also be written as follows:

s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1} \qquad (A.52)

In this form, s^2 is much easier to calculate. As we might expect, \bar{x} is an estimate of the population mean \mu and s^2 is an estimate of the population variance \sigma^2.
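A minimal Python sketch (ours, not the text's) of Equations (A.50) and (A.52); the data values are made up for illustration:

```python
# Illustrative sketch: sample mean (A.50) and the computational form of
# the sample variance (A.52).
def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    n = len(xs)
    total = sum(xs)
    total_sq = sum(x * x for x in xs)
    return (total_sq - total * total / n) / (n - 1)

data = [998, 1004, 1001, 997]   # hypothetical ohmmeter readings
print(sample_mean(data))        # 1000.0
print(sample_variance(data))    # 10.0
```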

Other measures of central tendency include the median, which is the

middle value in an ordered set of data, and the mode, which is the value that

occurs most frequently. The range, denoted by R, is a particularly useful

measure of dispersion in quality control work. It is simply the largest value in a

sample minus the smallest value:

R = x_{\text{largest}} - x_{\text{smallest}} \qquad (A.53)
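The range of Equation (A.53) is equally direct in code; a one-function Python sketch (ours, with made-up data):

```python
# Illustrative sketch of Equation (A.53): R = largest value - smallest value.
def sample_range(xs):
    return max(xs) - min(xs)

print(sample_range([998, 1004, 1001, 997]))  # 7
```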

We often make inferences about a population from the average value of a

sample. This usually requires that we know the parameters of the distribution of

means. Naturally, the expected value of the sample average is \mu, the same mean


value as held by the population. The variance of the sample means, \sigma_{\bar{x}}^2, differs from the population variance and is given by the following:

\sigma_{\bar{x}}^2 = \frac{\sigma^2}{n} \qquad (A.54)

It is reasonable to expect the distribution of sample means to have a smaller variance, since the larger the sample, the closer one would expect the average to fall to the population mean, giving rise to a smaller distribution spread.

We can return to our example in which resistors are normally distributed with mean \mu = 1000 and variance \sigma^2 = 900. If we take samples of size n = 9 and average the ohmmeter readings, it is virtually impossible that the sample average will be as high as 1060, which had a .0227 probability for a single resistor. In fact, the likelihood that the sample average will exceed 1030 is very small. To calculate the exact probability, we must first determine the mean and variance of the sample average distribution. The mean will remain at 1000, but the variance is given by

\sigma_{\bar{x}}^2 = \frac{\sigma^2}{n} = \frac{900}{9} = 100

Finding the standard normal variable, we have

z = \frac{\bar{x} - \mu}{\sigma_{\bar{x}}} = \frac{1{,}030 - 1{,}000}{10} = 3

Consulting the Table B.2 values in Appendix B, we see that the probability that z exceeds 3 is 1 - .99865 = .00135. Correspondingly, this is also the probability that \bar{x} will exceed 1030.
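For readers without Table B.2 at hand, the same probability can be computed from the standard normal CDF via the error function; this Python sketch is illustrative, not from the text:

```python
# Illustrative sketch: P(xbar > 1030) for the resistor example, using the
# identity Phi(z) = (1 + erf(z / sqrt(2))) / 2 in place of Table B.2.
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma_sq, n = 1000, 900, 9
sigma_xbar = math.sqrt(sigma_sq / n)   # standard deviation of xbar: 10
z = (1030 - mu) / sigma_xbar           # 3.0
print(round(1 - normal_cdf(z), 5))     # 0.00135
```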

The distributions of x and \bar{x} for this example are graphed in Figure A.10.


In the preceding example, we implicitly assumed the distribution of sample means to be normal. In fact, this is true if the population is normally distributed. Although the sample mean distribution is not truly normal if the population is other than normally distributed, we frequently treat the sample means as if they were. The reason we can do this is stated in the central limit theorem, which in essence says: if x has a distribution with a finite variance \sigma^2, then the random variable \bar{x} has a distribution that approaches normality as the sample size tends to infinity. Fortunately, for many population distributions often encountered, sample sizes as low as n = 4 produce sample average distributions which are workably close to normal. We use the central limit theorem extensively in quality control, project management, and probabilistic models.
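A small simulation (ours, not the text's) makes the theorem concrete: averages of only n = 4 draws from a decidedly non-normal rectangular distribution already cluster tightly around the population mean:

```python
# Illustrative sketch of the central limit theorem: averages of n = 4
# draws from a rectangular distribution center on the population mean
# (c + d) / 2 of Equation (A.44).
import random

random.seed(1)                      # fixed seed for reproducibility
c, d, n = 100, 1100, 4
pop_mean = (c + d) / 2              # 600

averages = [
    sum(random.uniform(c, d) for _ in range(n)) / n
    for _ in range(10_000)
]
grand_mean = sum(averages) / len(averages)
print(abs(grand_mean - pop_mean) < 25)  # True: the averages center on 600
```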
