
26-07-2015

ME 102
Data Analysis and Interpretation
Lecture-1, Role of Statistics in Engineering
Kannan Iyer
Kiyer@iitb.ac.in

Department of Mechanical Engineering


Indian Institute of Technology, Bombay

Engineering and Technology-I


•  Engineering and technology deal with the development of machines and tools that remove human drudgery and increase comfort.
•  The design and development of these requires an understanding of the fundamental principles of nature and its behaviour.
•  To provide a mathematical basis on which these systems can be designed and engineered, mathematical models are required.
•  Some of these models are well understood, while many are still evolving.

Motivation for the Course


•  Before we begin learning statistics, it is essential to look at the motivation.
•  We shall see that many engineers and scientists use statistics routinely in their work.
•  The purpose of this lecture is to appreciate this.
•  This will set the tone for the material you will be exposed to.
•  Before the subject matter begins, we shall look at the course policy to be followed.
•  The policy document is posted on Moodle and has been emailed to you.

Engineering and Technology-II


•  Let us look at concrete examples.
•  For the design of machines such as a car, the laws of physics governing most components are by now well known.
•  However, the physics of weather prediction is far from completely understood.
•  When the laws of physics are understood, such as Newton's laws of motion, the analysis carried out for the design is termed Deterministic Analysis.
•  However, when the physics is complex, one has to speculate and test the hypothesis using experiments. Certain empirical relations may be generated. This falls under Statistical or Probabilistic Analysis.


Engineering and Technology-III

•  Shown below is part of the data given by BHEL on the behaviour of a hydraulic turbine, for analysis.
•  It summarises the flow and torque in the turbine at different heads and gate valve openings, for a given speed.
•  To perform mathematical analysis, we would prefer an equation.

Engineering and Technology-IV
GVO (%):      41.7   45.8   50.0   54.2   58.3   62.5   66.7   70.8   75.0

H = 178.07 m
  Q (m³/s):    80.676   86.638   92.742   98.441  102.964
  T (kN-m):   3968.65  4267.66  4575.03  4861.88  5087.57

H = 162.93 m
  Q (m³/s):    65.376   70.983   77.332   83.006   88.827   94.270   98.568
  T (kN-m):   2954.99  3237.63  3555.17  3831.58  4105.11  4360.78  4562.15

H = 149.63 m
  Q (m³/s):    62.698   68.089   74.246   79.671   85.293   90.438   94.536
  T (kN-m):   2634.11  2892.73  3183.58  3437.33  3692.30  3918.48  4097.20

H = 137.90 m
  Q (m³/s):    54.512   60.249   65.485   71.333   76.597   82.030   86.922   90.813
  T (kN-m):   2077.13  2351.53  2590.84  2861.67  3096.78  3326.72  3533.47  3694.85

Engineering and Technology-V

•  One way is to fit an empirical equation by a method called regression.
•  Using this, I had fitted an equation of the type

      Q = a + bH + cF + dH² + eHF + fHF²

•  In the above equation, a to f are constants, and H and F are the head and the fraction of valve opening.
•  The constants a to f depended on the angular speed of the turbine, each being a quadratic in the speed.
•  The above is an approximate, intuitive representation. Let us look at its success in predicting the data.
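As a hedged illustration of how such a regression could be set up, here is a minimal sketch using ordinary least squares in NumPy. The operating points and the resulting coefficients below are made-up placeholder values; the actual BHEL data and the fitted constants are not reproduced here.

    import numpy as np

    # Hypothetical operating points: head H (m), valve-opening fraction F (-), flow Q (m^3/s)
    H = np.array([140.0, 150.0, 160.0, 170.0, 140.0, 150.0, 160.0, 170.0])
    F = np.array([0.50, 0.50, 0.50, 0.50, 0.75, 0.75, 0.75, 0.75])
    Q = np.array([60.0, 66.0, 71.0, 77.0, 88.0, 93.0, 98.0, 103.0])

    # Design matrix for the model Q = a + b*H + c*F + d*H^2 + e*H*F + f*H*F^2
    X = np.column_stack([np.ones_like(H), H, F, H**2, H * F, H * F**2])

    # Least-squares estimate of the constants a..f
    coeffs, *_ = np.linalg.lstsq(X, Q, rcond=None)
    Q_fit = X @ coeffs

    # Relative errors, analogous to the mean error / std. dev. quoted on the parity plots
    rel_err = (Q_fit - Q) / Q
    print("a..f:", coeffs)
    print("mean relative error:", rel_err.mean(), "std. dev.:", rel_err.std())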

[Figure: Parity plots of fitted versus measured values. Flow parity (Q-Fit vs Q-data, m³/s): mean error -0.000014, std. dev. 0.007. Torque parity (T-Fit vs T-Data, kN-m): mean error -0.006142, std. dev. 0.034.]

Propagation of Errors
•  One may have noticed that we have used statistics to arrive at an approximate relation that is subject to some error.
•  Now the question is: when I predict the behaviour, how much error might I have?
•  How confident can I be in predicting the behaviour during some transient operation?
•  Thus it is important to know how the error propagates.
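A minimal sketch of one way such fit uncertainty could be propagated, assuming (as a loose illustration, not the lecture's method) that the relative error of the flow fit is roughly normally distributed with the standard deviation quoted on the parity plot; the operating point is a hypothetical value.

    import numpy as np

    rng = np.random.default_rng(0)

    Q_pred = 85.0    # hypothetical predicted flow, m^3/s
    rel_std = 0.007  # relative std. dev. of the flow fit (from the parity plot)

    # Sample many plausible "true" flows consistent with the fit error
    samples = Q_pred * (1.0 + rng.normal(0.0, rel_std, size=100_000))

    # A 95% interval for the flow implied by the fit uncertainty
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print(f"Q = {Q_pred:.1f} m^3/s, 95% interval = [{lo:.1f}, {hi:.1f}] m^3/s")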



Variability

•  Statistical techniques are useful for describing and understanding variability.
•  In our example, when the data was obtained, there would have been some error, which may introduce variability in the predictions.
•  Many times, the physics itself may be variable. The classic example of this is radioactivity.
•  If we count a sample repeatedly for a given time, we will find that the counts are not the same. It can be shown that the counts follow what is called the Poisson distribution.
•  Turbulent flow is another example.
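As a hedged illustration (not part of the original slide), repeated counts from a source with a fixed mean rate can be simulated to see this spread; the mean of 50 counts per interval is a made-up value.

    import numpy as np

    rng = np.random.default_rng(1)

    mean_counts = 50                              # hypothetical mean counts per counting interval
    counts = rng.poisson(mean_counts, size=20)    # 20 repeated counts of the same sample

    print("counts:", counts)
    print("sample mean:", counts.mean(), "sample variance:", counts.var(ddof=1))
    # For a Poisson distribution the variance equals the mean,
    # so the two numbers above should be comparable.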

Deming's Experiment-I

•  Marbles were dropped through a funnel onto a target, and the location where each marble struck the target was recorded.
•  Variation was caused by several factors:
   •  marble placement in the funnel and release dynamics, vibration, air currents, etc.

Role of Jitter

•  Consider the case of cannon-ball shooting.
•  Usually, one will approximate the orientation of the gun and shoot at the target.
•  If the shot falls short, one will increase the angle and shoot again.
•  After 3-4 trials one can hit the target.
•  In this case, it is assumed that the physics is well behaved but not fully known.
•  On the other hand, if some random changes are introduced, such as wind speed and direction, often called jitter, we may get very different results.

Deming's Experiment-II
The funnel was aligned with the center of the
target. Marbles were dropped. The distance from
the strike point to the target center was measured
and recorded.
Two strategies were adopted.
Strategy 1: The funnel was not moved. Then the
process was repeated.
Strategy 2: The funnel was moved an equal
distance in the opposite direction to compensate
for the error. Then the process was repeated.

Ref: Applied Statistics and Probability for Engineers, by Montgomery and Runger.

26-07-2015

Deming's Experiment-III

Adjustments applied in response to random disturbances increased the deviations from the target.
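A minimal simulation sketch of this result (an illustration constructed for these notes, not taken from the slides): Strategy 1 leaves the funnel alone, while Strategy 2 moves the funnel to compensate for each observed error, which inflates the spread by roughly a factor of the square root of two.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000
    noise = rng.normal(0.0, 1.0, size=n)   # random disturbance of each drop (unit std. dev.)

    # Strategy 1: never move the funnel
    hits_1 = noise

    # Strategy 2: after each drop, move the funnel by the negative of the observed error
    aim = 0.0
    hits_2 = np.empty(n)
    for i in range(n):
        hits_2[i] = aim + noise[i]
        aim -= hits_2[i]                   # "compensating" adjustment

    print("std. dev., strategy 1:", hits_1.std())   # about 1.0
    print("std. dev., strategy 2:", hits_2.std())   # about 1.4, i.e. sqrt(2) larger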

Control Chart-I
•  The lesson of the Deming experiment is that a process should not be adjusted in response to random variations.
•  However, when a clear shift in the process value becomes apparent, suitable adjustments can be made.
•  To identify the shift, a control chart is employed: output values are plotted over time, along with the outer limits of normal variation, to indicate when the process leaves its normal range.
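A minimal sketch of how such limits could be computed and out-of-limit points flagged, assuming made-up process data and the common mean plus/minus three standard deviations as the control limits (the lecture does not specify which limits are used):

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical process output: stable at first, then a shift in the mean
    stable = rng.normal(10.0, 0.5, size=30)
    shifted = rng.normal(11.5, 0.5, size=10)
    values = np.concatenate([stable, shifted])

    # Centre line and limits estimated from the early, in-control data
    centre = stable.mean()
    sigma = stable.std(ddof=1)
    ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

    # Flag points outside the control limits
    out = np.where((values > ucl) | (values < lcl))[0]
    print(f"centre = {centre:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
    print("out-of-control sample indices:", out)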

Control Chart-II

[Figure: a control chart showing a process shift beyond the normal-variation limits]

•  While the above figure illustrates a process shift, one should have a strategy to detect such shifts in the field.
•  These could be moving averages, scatter monitors, etc.

Probability Models

1. Probability models help quantify the risks involved in statistical inference, that is, the risks involved in decisions made every day.
2. Probability provides the framework for the study and application of statistics to systems with multiple components.
3. Using these, the reliability of complex systems can be improved (see the sketch after this list).
4. Probability concepts will be introduced in the next few lectures.
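As a hedged illustration of the kind of calculation meant in point 3 (a textbook series/parallel reliability example with made-up component reliabilities, not an example from the lecture):

    # Reliability of a small system from component reliabilities (hypothetical values).
    # Component failures are assumed independent.

    r_pump, r_valve, r_sensor = 0.95, 0.98, 0.90   # made-up component reliabilities

    # Series arrangement: the system works only if every component works
    r_series = r_pump * r_valve * r_sensor

    # Adding a redundant (parallel) sensor: that stage fails only if both sensors fail
    r_sensor_pair = 1.0 - (1.0 - r_sensor) ** 2
    r_with_redundancy = r_pump * r_valve * r_sensor_pair

    print(f"series system reliability: {r_series:.4f}")
    print(f"with a redundant sensor:   {r_with_redundancy:.4f}")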


The Art of Statistics


•  From the discussions, we can conclude that Statistics is the art of learning from data.
•  It is concerned with:
   •  the collection of data,
   •  its subsequent description,
   •  its analysis, which often leads to the drawing of conclusions.

Way Forward
•  We have gained some feel for general variability issues, and for how statistical concepts may be used.
•  We have seen certain nuances, but we need to understand more details; these will be introduced as the course progresses.
•  We shall begin the next class with Sampling and Descriptive Characteristics.
