Fifth Edition
Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu
http://orsc.edu.cn/liu/ut.pdf
5th Edition © 2017 by Uncertainty Theory Laboratory
4th Edition © 2015 by Springer-Verlag Berlin
3rd Edition © 2010 by Springer-Verlag Berlin
2nd Edition © 2007 by Springer-Verlag Berlin
1st Edition © 2004 by Springer-Verlag Berlin
Contents
Preface xi
0 Introduction 1
0.1 Indeterminacy . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
0.2 Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
0.3 Belief Degree . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
0.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1 Uncertain Measure 11
1.1 Measurable Space . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Uncertain Measure . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3 Uncertainty Space . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4 Product Uncertain Measure . . . . . . . . . . . . . . . . . . . 19
1.5 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6 Polyrectangular Theorem . . . . . . . . . . . . . . . . . . . . 27
1.7 Conditional Uncertain Measure . . . . . . . . . . . . . . . . . 29
1.8 Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . 31
2 Uncertain Variable 33
2.1 Uncertain Variable . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Uncertainty Distribution . . . . . . . . . . . . . . . . . . . . . 36
2.3 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.4 Operational Law: Inverse Distribution . . . . . . . . . . . . . 48
2.5 Operational Law: Distribution . . . . . . . . . . . . . . . . . 59
2.6 Operational Law: Boolean System . . . . . . . . . . . . . . . 66
2.7 Expected Value . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.8 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.9 Moment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.10 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.11 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.12 Conditional Uncertainty Distribution . . . . . . . . . . . . . . 95
2.13 Uncertain Sequence . . . . . . . . . . . . . . . . . . . . . . . . 98
2.14 Uncertain Vector . . . . . . . . . . . . . . . . . . . . . . . . . 104
Bibliography 483
Index 495
Preface
Uncertain Measure
Uncertain Variable
Uncertain Programming
Uncertain Set
An uncertain set is a set-valued function on an uncertainty space; it attempts to model unsharp concepts like “young”, “tall”, “warm”, and “most”. The main difference between an uncertain set and an uncertain variable is that the former takes values that are sets while the latter takes values that are points. Uncertain set theory will be introduced in Chapter 8.
Uncertain Logic
Some knowledge in the human brain is actually an uncertain set. This fact encourages us to design an uncertain logic, a methodology for calculating the truth values of uncertain propositions via uncertain set theory. Uncertain logic may provide a flexible means for extracting linguistic summaries from a collection of raw data. Chapter 9 will be devoted to uncertain logic and the linguistic summarizer.
Uncertain Inference
Uncertain Process
Uncertain Calculus
Uncertain Finance
Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting experts' experimental data by uncertainty theory. Chapter 16 will present a questionnaire survey for collecting experts' experimental data. In order to determine uncertainty distributions and membership functions from those data, Chapter 16 will also introduce the linear interpolation method, the principle of least squares, the method of moments, and the Delphi method. In addition, uncertain regression analysis and uncertain time series analysis are introduced for the case where imprecise observations are characterized in terms of uncertain variables.
Lecture Slides
If you need lecture slides for uncertainty theory, please download them from
the website at http://orsc.edu.cn/liu/resources.htm.
Purpose
The purpose of this book is to equip readers with a branch of mathematics that deals with belief degrees. The textbook is suitable for researchers, engineers, and students in the fields of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, automation, economics, and management science.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant No. 61573210).
Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
December 4, 2017
Chapter 0
Introduction
0.1 Indeterminacy
By indeterminacy we mean phenomena whose outcomes cannot be exactly predicted in advance. For example, we cannot exactly predict which face will appear before we toss dice; thus “tossing dice” is a type of indeterminate phenomenon. As another example, we cannot exactly predict tomorrow's stock price, so “stock price” is also a type of indeterminate phenomenon. Other instances of indeterminacy include “roulette wheel”, “product lifetime”, “market demand”, “bridge strength”, “travel distance”, etc.
Indeterminacy is absolute, while determinacy is relative. This is why real decisions are usually made in a state of indeterminacy. How to model indeterminacy is thus an important research subject not only in mathematics but also in science and engineering.
In order to describe an indeterminate quantity (e.g., stock price), what we need is a “distribution function” representing the degree to which the quantity falls to the left of the current point. Such a function takes larger values as the current point moves from left to right. See Figure 1.
0.2 Frequency
Assume we have collected a set of samples of some indeterminate quantity (e.g., stock price). By cumulative frequency we mean a function representing the percentage of all samples that fall to the left of the current point. The cumulative frequency clearly looks like a step function; see Figure 2.
[Figure 2: the cumulative frequency, a step function rising from 0 to 1.]
The cumulative frequency does not change with our state of knowledge and preference. In other words, the frequency in the long run exists and is relatively invariant, whether or not it is observed by us.
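For concreteness, the cumulative frequency of a finite sample set can be sketched in a few lines of Python (the sample values below are hypothetical, not from the book):

```python
def cumulative_frequency(samples, x):
    """Percentage of samples falling to the left of (i.e., not exceeding) x."""
    return sum(1 for s in samples if s <= x) / len(samples)

# hypothetical stock-price samples
prices = [12.0, 12.5, 13.1, 13.1, 14.0]

# the function rises in steps from 0 to 1 as x moves from left to right
assert cumulative_frequency(prices, 11.0) == 0.0
assert cumulative_frequency(prices, 13.0) == 0.4
assert cumulative_frequency(prices, 14.0) == 1.0
```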
Assume “drawing a red ball” and “drawing a black ball” are equally likely. Then the belief degree for drawing a red ball is 0.5, and the belief degree for drawing a black ball is also 0.5.
The belief degree depends heavily on the personal knowledge (even includ-
ing preference) concerning the event. When the personal knowledge changes,
the belief degree changes too.
See Figure 4. From the function Φ(x), we may infer that the belief degree
of “the bridge strength being less than 90 tons” is 0.25. In other words, it is
reasonable to infer that “I am 25% sure that the bridge strength is less than
90 tons”, or equivalently “I am 75% sure that the bridge strength is greater
than 90 tons”.
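This inference can be checked numerically. A minimal sketch, assuming (as Figure 4 suggests) that Φ rises linearly from 0 at 80 tons to 1 at 120 tons:

```python
def Phi(x):
    """Belief degree that the bridge strength is less than x tons
    (linear between 80 and 120 tons, per Figure 4)."""
    if x < 80:
        return 0.0
    if x > 120:
        return 1.0
    return (x - 80) / 40

assert Phi(90) == 0.25        # 25% sure the strength is less than 90 tons
assert 1 - Phi(90) == 0.75    # 75% sure it is greater than 90 tons
```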
[Figure 4: the belief degree function Φ(x) of the bridge strength, rising from 0 at x = 80 tons to 1 at x = 120 tons.]
Surveys have shown that human beings usually estimate a much wider range of values than the object actually takes. This conservatism makes belief degrees deviate far from frequencies; thus belief degrees are usually wrong when compared with the corresponding frequencies. However, it cannot be denied that those belief degrees are indeed helpful for decision making.
[Figure: a truck weighing “exactly 90 tons” approaching a bridge of unknown strength.]
That is to say, we are 100% sure that the truck can cross over the 50 bridges
successfully.
[Figure 6: the “true” probability distribution versus the belief degree function of the bridge strength, with axis marks at 80, 95, 110, and 120 tons.]
However, when there do not exist any observed samples of the bridge strength at the moment, we have to invite some bridge engineers to evaluate the belief degrees about it. As we stated before, human beings usually estimate a much wider range of values than the bridge strength actually takes because of conservatism. Assume the belief degree function is

           ⎧ 0,             if x < 80
    Φ(x) = ⎨ (x − 80)/40,   if 80 ≤ x ≤ 120      (3)
           ⎩ 1,             if x > 120.
See Figure 6. Let us imagine what will happen if the belief degree function is treated as a probability distribution. At first, we have to regard the 50 bridge strengths as iid uniform random variables on [80, 120] in tons. If we have the truck cross over the 50 bridges one by one, then, since each strength exceeds the 90-ton truck weight with probability (120 − 90)/40 = 0.75, we immediately have

    Pr{“the truck can cross over the 50 bridges”} = 0.75^50 ≈ 0.   (4)
Thus it is almost impossible that the truck crosses over the 50 bridges suc-
cessfully. Unfortunately, the results (2) and (4) are at opposite poles. This
example shows that, by inappropriately using probability theory, a sure event
becomes an impossible one. The error seems intolerable for us. Hence the
belief degrees cannot be treated as subjective probability.
That is to say, we are 75% sure that the truck can cross over the 50 bridges successfully. Here the degree 75% does not reach the true value 100%. But the error is caused by the difference between belief degree and frequency, and is not further magnified by uncertainty theory.
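The contrast between the two treatments can be reproduced in a few lines; a sketch assuming a 90-ton truck and 50 bridge strengths described by the belief degree function (3):

```python
# belief degree that a single bridge strength exceeds 90 tons
single = 1 - (90 - 80) / 40          # = 0.75

# treating belief degrees as probabilities: independence multiplies,
# so crossing all 50 bridges becomes almost impossible
prob_all = single ** 50

# uncertainty theory: the measure of the intersection of independent
# events is the minimum, so the degree stays at 0.75
unc_all = min([single] * 50)

assert prob_all < 1e-6
assert unc_all == 0.75
```

The probabilistic treatment turns a sure event into an almost impossible one, while the uncertainty-theoretic answer stays within the error already present in the belief degrees.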
0.4 Summary
In order to model indeterminacy, many theories have been invented. What theories are considered acceptable? Personally, I think an acceptable theory should be not only theoretically self-consistent but also the best among its peers for solving at least one practical problem. On the basis of this principle, I may conclude that there exist two such mathematical systems: one is probability theory and the other is uncertainty theory. It is emphasized that probability theory is only applicable to modelling frequencies, and uncertainty theory is only applicable to modelling belief degrees. In other words, frequency is
the empirical basis of probability theory, while belief degree is the empirical basis of uncertainty theory.
Figure 7: When the sample size is large enough, the estimated probability
distribution (left curve) may be close enough to the cumulative frequency (left
histogram). In this case, probability theory is the only legitimate approach.
When the belief degrees are available (no samples), the estimated uncertainty
distribution (right curve) usually deviates far from the cumulative frequency
(right histogram but unknown). In this case, uncertainty theory is the only
legitimate approach.
Chapter 1
Uncertain Measure
Uncertainty theory was founded by Liu [77] in 2007 and has subsequently been studied by many researchers. Nowadays uncertainty theory has become a branch of mathematics for modelling belief degrees. This chapter will present the normality, duality, subadditivity, and product axioms of uncertainty theory. From those axioms, this chapter will also introduce the uncertain measure, a fundamental concept in uncertainty theory. In addition, the product uncertain measure and the conditional uncertain measure will be explored at the end of this chapter.
Example 1.1: The collection {∅, Γ} is the smallest σ-algebra over Γ, and
the power set (i.e., all subsets of Γ) is the largest σ-algebra.
Example 1.3: Let L be the collection of all finite disjoint unions of all intervals of the form

Then L is an algebra over ℜ (the set of real numbers), but not a σ-algebra, because Λi = (0, (i − 1)/i] ∈ L for all i but

    ⋃_{i=1}^∞ Λi = (0, 1) ∉ L.   (1.4)
Example 1.5: Let ℜ be the set of real numbers. Then L = {∅, ℜ} is a σ-algebra over ℜ. Thus (ℜ, L) is a measurable space. Note that there exist only two measurable sets in this space: one is ∅ and the other is ℜ. Keep in mind that intervals like [0, 1] and (0, +∞) are not measurable in this space!
Example 1.6: Let Γ = {a, b, c}. Then L = {∅, {a}, {b, c}, Γ} is a σ-algebra
over Γ. Thus (Γ, L) is a measurable space. Furthermore, {a} and {b, c} are
measurable sets in this space, but {b}, {c}, {a, b}, {a, c} are not.
Example 1.7: It has been proved that intervals, open sets, closed sets,
rational numbers, and irrational numbers are all Borel sets.
Example 1.8: There exists a non-Borel set over ℜ. Let [a] represent the set of all rational numbers plus a, i.e., [a] = {q + a | q is a rational number}. Note that if a1 − a2 is not a rational number, then [a1] and [a2] are disjoint sets. Thus ℜ is divided into an infinite number of those disjoint sets. Let A be a new set containing precisely one element from each of them. Then A is not a Borel set.
    sup_{1≤i<∞} ξi(γ),   inf_{1≤i<∞} ξi(γ),   lim sup_{i→∞} ξi(γ),   lim inf_{i→∞} ξi(γ).   (1.7)

In particular, if lim_{i→∞} ξi(γ) exists for each γ, then the limit is also a measurable function.
Remark 1.3: Since “1” means “complete belief” and we cannot have more belief than “complete belief”, the belief degree of any event cannot exceed 1. Furthermore, the belief degree of the universal set is 1 because it is completely believable. Thus the belief degree meets the normality axiom.
Remark 1.5: Given two events with known belief degrees, it is frequently asked how the belief degree of their union is generated from the individual ones. Personally, I do not think there exists any rule for it. Many surveys have shown that, generally speaking, the belief degree of a union of events is neither the sum of the belief degrees of the individual events (as with probability measure) nor their maximum (as with possibility measure). It seems that there is no explicit relation between the union and the individuals except for the subadditivity axiom.
Remark 1.7: Although probability measure satisfies the above three axioms,
probability theory is not a special case of uncertainty theory because the
product probability measure does not satisfy the fourth axiom, namely the
product axiom on Page 20.
Definition 1.5 (Liu [77]) The set function M is called an uncertain measure
if it satisfies the normality, duality, and subadditivity axioms.
Exercise 1.2: Let Γ = {γ1 , γ2 }. It is clear that there exist 4 events in the
power set,
L = {∅, {γ1 }, {γ2 }, Γ}. (1.10)
M{∅} = 0, M{Γ} = 1.
Show that M is an uncertain measure.
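The three axioms can be checked mechanically on this two-point power set. A sketch with hypothetical singleton values M{γ1} = 0.7 and M{γ2} = 0.3 (the exercise leaves them unspecified here; duality forces them to sum to 1):

```python
universe = frozenset({'g1', 'g2'})
# hypothetical uncertain measure on the power set of a two-point space
M = {frozenset(): 0.0,
     frozenset({'g1'}): 0.7,
     frozenset({'g2'}): 0.3,
     universe: 1.0}

assert M[universe] == 1.0                                    # normality
for event in M:                                              # duality
    assert abs(M[event] + M[universe - event] - 1.0) < 1e-12
for a in M:                                                  # subadditivity
    for b in M:
        assert M[a | b] <= M[a] + M[b] + 1e-12
```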
Exercise 1.7: Let Γ be the set of real numbers, and let c be a real number with 0 < c ≤ 0.5. For each subset Λ, define

           ⎧ 0,      if Λ = ∅
           ⎪ c,      if Λ is upper bounded and Λ ≠ ∅
    M{Λ} = ⎨ 0.5,    if both Λ and Λc are upper unbounded      (1.15)
           ⎪ 1 − c,  if Λc is upper bounded and Λ ≠ Γ
           ⎩ 1,      if Λ = Γ.
Exercise 1.8: Suppose that λ(x) is a nonnegative function on ℜ (the set of real numbers) such that

    sup_{x∈ℜ} λ(x) = 0.5.   (1.16)
Proof: The normality axiom says M{Γ} = 1, and the duality axiom says M{Λc1} = 1 − M{Λ1}. Since Λ1 ⊂ Λ2, we have Γ = Λc1 ∪ Λ2. By using the subadditivity axiom, we obtain

    1 = M{Γ} ≤ M{Λc1} + M{Λ2} = 1 − M{Λ1} + M{Λ2},

which implies M{Λ1} ≤ M{Λ2}.
Theorem 1.2 The empty set ∅ always has an uncertain measure zero. That
is,
M{∅} = 0. (1.21)
Proof: Since ∅ = Γc and M{Γ} = 1, it follows from the duality axiom that
M{∅} = 1 − M{Γ} = 1 − 1 = 0.
Theorem 1.3 The uncertain measure takes values between 0 and 1. That
is, for any event Λ, we have
0 ≤ M{Λ} ≤ 1. (1.22)
Example 1.9: Assume Γ is the set of real numbers. Let α be a number with 0 < α ≤ 0.5. Define an uncertain measure as follows,

           ⎧ 0,      if Λ = ∅
           ⎪ α,      if Λ is upper bounded and Λ ≠ ∅
    M{Λ} = ⎨ 0.5,    if both Λ and Λc are upper unbounded      (1.26)
           ⎪ 1 − α,  if Λc is upper bounded and Λ ≠ Γ
           ⎩ 1,      if Λ = Γ.
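A sketch of the set function (1.26), describing a subset of ℜ only by the boundedness flags that the five cases inspect (α = 0.3 is an arbitrary choice for illustration):

```python
def measure(is_empty=False, is_full=False, upper_bounded=False,
            comp_upper_bounded=False, alpha=0.3):
    """The set function (1.26); the flags describe which of the five
    cases the set Λ falls into."""
    if is_empty:
        return 0.0
    if is_full:
        return 1.0
    if upper_bounded:
        return alpha
    if comp_upper_bounded:
        return 1.0 - alpha
    return 0.5  # both Λ and its complement are upper unbounded

# duality for Λ = (−∞, 0]: Λ is upper bounded, Λc = (0, +∞) is not,
# but the complement of Λc is upper bounded
assert abs(measure(upper_bounded=True)
           + measure(comp_upper_bounded=True) - 1.0) < 1e-12
# duality for Λ = the integers: both Λ and Λc are upper unbounded
assert measure() + measure() == 1.0
```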
Example 1.10: Let Γ be a two-point set {γ1 , γ2 }, let L be the power set
of {γ1 , γ2 }, and let M be an uncertain measure determined by M{γ1 } = 0.6
and M{γ2 } = 0.4. Then (Γ, L, M) is an uncertainty space.
Example 1.11: Let Γ be a three-point set {γ1 , γ2 , γ3 }, let L be the power set
of {γ1 , γ2 , γ3 }, and let M be an uncertain measure determined by M{γ1 } =
0.6, M{γ2 } = 0.3 and M{γ3 } = 0.2. Then (Γ, L, M) is an uncertainty space.
Example 1.12: Let Γ be the interval [0, 1], let L be the Borel algebra over
[0, 1], and let M be the Lebesgue measure. Then (Γ, L, M) is an uncertainty
space.
For practical purposes, the study of uncertainty spaces is sometimes re-
stricted to complete uncertainty spaces.
Exercise 1.13: Let Γ = [0, 1], let L be the Borel algebra over Γ, and let M
be the Lebesgue measure. Show that (Γ, L, M) is a continuous uncertainty
space.
Exercise 1.14: Let Γ = [0, 1], and let L be the power set over Γ. For each subset Λ of Γ, define

           ⎧ 0,    if Λ = ∅
    M{Λ} = ⎨ 1,    if Λ = Γ      (1.29)
           ⎩ 0.5,  otherwise.
Γ = Γ1 × Γ2 × · · · (1.30)
that is, the set of all ordered tuples of the form (γ1, γ2, · · · ), where γk ∈ Γk for k = 1, 2, · · · . A measurable rectangle in Γ is a set
Λ = Λ1 × Λ2 × · · · (1.31)
L = L1 × L2 × · · · (1.32)
Remark 1.8: Note that (1.33) defines a product uncertain measure only for rectangles. How do we extend the uncertain measure M from the class of rectangles to the product σ-algebra L? For each event Λ ∈ L, we have

           ⎧ sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},
           ⎪      if sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5
    M{Λ} = ⎨ 1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk},          (1.34)
           ⎪      if sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} > 0.5
           ⎩ 0.5,  otherwise.
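Formula (1.34) can be exercised on a toy product of two finite factor spaces; everything below (the two-point spaces and their subset measures) is a hypothetical setup for illustration, not from the book:

```python
from itertools import combinations

# hypothetical two-point factor spaces with a measure on every subset
G1, G2 = ('a', 'b'), ('x', 'y')
M1 = {frozenset(): 0.0, frozenset('a'): 0.6,
      frozenset('b'): 0.4, frozenset(G1): 1.0}
M2 = {frozenset(): 0.0, frozenset('x'): 0.7,
      frozenset('y'): 0.3, frozenset(G2): 1.0}

def subsets(points):
    return [frozenset(c) for r in range(len(points) + 1)
            for c in combinations(points, r)]

def sup_rect(event):
    """sup over rectangles A x B contained in `event` of min(M1{A}, M2{B}),
    the inner quantity of (1.34)."""
    best = 0.0
    for A in subsets(G1):
        for B in subsets(G2):
            if {(g1, g2) for g1 in A for g2 in B} <= event:
                best = max(best, min(M1[A], M2[B]))
    return best

def product_measure(event):
    full = {(g1, g2) for g1 in G1 for g2 in G2}
    if sup_rect(event) > 0.5:
        return sup_rect(event)
    if sup_rect(full - event) > 0.5:
        return 1.0 - sup_rect(full - event)
    return 0.5

event = {('a', 'x')}
assert product_measure(event) == 0.6
comp = {('a', 'y'), ('b', 'x'), ('b', 'y')}
assert abs(product_measure(comp) - 0.4) < 1e-12   # duality holds
```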
Remark 1.9: The sum of the uncertain measures of the maximum rectangles in Λ and Λc is always less than or equal to 1, i.e.,

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≤ 1.
[Figure: the maximum rectangle Λ1 × Λ2 contained in an event Λ in the product space Γ1 × Γ2.]
Remark 1.10: It is clear that for each Λ ∈ L, the uncertain measure M{Λ} defined by (1.34) takes possible values on the interval

    [ sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},  1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ].

Thus (1.34) coincides with the maximum uncertainty principle (Liu [77]); that is, M{Λ} takes the value as close to 0.5 as possible within the above interval.
Remark 1.11: If the sum of the uncertain measures of the maximum rectangles in Λ and Λc is exactly 1, i.e.,

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} = 1,
Exercise 1.15: Let (Γ1, L1, M1) be the interval [0, 1] with Borel algebra and Lebesgue measure, and let (Γ2, L2, M2) also be the interval [0, 1] with Borel algebra and Lebesgue measure. Then
Λ = {(γ1 , γ2 ) ∈ Γ1 × Γ2 | γ1 + γ2 ≤ 1} (1.36)
Exercise 1.16: Let (Γ1, L1, M1) be the interval [0, 1] with Borel algebra and Lebesgue measure, and let (Γ2, L2, M2) also be the interval [0, 1] with Borel algebra and Lebesgue measure. Then
Proof: In order to prove that the product uncertain measure (1.34) is indeed
an uncertain measure, we should verify that the product uncertain measure
satisfies the normality, duality and subadditivity axioms.
Step 1: The product uncertain measure is clearly normal, i.e., M{Γ} = 1.
Step 2: We prove the duality, i.e., M{Λ} + M{Λc } = 1. The argument
breaks down into three cases. Case 1: Assume

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

    sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≤ 0.5.

It follows from (1.34) that M{Λ} = M{Λc} = 0.5, which proves the duality.
Step 3: Let us prove that M is an increasing set function. Suppose Λ
and ∆ are two events in L with Λ ⊂ ∆. The argument breaks down into
three cases. Case 1: Assume
Then
Then
Thus
    M{Λ} = 1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk}
Case 3: Assume
    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5
and
    sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} ≤ 0.5.
Then
M{Λ} ≤ 0.5 ≤ 1 − M{∆c } = M{∆}.
Step 4: We prove the subadditivity, i.e., M{Λ ∪ ∆} ≤ M{Λ} + M{∆}. The argument breaks down into three cases. Case 1: Assume M{Λ} < 0.5 and M{∆} < 0.5. For any given ε > 0,
there are two rectangles
    Λ1 × Λ2 × · · · ⊂ Λc,   ∆1 × ∆2 × · · · ⊂ ∆c
such that
    1 − min_{1≤k<∞} Mk{Λk} ≤ M{Λ} + ε/2,   1 − min_{1≤k<∞} Mk{∆k} ≤ M{∆} + ε/2.
Note that
(Λ1 ∩ ∆1 ) × (Λ2 ∩ ∆2 ) × · · · ⊂ (Λ ∪ ∆)c .
It follows from the duality and subadditivity axioms that
Mk {Λk ∩ ∆k } = 1 − Mk {(Λk ∩ ∆k )c } = 1 − Mk {Λck ∪ ∆ck }
≥ 1 − (Mk {Λck } + Mk {∆ck })
= 1 − (1 − Mk {Λk }) − (1 − Mk {∆k })
= Mk {Λk } + Mk {∆k } − 1
for any k. Thus
    M{Λ ∪ ∆} ≤ 1 − min_{1≤k<∞} Mk{Λk ∩ ∆k}
≤ M{Λ} + M{∆} + ε.
Letting ε → 0, we obtain M{Λ ∪ ∆} ≤ M{Λ} + M{∆}.
Case 2: Assume M{Λ} ≥ 0.5 and M{∆} < 0.5. When M{Λ ∪ ∆} = 0.5, the
subadditivity is obvious. Now we consider the case M{Λ ∪ ∆} > 0.5, i.e.,
M{Λc ∩ ∆c} < 0.5. By using Λc ∪ ∆ = (Λc ∩ ∆c) ∪ ∆ and Case 1, we get

    M{Λc ∪ ∆} ≤ M{Λc ∩ ∆c} + M{∆}.

Thus
M{Λ ∪ ∆} = 1 − M{Λc ∩ ∆c } ≤ 1 − M{Λc ∪ ∆} + M{∆}
≤ 1 − M{Λc } + M{∆} = M{Λ} + M{∆}.
Case 3: If both M{Λ} ≥ 0.5 and M{∆} ≥ 0.5, then the subadditivity is
obvious because M{Λ} + M{∆} ≥ 1. The theorem is proved.
1.5 Independence
Definition 1.10 (Liu [84]) The events Λ1, Λ2, · · · , Λn are said to be independent if

    M{ ⋂_{i=1}^n Λ∗i } = ⋀_{i=1}^n M{Λ∗i}   (1.40)
Remark 1.12: Especially, two events Λ1 and Λ2 are independent if and only
if
M {Λ∗1 ∩ Λ∗2 } = M{Λ∗1 } ∧ M{Λ∗2 } (1.41)
where Λ∗i are arbitrarily chosen from {Λi , Λci }, i = 1, 2, respectively. That is,
the following four equations hold:
Example 1.14: The sure event Γ is independent of any event Λ because the
following four equations hold:
where Λ∗i are arbitrarily chosen from {Λi, Λci, ∅}, i = 1, 2, · · · , n, respectively.
The equation (1.42) is proved. Conversely, if the equation (1.42) holds, then
    M{ ⋂_{i=1}^n Λ∗i } = 1 − M{ ⋃_{i=1}^n (Λ∗i)c } = 1 − ⋁_{i=1}^n M{(Λ∗i)c} = ⋀_{i=1}^n M{Λ∗i}.
where Λ∗i are arbitrarily chosen from {Λi, Λci, Γ}, i = 1, 2, · · · , n, respectively.
The equation (1.40) is true. The theorem is proved.
[Figure: the rectangle Λ1 × Λ2 in the product space Γ1 × Γ2.]
Proof: For simplicity, we only prove the case of n = 2. It follows from the
product axiom that the product uncertain measure of the intersection is
[Figure: a polyrectangle in the product space Γ1 × Γ2.]
Thus
M{Λ1k × Λ2k } + M{Λc1k × Λc2,k+1 } = 1.
Case II: If
M{Λ1k × Λ2k } = M2 {Λ2k },
then the maximum rectangle in Λc is Λc1,k−1 × Λc2k , and
Thus
M{Λ1k × Λ2k } + M{Λc1,k−1 × Λc2k } = 1.
No matter what case happens, the sum of the uncertain measures of the
maximum rectangles in Λ and Λc is always 1. It follows from the product
axiom that (1.49) holds.
Remark 1.13: Since M{Λ1i × Λ2i} = M1{Λ1i} ∧ M2{Λ2i} for each index i, we also have

    M{Λ} = ⋁_{i=1}^m M1{Λ1i} ∧ M2{Λ2i}.   (1.50)
M{Λ|A} ≤ M{Λ ∩ A}/M{A}.    (1.51)

M{Λ|A} = 1 − M{Λc |A} ≥ 1 − M{Λc ∩ A}/M{A}.    (1.52)

0 ≤ 1 − M{Λc ∩ A}/M{A} ≤ M{Λ ∩ A}/M{A} ≤ 1.    (1.53)
Hence any numbers between 1 − M{Λc ∩ A}/M{A} and M{Λ ∩ A}/M{A} are
reasonable values that the conditional uncertain measure may take. Based
on the maximum uncertainty principle (Liu [77]), we have the following con-
ditional uncertain measure.
M{Λ|A} =
  M{Λ ∩ A}/M{A},         if M{Λ ∩ A}/M{A} < 0.5
  1 − M{Λc ∩ A}/M{A},    if M{Λc ∩ A}/M{A} < 0.5    (1.54)
  0.5,                   otherwise.
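Formula (1.54) is straightforward to compute once M{Λ ∩ A}, M{Λc ∩ A} and M{A} are known. A small sketch with hypothetical measure values, also checking that duality M{Λ|A} + M{Λc |A} = 1 is preserved:

```python
def conditional_measure(m_inter, m_comp_inter, m_a):
    """M{Λ|A} by formula (1.54), given M{Λ ∩ A}, M{Λc ∩ A} and M{A} > 0."""
    if m_inter / m_a < 0.5:
        return m_inter / m_a
    if m_comp_inter / m_a < 0.5:
        return 1 - m_comp_inter / m_a
    return 0.5

# hypothetical values: M{Λ ∩ A} = 0.2, M{Λc ∩ A} = 0.7, M{A} = 0.8
print(conditional_measure(0.2, 0.7, 0.8))  # 0.25
# duality: swapping Λ and Λc gives the complementary measure
print(conditional_measure(0.2, 0.7, 0.8) + conditional_measure(0.7, 0.2, 0.8))  # 1.0
```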
Remark 1.16: The conditional uncertain measure M{Λ|A} yields the pos-
terior uncertain measure of Λ after the occurrence of event A.
Theorem 1.10 (Liu [77]) Let (Γ, L, M) be an uncertainty space, and let A
be an event with M{A} > 0. Then M{·|A} defined by (1.54) is an uncertain
measure, and (Γ, L, M{·|A}) is an uncertainty space.
First, the normality axiom holds because

M{Γ|A} = 1 − M{Γc ∩ A}/M{A} = 1 − M{∅}/M{A} = 1.

Next we verify duality. If

M{Λ ∩ A}/M{A} ≥ 0.5 and M{Λc ∩ A}/M{A} ≥ 0.5,

then M{Λ|A} = M{Λc |A} = 0.5 and the sum is 1. If

M{Λ ∩ A}/M{A} < 0.5 < M{Λc ∩ A}/M{A},

then we have

M{Λ|A} + M{Λc |A} = M{Λ ∩ A}/M{A} + (1 − M{Λ ∩ A}/M{A}) = 1.
That is, M{·|A} satisfies the duality axiom. Finally, for any countable se-
quence {Λi } of events, if M{Λi |A} < 0.5 for all i, it follows from (1.55) and
the subadditivity axiom that
M{⋃_{i=1}^∞ Λi | A} ≤ M{⋃_{i=1}^∞ (Λi ∩ A)}/M{A} ≤ (Σ_{i=1}^∞ M{Λi ∩ A})/M{A} = Σ_{i=1}^∞ M{Λi |A}.
If M{∪i Λi |A} > 0.5, we may prove the above inequality by the following
facts:
Λc1 ∩ A ⊂ (⋃_{i=2}^∞ (Λi ∩ A)) ∪ (⋂_{i=1}^∞ Λci ∩ A),

M{Λc1 ∩ A} ≤ Σ_{i=2}^∞ M{Λi ∩ A} + M{⋂_{i=1}^∞ Λci ∩ A},

M{⋃_{i=1}^∞ Λi | A} = 1 − M{⋂_{i=1}^∞ Λci ∩ A}/M{A},

Σ_{i=1}^∞ M{Λi |A} ≥ 1 − M{Λc1 ∩ A}/M{A} + (Σ_{i=2}^∞ M{Λi ∩ A})/M{A}.
If there are at least two terms greater than 0.5, then the subadditivity is
clearly true. Thus M{·|A} satisfies the subadditivity axiom. Hence M{·|A}
is an uncertain measure. Furthermore, (Γ, L, M{·|A}) is an uncertainty space.
inappropriate because both probability theory and fuzzy set theory may lead
to counterintuitive results in this case.
In order to rationally deal with belief degrees, uncertainty theory was
founded by Liu [77] in 2007 and perfected by Liu [80] in 2009. The core of
uncertainty theory is uncertain measure defined by the normality axiom, du-
ality axiom, subadditivity axiom, and product axiom. In practice, uncertain
measure is interpreted as the personal belief degree of an uncertain event
that may happen.
Uncertain measure was also actively investigated by Gao [41], Liu [84],
Zhang [203], Peng-Iwamura [123], and Liu [92], among others. Since then,
uncertain measure has been well developed and has become a rigorous
cornerstone of uncertainty theory.
Chapter 2
Uncertain Variable
[Figure: an uncertain variable ξ(γ), a real-valued function on the universal set Γ]
Remark 2.1: Note that the event {ξ ∈ B} is a subset of the universal set
Γ, i.e.,
{ξ ∈ B} = {γ ∈ Γ | ξ(γ) ∈ B}. (2.1)
M{ξ ∈ [0, 2]} = M{γ | ξ(γ) ∈ [0, 2]} = M{[0, 2/3]} = 2/3, (2.7)
M{ξ > 2} = M{γ | ξ(γ) > 2} = M{(2/3, 1]} = 1/3. (2.8)
ξ(γ) ≡ c (2.9)
on the uncertainty space (Γ, L, M). Furthermore, for any Borel set B of real
numbers, we have
Example 2.5: Let ξ1 and ξ2 be two uncertain variables. Then the sum
ξ = ξ1 + ξ2 is an uncertain variable defined by
[Figure: an uncertainty distribution Φ(x)]
(ii) What is the uncertainty distribution of ξ(γ) = √γ? (iii) What is the
uncertainty distribution of ξ(γ) = 1/γ?
let (Γ, L, M) be {γ1 , γ2 } with power set and M{γ1 } = M{γ2 } = 0.5. Define

ξ(γ) =
  1,  if γ = γ1
  −1, if γ = γ2 ,

η(γ) =
  −1, if γ = γ1
  1,  if γ = γ2 .

Thus the two uncertain variables ξ and η are identically distributed but ξ ≠ η.
Note that such a sequence is not unique. We define a set function M{B} by

M{B} =
  inf_{B ⊂ ⋃_{i=1}^∞ Ai} Σ_{i=1}^∞ M{Ai },       if inf_{B ⊂ ⋃_{i=1}^∞ Ai} Σ_{i=1}^∞ M{Ai } < 0.5
  1 − inf_{Bc ⊂ ⋃_{i=1}^∞ Ai} Σ_{i=1}^∞ M{Ai },  if inf_{Bc ⊂ ⋃_{i=1}^∞ Ai} Σ_{i=1}^∞ M{Ai } < 0.5
  0.5,                                           otherwise.

Then the set function M is indeed an uncertain measure on ℜ, and the un-
certain variable defined by the identity function ξ(γ) = γ has the uncertainty
distribution Φ.
Example 2.6: It follows from the sufficient and necessary condition that
the function
Φ(x) ≡ 0.5 (2.23)
is an uncertainty distribution. Take an uncertainty space (Γ, L, M) to be ℜ
with power set and

M{Λ} =
  0,   if Λ = ∅
  1,   if Λ = ℜ      (2.24)
  0.5, otherwise.
Then the uncertain variable ξ(γ) = γ has the uncertainty distribution (2.23).
for any real number x. (ii) Design an uncertain variable whose uncertainty
distribution is
Φ(x) = 0.6 (2.26)
Φ(x) =
  0,               if x ≤ a
  (x − a)/(b − a), if a ≤ x ≤ b      (2.28)
  1,               if x ≥ b
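The linear distribution (2.28) can be transcribed directly and checked against its inverse Φ−1 (α) = (1 − α)a + αb; a minimal sketch, using John's age L(24, 28) from Example 2.7 below:

```python
def linear_cdf(x, a, b):
    """Linear uncertainty distribution (2.28) of L(a, b)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# round trip with the inverse distribution Φ−1(α) = (1 − α)a + αb
a, b, alpha = 24, 28, 0.25
x = (1 - alpha) * a + alpha * b      # 25.0
print(linear_cdf(x, a, b))           # 0.25
```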
[Figure: linear uncertainty distribution Φ(x), rising from 0 at x = a to 1 at x = b]
Example 2.7: John’s age (2.21) is a linear uncertain variable L(24, 28), and
James’ height (2.22) is another linear uncertain variable L(180, 185).
uncertainty distribution

Φ(x) =
  0,                       if x ≤ a
  (x − a)/(2(b − a)),      if a ≤ x ≤ b
  (x + c − 2b)/(2(c − b)), if b ≤ x ≤ c      (2.29)
  1,                       if x ≥ c
[Figure: zigzag uncertainty distribution Φ(x), equal to 0.5 at x = b]
denoted by LOGN (e, σ), where e and σ are real numbers with σ > 0.
[Figure: normal uncertainty distribution Φ(x), equal to 0.5 at x = e]
[Figure: lognormal uncertainty distribution Φ(x), equal to 0.5 at x = exp(e)]
Proof: The equation M{ξ ≤ x} = Φ(x) follows from the definition of uncer-
tainty distribution immediately. By using the duality of uncertain measure,
[Figure: an uncertainty distribution interpolating the expert's data points (x1 , α1 ), (x2 , α2 ), · · · , (x5 , α5 )]
we get
M{ξ > x} = 1 − M{ξ ≤ x} = 1 − Φ(x).
Remark 2.3: Perhaps some readers would like to obtain an exact scalar value
of the uncertain measure M{a ≤ ξ ≤ b}. Generally speaking, this is impossible
(except when a = −∞ or b = +∞) if only an uncertainty distribution is
available. Fortunately, such an exact value is not needed for practical purposes.
Definition 2.13 (Liu [84]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ(x). Then the inverse function Φ−1 (α) is called the
inverse uncertainty distribution of ξ.
Note that the inverse uncertainty distribution Φ−1 (α) is well defined on the
open interval (0, 1). If needed, we may extend the domain to [0, 1] via
Φ−1 (0) = lim_{α↓0} Φ−1 (α),    Φ−1 (1) = lim_{α↑1} Φ−1 (α).    (2.36)
[Figures: inverse uncertainty distributions Φ−1 (α) of the linear, zigzag, normal, and lognormal uncertain variables]
That is, Φ is the uncertainty distribution of ξ and Φ−1 is its inverse uncer-
tainty distribution. The theorem is verified.
Thus Φ−1 (α) is just the inverse uncertainty distribution of the uncertain
variable ξ. The theorem is verified.
2.3 Independence
Note that an uncertain variable is a measurable function from an uncertainty
space to the set of real numbers. The independence of two functions means
that knowing the value of one does not change our estimation of the value
of another. What uncertain variables meet this condition? A typical case is
Section 2.3 - Independence 47
that they are defined on different uncertainty spaces. For example, let ξ1 (γ1 )
and ξ2 (γ2 ) be uncertain variables on the uncertainty spaces (Γ1 , L1 , M1 ) and
(Γ2 , L2 , M2 ), respectively. It is clear that they are also uncertain variables
on the product uncertainty space (Γ1 , L1 , M1 ) × (Γ2 , L2 , M2 ). Then for any
Borel sets B1 and B2 of real numbers, we have
M{(ξ1 ∈ B1 ) ∩ (ξ2 ∈ B2 )}
= M {(γ1 , γ2 ) | ξ1 (γ1 ) ∈ B1 , ξ2 (γ2 ) ∈ B2 }
= M {(γ1 | ξ1 (γ1 ) ∈ B1 ) × (γ2 | ξ2 (γ2 ) ∈ B2 )}
= M1 {γ1 | ξ1 (γ1 ) ∈ B1 } ∧ M2 {γ2 | ξ2 (γ2 ) ∈ B2 }
= M {ξ1 ∈ B1 } ∧ M {ξ2 ∈ B2 } .
That is,
M{(ξ1 ∈ B1 ) ∩ (ξ2 ∈ B2 )} = M {ξ1 ∈ B1 } ∧ M {ξ2 ∈ B2 } . (2.42)
Thus we say two uncertain variables are independent if the equation (2.42)
holds. Generally, we may define independence in the following form.
Definition 2.14 (Liu [80]) The uncertain variables ξ1 , ξ2 , · · · , ξn are said to
be independent if
M{⋂_{i=1}^n (ξi ∈ Bi )} = ⋀_{i=1}^n M{ξi ∈ Bi }    (2.43)
Exercise 2.11: John gives Tom 2 dollars. Thus John gets “−2 dollars”
and Tom “+2 dollars”. Are John’s “−2 dollars” and Tom’s “+2 dollars”
independent? Why?
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.47)
Proof: For simplicity, we only prove the case n = 2. At first, we always have

M{ξ ≤ Ψ−1 (α)} ≥ M{(ξ1 ≤ Φ1−1 (α)) ∩ (ξ2 ≤ Φ2−1 (α))}
= M{ξ1 ≤ Φ1−1 (α)} ∧ M{ξ2 ≤ Φ2−1 (α)} = α ∧ α = α.

On the other hand, we also have

M{ξ ≤ Ψ−1 (α)} ≤ M{(ξ1 ≤ Φ1−1 (α)) ∪ (ξ2 ≤ Φ2−1 (α))}
= M{ξ1 ≤ Φ1−1 (α)} ∨ M{ξ2 ≤ Φ2−1 (α)} = α ∨ α = α.
It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.
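The operational law just proved can be checked numerically: for a strictly increasing f such as addition, Ψ−1 (α) = f (Φ1−1 (α), Φ2−1 (α)). A minimal sketch with hypothetical variables L(1, 3) and L(2, 6), confirming that the sum's inverse distribution is that of L(3, 9):

```python
def inv_linear(alpha, a, b):
    """Inverse uncertainty distribution of L(a, b): (1 − α)a + αb."""
    return (1 - alpha) * a + alpha * b

def inv_sum(alpha):
    # operational law for strictly increasing f: Ψ−1(α) = Φ1−1(α) + Φ2−1(α)
    return inv_linear(alpha, 1, 3) + inv_linear(alpha, 2, 6)

# agrees with the inverse distribution of L(3, 9) at every tested α
print(all(abs(inv_sum(k / 10) - inv_linear(k / 10, 3, 9)) < 1e-12
          for k in range(1, 10)))  # True
```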
ξ = ξ1 × ξ2 × · · · × ξn (2.51)
Φ1−1 (α) = α,    (2.57)
Theorem 2.9 Assume that ξ1 and ξ2 are independent linear uncertain vari-
ables L(a1 , b1 ) and L(a2 , b2 ), respectively. Then the sum ξ1 + ξ2 is also a
linear uncertain variable L(a1 + a2 , b1 + b2 ), i.e.,
The product of a linear uncertain variable L(a, b) and a scalar number k > 0
is also a linear uncertain variable L(ka, kb), i.e.,
Φ1−1 (α) = (1 − α)a1 + αb1 ,
Φ2−1 (α) = (1 − α)a2 + αb2 .
It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is

Ψ−1 (α) = (1 − α)(a1 + a2 ) + α(b1 + b2 ),

which is just the inverse uncertainty distribution of L(a1 + a2 , b1 + b2 ).
Φ2−1 (α) =
  (1 − 2α)a2 + 2αb2 ,       if α < 0.5
  (2 − 2α)b2 + (2α − 1)c2 , if α ≥ 0.5.
It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is
Ψ−1 (α) =
  (1 − 2α)(a1 + a2 ) + 2α(b1 + b2 ),       if α < 0.5
  (2 − 2α)(b1 + b2 ) + (2α − 1)(c1 + c2 ), if α ≥ 0.5.
The product of a lognormal uncertain variable LOGN (e, σ) and a scalar num-
ber k > 0 is also a lognormal uncertain variable LOGN (e + ln k, σ), i.e.,
Remark 2.4: Keep in mind that the sum of lognormal uncertain variables
is no longer lognormal.
f (x) = −x,    f (x) = exp(−x),    f (x) = 1/x (x > 0).
Theorem 2.13 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f
is a strictly decreasing function, then
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.70)
Proof: For simplicity, we only prove the case n = 2. At first, we always have
M{ξ ≤ Ψ−1 (α)} ≥ M{(ξ1 ≥ Φ1−1 (1 − α)) ∩ (ξ2 ≥ Φ2−1 (1 − α))}
= M{ξ1 ≥ Φ1−1 (1 − α)} ∧ M{ξ2 ≥ Φ2−1 (1 − α)} = α ∧ α = α.

On the other hand, we also have

M{ξ ≤ Ψ−1 (α)} ≤ M{(ξ1 ≥ Φ1−1 (1 − α)) ∪ (ξ2 ≥ Φ2−1 (1 − α))}
= M{ξ1 ≥ Φ1−1 (1 − α)} ∨ M{ξ2 ≥ Φ2−1 (1 − α)} = α ∨ α = α.
It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.
f (x1 , x2 ) = x1 − x2 ,
f (x1 , x2 ) = x1 /x2 , x1 , x2 > 0,
f (x1 , x2 ) = x1 /(x1 + x2 ), x1 , x2 > 0.
Note that both strictly increasing function and strictly decreasing function
are special cases of strictly monotone function.
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.76)
On the one hand, since the function f (x1 , x2 ) is strictly increasing with
respect to x1 and strictly decreasing with respect to x2 , we obtain

M{ξ ≤ Ψ−1 (α)} ≥ M{(ξ1 ≤ Φ1−1 (α)) ∩ (ξ2 ≥ Φ2−1 (1 − α))}
= M{ξ1 ≤ Φ1−1 (α)} ∧ M{ξ2 ≥ Φ2−1 (1 − α)} = α ∧ α = α.

On the other hand, for the same reason, we obtain

M{ξ ≤ Ψ−1 (α)} ≤ M{(ξ1 ≤ Φ1−1 (α)) ∪ (ξ2 ≥ Φ2−1 (1 − α))}
= M{ξ1 ≤ Φ1−1 (α)} ∨ M{ξ2 ≥ Φ2−1 (1 − α)} = α ∨ α = α.
It follows that M{ξ ≤ Ψ−1 (α)} = α. That is, Ψ−1 is just the inverse uncer-
tainty distribution of ξ. The theorem is proved.
Ψ−1 (α) = Φ1−1 (α) / Φ2−1 (1 − α).    (2.79)

Ψ−1 (α) = Φ1−1 (α) / (Φ1−1 (α) + Φ2−1 (1 − α)).    (2.80)
A Useful Theorem
In many cases, it is required to calculate M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0}. Perhaps
the first idea is to find the uncertainty distribution Ψ(x) of f (ξ1 , ξ2 , · · ·, ξn ),
and then the uncertain measure is just Ψ(0). However, for convenience, we
may use the following theorem.
f (Φ1−1 (α), · · · , Φm−1 (α), Φm+1−1 (1 − α), · · · , Φn−1 (1 − α)) = 0.    (2.82)
Remark 2.5: Keep in mind that sometimes the equation (2.82) may not
have a root. In this case, if
f (Φ1−1 (α), · · · , Φm−1 (α), Φm+1−1 (1 − α), · · · , Φn−1 (1 − α)) < 0    (2.83)
is strictly increasing with respect to α. See Figure 2.12. Thus its root α may
be estimated by the bisection method:
[Figure 2.12: the left-hand side of (2.82) as a strictly increasing function of α, whose root is located by bisection]
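A minimal sketch of this bisection, assuming hypothetical variables ξ1 ∼ L(0, 2) (increasing argument) and ξ2 ∼ L(1, 3) (decreasing argument) with f (x1 , x2 ) = x1 − x2 ; equation (2.82) then reads Φ1−1 (α) − Φ2−1 (1 − α) = 4α − 3 = 0:

```python
def bisect_root(g, lo=1e-9, hi=1 - 1e-9, tol=1e-10):
    """Find the root of g on (0, 1), assuming g is strictly increasing
    with g(lo) < 0 < g(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Φ1−1(α) = 2α and Φ2−1(1 − α) = 3 − 2α, so g(α) = 4α − 3;
# the root α = 0.75 equals M{ξ1 − ξ2 ≤ 0}
g = lambda alpha: 4 * alpha - 3
print(bisect_root(g))  # ≈ 0.75
```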
Φ1−1 (1 − α) ∨ Φ2−1 (1 − α) = Φ3−1 (α) + 5.    (2.88)
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.89)
ξ = ξ1 + ξ2 + · · · + ξn (2.94)
ξ = ξ1 ξ2 · · · ξn (2.96)
ξ = k-max[ξ1 , ξ2 , · · · , ξn ] (2.108)
Si = ξ1 + ξ2 + · · · + ξi (2.110)
for i = 1, 2, · · · , n. Define
S = f (ξ1 , ξ2 , · · · , ξn ).
= min_{1≤i≤n} Ψi (x).
S = f (ξ1 , ξ2 , · · · , ξn ).
= max_{1≤i≤n} Ψi (x).
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.115)
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.121)
for i = 1, 2, · · · , n, respectively.
Then we have
Case 2: Assume

sup_{f (x1 ,x2 ,··· ,xn )=1} min_{1≤i≤n} νi (xi ) > 0.5.
Then we have
Case 3: Assume

sup_{f (x1 ,x2 ,··· ,xn )=1} min_{1≤i≤n} νi (xi ) = 0.5,
Then we have
Case 4: Assume

sup_{f (x1 ,x2 ,··· ,xn )=1} min_{1≤i≤n} νi (xi ) = 0.5,
Then we have
and

ξ2 (γ) =
  1, if γ = γ1      (2.134)
  0, if γ = γ2
is also a Boolean uncertain variable with
Proof: The corresponding Boolean function for the kth order statistic is
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.142)
M{ξ = 1} = a1 ∧ a2 ∧ · · · ∧ an . (2.143)
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.144)
M{ξ = 1} = a1 ∨ a2 ∨ · · · ∨ an . (2.145)
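The Boolean operational law behind (2.143) and (2.145) can be sketched by brute force over all argument vectors, taking νi (1) = ai and νi (0) = 1 − ai ; the measures below are hypothetical:

```python
from itertools import product

def boolean_system_measure(f, a):
    """M{f(ξ1,...,ξn) = 1} for independent Boolean uncertain variables with
    M{ξi = 1} = a[i], via the Boolean operational law (brute-force sketch)."""
    n = len(a)
    nu = lambda i, x: a[i] if x == 1 else 1 - a[i]
    def sup_min(target):
        vals = [min(nu(i, x[i]) for i in range(n))
                for x in product((0, 1), repeat=n) if f(*x) == target]
        return max(vals, default=0.0)
    s1 = sup_min(1)
    return s1 if s1 < 0.5 else 1 - sup_min(0)

a = [0.8, 0.7, 0.3]
print(boolean_system_measure(lambda x, y, z: min(x, y, z), a))  # 0.3, as (2.143)
print(boolean_system_measure(lambda x, y, z: max(x, y, z), a))  # 0.8, as (2.145)
```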
Definition 2.16 (Liu [77]) Let ξ be an uncertain variable. Then the expected
value of ξ is defined by
E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^0 M{ξ ≤ x} dx    (2.149)
Proof: It follows from the measure inversion theorem that for almost all
numbers x, we have M{ξ ≥ x} = 1 − Φ(x) and M{ξ ≤ x} = Φ(x). By using
the definition of expected value operator, we obtain
E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^0 M{ξ ≤ x} dx = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.
Figure 2.13: E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
Proof: It follows from the integration by parts and Theorem 2.23 that the
expected value is
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx = ∫_0^{+∞} x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).
Figure 2.14: E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ−1 (α) dα
Theorem 2.25 (Liu [84]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then

E[ξ] = ∫_0^1 Φ−1 (α) dα.    (2.154)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.24 that the expected value is

E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ−1 (α) dα.
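Theorem 2.25 lends itself to a numerical check: approximate ∫_0^1 Φ−1 (α) dα by the midpoint rule. A minimal sketch with a hypothetical linear variable L(2, 6), whose expected value should be (2 + 6)/2 = 4:

```python
def expected_value(inv_dist, n=100_000):
    """Midpoint-rule approximation of E[ξ] = ∫_0^1 Φ−1(α) dα (Theorem 2.25)."""
    return sum(inv_dist((i + 0.5) / n) for i in range(n)) / n

# ξ ~ L(2, 6), hypothetical; Φ−1(α) = (1 − α)·2 + α·6
inv = lambda alpha: (1 - alpha) * 2 + alpha * 6
print(abs(expected_value(inv) - 4.0) < 1e-9)  # True
```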
Exercise 2.45: Show that the linear uncertain variable ξ ∼ L(a, b) has an
expected value

E[ξ] = (a + b)/2.    (2.155)

Exercise 2.46: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an expected value

E[ξ] = (a + 2b + c)/4.    (2.156)
Exercise 2.47: Show that the normal uncertain variable ξ ∼ N (e, σ) has
an expected value e, i.e.,
E[ξ] = e. (2.157)
Exercise 2.48: Show that the lognormal uncertain variable ξ ∼ LOGN (e, σ)
has an expected value

E[ξ] =
  σ√3 exp(e) csc(σ√3), if σ < π/√3      (2.158)
  +∞,                  if σ ≥ π/√3.
This formula was first discovered by Dr. Zhongfeng Qin with the help of
Maple software, and was verified again by Dr. Kai Yao through a rigorous
mathematical derivation.
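The formula can also be confirmed numerically through Theorem 2.25, assuming the inverse distribution of LOGN (e, σ) is exp(e + (σ√3/π) ln(α/(1 − α))) (which follows from ln ξ ∼ N (e, σ)) and using hypothetical parameters e = 0, σ = 0.5:

```python
import math

# hypothetical parameters with σ < π/√3, so the closed form applies
e, sigma = 0.0, 0.5
inv = lambda a: math.exp(e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a)))

n = 400_000
numeric = sum(inv((i + 0.5) / n) for i in range(n)) / n       # ∫_0^1 Φ−1(α) dα
closed = sigma * math.sqrt(3) * math.exp(e) / math.sin(sigma * math.sqrt(3))
print(abs(numeric - closed) < 1e-2)  # True
```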
ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.160)
Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.
It is easy to verify that E[ξ] = 0.9, E[η] = 1 and E[ξ + η] = 2. Thus we have
E[ξ + η] > E[ξ] + E[η].
Then
0, if γ = γ1
(ξ + η)(γ) = 4, if γ = γ2
3, if γ = γ3 .
It is easy to verify that E[ξ] = 0.6, E[η] = 1 and E[ξ + η] = 1.5. Thus we
have
E[ξ + η] < E[ξ] + E[η].
Therefore, the independence condition cannot be removed.
It follows that
holds for each α. That is, Φ−1 (α) + Ψ−1 (α) is the inverse uncertainty distri-
bution of f (ξ) + g(ξ). By using Theorem 2.25, we obtain
E[f (ξ) + g(ξ)] = ∫_0^1 (Φ−1 (α) + Ψ−1 (α)) dα = ∫_0^1 Φ−1 (α) dα + ∫_0^1 Ψ−1 (α) dα = E[f (ξ)] + E[g(ξ)].
Some Inequalities
Theorem 2.29 (Liu [77]) Let ξ be an uncertain variable, and let f be a
nonnegative even function. If f is decreasing on (−∞, 0] and increasing on
[0, ∞), then for any given number t > 0, we have
M{|ξ| ≥ t} ≤ E[f (ξ)]/f (t).    (2.172)
Thus for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real
numbers a and b such that
f (x, y) − f (x0 , y0 ) ≤ a(x − x0 ) + b(y − y0 ), ∀x ≥ 0, y ≥ 0.
Letting x0 = E[|ξ|p ], y0 = E[|η|q ], x = |ξ|p and y = |η|q , we have
f (|ξ|p , |η|q ) − f (E[|ξ|p ], E[|η|q ]) ≤ a(|ξ|p − E[|ξ|p ]) + b(|η|q − E[|η|q ]).
Taking the expected values on both sides, we obtain
E[f (|ξ|p , |η|q )] ≤ f (E[|ξ|p ], E[|η|q ]).
Hence the inequality (2.174) holds.
Theorem 2.32 (Liu [77], Minkowski Inequality) Let p be a real number with
p ≥ 1, and let ξ and η be independent uncertain variables. Then
(E[|ξ + η|^p ])^{1/p} ≤ (E[|ξ|^p ])^{1/p} + (E[|η|^p ])^{1/p}.    (2.175)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now
we assume E[|ξ|^p ] > 0 and E[|η|^p ] > 0. It is easy to prove that the function
f (x, y) = (x^{1/p} + y^{1/p})^p is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus
for any point (x0 , y0 ) with x0 > 0 and y0 > 0, there exist two real numbers
a and b such that
Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain
2.8 Variance
The variance of uncertain variable provides a degree of the spread of the
distribution around its expected value. A small value of variance indicates
that the uncertain variable is tightly concentrated around its expected value;
and a large value of variance indicates that the uncertain variable has a wide
spread around its expected value.
Definition 2.17 (Liu [77]) Let ξ be an uncertain variable with finite expected
value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (2.178)
This definition tells us that the variance is just the expected value of
(ξ − e)2 . Since (ξ − e)2 is a nonnegative uncertain variable, we also have
Z +∞
V [ξ] = M{(ξ − e)2 ≥ x}dx. (2.179)
0
M{(ξ − e)2 = 0} = 1.
That is, M{ξ = e} = 1. Conversely, assume M{ξ = e} = 1. Then we
immediately have M{(ξ − e)2 = 0} = 1 and M{(ξ − e)2 ≥ x} = 0 for any
x > 0. Thus Z +∞
V [ξ] = M{(ξ − e)2 ≥ x}dx = 0.
0
The theorem is proved.
M{|ξ − E[ξ]| ≥ t} ≤ V [ξ]/t².    (2.182)
Proof: It is a special case of Theorem 2.29 when the uncertain variable ξ is
replaced with ξ − E[ξ], and f (x) = x2 .
Proof: This theorem is based on Stipulation 2.1 that says the variance of ξ
is

V [ξ] = ∫_0^{+∞} (1 − Φ(e + √y)) dy + ∫_0^{+∞} Φ(e − √y) dy.

Substituting e + √y with x and y with (x − e)², the change of variables and
integration by parts produce

∫_0^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)² = ∫_e^{+∞} (x − e)² dΦ(x).

Similarly, substituting e − √y with x and y with (x − e)², we obtain

∫_0^{+∞} Φ(e − √y) dy = ∫_e^{−∞} Φ(x) d(x − e)² = ∫_{−∞}^e (x − e)² dΦ(x).
Theorem 2.39 (Yao [178]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ and finite expected value e. Then

V [ξ] = ∫_0^1 (Φ−1 (α) − e)² dα.    (2.185)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.38 that the variance is

V [ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫_0^1 (Φ−1 (α) − e)² dα.
Exercise 2.58: Show that the linear uncertain variable ξ ∼ L(a, b) has a
variance

V [ξ] = (b − a)²/12.    (2.186)
Exercise 2.59: Show that the normal uncertain variable ξ ∼ N (e, σ) has a
variance

V [ξ] = σ².    (2.187)
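Formula (2.185) gives a quick numerical route to such variances. A minimal sketch with a hypothetical linear variable L(2, 6), whose variance should be (6 − 2)²/12 = 4/3:

```python
def variance(inv_dist, e, n=100_000):
    """Midpoint-rule approximation of V[ξ] = ∫_0^1 (Φ−1(α) − e)² dα (2.185)."""
    return sum((inv_dist((i + 0.5) / n) - e) ** 2 for i in range(n)) / n

a, b = 2.0, 6.0                              # hypothetical ξ ~ L(2, 6)
inv = lambda al: (1 - al) * a + al * b
print(abs(variance(inv, (a + b) / 2) - (b - a) ** 2 / 12) < 1e-6)  # True
```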
Remark 2.9: If ξ and η are independent linear uncertain variables, then the
condition (2.188) is met. If they are independent normal uncertain variables,
then the condition (2.188) is also met.
2.9 Moment
Definition 2.18 (Liu [77]) Let ξ be an uncertain variable and let k be a
positive integer. Then E[ξ k ] is called the k-th moment of ξ.
Proof: When k is an odd number, Theorem 2.40 says that the k-th moment
is

E[ξ^k ] = ∫_0^{+∞} (1 − Φ(y^{1/k})) dy − ∫_{−∞}^0 Φ(y^{1/k}) dy.

Substituting y^{1/k} with x and y with x^k , the change of variables and integration
by parts produce

∫_0^{+∞} (1 − Φ(y^{1/k})) dy = ∫_0^{+∞} (1 − Φ(x)) dx^k = ∫_0^{+∞} x^k dΦ(x)

and

∫_{−∞}^0 Φ(y^{1/k}) dy = ∫_{−∞}^0 Φ(x) dx^k = − ∫_{−∞}^0 x^k dΦ(x).

Thus we have

E[ξ^k ] = ∫_0^{+∞} x^k dΦ(x) + ∫_{−∞}^0 x^k dΦ(x) = ∫_{−∞}^{+∞} x^k dΦ(x).
When k is an even number, the theorem is based on Stipulation 2.2 that says
the k-th moment is

E[ξ^k ] = ∫_0^{+∞} (1 − Φ(y^{1/k}) + Φ(−y^{1/k})) dy.

Substituting y^{1/k} with x and y with x^k , the change of variables and integration
by parts produce

∫_0^{+∞} (1 − Φ(y^{1/k})) dy = ∫_0^{+∞} (1 − Φ(x)) dx^k = ∫_0^{+∞} x^k dΦ(x).

Similarly, substituting −y^{1/k} with x and y with x^k , we obtain

∫_0^{+∞} Φ(−y^{1/k}) dy = ∫_{−∞}^0 Φ(x) dx^k = ∫_{−∞}^0 x^k dΦ(x).
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.41 that the k-th moment is

E[ξ^k ] = ∫_{−∞}^{+∞} x^k dΦ(x) = ∫_0^1 (Φ−1 (α))^k dα.
Exercise 2.61: Show that the second moment of linear uncertain variable
ξ ∼ L(a, b) is

E[ξ²] = (a² + ab + b²)/3.    (2.194)
Exercise 2.62: Show that the second moment of normal uncertain variable
ξ ∼ N (e, σ) is

E[ξ²] = e² + σ².    (2.195)
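The k-th moment formula E[ξ^k ] = ∫_0^1 (Φ−1 (α))^k dα above can likewise be checked numerically; a minimal sketch for the second moment of a hypothetical L(2, 6), which should equal (a² + ab + b²)/3:

```python
a, b = 2.0, 6.0                              # hypothetical ξ ~ L(2, 6)
n = 100_000
inv = lambda al: (1 - al) * a + al * b       # Φ−1(α) of L(a, b)
second_moment = sum(inv((i + 0.5) / n) ** 2 for i in range(n)) / n
print(abs(second_moment - (a * a + a * b + b * b) / 3) < 1e-6)  # True
```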
2.10 Distance
Definition 2.19 (Liu [77]) The distance between uncertain variables ξ and
η is defined as
d(ξ, η) = E[|ξ − η|]. (2.196)
That is, the distance between ξ and η is just the expected value of |ξ − η|.
Since |ξ − η| is a nonnegative uncertain variable, we always have
d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx. (2.197)
Theorem 2.43 (Liu [77]) Let ξ, η, τ be uncertain variables, and let d(·, ·) be
the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ ) + 2d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition. Now we prove the part (d). It follows from the subadditivity axiom that
d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx
≤ ∫_0^{+∞} M{|ξ − τ| + |τ − η| ≥ x} dx
≤ ∫_0^{+∞} M{(|ξ − τ| ≥ x/2) ∪ (|τ − η| ≥ x/2)} dx
≤ ∫_0^{+∞} (M{|ξ − τ| ≥ x/2} + M{|τ − η| ≥ x/2}) dx
= 2d(ξ, τ) + 2d(τ, η).
It is easy to verify that d(ξ, τ ) = d(τ, η) = 0.5 and d(ξ, η) = 1.5. Thus
d(ξ, η) = 1.5(d(ξ, τ ) + d(τ, η)).
A conjecture is d(ξ, η) ≤ 1.5(d(ξ, τ )+d(τ, η)) for arbitrary uncertain variables
ξ, η and τ . This is an open problem.
where Υ^{-1}(α) is the inverse uncertainty distribution of ξ − η.
Proof: Substituting Υ(x) with α and x with Υ^{-1}(α), it follows from the change of variables and Theorem 2.44 that the distance is
d(ξ, η) = ∫_{−∞}^{+∞} |x| dΥ(x) = ∫_0^1 |Υ^{-1}(α)| dα.
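For independent ξ and η with regular distributions Φ and Ψ, the operational law gives the inverse distribution of ξ − η as Υ^{-1}(α) = Φ^{-1}(α) − Ψ^{-1}(1 − α), so the distance is a one-dimensional integral. The sketch below is illustrative only, with hypothetical linear distributions.

```python
import math

def distance(phi_inv, psi_inv, n=100_000):
    # d(xi, eta) = ∫_0^1 |Υ^{-1}(α)| dα with
    # Υ^{-1}(α) = Φ^{-1}(α) - Ψ^{-1}(1-α)  (operational law for ξ - η)
    h = 1.0 / n
    return sum(abs(phi_inv(a) - psi_inv(1 - a))
               for a in ((i + 0.5) * h for i in range(n))) * h

lin = lambda a, b: (lambda alpha: (1 - alpha) * a + alpha * b)

# xi ~ L(0, 2), eta ~ L(1, 3): Υ^{-1}(α) = 2α - (3 - 2α) = 4α - 3,
# so d(xi, eta) = ∫_0^1 |4α - 3| dα = 1.25
d = distance(lin(0, 2), lin(1, 3))
```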
2.11 Entropy
This section defines an entropy as the degree of difficulty of predicting the
realization of an uncertain variable.
Definition 2.20 (Liu [80]) Suppose that ξ is an uncertain variable with uncertainty distribution Φ. Then its entropy is defined by
H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx (2.204)
where S(t) = −t ln t − (1 − t) ln(1 − t).
[Figure: the entropy function S(t) = −t ln t − (1 − t) ln(1 − t), which rises from 0 at t = 0 to its maximum ln 2 at t = 0.5 and falls back to 0 at t = 1.]
Example 2.20: Let ξ be a linear uncertain variable L(a, b). Then its entropy is
H[ξ] = −∫_a^b ((x − a)/(b − a) ln((x − a)/(b − a)) + (b − x)/(b − a) ln((b − x)/(b − a))) dx = (b − a)/2. (2.206)
Exercise 2.65: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has an entropy
H[ξ] = (c − a)/2. (2.207)
Exercise 2.66: Show that the normal uncertain variable ξ ∼ N(e, σ) has an entropy
H[ξ] = πσ/√3. (2.208)
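These entropies can be checked directly against Definition 2.20 by numerical integration. The sketch below (an illustration with hypothetical parameters) integrates S(Φ(x)) for a linear and a normal uncertain variable; the normal integral is truncated to a wide finite interval, which is harmless because S(Φ(x)) decays exponentially in the tails.

```python
import math

def S(t):
    # S(t) = -t ln t - (1 - t) ln(1 - t), with S(0) = S(1) = 0
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

def entropy(dist, lo, hi, n=200_000):
    # H[xi] = ∫ S(Φ(x)) dx, midpoint rule on [lo, hi]
    h = (hi - lo) / n
    return sum(S(dist(lo + (i + 0.5) * h)) for i in range(n)) * h

linear_dist = lambda x: min(max(x / 4.0, 0.0), 1.0)                          # L(0, 4)
normal_dist = lambda x: 1.0 / (1.0 + math.exp(-math.pi * x / math.sqrt(3)))  # N(0, 1)

h_lin = entropy(linear_dist, 0.0, 4.0)     # (b - a)/2 = 2
h_nor = entropy(normal_dist, -40.0, 40.0)  # pi*sigma/sqrt(3) ≈ 1.8138
```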
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.
Proof: It is clear that S(α) is a derivable function whose derivative has the form
S′(α) = −ln(α/(1 − α)).
Since
S(Φ(x)) = ∫_0^{Φ(x)} S′(α) dα = −∫_{Φ(x)}^1 S′(α) dα,
we have
H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx = ∫_{−∞}^0 ∫_0^{Φ(x)} S′(α) dα dx − ∫_0^{+∞} ∫_{Φ(x)}^1 S′(α) dα dx.
ξ = f(ξ1, ξ2, · · · , ξn) (2.212)
has an entropy
H[ξ] = ∫_0^1 f(Φ_1^{-1}(α), · · · , Φ_m^{-1}(α), Φ_{m+1}^{-1}(1 − α), · · · , Φ_n^{-1}(1 − α)) ln(α/(1 − α)) dα.
Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].
The theorem is proved.
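The inverse-distribution entropy formula and the positive linearity just proved can both be verified numerically. The sketch below (illustrative only) computes H[ξ] = ∫_0^1 Φ^{-1}(α) ln(α/(1 − α)) dα for a hypothetical ξ ∼ L(0, 4) and for 3ξ ∼ L(0, 12), checking H[3ξ] = 3H[ξ].

```python
import math

def entropy_inv(inv_dist, n=200_000):
    # H[xi] = ∫_0^1 Φ^{-1}(α) ln(α/(1-α)) dα, midpoint rule;
    # the logarithmic endpoint singularity is integrable
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * h
        total += inv_dist(a) * math.log(a / (1 - a))
    return total * h

lin = lambda a, b: (lambda alpha: (1 - alpha) * a + alpha * b)

H1 = entropy_inv(lin(0, 4))    # H[xi] = (b - a)/2 = 2
H3 = entropy_inv(lin(0, 12))   # 3*xi ~ L(0, 12): H = 3*H1 = 6
```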
Thus
Φ(x|(t, +∞)) = M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} = 0
and
M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} ≤ Φ(x)/(1 − Φ(t)).
It follows from the maximum uncertainty principle that
Φ(x|(t, +∞)) = (Φ(x)/(1 − Φ(t))) ∧ 0.5.
Thus
Exercise 2.70: Let ξ be a linear uncertain variable L(a, b), and let t be a real number with a < t < b. Show that the conditional uncertainty distribution of ξ given ξ > t is
Φ(x|(t, +∞)) =
  0, if x ≤ t
  ((x − a)/(b − t)) ∧ 0.5, if t < x ≤ (b + t)/2
  ((x − t)/(b − t)) ∧ 1, if (b + t)/2 ≤ x.
[Figure: the conditional uncertainty distribution Φ(x|(t, +∞)) of Exercise 2.70 — zero up to t, then increasing to 1.]
and
M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} ≤ (1 − Φ(x))/Φ(t),
i.e.,
1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} ≥ (Φ(x) + Φ(t) − 1)/Φ(t).
It follows from the maximum uncertainty principle that
Φ(x|(−∞, t]) = ((Φ(x) + Φ(t) − 1)/Φ(t)) ∨ 0.5.
Thus
Φ(x|(−∞, t]) = 1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = 1 − 0 = 1.
The theorem is proved.
Exercise 2.71: Let ξ be a linear uncertain variable L(a, b), and let t be a real number with a < t < b. Show that the conditional uncertainty distribution of ξ given ξ ≤ t is
Φ(x|(−∞, t]) =
  ((x − a)/(t − a)) ∨ 0, if x ≤ (a + t)/2
  (1 − (b − x)/(t − a)) ∨ 0.5, if (a + t)/2 ≤ x < t
  1, if x ≥ t.
Definition 2.22 (Liu [77]) The uncertain sequence {ξi } is said to be con-
vergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that
[Figure: the conditional uncertainty distribution Φ(x|(−∞, t]) of Exercise 2.71 — increasing and jumping to 1 at t.]
Definition 2.23 (Liu [77]) The uncertain sequence {ξi} is said to be convergent in measure to ξ if
lim_{i→∞} M{|ξi − ξ| ≥ ε} = 0 (2.221)
for every ε > 0.
M{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|]/ε → 0
as i → ∞. Thus {ξi} converges in measure to ξ. The theorem is proved.
for each i. That is, the sequence {ξi } does not converge in measure to ξ.
Example 2.26: Convergence a.s. does not imply convergence in mean. Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with power set and
M{Λ} = Σ_{γj ∈ Λ} 1/2^j.
Example 2.27: Convergence in mean does not imply convergence a.s. Take
an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue
measure. For any positive integer i, there is an integer j such that i = 2^j + k, where k is an integer between 0 and 2^j − 1. Define uncertain variables as
ξi(γ) = 1 if k/2^j ≤ γ ≤ (k + 1)/2^j, and 0 otherwise.
It is clear that Φi (x) does not converge to Φ(x) at x > 0. That is, the
sequence {ξi } does not converge in distribution to ξ.
Write
B = {B ⊂ ℜ^k | {ξ ∈ B} is an event}.
Next, the class B is a σ-algebra over ℜ^k because (i) we have ℜ^k ∈ B since {ξ ∈ ℜ^k} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and
{ξ ∈ B^c} = {ξ ∈ B}^c
Remark 2.10: However, the equation (2.227) does not imply that the uncertain variables are independent. For example, let ξ be an uncertain variable with uncertainty distribution Φ. Then the joint uncertainty distribution Ψ of the uncertain vector (ξ, ξ) is
Ψ(x1, x2) = Φ(x1) ∧ Φ(x2)
for any real numbers x1 and x2. But, generally speaking, an uncertain variable is not independent of itself.
τ = (τ1 , τ2 , · · · , τm ) (2.230)
ξ = e + στ (2.236)
for some real vector e and some real matrix σ, where τ is a standard normal uncertain vector. Note that ξ, e and τ are understood as column vectors. Please also note that for every index i, the component ξi is a normal uncertain variable with expected value ei and standard deviation
Σ_{j=1}^m |σij|. (2.237)
η = c + Dξ (2.238)
ξ =
( ξ11 ξ12 · · · ξ1q )
( ξ21 ξ22 · · · ξ2q )
( · · · )
( ξp1 ξp2 · · · ξpq ) (2.239)
where ξij, i = 1, 2, · · · , p, j = 1, 2, · · · , q, are uncertain variables.
Proof: Suppose that ξ is defined on the uncertainty space (Γ, L, M). Write
B = {B ⊂ ℜ^{p×q} | {ξ ∈ B} is an event}.
For any open intervals (aij, bij), i = 1, 2, · · · , p, j = 1, 2, · · · , q, the set of matrices whose (i, j) entries fall in (aij, bij) satisfies
{ξ ∈ [(aij, bij)]_{p×q}} = ∩_{i=1}^p ∩_{j=1}^q {ξij ∈ (aij, bij)}
is an event. Next, the class B is a σ-algebra over ℜ^{p×q} because (i) we have ℜ^{p×q} ∈ B since {ξ ∈ ℜ^{p×q}} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and
{ξ ∈ B^c} = {ξ ∈ B}^c
is an event. This means that B^c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · · , then {ξ ∈ Bi} are events and
{ξ ∈ ∪_{i=1}^∞ Bi} = ∪_{i=1}^∞ {ξ ∈ Bi}
Exercise 2.74: Let (ξij )3×3 and (ηij )3×3 be independent uncertain matrices.
Show that (ξ11 , ξ12 ) and (η31 , η32 , η33 ) are independent uncertain vectors.
Exercise 2.75: Let (ξij)3×3 and (ηij)3×3 be independent uncertain matrices. Show that
( ξ11 ξ12 ξ13 ) . . . . ( η11 η12 )
( ξ21 ξ22 ξ23 ) . and . ( η21 η22 )
. . . . . . . . . . . . ( η31 η32 )
are independent uncertain matrices.
Theorem 2.63 (Liu [98]) The p × q uncertain matrices ξ1, ξ2, · · · , ξn are independent if and only if
M{∪_{i=1}^n (ξi ∈ Bi)} = ∨_{i=1}^n M{ξi ∈ Bi} (2.241)
Uncertain Programming
Uncertain programming was founded by Liu [79] in 2009. This chapter will provide the theory of uncertain programming and present some uncertain programming models for the machine scheduling problem, the vehicle routing problem, and the project scheduling problem.
Definition 3.1 (Liu [79]) A vector x is called a feasible solution to the un-
certain programming model (3.3) if
for j = 1, 2, · · · , p.
g(x, Φ_1^{-1}(α), · · · , Φ_k^{-1}(α), Φ_{k+1}^{-1}(1 − α), · · · , Φ_n^{-1}(1 − α)) ≤ 0. (3.9)
Proof: It follows from the operational law of uncertain variables that the
inverse uncertainty distribution of g(x, ξ1 , ξ2 , · · · , ξn ) is
Thus (3.8) holds if and only if Ψ−1 (α) ≤ 0. The theorem is thus verified.
where
h_i^+(x) = hi(x) if hi(x) > 0, and 0 if hi(x) ≤ 0, (3.16)
h_i^−(x) = −hi(x) if hi(x) < 0, and 0 if hi(x) ≥ 0 (3.17)
for i = 1, 2, · · · , n.
Note that √(x1 + ξ1) + √(x2 + ξ2) + √(x3 + ξ3) is a strictly increasing function with respect to ξ1, ξ2, ξ3, and (x1 + η1)^2 + (x2 + η2)^2 + (x3 + η3)^2 is a strictly increasing function with respect to η1, η2, η3. It is easy to verify that the uncertain programming model can be converted to the crisp model,

max_{x1,x2,x3} ∫_0^1 (√(x1 + Φ_1^{-1}(α)) + √(x2 + Φ_2^{-1}(α)) + √(x3 + Φ_3^{-1}(α))) dα
subject to:
(x1 + Ψ_1^{-1}(0.9))^2 + (x2 + Ψ_2^{-1}(0.9))^2 + (x3 + Ψ_3^{-1}(0.9))^2 ≤ 100
x1, x2, x3 ≥ 0

where Φ_1^{-1}, Φ_2^{-1}, Φ_3^{-1}, Ψ_1^{-1}, Ψ_2^{-1}, Ψ_3^{-1} are inverse uncertainty distributions of the uncertain variables ξ1, ξ2, ξ3, η1, η2, η3, respectively. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and obtain an optimal solution
Example 3.2: Assume that x1 and x2 are decision variables, and ξ1 and ξ2 are iid linear uncertain variables L(0, π/2). Consider the uncertain programming,

min_{x1,x2} E[x1 sin(x1 − ξ1) − x2 cos(x2 + ξ2)]
subject to:
0 ≤ x1 ≤ π/2, 0 ≤ x2 ≤ π/2.

It is clear that x1 sin(x1 − ξ1) − x2 cos(x2 + ξ2) is strictly decreasing with respect to ξ1 and strictly increasing with respect to ξ2. Thus the uncertain programming is equivalent to the crisp model,

min_{x1,x2} ∫_0^1 (x1 sin(x1 − Φ_1^{-1}(1 − α)) − x2 cos(x2 + Φ_2^{-1}(α))) dα
subject to:
0 ≤ x1 ≤ π/2, 0 ≤ x2 ≤ π/2

where Φ_1^{-1}, Φ_2^{-1} are inverse uncertainty distributions of ξ1, ξ2, respectively. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and obtain an optimal solution
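Since the toolbox itself is not reproduced here, the crisp model of Example 3.2 can also be attacked with a plain quadrature-plus-grid-search sketch (illustrative only; the grid resolution and quadrature size are arbitrary choices, and the grid minimizer is not claimed to be the exact optimum). With Φ^{-1}(α) = απ/2 for L(0, π/2), a coarse grid already shows the minimum lies in the interior of the box.

```python
import math

PHI_INV = lambda alpha: alpha * math.pi / 2.0  # inverse distribution of L(0, pi/2)

def objective(x1, x2, n=300):
    # ∫_0^1 [x1 sin(x1 - Φ^{-1}(1-α)) - x2 cos(x2 + Φ^{-1}(α))] dα, midpoint rule
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * h
        total += x1 * math.sin(x1 - PHI_INV(1 - a)) - x2 * math.cos(x2 + PHI_INV(a))
    return total * h

# coarse grid search over the feasible box [0, pi/2] x [0, pi/2]
grid = [i * math.pi / 64.0 for i in range(33)]
best_val, best_x1, best_x2 = min(
    (objective(x1, x2), x1, x2) for x1 in grid for x2 in grid)
```

Because the objective separates into identical terms in x1 and x2, the grid minimizer is symmetric and strictly better than the corner (0, 0).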
[Figure: a Gantt chart of a machine schedule — machine M1 processes jobs J1, J2, J3; machine M2 processes J4, J5; machine M3 processes J6, J7; the makespan is the time at which the last machine finishes.]
as follows,
[Figure: the decision vector — jobs x1, x2, · · · , x7 separated by the pointers y0, y1, y2, y3 into groups assigned to machines M-1, M-2 and M-3.]
Completion Times
and
C_{x_{y_{k−1}+j}}(x, y, ξ) = C_{x_{y_{k−1}+j−1}}(x, y, ξ) + ξ_{x_{y_{k−1}+j}, k} (3.21)
for 2 ≤ j ≤ yk − yk−1.
If the machine k is used, then the completion time C_{x_{y_{k−1}+1}}(x, y, ξ) of job x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is
Ψ^{-1}_{x_{y_{k−1}+1}}(x, y, α) = Φ^{-1}_{x_{y_{k−1}+1}, k}(α). (3.22)
Generally, suppose the completion time C_{x_{y_{k−1}+j−1}}(x, y, ξ) has an inverse uncertainty distribution Ψ^{-1}_{x_{y_{k−1}+j−1}}(x, y, α). Then the completion time C_{x_{y_{k−1}+j}}(x, y, ξ) has an inverse uncertainty distribution
Ψ^{-1}_{x_{y_{k−1}+j}}(x, y, α) = Ψ^{-1}_{x_{y_{k−1}+j−1}}(x, y, α) + Φ^{-1}_{x_{y_{k−1}+j}, k}(α). (3.23)
Makespan
Note that, for each k (1 ≤ k ≤ m), the value Cxyk (x, y, ξ) is just the time
that the machine k finishes all jobs assigned to it. Thus the makespan of the
schedule (x, y) is determined by
Since Υ^{-1}(x, y, α) is the inverse uncertainty distribution of f(x, y, ξ), the machine scheduling model is simplified as follows,

min_{x,y} ∫_0^1 Υ^{-1}(x, y, α) dα
subject to:
1 ≤ xi ≤ n, i = 1, 2, · · · , n (3.27)
xi ≠ xj, i ≠ j, i, j = 1, 2, · · · , n
0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
xi, yj, i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers.
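For a fixed schedule, Υ^{-1}(x, y, α) is just the largest of the per-machine sums of inverse processing-time distributions, so the expected makespan is a one-dimensional integral. The sketch below is illustrative only and uses hypothetical linear processing times (the distribution table of the numerical experiment is not reproduced here).

```python
def lin_inv(a, b):
    # inverse distribution of L(a, b)
    return lambda alpha: (1 - alpha) * a + alpha * b

# hypothetical processing times: job i on machine k has xi_{ik} ~ L(i + k, i + k + 2)
def phi_inv(i, k, alpha):
    return lin_inv(i + k, i + k + 2)(alpha)

def makespan_inv(plan, alpha):
    # Υ^{-1}(α) = max over machines k of Σ_{jobs i on k} Φ_{ik}^{-1}(α)
    return max(sum(phi_inv(i, k, alpha) for i in jobs) for k, jobs in plan.items())

def expected_makespan(plan, n=10_000):
    # ∫_0^1 Υ^{-1}(α) dα by the midpoint rule
    h = 1.0 / n
    return sum(makespan_inv(plan, (i + 0.5) * h) for i in range(n)) * h

plan = {1: [1, 2, 3], 2: [4, 5], 3: [6, 7]}   # machine -> assigned jobs
em = expected_makespan(plan)                   # here machine 3 dominates: 19 + 4α, mean 21
```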
Numerical Experiment
Assume that there are 3 machines and 7 jobs with the following linear un-
certain processing times
where i is the index of jobs and k is the index of machines. The Matlab
Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the
optimal solution is
Figure 3.3: A Vehicle Routing Plan with Single Depot and 7 Customers
k = 1, 2, · · · , m: vehicles;
Dij: travel distance from customer i to customer j, i, j = 0, 1, 2, · · · , n;
Tij: uncertain travel time from customer i to customer j, i, j = 0, 1, 2, · · · , n;
Φij: uncertainty distribution of Tij, i, j = 0, 1, 2, · · · , n;
[ai, bi]: time window of customer i, i = 1, 2, · · · , n.
Operational Plan
[Figure: the decision vector — customers x1, x2, · · · , x7 separated by the pointers y0, y1, y2, y3 into tours served by vehicles V-1, V-2 and V-3.]
It is clear that this type of representation is intuitive, and the total number
of decision variables is n + 2m − 1. We also note that the above decision
variables x, y and t ensure that: (a) each vehicle will be used at most one
time; (b) all tours begin and end at the depot; (c) each customer will be
visited by one and only one vehicle; and (d) there is no subtour.
Arrival Times
Let fi(x, y, t) be the arrival time of some vehicle at customer i for i = 1, 2, · · · , n. We remind readers that fi(x, y, t) are determined by the decision variables x, y and t, i = 1, 2, · · · , n. When a vehicle arrives at a customer, unloading can start either immediately or later, so the calculation of fi(x, y, t) is heavily dependent on the operational strategy. Here we assume
that the customer does not permit a delivery earlier than the time window.
That is, the vehicle will wait to unload until the beginning of the time window
if it arrives before the time window. If a vehicle arrives at a customer after
the beginning of the time window, unloading will start immediately. For each
k with 1 ≤ k ≤ m, if vehicle k is used (i.e., yk > yk−1 ), then we have
and
f_{x_{y_{k−1}+j}}(x, y, t) = (f_{x_{y_{k−1}+j−1}}(x, y, t) ∨ a_{x_{y_{k−1}+j−1}}) + T_{x_{y_{k−1}+j−1}, x_{y_{k−1}+j}}
for 2 ≤ j ≤ yk − yk−1. If the vehicle k is used, i.e., yk > yk−1, then the arrival time f_{x_{y_{k−1}+1}}(x, y, t) at the customer x_{y_{k−1}+1} is an uncertain variable whose inverse uncertainty distribution is
Ψ^{-1}_{x_{y_{k−1}+1}}(x, y, t, α) = tk + Φ^{-1}_{0, x_{y_{k−1}+1}}(α).
Generally, suppose the arrival time f_{x_{y_{k−1}+j−1}}(x, y, t) has an inverse uncertainty distribution Ψ^{-1}_{x_{y_{k−1}+j−1}}(x, y, t, α). Then f_{x_{y_{k−1}+j}}(x, y, t) has an inverse uncertainty distribution
Ψ^{-1}_{x_{y_{k−1}+j}}(x, y, t, α) = (Ψ^{-1}_{x_{y_{k−1}+j−1}}(x, y, t, α) ∨ a_{x_{y_{k−1}+j−1}}) + Φ^{-1}_{x_{y_{k−1}+j−1}, x_{y_{k−1}+j}}(α).
Travel Distance
Let g(x, y) be the total travel distance of all vehicles. Then we have
g(x, y) = Σ_{k=1}^m gk(x, y) (3.29)
where
gk(x, y) = D_{0, x_{y_{k−1}+1}} + Σ_{j=y_{k−1}+1}^{y_k − 1} D_{x_j, x_{j+1}} + D_{x_{y_k}, 0} if yk > yk−1, and 0 if yk = yk−1,
for k = 1, 2, · · · , m.
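The distance formula above translates directly into code. The sketch below (illustrative) evaluates g(x, y) for the representation used in this section, with the distances Dij = |i − j| and the plan x = (1, 3, 2, 5, 7, 4, 6), y = (2, 5) reported in the numerical experiment.

```python
def total_distance(x, y, dist, m):
    # x: tuple of customers x1..xn; y: cut points (y1, ..., y_{m-1});
    # vehicle k serves customers x_{y_{k-1}+1} .. x_{y_k}
    bounds = (0,) + tuple(y) + (len(x),)
    total = 0
    for k in range(m):
        lo, hi = bounds[k], bounds[k + 1]
        if hi > lo:  # vehicle k+1 is used (y_k > y_{k-1})
            tour = x[lo:hi]
            total += dist(0, tour[0])                    # depot -> first customer
            total += sum(dist(tour[j], tour[j + 1])      # along the tour
                         for j in range(len(tour) - 1))
            total += dist(tour[-1], 0)                   # last customer -> depot
    return total

d = lambda i, j: abs(i - j)                               # Dij = |i - j|
g = total_distance((1, 3, 2, 5, 7, 4, 6), (2, 5), d, 3)   # tours give 6 + 14 + 12 = 32
```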
If we want to minimize the total travel distance of all vehicles subject to the time window constraint, then we have the following vehicle routing model,

min_{x,y,t} g(x, y)
subject to:
M{fi(x, y, t) ≤ bi} ≥ αi, i = 1, 2, · · · , n
1 ≤ xi ≤ n, i = 1, 2, · · · , n (3.31)
xi ≠ xj, i ≠ j, i, j = 1, 2, · · · , n
0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
xi, yj, i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers

which is equivalent to

min_{x,y,t} g(x, y)
subject to:
Ψ_i^{-1}(x, y, t, αi) ≤ bi, i = 1, 2, · · · , n
1 ≤ xi ≤ n, i = 1, 2, · · · , n (3.32)
xi ≠ xj, i ≠ j, i, j = 1, 2, · · · , n
0 ≤ y1 ≤ y2 ≤ · · · ≤ ym−1 ≤ n
xi, yj, i = 1, 2, · · · , n, j = 1, 2, · · · , m − 1, integers.
Numerical Experiment
Assume that there are 3 vehicles and 7 customers with time windows shown in
Table 3.1, and each customer is visited within time windows with confidence
level 0.90.
We also assume that the distances are Dij = |i − j| for i, j = 0, 1, 2, · · · , 7,
and the travel times are normal uncertain variables
x∗ = (1, 3, 2, 5, 7, 4, 6),
y ∗ = (2, 5), (3.33)
t∗ = (6 : 18, 4 : 18, 8 : 18).
[Figure 3.5: a project network with 8 milestones (nodes 1–8) and 11 activities.]
Starting Times
For simplicity, we write ξ = {ξij : (i, j) ∈ A} and x = (x1 , x2 , · · · , xn ). Let
Ti (x, ξ) denote the starting time of all activities (i, j) in A. According to the
assumptions, the starting time of the total project (i.e., the starting time of all activities (1, j) in A) should be
T1(x, ξ) = x1 (3.34)
whose inverse uncertainty distribution is
Ψ_1^{-1}(x, α) = x1. (3.35)
From the starting time T1 (x, ξ), we deduce that the starting time of activity
(2, 5) is
T2(x, ξ) = x2 ∨ (x1 + ξ12) (3.36)
whose inverse uncertainty distribution may be written as
Ψ_2^{-1}(x, α) = x2 ∨ (x1 + Φ_12^{-1}(α)). (3.37)
Generally, suppose that the starting time Tk(x, ξ) of all activities (k, i) in A has an inverse uncertainty distribution Ψ_k^{-1}(x, α). Then the starting time Ti(x, ξ) of all activities (i, j) in A should be
Ti(x, ξ) = xi ∨ max_{(k,i)∈A} (Tk(x, ξ) + ξki) (3.38)
whose inverse uncertainty distribution is
Ψ_i^{-1}(x, α) = xi ∨ max_{(k,i)∈A} (Ψ_k^{-1}(x, α) + Φ_ki^{-1}(α)). (3.39)
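The recursion (3.39) can be evaluated by a memoized pass over the project network. The sketch below is illustrative only — it uses a hypothetical four-node network with linear duration distributions, not the 8-milestone example of the book.

```python
def lin_inv(a, b):
    # inverse distribution of L(a, b)
    return lambda alpha: (1 - alpha) * a + alpha * b

def start_inv(i, x, phi_inv, preds, alpha, memo=None):
    # Ψ_i^{-1}(x, α) = x_i ∨ max_{(k,i) in A} (Ψ_k^{-1}(x, α) + Φ_{ki}^{-1}(α))
    if memo is None:
        memo = {}
    if i not in memo:
        best = x[i]
        for k in preds.get(i, ()):
            best = max(best, start_inv(k, x, phi_inv, preds, alpha, memo)
                             + phi_inv[(k, i)](alpha))
        memo[i] = best
    return memo[i]

# hypothetical network 1 -> {2, 3} -> 4 with linear duration distributions
phi = {(1, 2): lin_inv(1, 3), (1, 3): lin_inv(2, 4),
       (2, 4): lin_inv(2, 4), (3, 4): lin_inv(1, 3)}
preds = {2: [1], 3: [1], 4: [2, 3]}
x = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}

s4 = start_inv(4, x, phi, preds, 0.5)   # max(0, 2 + 3, 3 + 2) = 5
```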
Completion Time
The completion time T(x, ξ) of the total project (i.e., the finish time of all activities (k, n + 1) in A) is
Total Cost
Based on the completion time T(x, ξ), the total cost of the project can be written as
C(x, ξ) = Σ_{(i,j)∈A} cij (1 + r)^{⌈T(x,ξ) − xi⌉} (3.42)
where ⌈a⌉ represents the minimal integer greater than or equal to a. Note that C(x, ξ) is a discrete uncertain variable whose inverse uncertainty distribution is
Υ^{-1}(x, α) = Σ_{(i,j)∈A} cij (1 + r)^{⌈Ψ^{-1}(x, α) − xi⌉} (3.43)
Numerical Experiment
Consider a project scheduling problem shown by Figure 3.5 in which there are
8 milestones and 11 activities. Assume that all duration times of activities
are linear uncertain variables,
cij = i + j, ∀(i, j) ∈ A.
In addition, we also suppose that the interest rate is r = 0.02, the due date is
T0 = 60, and the confidence level is α0 = 0.85. The Matlab Uncertainty Tool-
box (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution
is
x∗ = (7, 24, 17, 16, 35, 33, 30). (3.46)
In other words, the optimal allocating times of all loans needed for all activ-
ities are shown in Table 3.2 whose expected total cost is 190.6, and
Date 7 16 17 24 30 33 35
Node 1 4 3 2 7 6 5
Loan 12 11 27 7 15 14 13
min_x Σ_{j=1}^l Pj Σ_{i=1}^m (uij d_i^+ + vij d_i^−)
subject to:
E[fi(x, ξ)] + d_i^− − d_i^+ = bi, i = 1, 2, · · · , m (3.52)
M{gj(x, ξ) ≤ 0} ≥ αj, j = 1, 2, · · · , p
d_i^+, d_i^− ≥ 0, i = 1, 2, · · · , m
where Pj are the preemptive priority factors, uij and vij are the weighting factors, d_i^+ are the positive deviations, d_i^− are the negative deviations, fi are the functions in goal constraints, gj are the functions in real constraints, bi are the target values, αj are the confidence levels, l is the number of priorities, m is the number of goal constraints, and p is the number of real constraints.
and
d_i^− = bi − E[fi(x, ξ)] if E[fi(x, ξ)] < bi, and 0 otherwise (3.54)
for each i. Sometimes, the objective function in the goal programming model is written as follows,
lexmin { Σ_{i=1}^m (ui1 d_i^+ + vi1 d_i^−), Σ_{i=1}^m (ui2 d_i^+ + vi2 d_i^−), · · · , Σ_{i=1}^m (uil d_i^+ + vil d_i^−) }.
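The deviations and the lexicographic objective can be sketched as follows (an illustration, not the book's toolbox; note that Python tuples compare lexicographically, which matches lexmin).

```python
def deviations(expected, target):
    # d+ = (E[f] - b) ∨ 0 and d- = (b - E[f]) ∨ 0; at most one is positive
    return max(expected - target, 0.0), max(target - expected, 0.0)

def lex_objective(expected, targets, u, v):
    # one component per priority level j: Σ_i (u[i][j]*d+_i + v[i][j]*d-_i)
    devs = [deviations(e, b) for e, b in zip(expected, targets)]
    levels = len(u[0])
    return tuple(sum(u[i][j] * devs[i][0] + v[i][j] * devs[i][1]
                     for i in range(len(devs)))
                 for j in range(levels))

# two goals, two priority levels; goal 1 overshoots by 2, goal 2 undershoots by 1
obj = lex_objective([5.0, 2.0], [3.0, 3.0],
                    u=[[1, 0], [0, 1]], v=[[1, 0], [0, 1]])
```

A plan that hits goal 1 exactly produces a strictly smaller tuple, so ordinary tuple comparison realizes the preemptive priority structure.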
to minimize the expected objective of the leader, Liu-Yao [97] proposed the
following uncertain multilevel programming,
Definition 3.5 Suppose that x∗ is a feasible control vector of the leader and
(y ∗1 , y ∗2 , · · · , y ∗m ) is a Nash equilibrium of followers with respect to x∗ . We call
the array (x∗ , y ∗1 , y ∗2 , · · · , y ∗m ) a Stackelberg-Nash equilibrium to the uncertain
multilevel programming (3.57) if
The term risk has been used in different ways in the literature. Here the risk is defined as the "accidental loss" plus the "uncertain measure of such loss".
Uncertain risk analysis is a tool to quantify risk via uncertainty theory. One
main feature of this topic is to model events that almost never occur. This
chapter will introduce a definition of risk index and provide some useful
formulas for calculating risk index. This chapter will also discuss structural
risk analysis and investment risk analysis in uncertain environments.
Example 4.1: Consider a series system in which there are n elements whose
lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works whenever
all elements work. Thus the system lifetime is
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (4.2)
If the loss is understood as the case that the system fails before the time T ,
then we have a loss function
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (4.3)
If the loss is understood as the case that the system fails before the time T ,
then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − (ξ1 + ξ2 + · · · + ξn ). (4.9)
Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.
[Figure: a parallel system — the input splits into three elements 1, 2, 3 whose outputs rejoin.]
Remark 4.2: Keep in mind that sometimes the equation (4.12) may not have a root. In this case, if
f(Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) < 0 (4.13)
for all α, then we set Risk = 0; and if
f(Φ_1^{-1}(1 − α), · · · , Φ_m^{-1}(1 − α), Φ_{m+1}^{-1}(α), · · · , Φ_n^{-1}(α)) > 0 (4.14)
for all α, then we set Risk = 1.
Consider a series system in which there are n elements whose lifetimes are
independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty dis-
tributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the case
that the system fails before the time T , then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn (4.15)
Φ_1^{-1}(α) ∧ Φ_2^{-1}(α) ∧ · · · ∧ Φ_n^{-1}(α) = T. (4.17)
Risk = Φ1 (T ) ∨ Φ2 (T ) ∨ · · · ∨ Φn (T ). (4.18)
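Formula (4.18) is immediate to evaluate. A small sketch with hypothetical linear lifetimes:

```python
def lin_dist(a, b):
    # distribution of L(a, b), clamped to [0, 1]
    return lambda x: min(max((x - a) / (b - a), 0.0), 1.0)

def series_risk(dists, T):
    # Risk = Φ1(T) ∨ Φ2(T) ∨ ... ∨ Φn(T) for a series system
    return max(phi(T) for phi in dists)

# hypothetical lifetimes L(10, 20), L(12, 24), L(8, 40); time threshold T = 14
risk = series_risk([lin_dist(10, 20), lin_dist(12, 24), lin_dist(8, 40)], 14)
# max(0.4, 1/6, 3/16) = 0.4: the weakest element dominates
```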
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∨ ξ2 ∨ · · · ∨ ξn (4.19)
Φ_1^{-1}(α) ∨ Φ_2^{-1}(α) ∨ · · · ∨ Φ_n^{-1}(α) = T. (4.21)
Risk = Φ1(T) ∧ Φ2(T) ∧ · · · ∧ Φn(T). (4.22)
k-max[Φ_1^{-1}(α), Φ_2^{-1}(α), · · · , Φ_n^{-1}(α)] = T. (4.25)
Φ_1^{-1}(α) + Φ_2^{-1}(α) + · · · + Φ_n^{-1}(α) = T. (4.29)
Example 4.5: (The Simplest Case) Assume there is only a single strength variable ξ and a single load variable η with regular uncertainty distributions Φ and Ψ, respectively. In this case, the structural risk index is
Risk = M{ξ < η}.
It follows from the risk index theorem that the risk index is just the root α of the equation
Φ^{-1}(α) = Ψ^{-1}(1 − α). (4.31)
Especially, if the strength variable ξ has a normal uncertainty distribution N(es, σs) and the load variable η has a normal uncertainty distribution N(el, σl), then the structural risk index is
Risk = (1 + exp(π(es − el)/(√3(σs + σl))))^{-1}. (4.32)
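Equation (4.31) can be solved by bisection, since Φ^{-1}(α) increases while Ψ^{-1}(1 − α) decreases in α; the closed form (4.32) then serves as a check. The parameters below are hypothetical.

```python
import math

def normal_inv(e, sigma):
    # inverse distribution of N(e, sigma)
    return lambda a: e + (sigma * math.sqrt(3) / math.pi) * math.log(a / (1 - a))

def risk_root(phi_inv, psi_inv, tol=1e-12):
    # bisection for the root α of Φ^{-1}(α) = Ψ^{-1}(1 - α);
    # the left side is increasing and the right side decreasing in α
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if phi_inv(mid) < psi_inv(1.0 - mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

es, el, ss, sl = 9.0, 6.0, 2.0, 1.0   # hypothetical strength/load parameters
alpha = risk_root(normal_inv(es, ss), normal_inv(el, sl))
closed = 1.0 / (1.0 + math.exp(math.pi * (es - el) / (math.sqrt(3) * (ss + sl))))
```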
That is,
Risk = Φ1 (c1 ) ∨ Φ2 (c2 ) ∨ · · · ∨ Φn (cn ). (4.33)
That is,
Risk = α1 ∨ α2 ∨ · · · ∨ αn (4.34)
where αi are the roots of the equations
Φ_i^{-1}(α) = Ψ_i^{-1}(1 − α) (4.35)
for i = 1, 2, · · · , n, respectively.
However, generally speaking, the load variables η1, η2, · · · , ηn are neither constants nor independent. For example, the load variables η1, η2, · · · , ηn may be functions of independent uncertain variables τ1, τ2, · · · , τm. In this case, the formula (4.34) is no longer valid. Thus we have to deal with such structural systems case by case.
whenever the load variable η exceeds at least one of the strength variables
ξ1 , ξ2 , · · · , ξn . Hence the structural risk index is
Risk = M{∪_{i=1}^n (ξi < η)} = M{ξ1 ∧ ξ2 ∧ · · · ∧ ξn < η}.
f (ξ1 , ξ2 , · · · , ξn , η) = η − ξ1 ∧ ξ2 ∧ · · · ∧ ξn .
Then
Risk = M{f (ξ1 , ξ2 , · · · , ξn , η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , · · · , ξn , it follows from the risk index theo-
rem that the risk index is just the root α of the equation
Ψ^{-1}(1 − α) − Φ_1^{-1}(α) ∧ Φ_2^{-1}(α) ∧ · · · ∧ Φ_n^{-1}(α) = 0. (4.36)
Ψ^{-1}(1 − α) = Φ_i^{-1}(α) (4.37)
Risk = α1 ∨ α2 ∨ · · · ∨ αn. (4.38)
[Figure: a structural system — rods fixed to the ceiling suspend an object.]
Example 4.9: Consider a structural system shown in Figure 4.5 that consists
of 2 rods and an object. Assume that the strength variables of the left and
Thus the structural system fails whenever, for any one rod, the load variable exceeds its strength variable. Hence the structural risk index is
Risk = M{(ξ1 < η sin θ2/sin(θ1 + θ2)) ∪ (ξ2 < η sin θ1/sin(θ1 + θ2))}
= M{(ξ1/sin θ2 < η/sin(θ1 + θ2)) ∪ (ξ2/sin θ1 < η/sin(θ1 + θ2))}
= M{(ξ1/sin θ2) ∧ (ξ2/sin θ1) < η/sin(θ1 + θ2)}
Then
Risk = M{f (ξ1 , ξ2 , η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , it follows from the risk index theorem that
the risk index is just the root α of the equation
Risk = α1 ∨ α2 . (4.42)
[Figure 4.5: two rods at angles θ1 and θ2 to the ceiling jointly suspend an object.]
Φ₁⁻¹(α) + Φ₂⁻¹(α) + · · · + Φₙ⁻¹(α) = c.    (4.44)
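As a numerical illustration (not from the book), the root α of equation (4.44) can be found by bisection, since the left-hand side is increasing in α. The linear uncertainty distributions and the threshold c below are hypothetical.

```python
def inv_linear(a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + (b - a) * alpha

def risk_root(inv_phis, c, tol=1e-12):
    """Solve Phi_1^{-1}(alpha) + ... + Phi_n^{-1}(alpha) = c for alpha by bisection."""
    g = lambda alpha: sum(phi(alpha) for phi in inv_phis) - c
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:      # g is increasing in alpha, so the root lies below mid
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# hypothetical portfolio: returns L(-1, 3) and L(-2, 4), threshold c = 0
alpha = risk_root([inv_linear(-1, 3), inv_linear(-2, 4)], c=0.0)
```

For linear distributions the root has the closed form (c − Σaᵢ)/Σ(bᵢ − aᵢ), which the bisection reproduces.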
4.9 Value-at-Risk
As an alternative to the risk index (4.10), the concept of value-at-risk is given by the following definition.
Definition 4.3 (Peng [121]) Assume that a system contains uncertain fac-
tors ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the value-at-risk is defined
as
VaR(α) = sup{x | M{f (ξ1 , ξ2 , · · · , ξn ) ≥ x} ≥ α}. (4.45)
Note that VaR(α) represents the maximum possible loss when α percent of the right-tail distribution is ignored. In other words, the loss f(ξ₁, ξ₂, · · · , ξₙ) will exceed VaR(α) with uncertain measure α. See Figure 4.6. If the uncertainty distribution Φ(x) of f(ξ₁, ξ₂, · · · , ξₙ) is continuous, then
VaR(α) = Φ⁻¹(1 − α).
[Figure 4.6: the value-at-risk VaR(α) read off the uncertainty distribution Φ(x) at level α; the original figure could not be recovered from this extraction.]
VaR(α) = f(Φ₁⁻¹(1 − α), · · · , Φₘ⁻¹(1 − α), Φₘ₊₁⁻¹(α), · · · , Φₙ⁻¹(α)).    (4.48)
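A small sketch of formula (4.48), assuming a hypothetical loss f(ξ₁, ξ₂) = ξ₁ − ξ₂ that is strictly increasing in ξ₁ and strictly decreasing in ξ₂, with linear uncertain variables ξ₁ ~ L(1, 3) and ξ₂ ~ L(0, 2):

```python
def inv_linear(a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + (b - a) * alpha

def value_at_risk(f, inc, dec, alpha):
    """VaR per (4.48): increasing arguments take Phi^{-1}(1 - alpha),
    decreasing arguments take Phi^{-1}(alpha)."""
    return f([phi(1 - alpha) for phi in inc], [phi(alpha) for phi in dec])

# hypothetical loss f(x1, x2) = x1 - x2 with x1 ~ L(1, 3), x2 ~ L(0, 2)
f = lambda xs, ys: xs[0] - ys[0]
v = value_at_risk(f, [inv_linear(1, 3)], [inv_linear(0, 2)], alpha=0.2)
```

Here VaR(0.2) = Φ₁⁻¹(0.8) − Φ₂⁻¹(0.2) = 2.6 − 0.4 = 2.2.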
Proof: It follows from the operational law of uncertain variables that the
loss f (ξ1 , ξ2 , · · · , ξn ) has an inverse uncertainty distribution
If its inverse uncertainty distribution Φ⁻¹(α) exists, then the expected loss is

L = ∫₀¹ Φ⁻¹(α)⁺ dα.    (4.51)
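The integral in (4.51) can be approximated numerically; a minimal midpoint-rule sketch, using a hypothetical loss whose inverse distribution is Φ⁻¹(α) = 2α − 1 (so the exact expected loss is 1/4):

```python
def expected_loss(inv_phi, n=1000):
    """Approximate L = integral over [0,1] of max(Phi^{-1}(alpha), 0) d(alpha)
    by the midpoint rule, where the positive part handles the '+' in (4.51)."""
    return sum(max(inv_phi((k + 0.5) / n), 0.0) for k in range(n)) / n

# hypothetical linear loss L(-1, 1): inverse distribution 2*alpha - 1
L = expected_loss(lambda a: 2 * a - 1)
```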
Proof: It follows from the operational law of uncertain variables that the
loss f (ξ1 , ξ2 , · · · , ξn ) has an inverse uncertainty distribution
Φ(x|t) =
  0,                            if Φ(x) ≤ Φ(t)
  (Φ(x)/(1 − Φ(t))) ∧ 0.5,      if Φ(t) < Φ(x) ≤ (1 + Φ(t))/2
  (Φ(x) − Φ(t))/(1 − Φ(t)),     if (1 + Φ(t))/2 ≤ Φ(x)    (4.53)
Exercise 4.1: Let ξ be a linear uncertain variable L(a, b), and t a real number with a < t < b. Show that the hazard distribution at time t is
Φ(x|t) =
  0,                        if x ≤ t
  ((x − a)/(b − t)) ∧ 0.5,  if t < x ≤ (b + t)/2
  ((x − t)/(b − t)) ∧ 1,    if (b + t)/2 ≤ x.
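The general hazard distribution (4.53) can be checked against this closed form computationally; a sketch assuming a hypothetical linear uncertain variable L(0, 10) observed working at time t = 4:

```python
def hazard(phi, t):
    """Hazard distribution x -> Phi(x|t) per (4.53), given a distribution phi."""
    pt = phi(t)
    def cond(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return cond

# hypothetical linear uncertain variable L(0, 10), observed working at t = 4
a, b, t = 0.0, 10.0, 4.0
phi = lambda x: min(max((x - a) / (b - a), 0.0), 1.0)
h = hazard(phi, t)
```

For example h(5) hits the 0.5 cap of the middle branch, and h(8) = (0.8 − 0.4)/0.6 = 2/3, exactly the values the closed form of Exercise 4.1 gives.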
Theorem 4.5 (Liu [83], Conditional Risk Index Theorem) Assume that a
system contains uncertain factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f .
Suppose ξ1 , ξ2 , · · · , ξn are independent uncertain variables with regular un-
certainty distributions Φ1 , Φ2 , · · · , Φn , respectively, and f (ξ1 , ξ2 , · · · , ξn ) is
strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with
respect to ξm+1 , ξm+2 , · · · , ξn . If it is observed that all elements are working
at some time t, then the risk index is just the root α of the equation
f(Φ₁⁻¹(1 − α|t), · · · , Φₘ⁻¹(1 − α|t), Φₘ₊₁⁻¹(α|t), · · · , Φₙ⁻¹(α|t)) = 0    (4.54)
where
Φᵢ(x|t) =
  0,                              if Φᵢ(x) ≤ Φᵢ(t)
  (Φᵢ(x)/(1 − Φᵢ(t))) ∧ 0.5,      if Φᵢ(t) < Φᵢ(x) ≤ (1 + Φᵢ(t))/2
  (Φᵢ(x) − Φᵢ(t))/(1 − Φᵢ(t)),    if (1 + Φᵢ(t))/2 ≤ Φᵢ(x)    (4.55)
for i = 1, 2, · · · , n.
Proof: It follows from Definition 4.5 that each hazard distribution of ele-
ment is determined by (4.55). Thus the conditional risk index is obtained by
Theorem 4.2 immediately.
Exercise 4.2: State and prove conditional value-at-risk theorem and condi-
tional expected loss theorem.
Uncertain Reliability
Analysis
Example 5.1: For a series system, the structure function is a mapping from {0, 1}ⁿ to {0, 1}, i.e.,
f(x₁, x₂, · · · , xₙ) = x₁ ∧ x₂ ∧ · · · ∧ xₙ.    (5.4)
Example 5.2: For a parallel system, the structure function is a mapping from {0, 1}ⁿ to {0, 1}, i.e.,
f(x₁, x₂, · · · , xₙ) = x₁ ∨ x₂ ∨ · · · ∨ xₙ.    (5.5)
[Figure: a system of elements 1, 2, 3 connected between Input and Output; the original figure could not be recovered from this extraction.]
Example 5.3: For a k-out-of-n system that works whenever at least k of the
n elements work, the structure function is a mapping from {0, 1}n to {0, 1},
i.e.,
f (x1 , x2 , · · · , xn ) = k-max [x1 , x2 , · · · , xn ]. (5.6)
In particular, when k = 1, it is a parallel system; when k = n, it is a series system.
Definition 5.2 (Liu [83]) Assume a Boolean system has uncertain elements
ξ1 , ξ2 , · · · , ξn and a structure function f . Then the reliability index is the
uncertain measure that the system is working, i.e.,
Reliability = M{f (ξ1 , ξ2 , · · · , ξn ) = 1}. (5.8)
Theorem 5.1 (Liu [83], Reliability Index Theorem) Assume that a system
contains uncertain elements ξ1 , ξ2 , · · ·, ξn , and has a structure function f . If
ξ1 , ξ2 , · · · , ξn are independent uncertain elements with reliabilities a1 , a2 , · · · ,
an , respectively, then the reliability index is
Reliability =
  sup_{f(x₁,x₂,··· ,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ),      if sup_{f(x₁,x₂,··· ,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) < 0.5
  1 − sup_{f(x₁,x₂,··· ,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ),  if sup_{f(x₁,x₂,··· ,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5    (5.9)
where
νᵢ(xᵢ) = aᵢ if xᵢ = 1, and νᵢ(xᵢ) = 1 − aᵢ if xᵢ = 0
for i = 1, 2, · · · , n, respectively.
Proof: Since ξ1 , ξ2 , · · · , ξn are independent Boolean uncertain variables and
f is a Boolean function, the equation (5.9) follows from Definition 5.2 and
Theorem 2.21 immediately.
It follows from the reliability index theorem that the reliability index of a k-out-of-n system is the kth largest value of a₁, a₂, · · · , aₙ, i.e.,
Reliability = k-max [a₁, a₂, · · · , aₙ].
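This rule is easy to compute; a minimal sketch (the element reliabilities below are hypothetical):

```python
def k_out_of_n_reliability(a, k):
    """Reliability index of a k-out-of-n system: the kth largest of a1,...,an."""
    return sorted(a, reverse=True)[k - 1]

a = [0.9, 0.7, 0.8]   # hypothetical element reliabilities
```

Note that k = 1 recovers the parallel-system index max aᵢ, and k = n recovers the series-system index min aᵢ.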
Uncertain Propositional
Logic
Example 6.1: “Tom is tall with truth value 0.7” is an uncertain proposition,
where “Tom is tall” is a statement, and its truth value is 0.7 in uncertain
measure.
Example 6.2: “John is young with truth value 0.8” is an uncertain propo-
sition, where “John is young” is a statement, and its truth value is 0.8 in
uncertain measure.
Example 6.3: “Beijing is a big city with truth value 0.9” is an uncertain
proposition, where “Beijing is a big city” is a statement, and its truth value
is 0.9 in uncertain measure.
Connective Symbols
In addition to the proposition symbols X and Y , we also need the negation
symbol ¬, conjunction symbol ∧, disjunction symbol ∨, conditional symbol
→, and biconditional symbol ↔. Note that
T(X ∨ ¬X) = 1.    (6.15)
Proof: It follows from the definition of truth value and the property of uncertain measure that
T(X ∨ ¬X) = M{X ∨ ¬X = 1} = M{Γ} = 1.
T(X ∧ ¬X) = 0.    (6.16)
Proof: It follows from the definition of truth value and the property of uncertain measure that
T(X ∧ ¬X) = M{X ∧ ¬X = 1} = M{∅} = 0.
Z = f(X₁, X₂, · · · , Xₙ)    (6.23)
where X₁, X₂, · · · , Xₙ are independent uncertain propositions with truth values αᵢ for i = 1, 2, · · · , n, respectively. For example, consider the biconditional
Z = X₁ ↔ X₂.    (6.26)
Then we have
T(Z) =
  α₁ ∧ α₂,              if α₁ ≥ 0.5 and α₂ ≥ 0.5
  (1 − α₁) ∨ α₂,        if α₁ ≥ 0.5 and α₂ < 0.5
  α₁ ∨ (1 − α₂),        if α₁ < 0.5 and α₂ ≥ 0.5
  (1 − α₁) ∧ (1 − α₂),  if α₁ < 0.5 and α₂ < 0.5.    (6.27)
and
X₂(γ) = 1 if γ = γ₁, and X₂(γ) = 0 if γ = γ₂    (6.30)
is also an uncertain proposition with truth value
Z = X1 ∧ X2 ∧ · · · ∧ Xn (6.34)
T (Z) = α1 ∧ α2 ∧ · · · ∧ αn . (6.35)
Z = X1 ∨ X2 ∨ · · · ∨ Xn (6.36)
T (Z) = α1 ∨ α2 ∨ · · · ∨ αn . (6.37)
A run of the Boolean System Calculator shows that the truth value of Z is 0.7 in uncertain measure.
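The core computation behind such a calculator can be sketched by enumerating the Boolean states as in Theorem 2.21, whose formula mirrors (5.9); the truth values used below are hypothetical, and the formula reproduces (6.35) and (6.37) for conjunction and disjunction:

```python
from itertools import product

def truth_value(f, alphas):
    """Truth value of Z = f(X1,...,Xn) for independent uncertain propositions
    with truth values alphas, via the Boolean operational law."""
    n = len(alphas)
    def nu(x, a):
        return a if x == 1 else 1 - a
    sup1 = max((min(nu(x, a) for x, a in zip(xs, alphas))
                for xs in product((0, 1), repeat=n) if f(*xs) == 1), default=0.0)
    sup0 = max((min(nu(x, a) for x, a in zip(xs, alphas))
                for xs in product((0, 1), repeat=n) if f(*xs) == 0), default=0.0)
    return sup1 if sup1 < 0.5 else 1 - sup0
```

For example, a conjunction with truth values 0.8 and 0.7 gets truth value 0.7, and a disjunction with 0.3 and 0.6 gets 0.6, in agreement with (6.35) and (6.37).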
Proof: The argument breaks into two cases. Case 1: If X(b) = 0, then
(∀a)X(a) = 0 and ¬(∀a)X(a) = 1. Thus
Proof: The argument breaks into two cases. Case 1: If X(b) = 0, then
¬X(b) = 1 and
Proof: The argument breaks into two cases. Case 1: If (∀a)X(a) = 0, then
¬(∀a)X(a) = 1 and
Uncertain Entailment
Yj = fj (X1 , X2 , · · · , Xn ) (7.1)
0 ≤ αi ≤ 1, i = 1, 2, · · · , n. (7.3)
T (Yj ) = cj (7.4)
for j = 1, 2, · · · , m and
νᵢ(xᵢ) = αᵢ if xᵢ = 1, and νᵢ(xᵢ) = 1 − αᵢ if xᵢ = 0    (7.6)
Since the truth values α₁, α₂, · · · , αₙ are not uniquely determined, the truth value T(Z) is not unique either. In this case, we have to use the maximum uncertainty principle to determine the truth value T(Z). That is, T(Z) should be assigned the value as close to 0.5 as possible. In other words, we should minimize the value |T(Z) − 0.5| by choosing appropriate values of α₁, α₂, · · · , αₙ. The uncertain entailment model is thus written by Liu [81] as follows,

min |T(Z) − 0.5|
subject to:
  0 ≤ αᵢ ≤ 1, i = 1, 2, · · · , n
  T(Yⱼ) = cⱼ, j = 1, 2, · · · , m    (7.8)
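Model (7.8) can be explored numerically by a brute-force grid search over the truth values; the sketch below (grid resolution and tolerance are illustrative, not from the book) returns None when the truth values are ill-assigned:

```python
from itertools import product

def entail(constraints, target, n, grid=101):
    """Grid-search sketch of model (7.8): choose alpha in [0,1]^n satisfying
    each T(Yj) = cj and making target(alpha) as close to 0.5 as possible."""
    pts = [k / (grid - 1) for k in range(grid)]
    best = None
    for alpha in product(pts, repeat=n):
        if all(abs(g(alpha) - c) < 1e-9 for g, c in constraints):
            t = target(alpha)
            if best is None or abs(t - 0.5) < abs(best - 0.5):
                best = t
    return best

# modus ponens: T(A) = 0.8 and T(A -> B) = 0.9 entail T(B) = 0.9, cf. (7.17)
t_b = entail([(lambda al: al[0], 0.8),
              (lambda al: max(1 - al[0], al[1]), 0.9)],
             lambda al: al[1], 2)
```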
Y1 = A ∨ B, Y2 = A ∧ B, Z = A → B.
It is clear that
T (Y1 ) = α1 ∨ α2 = a,
T (Y2 ) = α1 ∧ α2 = b,
T (Z) = (1 − α1 ) ∨ α2 .
In this case, the uncertain entailment model (7.8) becomes
min |(1 − α1 ) ∨ α2 − 0.5|
subject to:
0 ≤ α1 ≤ 1
(7.10)
0 ≤ α2 ≤ 1
α1 ∨ α2 = a
α1 ∧ α2 = b.
When a ≥ b, there are only two feasible solutions (α₁, α₂) = (a, b) and (α₁, α₂) = (b, a). If a + b < 1, the optimal solution is (α₁, α₂) = (a, b), which produces T(Z) = (1 − a) ∨ b = 1 − a. When a < b, there is no feasible solution and the truth values are ill-assigned. In summary, from T(A ∨ B) = a and T(A ∧ B) = b we entail
T(A → B) =
  1 − a,    if a ≥ b and a + b < 1
  a or b,   if a ≥ b and a + b = 1
  b,        if a ≥ b and a + b > 1
  illness,  if a < b.    (7.11)
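Result (7.11) can be reproduced by checking both feasible assignments directly; a minimal sketch (returning None for the ill-assigned case; in the tie a + b = 1 it simply picks the first candidate):

```python
def entail_implication(a, b):
    """From T(A or B) = a and T(A and B) = b, entail T(A -> B) per (7.11)."""
    if a < b:
        return None                       # illness: no feasible truth values
    candidates = [(a, b), (b, a)]         # the only feasible (alpha1, alpha2)
    ts = [max(1 - a1, a2) for a1, a2 in candidates]
    return min(ts, key=lambda t: abs(t - 0.5))
```

For example a = 0.9, b = 0.6 gives b = 0.6 (third branch), while a = 0.6, b = 0.2 gives 1 − a = 0.4 (first branch).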
T (A → C) = a, T (B → C) = b, T (A ∨ B) = c. (7.12)
T (A → C) = a, T (B → D) = b, T (A ∨ B) = c. (7.13)
Y1 = A, Y2 = A → B, Z = B.
It is clear that
T (Y1 ) = α1 = a,
T (Y2 ) = (1 − α1 ) ∨ α2 = b,
T (Z) = α2 .
In this case, the uncertain entailment model (7.8) becomes
min |α₂ − 0.5|
subject to:
  0 ≤ α₁ ≤ 1
  0 ≤ α₂ ≤ 1
  α₁ = a
  (1 − α₁) ∨ α₂ = b.    (7.15)
When a + b > 1, there is a unique feasible solution and then the optimal
solution is
α1∗ = a, α2∗ = b.
Thus T (B) = α2∗ = b. When a + b = 1, the feasible set is {a} × [0, b] and the
optimal solution is
α1∗ = a, α2∗ = 0.5 ∧ b.
Thus T (B) = α2∗ = 0.5 ∧ b. When a + b < 1, there is no feasible solution and
the truth values are ill-assigned. In summary, from
T (A) = a, T (A → B) = b (7.16)
we entail
T(B) =
  b,        if a + b > 1
  0.5 ∧ b,  if a + b = 1
  illness,  if a + b < 1.    (7.17)
This result coincides with the classical modus ponens that if both A and
A → B are true, then B is true.
Y1 = A → B, Y2 = B, Z = A.
It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = α2 = b,
T (Z) = α1 .
In this case, the uncertain entailment model (7.8) becomes
min |α₁ − 0.5|
subject to:
  0 ≤ α₁ ≤ 1
  0 ≤ α₂ ≤ 1
  (1 − α₁) ∨ α₂ = a
  α₂ = b.    (7.18)
When a > b, there is a unique feasible solution and then the optimal solution
is
α1∗ = 1 − a, α2∗ = b.
Thus T(A) = α₁* = 1 − a. When a = b, the feasible set is [1 − a, 1] × {b} and the optimal solution is
α₁* = (1 − a) ∨ 0.5, α₂* = b.
Thus T(A) = α₁* = (1 − a) ∨ 0.5. When a < b, there is no feasible solution and the truth values are ill-assigned. In summary, from
T (A → B) = a, T (B) = b (7.19)
we entail
T(A) =
  1 − a,          if a > b
  (1 − a) ∨ 0.5,  if a = b
  illness,        if a < b.    (7.20)
This result coincides with the classical modus tollens that if A → B is true
and B is false, then A is false.
Y1 = A → B, Y2 = B → C, Z = A → C.
It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = (1 − α2 ) ∨ α3 = b,
T (Z) = (1 − α1 ) ∨ α3 .
In this case, the uncertain entailment model (7.8) becomes
min |(1 − α₁) ∨ α₃ − 0.5|
subject to:
  0 ≤ α₁ ≤ 1
  0 ≤ α₂ ≤ 1
  0 ≤ α₃ ≤ 1
  (1 − α₁) ∨ α₂ = a
  (1 − α₂) ∨ α₃ = b.    (7.21)
Write the optimal solution as (α₁*, α₂*, α₃*). When a ∧ b ≥ 0.5, we have
T(A → C) = (1 − α₁*) ∨ α₃* = a ∧ b.
When a + b ≥ 1 and a ∧ b < 0.5, we have T(A → C) = 0.5. When a + b < 1, there is no feasible solution and the truth values are ill-assigned. In summary, from
T (A → B) = a, T (B → C) = b (7.22)
we entail
T(A → C) =
  a ∧ b,    if a ≥ 0.5 and b ≥ 0.5
  0.5,      if a + b ≥ 1 and a ∧ b < 0.5
  illness,  if a + b < 1.    (7.23)
This result coincides with the classical hypothetical syllogism that if both
A → B and B → C are true, then A → C is true.
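Result (7.23) can likewise be confirmed by a grid search over (α₁, α₂, α₃); a brute-force sketch (grid resolution and tolerance are arbitrary choices, not part of the book):

```python
from itertools import product

def syllogism(a, b, grid=51):
    """Grid-search check of (7.23): from T(A->B)=a, T(B->C)=b entail T(A->C)."""
    pts = [k / (grid - 1) for k in range(grid)]
    best = None
    for a1, a2, a3 in product(pts, repeat=3):
        if abs(max(1 - a1, a2) - a) < 1e-9 and abs(max(1 - a2, a3) - b) < 1e-9:
            t = max(1 - a1, a3)
            if best is None or abs(t - 0.5) < abs(best - 0.5):
                best = t
    return best
```

For example a = 0.7, b = 0.6 yields a ∧ b = 0.6 (first branch of (7.23)), while a = 0.8, b = 0.4 yields 0.5 (second branch).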
Uncertain Set
Uncertain set was first proposed by Liu [82] in 2010 for modelling unsharp
concepts. This chapter will introduce the concepts of uncertain set, member-
ship function, independence, expected value, variance, distance, and entropy.
This chapter will also introduce the operational law for uncertain sets via
membership functions or inverse membership functions. Finally, conditional
uncertain set and conditional membership function are documented.
Remark 8.1: Note that the events {B ⊂ ξ} and {ξ ⊂ B} are subsets of the
universal set Γ, i.e.,
{B ⊂ ξ} = {γ ∈ Γ | B ⊂ ξ(γ)}, (8.1)
{ξ ⊂ B} = {γ ∈ Γ | ξ(γ) ⊂ B}. (8.2)
Remark 8.2: It is clear that uncertain set (Liu [82]) is very different from
random set (Robbins [130] and Matheron [113]) and fuzzy set (Zadeh [192]).
The essential difference among them is that different measures are used, i.e.,
random set uses probability measure, fuzzy set uses possibility measure and
uncertain set uses uncertain measure.
Remark 8.3: What is the difference between uncertain variable and un-
certain set? Both of them belong to the same broad category of uncertain
concepts. However, they are differentiated by their mathematical definitions:
the former refers to one value, while the latter to a collection of values. Es-
sentially, the difference between uncertain variable and uncertain set focuses
on the property of exclusivity. If the concept has exclusivity, then it is an
uncertain variable. Otherwise, it is an uncertain set. Consider the statement
“John is a young man”. If we are interested in John’s real age, then “young”
is an uncertain variable because it is an exclusive concept (John’s age can-
not be more than one value). For example, if John is 20 years old, then it
is impossible that John is 25 years old. In other words, “John is 20 years
old” does exclude the possibility that “John is 25 years old”. By contrast,
if we are interested in what ages can be regarded as “young”, then “young” is
an uncertain set because the concept now has no exclusivity. For example,
both 20-year-old and 25-year-old men can be considered “young”. In other
words, “a 20-year-old man is young” does not exclude the possibility that “a
25-year-old man is young”.
[Figure: an uncertain set ξ on the universal set Γ = {γ₁, γ₂, γ₃}, plotted against a vertical scale from 1 to 5; the original figure could not be recovered from this extraction.]
ξ(γ) ≡ A, ∀γ ∈ Γ. (8.11)
Example 8.4: Let ξ be an uncertain set and let x be a real number. Then
Exercise 8.1: Let ξ be an uncertain set and let B be a Borel set of real numbers. Show that {B ⊂ ξ} and {B ⊄ ξ} are opposite events, and
Exercise 8.2: Let ξ be an uncertain set and let B be a Borel set of real numbers. Show that {ξ ⊂ B} and {ξ ⊄ B} are opposite events, and
Exercise 8.3: Let ξ and η be two uncertain sets. Show that {ξ ⊂ η} and {ξ ⊄ η} are opposite events, and
Exercise 8.4: Let ∅ be the empty set, and let ξ be an uncertain set. Show
that
M{∅ ⊂ ξ} = 1. (8.20)
Exercise 8.5: Let ξ be an uncertain set, and let ℜ be the set of real numbers. Show that
M{ξ ⊂ ℜ} = 1.    (8.21)
It follows from (8.25) and (8.26) that (8.23) holds. The first equation is proved. Next we verify the second equation. For any γ ∈ {ξ ⊂ B}, we have ξ(γ) ⊂ B. Thus x ∉ ξ(γ) whenever x ∈ Bᶜ. This means γ ∈ {x ∉ ξ} and then {ξ ⊂ B} ⊂ {x ∉ ξ} for any x ∈ Bᶜ. Hence

{ξ ⊂ B} ⊂ ⋂_{x∈Bᶜ} {x ∉ ξ}.    (8.27)
It follows from (8.27) and (8.28) that (8.24) holds. The theorem is proved.
ξ(γ) = ∅ (8.30)
ξ(γ) = ∅, ∀γ ∈ Γ (8.32)
their intersection is
(ξ ∩ η)(γ) = ∅ if γ = γ₁, (2, 3] if γ = γ₂, (2, 4] if γ = γ₃,
and the complement of η is
ηᶜ(γ) = (−∞, 2] ∪ [3, +∞) if γ = γ₁, (−∞, 2] ∪ [4, +∞) if γ = γ₂, (−∞, 2] ∪ [5, +∞) if γ = γ₃.
ξ ∪ ξ = ξ, ξ ∩ ξ = ξ. (8.37)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ξ)(γ) = ξ(γ) ∪ ξ(γ) = ξ(γ).
Thus we have ξ ∪ ξ = ξ. In addition, the intersection is
Thus we have ξ ∩ ξ = ξ.
ξ ∪ (η ∩ τ ) = (ξ ∪ η) ∩ (ξ ∪ τ ), ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ). (8.42)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
Thus we have ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ).
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
we get ξ ∩ (ξ ∪ η) = ξ.
Theorem 8.9 (De Morgan’s Law) Let ξ and η be uncertain sets. Then we
have
(ξ ∪ η)c = ξ c ∩ η c , (ξ ∩ η)c = ξ c ∪ η c . (8.44)
we get (ξ ∩ η)c = ξ c ∪ η c .
Exercise 8.7: Let ξ be an uncertain set and let x be a real number. Show that
{x ∈ ξᶜ} = {x ∉ ξ}    (8.45)
and
M{x ∈ ξᶜ} = M{x ∉ ξ}.    (8.46)
Exercise 8.8: Let ξ be an uncertain set and let x be a real number. Show that {x ∈ ξ} and {x ∈ ξᶜ} are opposite events, and
Exercise 8.9: Let ξ be an uncertain set and let B be a Borel set of real numbers. Show that {B ⊂ ξ} and {B ⊂ ξᶜ} are not necessarily opposite events.
Exercise 8.10: Let ξ and η be two uncertain sets. Show that {ξ ⊂ η} and {ηᶜ ⊂ ξᶜ} are identical events, i.e.,
{ξ ⊂ η} = {ηᶜ ⊂ ξᶜ}.    (8.48)
Exercise 8.11: Let ξ and η be two uncertain sets. Show that {ξ ⊂ η} and {ξ ⊂ ηᶜ} are not necessarily opposite events.
Example 8.8: Note that the empty set ∅ annihilates every other set. For
example, A + ∅ = ∅ and A × ∅ = ∅. Take an uncertainty space (Γ, L, M) to
be {γ1 , γ2 , γ3 } with power set and M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2.
Define two uncertain sets,
ξ(γ) = ∅ if γ = γ₁, [1, 3] if γ = γ₂, [1, 4] if γ = γ₃;    η(γ) = (2, 3) if γ = γ₁, (2, 4) if γ = γ₂, (2, 5) if γ = γ₃.
Exercise 8.12: Let ξ be an uncertain set. (i) Show that ξ + ξ ≢ 2ξ. (ii) Is the same true of a crisp set?
Exercise 8.13: Let ξ be an uncertain set. What are the potential values of
the difference ξ − ξ?
Proof: For any number x, it follows from the first measure inversion formula that
M{x ∈ ξ} = M{{x} ⊂ ξ} = inf_{y∈{x}} µ(y) = µ(x).
[Figure: the measure inversion formulas illustrated on a membership function µ(x) — one panel marks inf_{x∈B} µ(x) over a set B, the other marks sup_{x∈Bᶜ} µ(x); the original figure could not be recovered from this extraction.]
Remark 8.4: The value of µ(x) is just the membership degree that x belongs to the uncertain set ξ. If µ(x) = 1, then x completely belongs to ξ; if µ(x) = 0, then x does not belong to ξ at all. Thus the larger the value of µ(x), the more strongly x belongs to ξ.
Thus we have
M{∅ ⊂ ξ} = 1 = inf_{x∈∅} µ(x).
That is, the first measure inversion formula always holds for B = ∅. Furthermore, we have
M{ξ ⊂ ℜ} = 1 = 1 − sup_{x∈∅} µ(x).
That is, the second measure inversion formula always holds for B = ℜ.
Example 8.9: The set ℜ of real numbers is a special uncertain set ξ(γ) ≡ ℜ. Such an uncertain set has a membership function
µ(x) ≡ 1    (8.57)
that is just the indicator function of ℜ. In order to prove it, we must verify that ℜ and µ simultaneously satisfy the two measure inversion formulas (8.51) and (8.52). Let B be a Borel set of real numbers. If B = ∅, then the first measure inversion formula always holds. If B ≠ ∅, then
The first measure inversion formula is verified. Next we prove the second measure inversion formula. If B = ℜ, then the second measure inversion formula always holds. If B ≠ ℜ, then
Exercise 8.14: The empty set ∅ is a special uncertain set ξ(γ) ≡ ∅. Show
that such an uncertain set has a membership function
µ(x) ≡ 0 (8.58)
(i) What is the membership function of ξ? (ii) Please justify your answer.
(Hint: If ξ does have a membership function, then µ(x) = M{x ∈ ξ}.)
Exercise 8.20: It is not true that every uncertain set has a membership
function. Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 } with power set
and M{γ1 } = 0.4, M{γ2 } = 0.6. Show that the uncertain set
ξ(γ) = [1, 3] if γ = γ₁, and [2, 4] if γ = γ₂    (8.64)
[Figure: a triangular membership function determined by (a, b, c) and a trapezoidal membership function determined by (a, b, c, d); the original figure could not be recovered from this extraction.]
What is “young”?
Sometimes we say “those students are young”. What ages can be considered
“young”? In this case, “young” may be regarded as an uncertain set whose
membership function is
µ(x) =
  0,            if x ≤ 15
  (x − 15)/5,   if 15 ≤ x ≤ 20
  1,            if 20 ≤ x ≤ 35
  (45 − x)/10,  if 35 ≤ x ≤ 45
  0,            if x ≥ 45.    (8.69)
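The membership function (8.69) translates directly into code; a minimal sketch:

```python
def young(x):
    """Membership degree of age x in the uncertain set "young", per (8.69)."""
    if x <= 15:
        return 0.0
    if x <= 20:
        return (x - 15) / 5
    if x <= 35:
        return 1.0
    if x <= 45:
        return (45 - x) / 10
    return 0.0
```

For instance, an 18-year-old is “young” with membership degree 0.6, a 25-year-old with degree 1, and a 40-year-old with degree 0.5.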
What is “tall”?
[Figure: membership function of “tall”, rising from 180 cm to 185 cm, equal to 1 on [185 cm, 195 cm], and falling from 195 cm to 200 cm; the original figure could not be recovered from this extraction.]
What is “warm”?
Sometimes we say “those days are warm”. What temperatures can be con-
sidered “warm”? In this case, “warm” may be regarded as an uncertain set
whose membership function is
µ(x) =
  0,           if x ≤ 15
  (x − 15)/3,  if 15 ≤ x ≤ 18
  1,           if 18 ≤ x ≤ 24
  (28 − x)/4,  if 24 ≤ x ≤ 28
  0,           if x ≥ 28.    (8.71)
What is “most”?
Sometimes we say “most students are boys”. What percentages can be con-
sidered “most”? In this case, “most” may be regarded as an uncertain set
[Figure: membership function of “most”, rising from 70% to 75%, equal to 1 on [75%, 85%], and falling from 85% to 90%; the original figure could not be recovered from this extraction.]
is of total order.
Exercise 8.22: Let ξ be a totally ordered uncertain set. Show that its
complement ξ c is also of total order.
Exercise 8.24: Let ξ and η be totally ordered uncertain sets. Show that
their union ξ ∪ η is not necessarily of total order.
Exercise 8.25: Let ξ and η be totally ordered uncertain sets. Show that
their intersection ξ ∩ η is not necessarily of total order.
Theorem 8.13 (Liu [99]) Let ξ be a totally ordered uncertain set, and let B be a crisp set of real numbers. Then (i) the collection {x ∈ ξ} indexed by x ∈ B is of total order, and (ii) the collection {x ∉ ξ} indexed by x ∈ B is also of total order.
{x ∉ ξ} = {x ∈ ξ}ᶜ
Theorem 8.14 (Liu [99], Existence Theorem) Let ξ be a totally ordered un-
certain set on a continuous uncertainty space. Then its membership function
always exists, and
µ(x) = M{x ∈ ξ}. (8.76)
The first measure inversion formula is verified. Next, Theorem 8.1 states that
{ξ ⊂ B} = ⋂_{x∈Bᶜ} {x ∉ ξ}.
Then
ξ(γ) = (−γ, γ), ∀γ ∈ (0, 1) (8.78)
is a totally ordered uncertain set on a discontinuous uncertainty space. If it
indeed has a membership function, then
µ(x) =
  1,    if x = 0
  0.5,  if −1 < x < 0 or 0 < x < 1
  0,    otherwise.    (8.79)
However,
That is, the first measure inversion formula is not valid and then ξ has
no membership function. Therefore, the continuity condition cannot be re-
moved.
Example 8.15: Some non-totally ordered uncertain sets may have mem-
bership functions. For example, take an uncertainty space (Γ, L, M) to be
{γ1 , γ2 , γ3 , γ4 } with power set and
M{Λ} =
  0,    if Λ = ∅
  1,    if Λ = Γ
  0.5,  otherwise.    (8.81)
Then
ξ(γ) =
  {1},        if γ = γ₁
  {1, 2},     if γ = γ₂
  {1, 3},     if γ = γ₃
  {1, 2, 3},  if γ = γ₄    (8.82)
because ξ and µ can simultaneously satisfy the two measure inversion formu-
las (8.51) and (8.52).
Remark 8.9: In practice, the unsharp concepts like “young”, “tall”, “warm”,
and “most” can be regarded as totally ordered uncertain sets on a continuous
uncertainty space.
Figure 8.8: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra
and Lebesgue measure. Then ξ(γ) = {x ∈ < | µ(x) ≥ γ} has the membership
function µ. Keep in mind that ξ is not the unique uncertain set whose
membership function is µ.
Proof: Since the membership function µ exists, it follows from the second
measure inversion formula that
Thus ξ is (i) nonempty if and only if M{ξ = ∅} = 0, i.e., (8.92) holds, (ii)
empty if and only if M{ξ = ∅} = 1, i.e., (8.93) holds, and (iii) half-empty if
and only if otherwise.
Exercise 8.28: Some people prefer the uncertain set whose height (i.e.,
the supremum of the membership function) achieves 1. When the height is
below 1, they divide all its membership values by the height and obtain a
“normalized” membership function. Why is this idea wrong and harmful?
[Figure: a membership function µ(x) with the α-level set µ⁻¹(α) marked on the x-axis; the original figure could not be recovered from this extraction.]
Example 8.18: Note that an inverse membership function may take the empty set ∅ as a value. Let ξ be an uncertain set with membership function
µ(x) = 0.8 if 1 ≤ x ≤ 2, and 0 otherwise.    (8.96)
For each x ∉ µ⁻¹(α), we have µ(x) < α. It follows from the second measure inversion formula that
M{ξ ⊂ µ⁻¹(α)} = 1 − sup_{x∉µ⁻¹(α)} µ(x) ≥ 1 − α.
µₗ⁻¹(α) = inf µ⁻¹(α)    (8.103)
is called the left inverse membership function, and
µᵣ⁻¹(α) = sup µ⁻¹(α)    (8.104)
is called the right inverse membership function. It is clear that the left inverse membership function µₗ⁻¹(α) is increasing, and the right inverse membership function µᵣ⁻¹(α) is decreasing with respect to α.
Conversely, suppose an uncertain set ξ has a left inverse membership function µₗ⁻¹(α) and right inverse membership function µᵣ⁻¹(α). Then the membership function µ is determined by
µ(x) =
  0,  if x ≤ µₗ⁻¹(0)
  α,  if µₗ⁻¹(0) ≤ x ≤ µₗ⁻¹(1) and µₗ⁻¹(α) = x
  1,  if µₗ⁻¹(1) ≤ x ≤ µᵣ⁻¹(1)
  β,  if µᵣ⁻¹(1) ≤ x ≤ µᵣ⁻¹(0) and µᵣ⁻¹(β) = x
  0,  if x ≥ µᵣ⁻¹(0).    (8.105)
Note that the values of α and β may not be unique. In this case, we will take
the maximum values.
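For a trapezoidal membership function such as “young” in (8.69), the left and right inverse membership functions have simple closed forms; a sketch (the helper below is an illustration, not from the book):

```python
def trapezoid_inverses(a, b, c, d):
    """Left/right inverse membership functions of a trapezoidal membership
    function that rises on [a, b], equals 1 on [b, c], and falls on [c, d]."""
    left = lambda alpha: a + (b - a) * alpha    # increasing in alpha
    right = lambda alpha: d - (d - c) * alpha   # decreasing in alpha
    return left, right

left, right = trapezoid_inverses(15, 20, 35, 45)   # "young" per (8.69)
```

As the text notes, left is increasing and right is decreasing: here left runs from 15 up to 20 and right from 45 down to 35 as α goes from 0 to 1.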
8.3 Independence
M{(ξ1 ⊂ B1 ) ∩ (ξ2 ⊂ B2 )}
= M {(γ1 , γ2 ) | ξ1 (γ1 ) ⊂ B1 , ξ2 (γ2 ) ⊂ B2 }
= M {(γ1 | ξ1 (γ1 ) ⊂ B1 ) × (γ2 | ξ2 (γ2 ) ⊂ B2 )}
= M1 {γ1 | ξ1 (γ1 ) ⊂ B1 } ∧ M2 {γ2 | ξ2 (γ2 ) ⊂ B2 }
= M {ξ1 ⊂ B1 } ∧ M {ξ2 ⊂ B2 } .
That is,
M{⋂_{i=1}^n (ξᵢ* ⊂ Bᵢ)} = ⋀_{i=1}^n M{ξᵢ* ⊂ Bᵢ}    (8.114)
and
M{⋃_{i=1}^n (ξᵢ* ⊂ Bᵢ)} = ⋁_{i=1}^n M{ξᵢ* ⊂ Bᵢ}.    (8.115)
Remark 8.11: Note that (8.114) and (8.115) represent 2n+1 equations. For
example, when n = 2, they represent the 8 equations from (8.106) to (8.113).
Exercise 8.29: Show that a crisp set of real numbers (a special uncertain
set) is always independent of any uncertain set.
Theorem 8.19 (Liu [91]) Let ξ1 , ξ2 , · · · , ξn be uncertain sets, and let ξi∗ be
arbitrarily chosen uncertain sets from {ξi , ξic }, i = 1, 2, · · · , n, respectively.
Then ξ1 , ξ2 , · · · , ξn are independent if and only if ξ1∗ , ξ2∗ , · · · , ξn∗ are indepen-
dent.
Proof: Let ξi∗∗ be arbitrarily chosen uncertain sets from {ξi∗ , ξi∗c }, i =
1, 2, · · · , n, respectively. Then ξ1∗ , ξ2∗ , · · · , ξn∗ and ξ1∗∗ , ξ2∗∗ , · · · , ξn∗∗ represent
the same 2n combinations. This fact implies that (8.114) and (8.115) are
equivalent to
M{⋂_{i=1}^n (ξᵢ** ⊂ Bᵢ)} = ⋀_{i=1}^n M{ξᵢ** ⊂ Bᵢ},    (8.116)
M{⋃_{i=1}^n (ξᵢ** ⊂ Bᵢ)} = ⋁_{i=1}^n M{ξᵢ** ⊂ Bᵢ}.    (8.117)
Hence ξ1 , ξ2 , · · · , ξn are independent if and only if ξ1∗ , ξ2∗ , · · · , ξn∗ are indepen-
dent.
Exercise 8.32: Show that the following four statements are equivalent: (i) ξ1 and ξ2 are independent; (ii) ξ1^c and ξ2 are independent; (iii) ξ1 and ξ2^c are independent; and (iv) ξ1^c and ξ2^c are independent.
and
  M{⋃_{i=1}^n (Bi ⊂ ξi*)} = ⋁_{i=1}^n M{Bi ⊂ ξi*}.   (8.119)
Since Bi ⊂ ξi* if and only if ξi*^c ⊂ Bi^c for each i, we have
  M{⋂_{i=1}^n (Bi ⊂ ξi*)} = M{⋂_{i=1}^n (ξi*^c ⊂ Bi^c)},   (8.120)
  ⋀_{i=1}^n M{Bi ⊂ ξi*} = ⋀_{i=1}^n M{ξi*^c ⊂ Bi^c},   (8.121)
  M{⋃_{i=1}^n (Bi ⊂ ξi*)} = M{⋃_{i=1}^n (ξi*^c ⊂ Bi^c)},   (8.122)
  ⋁_{i=1}^n M{Bi ⊂ ξi*} = ⋁_{i=1}^n M{ξi*^c ⊂ Bi^c}.   (8.123)
It follows from (8.120), (8.121), (8.122) and (8.123) that (8.118) and (8.119)
are valid if and only if
  M{⋂_{i=1}^n (ξi*^c ⊂ Bi^c)} = ⋀_{i=1}^n M{ξi*^c ⊂ Bi^c},   (8.124)
  M{⋃_{i=1}^n (ξi*^c ⊂ Bi^c)} = ⋁_{i=1}^n M{ξi*^c ⊂ Bi^c}.   (8.125)
The above two equations are also equivalent to the independence of the un-
certain sets ξ1 , ξ2 , · · · , ξn . The theorem is thus proved.
Thus
  M{B ⊂ (ξ ∪ η)} ≥ inf_{x∈B} μ(x) ∨ ν(x).   (8.127)
Thus
  M{B ⊂ (ξ ∪ η)} ≤ inf_{x∈B} μ(x) ∨ ν(x).   (8.128)
The first measure inversion formula is verified. Next we prove the second
measure inversion formula. By the independence of ξ and η, we have
That is,
  M{(ξ ∪ η) ⊂ B} = 1 − sup_{x∈B^c} μ(x) ∨ ν(x).   (8.130)
[Figure: the membership function λ(x) = μ(x) ∨ ν(x) of the union ξ ∪ η, shown together with μ(x) and ν(x)]
and
  η(γ) =
    [0, 2], if γ = γ1
    [0, 1], if γ = γ2
is also an uncertain set with membership function
  ν(x) =
    1,   if 0 ≤ x ≤ 1
    0.5, if 1 < x ≤ 2
    0,   otherwise.
Note that ξ and η are not independent, and ξ ∪ η ≡ [0, 2] whose membership function is
  λ(x) =
    1, if 0 ≤ x ≤ 2
    0, otherwise.
Thus
  λ(x) ≠ μ(x) ∨ ν(x).   (8.131)
Therefore, the independence condition cannot be removed.
Exercise 8.34: Some people suggest λ(x) = µ(x) + ν(x) − µ(x) · ν(x) and
λ(x) = min{1, µ(x) + ν(x)} for the membership function of the union of
uncertain sets. Why is this idea wrong and harmful?
Exercise 8.35: Why is λ(x) = μ(x) ∨ ν(x) the only option for the membership function of the union of uncertain sets?
That is,
  M{B ⊂ (ξ ∩ η)} = inf_{x∈B} μ(x) ∧ ν(x).   (8.133)
The first measure inversion formula is verified. In order to prove the second
measure inversion formula, we write
Letting ε → 0, we get
  M{(ξ ∩ η) ⊂ B} ≤ 1 − sup_{x∈B^c} μ(x) ∧ ν(x).   (8.135)
[Figure: the membership function λ(x) = μ(x) ∧ ν(x) of the intersection ξ ∩ η, shown together with μ(x) and ν(x)]
and
  η(γ) =
    [0, 2], if γ = γ1
    [0, 1], if γ = γ2
Note that ξ and η are not independent, and ξ ∩ η ≡ [0, 1] whose membership function is
  λ(x) =
    1, if 0 ≤ x ≤ 1
    0, otherwise.
Thus
  λ(x) ≠ μ(x) ∧ ν(x).   (8.137)
Therefore, the independence condition cannot be removed.
Exercise 8.37: Some people suggest λ(x) = max{0, µ(x) + ν(x) − 1} and
λ(x) = µ(x)·ν(x) for the membership function of the intersection of uncertain
sets. Why is this idea wrong and harmful?
Exercise 8.38: Why is λ(x) = μ(x) ∧ ν(x) the only option for the membership function of the intersection of uncertain sets?
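Under independence, the union and intersection laws above reduce to the pointwise maximum and minimum of the membership functions. A minimal sketch; the two triangular shapes are illustrative assumptions:

```python
mu = lambda x: max(0.0, 1.0 - abs(x))          # triangular (-1, 0, 1)
nu = lambda x: max(0.0, 1.0 - abs(x - 1.0))    # triangular (0, 1, 2)

union        = lambda x: max(mu(x), nu(x))     # lambda(x) = mu(x) v nu(x)
intersection = lambda x: min(mu(x), nu(x))     # lambda(x) = mu(x) ^ nu(x)
```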
[Figure: the membership functions μ(x) of ξ and λ(x) of the union ξ ∪ ξ^c]
Why is Theorem 8.21 not applicable to the union of ξ and ξ^c? (ii) It is known that ξ ∩ ξ^c ≡ ∅ whose membership function is λ(x) ≡ 0, and
ξ = f (ξ1 , ξ2 , · · · , ξn ) (8.143)
Proof: For simplicity, we only prove the case n = 2. Let B be any Borel set
of real numbers, and write
  β = inf_{x∈B} λ(x).
Since B ⊂ λ^{-1}(β) = f(μ_1^{-1}(β), μ_2^{-1}(β)), by the independence of ξ1 and ξ2 we obtain
  M{B ⊂ ξ} ≥ M{(μ_1^{-1}(β) ⊂ ξ1) ∩ (μ_2^{-1}(β) ⊂ ξ2)}
           = M{μ_1^{-1}(β) ⊂ ξ1} ∧ M{μ_2^{-1}(β) ⊂ ξ2}
           ≥ β ∧ β = β.
Thus
  M{B ⊂ ξ} ≥ inf_{x∈B} λ(x).   (8.145)
On the other hand, for any given number ε > 0, we have B ⊄ λ^{-1}(β + ε). Since λ^{-1}(β + ε) = f(μ_1^{-1}(β + ε), μ_2^{-1}(β + ε)), we obtain
  M{B ⊄ ξ} ≥ M{(ξ1 ⊂ μ_1^{-1}(β + ε)) ∩ (ξ2 ⊂ μ_2^{-1}(β + ε))}
           = M{ξ1 ⊂ μ_1^{-1}(β + ε)} ∧ M{ξ2 ⊂ μ_2^{-1}(β + ε)}
           ≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε
and then
  M{B ⊂ ξ} = 1 − M{B ⊄ ξ} ≤ β + ε.
Letting ε → 0, we get
  M{B ⊂ ξ} ≤ β = inf_{x∈B} λ(x).   (8.146)
The first measure inversion formula is verified. Next we prove the second measure inversion formula. We write
  β = sup_{x∈B^c} λ(x).
Then for any given number ε > 0, we have λ^{-1}(β + ε) ⊂ B. Please note that λ^{-1}(β + ε) = f(μ_1^{-1}(β + ε), μ_2^{-1}(β + ε)). By the independence of ξ1 and ξ2, we obtain
  M{ξ ⊂ B} ≥ M{ξ ⊂ λ^{-1}(β + ε)} = M{ξ ⊂ f(μ_1^{-1}(β + ε), μ_2^{-1}(β + ε))}
           ≥ M{(ξ1 ⊂ μ_1^{-1}(β + ε)) ∩ (ξ2 ⊂ μ_2^{-1}(β + ε))}
           = M{ξ1 ⊂ μ_1^{-1}(β + ε)} ∧ M{ξ2 ⊂ μ_2^{-1}(β + ε)}
           ≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε.
Letting ε → 0, we get
  M{ξ ⊂ B} ≥ 1 − sup_{x∈B^c} λ(x).   (8.148)
On the other hand, for any given number ε > 0, we have λ^{-1}(β − ε) ⊄ B. Since λ^{-1}(β − ε) = f(μ_1^{-1}(β − ε), μ_2^{-1}(β − ε)), we obtain
  M{ξ ⊄ B} ≥ M{(μ_1^{-1}(β − ε) ⊂ ξ1) ∩ (μ_2^{-1}(β − ε) ⊂ ξ2)}
           = M{μ_1^{-1}(β − ε) ⊂ ξ1} ∧ M{μ_2^{-1}(β − ε) ⊂ ξ2}
           ≥ (β − ε) ∧ (β − ε) = β − ε
and then
  M{ξ ⊂ B} = 1 − M{ξ ⊄ B} ≤ 1 − β + ε.
Letting ε → 0, we get
  M{ξ ⊂ B} ≤ 1 − β = 1 − sup_{x∈B^c} λ(x).
The second measure inversion formula is verified.
It follows from the operational law that the sum ξ + η has an inverse membership function, and
  ξ + η = (a1 + b1, a2 + b2, a3 + b3).   (8.154)
Similarly, the difference is
  ξ − η = (a1 − b3, a2 − b2, a3 − b1).   (8.156)
That is, the product k · ξ is a triangular uncertain set (ka1 , ka2 , ka3 ). When
k < 0, the product k · ξ has an inverse membership function,
That is, the product k · ξ is a triangular uncertain set (ka3 , ka2 , ka1 ). In
summary, we have
  k · ξ =
    (ka1, ka2, ka3), if k ≥ 0
    (ka3, ka2, ka1), if k < 0.   (8.159)
ξ + η = (a1 + b1 , a2 + b2 , a3 + b3 , a4 + b4 ), (8.160)
ξ − η = (a1 − b4 , a2 − b3 , a3 − b2 , a4 − b1 ), (8.161)
  k · ξ =
    (ka1, ka2, ka3, ka4), if k ≥ 0
    (ka4, ka3, ka2, ka1), if k < 0.   (8.162)
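The triangular and trapezoidal rules (8.154)–(8.162) can be sketched in a few lines; the helper names are hypothetical, and the same code serves both 3-tuples and 4-tuples:

```python
def tri_add(x, y):
    # (a1,a2,a3) + (b1,b2,b3) = (a1+b1, a2+b2, a3+b3), as in (8.154)
    return tuple(a + b for a, b in zip(x, y))

def tri_sub(x, y):
    # (a1,a2,a3) - (b1,b2,b3) = (a1-b3, a2-b2, a3-b1), as in (8.156)
    return tuple(a - b for a, b in zip(x, reversed(y)))

def tri_scale(k, x):
    # k * (a1,a2,a3): the order of the endpoints flips when k < 0, as in (8.159)
    scaled = tuple(k * a for a in x)
    return scaled if k >= 0 else scaled[::-1]
```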
  μ_1^{-1}(α) = [α − 1, 1 − α],   (8.164)
and
  ξ2(γ) = [γ − 1, 1 − γ]   (8.165)
is also a triangular uncertain set (−1, 0, 1) with inverse membership function
  μ_2^{-1}(α) = [α − 1, 1 − α].   (8.166)
Note that ξ1 and ξ2 are not independent, and ξ1 + ξ2 ≡ [−1, 1] whose inverse membership function is
  λ^{-1}(α) = [−1, 1].   (8.167)
Thus
  λ^{-1}(α) ≠ μ_1^{-1}(α) + μ_2^{-1}(α).   (8.168)
Therefore, the independence condition cannot be removed.
  ξ = f(ξ1, ξ2, ..., ξn)   (8.169)
  λ_l^{-1}(α) = f(μ_{1l}^{-1}(α), ..., μ_{ml}^{-1}(α), μ_{m+1,r}^{-1}(α), ..., μ_{nr}^{-1}(α)),   (8.170)
  λ_r^{-1}(α) = f(μ_{1r}^{-1}(α), ..., μ_{mr}^{-1}(α), μ_{m+1,l}^{-1}(α), ..., μ_{nl}^{-1}(α)),   (8.171)
where λ_l^{-1}, μ_{1l}^{-1}, μ_{2l}^{-1}, ..., μ_{nl}^{-1} are left inverse membership functions, and λ_r^{-1}, μ_{1r}^{-1}, μ_{2r}^{-1}, ..., μ_{nr}^{-1} are right inverse membership functions of ξ, ξ1, ξ2, ..., ξn, respectively.
is also an interval. Thus ξ has a regular membership function, and its left and
right inverse membership functions are determined by (8.170) and (8.171),
respectively.
Exercise 8.43: Let ξ and η be independent uncertain sets with left inverse membership functions μ_l^{-1} and ν_l^{-1} and right inverse membership functions μ_r^{-1} and ν_r^{-1}, respectively. Show that the sum ξ + η has left and right inverse membership functions,
  λ_l^{-1}(α) = μ_l^{-1}(α) + ν_l^{-1}(α),   (8.172)
  λ_r^{-1}(α) = μ_r^{-1}(α) + ν_r^{-1}(α).   (8.173)
Exercise 8.44: Let ξ and η be independent uncertain sets with left inverse membership functions μ_l^{-1} and ν_l^{-1} and right inverse membership functions μ_r^{-1} and ν_r^{-1}, respectively. Show that the difference ξ − η has left and right inverse membership functions,
  λ_l^{-1}(α) = μ_l^{-1}(α) − ν_r^{-1}(α),   (8.174)
  λ_r^{-1}(α) = μ_r^{-1}(α) − ν_l^{-1}(α).   (8.175)
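A small sketch of the operational law (8.170)–(8.171) for independent uncertain sets. The function name `inverse_membership_op` and the choice of two triangular arguments are illustrative assumptions:

```python
def inverse_membership_op(f, incr, decr):
    """Left/right inverse membership functions of f(xi_1, ..., xi_n),
    with f increasing in the `incr` arguments and decreasing in the
    `decr` arguments, as in (8.170)-(8.171).  Each entry of incr/decr
    is a (left_inverse, right_inverse) pair."""
    lam_l = lambda a: f(*[l(a) for l, r in incr], *[r(a) for l, r in decr])
    lam_r = lambda a: f(*[r(a) for l, r in incr], *[l(a) for l, r in decr])
    return lam_l, lam_r

# difference xi - eta of two triangular sets (-1, 0, 1): f(x, y) = x - y
l = lambda a: a - 1.0   # left inverse membership of (-1, 0, 1)
r = lambda a: 1.0 - a   # right inverse membership of (-1, 0, 1)
lam_l, lam_r = inverse_membership_op(lambda x, y: x - y, [(l, r)], [(l, r)])
```

This reproduces Exercise 8.44: the difference has λ_l^{-1}(α) = μ_l^{-1}(α) − ν_r^{-1}(α) and λ_r^{-1}(α) = μ_r^{-1}(α) − ν_l^{-1}(α).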
Exercise 8.45: Let ξ and η be independent and positive uncertain sets with left inverse membership functions μ_l^{-1} and ν_l^{-1} and right inverse membership functions μ_r^{-1} and ν_r^{-1}, respectively. Show that the quotient
  ξ / (ξ + η)   (8.176)
has left and right inverse membership functions,
  λ_l^{-1}(α) = μ_l^{-1}(α) / ( μ_l^{-1}(α) + ν_r^{-1}(α) ),   (8.177)
  λ_r^{-1}(α) = μ_r^{-1}(α) / ( μ_r^{-1}(α) + ν_l^{-1}(α) ).   (8.178)
Proof: Let λ be the membership function of ξ. For any given real number x, write λ(x) = β. By using Theorem 8.24, we get
  λ^{-1}(β) = f(μ_1^{-1}(β), μ_2^{-1}(β), ..., μ_n^{-1}(β)).
Since x ∈ λ^{-1}(β), there exist real numbers x_i ∈ μ_i^{-1}(β), i = 1, 2, ..., n such that f(x1, x2, ..., xn) = x. Noting that μ_i(x_i) ≥ β for i = 1, 2, ..., n, we have
  λ(x) = β ≤ min_{1≤i≤n} μ_i(x_i)
and then
  λ(x) ≤ sup_{f(x1,x2,...,xn)=x} min_{1≤i≤n} μ_i(x_i).   (8.181)
On the other hand, assume x1, x2, ..., xn are any given real numbers with f(x1, x2, ..., xn) = x. Write
  β = min_{1≤i≤n} μ_i(x_i).
Then
  x = f(x1, x2, ..., xn) ∈ f(μ_1^{-1}(β), μ_2^{-1}(β), ..., μ_n^{-1}(β)) = λ^{-1}(β).
Hence
  λ(x) ≥ β = min_{1≤i≤n} μ_i(x_i)
and then
  λ(x) ≥ sup_{f(x1,x2,...,xn)=x} min_{1≤i≤n} μ_i(x_i).   (8.182)
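The sup–min identity (8.181)–(8.182) can be checked numerically; the brute-force grid search below only approximates the supremum, and all names are hypothetical:

```python
def sup_min_sum(mu1, mu2, x, grid):
    """lambda(x) = sup over x1 + x2 = x of mu1(x1) ^ mu2(x2),
    approximated by searching x1 over a finite grid."""
    return max(min(mu1(t), mu2(x - t)) for t in grid)

tri = lambda x: max(0.0, 1.0 - abs(x))          # triangular (-1, 0, 1)
grid = [i / 100.0 for i in range(-100, 101)]    # search grid for x1
```

For two independent copies of (−1, 0, 1), the sum is the triangular set (−2, 0, 2), so the sup–min value at x = 0.5 should be 0.75.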
and
  ξ2(γ) = [γ − 1, 1 − γ]   (8.185)
is also a triangular uncertain set (−1, 0, 1) with membership function
  μ2(x) =
    1 − |x|, if −1 ≤ x ≤ 1
    0,       otherwise.   (8.186)
Note that ξ1 and ξ2 are not independent, and ξ1 + ξ2 ≡ [−1, 1] whose membership function is
  λ(x) =
    1, if −1 ≤ x ≤ 1
    0, otherwise.   (8.187)
Thus
  λ(x) ≠ sup_{x1+x2=x} μ1(x1) ∧ μ2(x2).   (8.188)
Especially, for any point x, Liu [88] also gave a formula for calculating the uncertain measure of the containment relation, M{x ∈ ξ} = μ(x). A general formula was derived by Yao [180] for calculating the uncertain measure of the inclusion relation between uncertain sets.
Theorem 8.27 (Yao [180]) Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. Then
  M{ξ ⊂ η} = inf_{x∈ℝ} (1 − μ(x)) ∨ ν(x).   (8.194)
Proof: Note that ξ ⊂ η if and only if ξ ∩ η^c ⊂ ∅, so that
  M{ξ ⊂ η} = M{ξ ∩ η^c ⊂ ∅}.
Example 8.28: Consider two special uncertain sets ξ = [1, 2] and η = [0, 3] that are essentially crisp intervals whose membership functions are
  μ(x) =
    1, if 1 ≤ x ≤ 2
    0, otherwise,
  ν(x) =
    1, if 0 ≤ x ≤ 3
    0, otherwise.
Example 8.29: Consider two special uncertain sets ξ = [0, 2] and η = [1, 3] that are essentially crisp intervals whose membership functions are
  μ(x) =
    1, if 0 ≤ x ≤ 2
    0, otherwise,
  ν(x) =
    1, if 1 ≤ x ≤ 3
    0, otherwise.
  η(γ) =
    [0, 3], if γ = γ1 or γ3
    [1, 2], if γ = γ2 or γ4.   (8.197)
We may verify that ξ and η are independent, and share a common membership function,
  μ(x) =
    1,   if 1 ≤ x ≤ 2
    0.8, if 0 ≤ x < 1 or 2 < x ≤ 3
    0,   otherwise.   (8.198)
Note that
  M{ξ ⊂ η} = M{γ1, γ3, γ4} = 0.8.   (8.199)
By using (8.194), we also obtain
(ii) Is it possible to re-do (i) when c is below 0.5? (iii) Is it stupid to think
that ξ ⊂ η if and only if µ(x) ≤ ν(x) for all x? (iv) Is it stupid to think that
ξ = η if and only if µ(x) = ν(x) for all x? (Hint: Use (8.195), (8.196) and
(8.197) as a reference.)
and
  η(γ) = [−γ, γ]   (8.205)
Note that ξ and η are not independent (in fact, they are the same one), and M{ξ ⊂ η} = 1. However, by using (8.194), we obtain
  M{ξ ⊂ η} = inf_{x∈ℝ} (1 − μ(x)) ∨ ν(x) = 0.5 ≠ 1.   (8.207)
  M{ξ ≼ x} = (1/2)( M{ξ ≤ x} + 1 − M{ξ > x} ).   (8.210)
Example 8.32: Let [a, b] be a crisp interval and assume a > 0 for simplicity. Then
  ξ(γ) ≡ [a, b], ∀γ ∈ Γ
and
  M{ξ ≼ x} ≡ 0, ∀x ≤ 0.
Thus
  E[ξ] = ∫_0^a 1 dx + ∫_a^b 0.5 dx = (a + b)/2.
  M{ξ ≼ x} ≡ 0, ∀x ≤ 0.
Thus
  E[ξ] = ∫_0^1 1 dx + ∫_1^2 0.7 dx + ∫_2^3 0.3 dx + ∫_3^4 0.1 dx = 2.1.
Theorem 8.28 (Liu [84]) Let ξ be a nonempty uncertain set with membership function μ. Then for any real number x, we have
  M{ξ ≽ x} = (1/2)( sup_{y≥x} μ(y) + 1 − sup_{y<x} μ(y) ),   (8.211)
  M{ξ ≼ x} = (1/2)( sup_{y≤x} μ(y) + 1 − sup_{y>x} μ(y) ).   (8.212)
Proof: Since the uncertain set ξ has a membership function μ, the second measure inversion formula tells us that (8.211) follows from (8.209) immediately. We may also prove (8.212) similarly.
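Formula (8.211) can be evaluated directly on a discretized membership function; the grid and the helper name `m_succeq` below are illustrative assumptions:

```python
def m_succeq(mu_grid, xs, x):
    """M{xi >= x} in the sense of (8.211), computed on a grid where
    mu_grid[i] = mu(xs[i]) and xs is sorted."""
    sup_ge = max([m for m, t in zip(mu_grid, xs) if t >= x], default=0.0)
    sup_lt = max([m for m, t in zip(mu_grid, xs) if t < x], default=0.0)
    return 0.5 * (sup_ge + 1.0 - sup_lt)

# illustrative triangular membership (-1, 0, 1) on a uniform grid
xs = [i / 100.0 for i in range(-200, 201)]
mu_grid = [max(0.0, 1.0 - abs(t)) for t in xs]
```

For the triangular set (−1, 0, 1), this gives 1 far to the left of the support, 0 far to the right, and 0.25 at x = 0.5.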
Theorem 8.29 (Liu [84]) Let ξ be a nonempty uncertain set with membership function μ. Then
  E[ξ] = x0 + (1/2) ∫_{x0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x0} sup_{y≤x} μ(y) dx   (8.213)
where x0 is a point such that μ(x0) = 1.
Proof: Since μ achieves 1 at x0, it follows from Theorem 8.28 that for almost all x, we have
  M{ξ ≽ x} =
    1 − (1/2) sup_{y<x} μ(y), if x ≤ x0
    (1/2) sup_{y≥x} μ(y),     if x > x0   (8.214)
and
  M{ξ ≼ x} =
    (1/2) sup_{y≤x} μ(y),     if x < x0
    1 − (1/2) sup_{y>x} μ(y), if x ≥ x0.   (8.215)
If x0 ≥ 0, then
  E[ξ] = ∫_0^{+∞} M{ξ ≽ x} dx − ∫_{−∞}^0 M{ξ ≼ x} dx
       = ∫_0^{x0} ( 1 − (1/2) sup_{y≤x} μ(y) ) dx + (1/2) ∫_{x0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^0 sup_{y≤x} μ(y) dx
       = x0 + (1/2) ∫_{x0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x0} sup_{y≤x} μ(y) dx.
If x0 < 0, then
  E[ξ] = ∫_0^{+∞} M{ξ ≽ x} dx − ∫_{−∞}^0 M{ξ ≼ x} dx
       = (1/2) ∫_0^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x0} sup_{y≤x} μ(y) dx − ∫_{x0}^0 ( 1 − (1/2) sup_{y≥x} μ(y) ) dx
       = x0 + (1/2) ∫_{x0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x0} sup_{y≤x} μ(y) dx.
Theorem 8.30 (Liu [84]) Let ξ be an uncertain set with regular membership function μ. Then
  E[ξ] = x0 + (1/2) ∫_{x0}^{+∞} μ(x) dx − (1/2) ∫_{−∞}^{x0} μ(x) dx   (8.216)
where x0 is the point such that μ(x0) = 1.
Exercise 8.50: Show that the triangular uncertain set ξ = (a, b, c) has an expected value
  E[ξ] = (a + 2b + c)/4.   (8.219)
Exercise 8.51: Show that the trapezoidal uncertain set ξ = (a, b, c, d) has an expected value
  E[ξ] = (a + b + c + d)/4.   (8.220)
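The α-cut formula (8.221) below gives a direct numerical check of Exercises 8.50 and 8.51. The midpoint-rule integrator and the sample trapezoidal set are illustrative assumptions:

```python
def expected_value(inv_inf, inv_sup, n=10000):
    """E[xi] = (1/2) * integral_0^1 (inf mu^{-1}(a) + sup mu^{-1}(a)) da,
    as in (8.221), computed by the midpoint rule."""
    total = sum(inv_inf((i + 0.5) / n) + inv_sup((i + 0.5) / n) for i in range(n))
    return 0.5 * total / n

# illustrative trapezoidal set (1, 2, 3, 5); its alpha-cut is [1 + a, 5 - 2a],
# so (8.220) predicts E = (1 + 2 + 3 + 5)/4 = 2.75
e = expected_value(lambda a: 1.0 + a, lambda a: 5.0 - 2.0 * a)
```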
Theorem 8.31 (Liu [88]) Let ξ be a nonempty uncertain set with membership function μ. Then
  E[ξ] = (1/2) ∫_0^1 ( inf μ^{-1}(α) + sup μ^{-1}(α) ) dα   (8.221)
where inf μ^{-1}(α) and sup μ^{-1}(α) are the infimum and supremum of the α-cut, respectively.
Proof: Since ξ is a nonempty uncertain set and has a finite expected value, we may assume that there exists a point x0 such that μ(x0) = 1 (perhaps after a small perturbation). It is clear that the two integrals
  ∫_{x0}^{+∞} sup_{y≥x} μ(y) dx   and   ∫_0^1 ( sup μ^{-1}(α) − x0 ) dα
are identical. By using Theorem 8.29, we get
  E[ξ] = x0 + (1/2) ∫_{x0}^{+∞} sup_{y≥x} μ(y) dx − (1/2) ∫_{−∞}^{x0} sup_{y≤x} μ(y) dx
       = x0 + (1/2) ∫_0^1 ( sup μ^{-1}(α) − x0 ) dα − (1/2) ∫_0^1 ( x0 − inf μ^{-1}(α) ) dα
       = (1/2) ∫_0^1 ( inf μ^{-1}(α) + sup μ^{-1}(α) ) dα.
The theorem is thus verified.
Theorem 8.32 (Liu [88]) Let ξ1, ξ2, ..., ξn be independent uncertain sets with regular membership functions μ1, μ2, ..., μn, respectively. If the function f(x1, x2, ..., xn) is strictly increasing with respect to x1, x2, ..., xm and strictly decreasing with respect to xm+1, xm+2, ..., xn, then
  ξ = f(ξ1, ξ2, ..., ξn)   (8.222)
has an expected value
  E[ξ] = (1/2) ∫_0^1 ( μ_l^{-1}(α) + μ_r^{-1}(α) ) dα   (8.223)
where μ_l^{-1}(α) and μ_r^{-1}(α) are determined by
  μ_l^{-1}(α) = f(μ_{1l}^{-1}(α), ..., μ_{ml}^{-1}(α), μ_{m+1,r}^{-1}(α), ..., μ_{nr}^{-1}(α)),   (8.224)
  μ_r^{-1}(α) = f(μ_{1r}^{-1}(α), ..., μ_{mr}^{-1}(α), μ_{m+1,l}^{-1}(α), ..., μ_{nl}^{-1}(α)).   (8.225)
Exercise 8.53: Let ξ and η be independent and positive uncertain sets with regular membership functions μ and ν, respectively. Show that
  E[ξ/η] = (1/2) ∫_0^1 ( μ_l^{-1}(α)/ν_r^{-1}(α) + μ_r^{-1}(α)/ν_l^{-1}(α) ) dα.   (8.227)
Exercise 8.54: Let ξ and η be independent and positive uncertain sets with regular membership functions μ and ν, respectively. Show that
  E[ξ/(ξ + η)] = (1/2) ∫_0^1 ( μ_l^{-1}(α)/(μ_l^{-1}(α) + ν_r^{-1}(α)) + μ_r^{-1}(α)/(μ_r^{-1}(α) + ν_l^{-1}(α)) ) dα.   (8.228)
  E[η] = (1/2) ∫_0^1 ( inf ν^{-1}(α) + sup ν^{-1}(α) ) dα.
Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.
It is easy to verify that E[ξ] = 2.2, E[η] = 2.5 and E[ξ + η] = 4.75. Thus we
have
E[ξ + η] > E[ξ] + E[η].
If the uncertain sets are defined by
  ξ(γ) =
    [1, 4], if γ = γ1
    [1, 3], if γ = γ2
    [1, 2], if γ = γ3,
  η(γ) =
    [1, 4], if γ = γ1
    [1, 6], if γ = γ2
    [1, 2], if γ = γ3,
then
  (ξ + η)(γ) =
    [2, 8], if γ = γ1
    [2, 9], if γ = γ2
    [2, 4], if γ = γ3.
It is easy to verify that E[ξ] = 2.2, E[η] = 2.6 and E[ξ + η] = 4.75. Thus we
have
E[ξ + η] < E[ξ] + E[η].
Therefore, the independence condition cannot be removed.
8.8 Variance
The variance of an uncertain set provides a degree of the spread of the membership function around its expected value.
Definition 8.13 (Liu [85]) Let ξ be an uncertain set with finite expected value e. Then the variance of ξ is defined by
  V[ξ] = E[(ξ − e)^2].
This definition says that the variance is just the expected value of (ξ − e)^2. Since (ξ − e)^2 is a nonnegative uncertain set, we also have
  V[ξ] = ∫_0^{+∞} M{(ξ − e)^2 ≽ x} dx.   (8.231)
Here M{(ξ − e)^2 ≽ x} represents the uncertain measure that (ξ − e)^2 falls into [x, +∞). What is the appropriate value of M{(ξ − e)^2 ≽ x}? Intuitively, it is too conservative if we take the value M{(ξ − e)^2 ≥ x}, and it is too adventurous if we take the value 1 − M{(ξ − e)^2 < x}. Thus we assign M{(ξ − e)^2 ≽ x} the middle value between them. That is,
  M{(ξ − e)^2 ≽ x} = (1/2)( M{(ξ − e)^2 ≥ x} + 1 − M{(ξ − e)^2 < x} ).   (8.232)
Theorem 8.34 If ξ is an uncertain set with finite expected value, and a and b are real numbers, then
  V[aξ + b] = a^2 V[ξ].   (8.233)
Theorem 8.35 Let ξ be an uncertain set with expected value e. Then V[ξ] = 0 if and only if ξ = {e} almost surely.
Proof: We first assume V[ξ] = 0. It follows from the equation (8.231) that
  ∫_0^{+∞} M{(ξ − e)^2 ≽ x} dx = 0
which implies M{(ξ − e)^2 ≽ x} = 0 for any x > 0. Hence M{ξ = {e}} = 1. Conversely, assume M{ξ = {e}} = 1. Then we have M{(ξ − e)^2 ≽ x} = 0 for any x > 0. Thus
  V[ξ] = ∫_0^{+∞} M{(ξ − e)^2 ≽ x} dx = 0.
  M{(ξ − e)^2 < x} = 1 − sup_{(y−e)^2 ≥ x} μ(y).
8.9 Distance
Definition 8.14 (Liu [85]) The distance between nonempty uncertain sets ξ and η is defined as
  d(ξ, η) = E[|ξ − η|].   (8.236)
That is, the distance between ξ and η is just the expected value of |ξ − η|. Since |ξ − η| is a nonnegative uncertain set, we have
  d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≽ x} dx.   (8.237)
Theorem 8.39 (Liu [95]) Let ξ and η be nonempty uncertain sets. Then the distance between ξ and η is
  d(ξ, η) = (1/2) ∫_0^{+∞} ( sup_{|y|≥x} λ(y) + 1 − sup_{|y|<x} λ(y) ) dx   (8.240)
where λ is the membership function of ξ − η.
8.10 Entropy
This section defines an entropy as the degree of difficulty of predicting the realization of an uncertain set.
Definition 8.15 (Liu [85]) Suppose that ξ is an uncertain set with membership function μ. Then its entropy is defined by
  H[ξ] = ∫_{−∞}^{+∞} S(μ(x)) dx   (8.241)
where S(t) = −t ln t − (1 − t) ln(1 − t).
Remark 8.13: Note that the entropy (8.241) has the same form as de Luca and Termini's entropy for fuzzy set [24].
and its entropy is
  H[ξ] = ∫_{−∞}^{+∞} S(μ(x)) dx = ∫_{−∞}^{+∞} 0 dx = 0.
Exercise 8.55: Let ξ = (a, b, c) be a triangular uncertain set. Show that its entropy is
  H[ξ] = (c − a)/2.   (8.243)
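Exercise 8.55 can be verified numerically from Definition 8.15. The midpoint-rule integrator below is a sketch under the assumption S(t) = −t ln t − (1 − t) ln(1 − t):

```python
import math

def entropy(mu, lo, hi, n=20000):
    """H[xi] = integral of S(mu(x)) dx over [lo, hi], as in (8.241),
    with S(t) = -t ln t - (1-t) ln(1-t), by the midpoint rule."""
    def S(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        return -t * math.log(t) - (1.0 - t) * math.log(1.0 - t)
    h = (hi - lo) / n
    return sum(S(mu(lo + (i + 0.5) * h)) for i in range(n)) * h

# triangular set (a, b, c) = (-1, 0, 1); (8.243) predicts H = (c - a)/2 = 1
H = entropy(lambda x: max(0.0, 1.0 - abs(x)), -1.0, 1.0)
```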
Proof: Without loss of generality, assume the uncertain sets ξ and η have regular membership functions μ and ν, respectively.
Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the left and right inverse membership functions of aξ are
  λ_l^{-1}(α) = aμ_l^{-1}(α),   λ_r^{-1}(α) = aμ_r^{-1}(α),
and then H[aξ] = aH[ξ]. If a < 0, then the left and right inverse membership functions of aξ are
  λ_l^{-1}(α) = aμ_r^{-1}(α),   λ_r^{-1}(α) = aμ_l^{-1}(α),
and
  H[aξ] = ∫_0^1 ( aμ_r^{-1}(α) − aμ_l^{-1}(α) ) ln(α/(1 − α)) dα = (−a)H[ξ] = |a|H[ξ].
Thus we always have H[aξ] = |a|H[ξ].
Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the left and right inverse membership functions of ξ + η are
  λ_l^{-1}(α) = μ_l^{-1}(α) + ν_l^{-1}(α),   λ_r^{-1}(α) = μ_r^{-1}(α) + ν_r^{-1}(α).
Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that
  H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].
Exercise 8.57: Let ξ be an uncertain set, and let A be a crisp set. Show
that
H[ξ + A] = H[ξ]. (8.249)
That is, the entropy is invariant under arbitrary translations.
  H[ξ] = 1,   (8.251)
and
  η(γ) = [γ − 1, 1 − γ]   (8.252)
is also a triangular uncertain set (−1, 0, 1) with entropy
  H[η] = 1.   (8.253)
Note that ξ and η are not independent, and ξ + η ≡ [−1, 1] whose entropy is
  H[ξ + η] = 0.   (8.254)
Thus
  H[ξ + η] ≠ H[ξ] + H[η].   (8.255)
Therefore, the independence condition cannot be removed.
  M{ξ ⊂ B | A} =
    M{(ξ ⊂ B) ∩ A} / M{A},       if M{(ξ ⊂ B) ∩ A} / M{A} < 0.5
    1 − M{(ξ ⊄ B) ∩ A} / M{A},   if M{(ξ ⊄ B) ∩ A} / M{A} < 0.5
    0.5,                         otherwise.
Definition 8.16 (Liu [95]) Let ξ be an uncertain set, and let A be an event with M{A} > 0. Then the conditional uncertain set ξ given A is said to have a membership function μ(x|A) if for any Borel set B of real numbers, we have
  M{B ⊂ ξ | A} = inf_{x∈B} μ(x|A),   (8.256)
  M{ξ ⊂ B | A} = 1 − sup_{x∈B^c} μ(x|A).   (8.257)
Example 8.37: The total order condition in Theorem 8.45 cannot be removed. For example, take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3, γ4} with power set and
  M{Λ} =
    0,   if Λ = ∅
    1,   if Λ = Γ
    0.5, otherwise.   (8.259)
Then
  ξ(γ) =
    [1, 4], if γ = γ1
    [1, 3], if γ = γ2
    [2, 4], if γ = γ3
    [2, 3], if γ = γ4   (8.260)
is a non-totally ordered uncertain set on a continuous uncertainty space, but has a membership function
  μ(x) =
    1,   if 2 ≤ x ≤ 3
    0.5, if 1 ≤ x < 2 or 3 < x ≤ 4
    0,   otherwise.   (8.261)
That is, the second measure inversion formula is not valid and then the conditional membership function does not exist. Thus the total order condition cannot be removed.
Then
  ξ(γ) = (−γ, γ), ∀γ ∈ [0, 1]   (8.266)
is a totally ordered uncertain set on a discontinuous uncertainty space, but has a membership function
  μ(x) =
    0.5, if −1 < x < 1
    0,   otherwise.   (8.267)
That is, the first measure inversion formula is not valid and then the conditional membership function does not exist. Thus the continuity condition cannot be removed.
Theorem 8.46 (Yao [186]) Let ξ and η be independent uncertain sets with membership functions μ and ν, respectively. Then for any real number a, the conditional uncertain set η given a ∈ ξ has a membership function
  ν(y | a ∈ ξ) =
    ν(y) / μ(a),               if ν(y) < μ(a)/2
    (ν(y) + μ(a) − 1) / μ(a),  if ν(y) > 1 − μ(a)/2
    0.5,                       otherwise.   (8.271)
Proof: In order to prove that ν(y | a ∈ ξ) is the membership function of the conditional uncertain set η given a ∈ ξ, we must verify the two measure inversion formulas,
  M{B ⊂ η | a ∈ ξ} = inf_{y∈B} ν(y | a ∈ ξ),   (8.272)
  M{η ⊂ B | a ∈ ξ} = 1 − sup_{y∈B^c} ν(y | a ∈ ξ).   (8.273)
First, for any Borel set B of real numbers, by using the definition of conditional uncertainty and the independence of ξ and η, we have
  M{B ⊂ η | a ∈ ξ} =
    M{B ⊂ η} / M{a ∈ ξ},      if M{B ⊂ η} / M{a ∈ ξ} < 0.5
    1 − M{B ⊄ η} / M{a ∈ ξ},  if M{B ⊄ η} / M{a ∈ ξ} < 0.5
    0.5,                      otherwise.
Since
  M{B ⊂ η} = inf_{y∈B} ν(y),   M{B ⊄ η} = 1 − inf_{y∈B} ν(y),   M{a ∈ ξ} = μ(a),
we get
  M{B ⊂ η | a ∈ ξ} =
    inf_{y∈B} ν(y) / μ(a),               if inf_{y∈B} ν(y) < μ(a)/2
    (inf_{y∈B} ν(y) + μ(a) − 1) / μ(a),  if inf_{y∈B} ν(y) > 1 − μ(a)/2
    0.5,                                 otherwise.
That is,
  M{B ⊂ η | a ∈ ξ} = inf_{y∈B} ν(y | a ∈ ξ).
The first measure inversion formula is verified. Next, by using the definition of conditional uncertainty and the independence of ξ and η, we have
  M{η ⊂ B | a ∈ ξ} =
    M{η ⊂ B} / M{a ∈ ξ},      if M{η ⊂ B} / M{a ∈ ξ} < 0.5
    1 − M{η ⊄ B} / M{a ∈ ξ},  if M{η ⊄ B} / M{a ∈ ξ} < 0.5
    0.5,                      otherwise.
Since
  M{η ⊂ B} = 1 − sup_{y∈B^c} ν(y),   M{η ⊄ B} = sup_{y∈B^c} ν(y),   M{a ∈ ξ} = μ(a),
we get
  M{η ⊂ B | a ∈ ξ} =
    (1 − sup_{y∈B^c} ν(y)) / μ(a),     if sup_{y∈B^c} ν(y) > 1 − μ(a)/2
    (μ(a) − sup_{y∈B^c} ν(y)) / μ(a),  if sup_{y∈B^c} ν(y) < μ(a)/2
    0.5,                               otherwise.
That is,
  M{η ⊂ B | a ∈ ξ} = 1 − sup_{y∈B^c} ν(y | a ∈ ξ).
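Formula (8.271) is easy to evaluate pointwise; a minimal sketch, with a hypothetical function name:

```python
def conditional_membership(nu_y, mu_a):
    """nu(y | a in xi) from (8.271); mu_a = mu(a) must be positive."""
    if nu_y < mu_a / 2.0:
        return nu_y / mu_a
    if nu_y > 1.0 - mu_a / 2.0:
        return (nu_y + mu_a - 1.0) / mu_a
    return 0.5
```

Conditioning on a ∈ ξ with μ(a) < 1 stretches low membership values up and high ones down toward the 0.5 plateau.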
  ν*(y) =
    ν(y) / min_{1≤i≤m} μi(ai),                              if ν(y) < (1/2) min_{1≤i≤m} μi(ai)
    (ν(y) + min_{1≤i≤m} μi(ai) − 1) / min_{1≤i≤m} μi(ai),   if ν(y) > 1 − (1/2) min_{1≤i≤m} μi(ai)
    0.5,                                                    otherwise.
Uncertain Logic
A = {21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40} (9.3)
whose elements are ages in years. When we talk about “those sportsmen
are tall”, we should know the individual feature data of all sportsmen, for
example,
  A = {175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196}   (9.4)
whose elements are heights in centimeters.
whose elements are ages and heights in years and centimeters, respectively.
Example 9.3: The quantifier “there does not exist one” on the universe A
is a special uncertain quantifier
Q ≡ {0} (9.12)
Q ≡ {m, m + 1, · · · , n} (9.16)
Q ≡ {0, 1, 2, · · · , m} (9.18)
[Figure: membership function λ(x) of an increasing uncertain quantifier, rising from 0 at n−5 to 1 at n−2]
[Figure: membership function λ(x) of a decreasing uncertain quantifier, falling between 2 and 5]
[Figure: membership function λ(x) of the uncertain quantifier "about 10", supported on [7, 13] with plateau [9, 11]]
[Figure: membership function λ(x) of the uncertain quantifier "about 70%", supported on [60%, 80%] with plateau [65%, 75%]]
The uncertain quantifiers "almost all" and "almost none" are monotone, but "about 10" and "about 70%" are not monotone. Note that both increasing uncertain quantifiers and decreasing uncertain quantifiers are monotone. In addition, any monotone uncertain quantifier is unimodal.
Negated Quantifier
What is the negation of an uncertain quantifier? The following definition
gives a formal answer.
Definition 9.4 (Liu [85]) Let Q be an uncertain quantifier. Then the negated
quantifier ¬Q is the complement of Q in the sense of uncertain set, i.e.,
¬Q = Qc . (9.24)
Example 9.12: Let ∀ = {n} be the universal quantifier. Then its negated
quantifier
¬∀ ≡ {0, 1, 2, · · · , n − 1}. (9.25)
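Since ¬Q = Q^c in the sense of uncertain set, the membership function of the negated quantifier is 1 − λ(x). A minimal sketch, where the universe size n = 100 and the ramp of "almost all" between n − 5 and n − 2 are illustrative assumptions:

```python
n = 100  # hypothetical size of the universe A
# assumed shape for "almost all": ramps from 0 at n-5 up to 1 at n-2
lam = lambda x: min(1.0, max(0.0, (x - (n - 5)) / 3.0))
neg_lam = lambda x: 1.0 - lam(x)   # negated quantifier, per (9.24)
```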
[Figure: membership function λ(x) of a quantifier and ¬λ(x) of its negated quantifier, with ticks at n−5 and n−2]
Dual Quantifier
Definition 9.5 (Liu [85]) Let Q be an uncertain quantifier. Then the dual
quantifier of Q is
Q∗ = ∀ − Q. (9.30)
Remark 9.1: Note that Q and Q∗ are dependent uncertain sets such that
Q + Q∗ ≡ ∀. Since the cardinality of the universe A is n, we also have
Q∗ = {n} − Q. (9.31)
Proof: This theorem follows from the operational law of uncertain set immediately.
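Since Q* = {n} − Q, the operational law for subtracting an uncertain set from the crisp set {n} gives the dual membership function λ*(x) = λ(n − x). A minimal sketch, with the same illustrative "almost all" shape and n = 100 assumed:

```python
n = 100  # hypothetical size of the universe A
lam = lambda x: min(1.0, max(0.0, (x - (n - 5)) / 3.0))  # assumed "almost all"
lam_dual = lambda x: lam(n - x)   # dual quantifier Q* = {n} - Q
```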
[Figure: membership functions λ(x) of a quantifier and λ*(x) of its dual quantifier, with ticks at 5 and n−5]
[Figure: membership functions λ(x) of a quantifier and λ*(x) of its dual quantifier, with ticks at 20%, 40%, 60%, 80%]
Example 9.22: “Warm days are here again” is a statement in which “warm
days” is an uncertain subject that is an uncertain set on the universe of “all
[Figure: membership function ν(x) of the uncertain set "warm days", supported on [15°C, 28°C] with plateau [18°C, 24°C]]
[Figure: membership function ν(x) of an uncertain set of ages, supported on [15yr, 45yr] with plateau [20yr, 35yr]]
[Figure: membership function ν(x) of the uncertain set "tall", supported on [180cm, 200cm] with plateau [185cm, 195cm]]
such that
M{ai ∈ S} = ν(ai ), i = 1, 2, · · · , n. (9.42)
In many cases, we are interested in the individuals a with ν(a) ≥ ω, where ω is a confidence level. Thus we have a subuniverse,
Sω = {a ∈ A | ν(a) ≥ ω} (9.43)
that will serve as the new universe of individuals we are talking about; the individuals outside Sω are ignored at the confidence level ω.
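As an illustrative sketch (not from the book), the subuniverse (9.43) can be computed directly from feature data. The universe, the confidence level, and the sample ages below are hypothetical; the membership function is the trapezoidal "young" membership of (9.124):

```python
# Hypothetical sketch of the subuniverse S_omega of Eq. (9.43):
# keep only the individuals whose membership degree reaches omega.

def subuniverse(universe, nu, omega):
    """Return S_omega = {a in universe : nu(a) >= omega}."""
    return [a for a in universe if nu(a) >= omega]

def nu_young(y):
    """Trapezoidal 'young' membership over (15, 20, 35, 45), per Eq. (9.124)."""
    if y <= 15 or y >= 45:
        return 0.0
    if y <= 20:
        return (y - 15) / 5
    if y <= 35:
        return 1.0
    return (45 - y) / 10

ages = [18, 24, 33, 38, 44]            # hypothetical individuals
print(subuniverse(ages, nu_young, 0.8))  # [24, 33]
```

At confidence level ω = 0.8, the individuals aged 18 (ν = 0.6), 38 (ν = 0.7) and 44 (ν = 0.1) drop out of the subuniverse.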
Theorem 9.7 Let ω1 and ω2 be confidence levels with ω1 > ω2 , and let Sω1 and Sω2 be subuniverses with confidence levels ω1 and ω2 , respectively. Then Sω1 ⊂ Sω2 .
[Figure: membership function µ(x) of the uncertain predicate “warm”, a trapezoid over 15°C, 18°C, 24°C, 28°C.]
[Figure: membership function µ(x) of the uncertain predicate “young”, a trapezoid over 15yr, 20yr, 35yr, 45yr.]
[Figure: membership function µ(x) of the uncertain predicate “tall”, a trapezoid over 180cm, 185cm, 195cm, 200cm.]
Negated Predicate
Definition 9.8 (Liu [85]) Let P be an uncertain predicate. Then its negated
predicate ¬P is the complement of P in the sense of uncertain set, i.e.,
¬P = P c . (9.48)
Proof: The theorem follows from the definition of negated predicate and the
operational law of uncertain set immediately.
[Figure: membership functions µ(x) and ¬µ(x) of the predicate “warm” and its negated predicate over 15°C, 18°C, 24°C, 28°C.]
[Figure: membership functions µ(x) and ¬µ(x) of the predicate “young” and its negated predicate over 15yr, 20yr, 35yr, 45yr.]
[Figure: membership functions µ(x) and ¬µ(x) of the predicate “tall” and its negated predicate over 180cm, 185cm, 195cm, 200cm.]
where
Kω = {K ⊂ Sω | λ(|K|) ≥ ω} , (9.59)
K∗ω = {K ⊂ Sω | λ(|Sω | − |K|) ≥ ω} , (9.60)
Sω = {a ∈ A | ν(a) ≥ ω} . (9.61)
Remark 9.5: Keep in mind that the truth value formula (9.58) is vacuous
if the individual feature data of the universe A are not available.
Remark 9.6: The symbol |K| represents the cardinality of the set K. For
example, |∅| = 0 and |{2, 5, 6}| = 3.
Show that
T(∀, A, P) = inf_{a∈A} µ(a). (9.69)
Show that
T(¬∃, A, P) = 1 − sup_{a∈A} µ(a). (9.77)
Theorem 9.11 (Liu [85], Truth Value Theorem) Let (Q, S, P ) be an uncer-
tain proposition in which Q is a unimodal uncertain quantifier with member-
ship function λ, S is an uncertain subject with membership function ν, and P
is an uncertain predicate with membership function µ. Then the truth value
of (Q, S, P ) is
where
kω = min {x | λ(x) ≥ ω} , (9.79)
∆(kω ) = kω -max {µ(ai ) | ai ∈ Sω }, (9.80)
kω∗ = |Sω | − max {x | λ(x) ≥ ω}, (9.81)
∆∗ (kω∗ ) = kω∗ -max {1 − µ(ai ) | ai ∈ Sω }. (9.82)
Proof: Since the supremum is achieved at the subset with minimum cardi-
nality, we have
Example 9.35: Assume that the daily temperatures of some week from
Monday to Sunday are
Note that the uncertain quantifier is Q = {2, 3}. We also suppose that the
uncertain predicate P = “warm” has a membership function
µ(x) = 0, if x ≤ 15; (x − 15)/3, if 15 ≤ x ≤ 18; 1, if 18 ≤ x ≤ 24; (28 − x)/4, if 24 ≤ x ≤ 28; 0, if x ≥ 28. (9.93)
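For a quick numerical check, the piecewise membership function (9.93) is easy to evaluate directly. The sketch below is illustrative, and the sample temperatures are hypothetical (they are not the book's weekly data):

```python
# Evaluating the "warm" membership function of Eq. (9.93);
# sample temperatures below are hypothetical.

def mu_warm(x):
    if x <= 15 or x >= 28:
        return 0.0
    if x <= 18:
        return (x - 15) / 3   # rising edge
    if x <= 24:
        return 1.0            # plateau: fully warm
    return (28 - x) / 4       # falling edge

print(mu_warm(20))    # 1.0  (inside the plateau [18, 24])
print(mu_warm(25))    # 0.75 (on the falling edge)
print(mu_warm(16.5))  # 0.5  (on the rising edge)
```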
It is clear that Monday and Tuesday are warm with truth value 1, and
Wednesday is warm with truth value 0.75. But Thursday to Sunday are
not “warm” at all (in fact, they are “hot”). Intuitively, the uncertain propo-
sition “two or three days are warm” should be completely true. The truth
value formula (9.58) yields that the truth value is
Example 9.36: Assume that in a class there are 15 students whose ages are
21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40 (9.97)
The truth value formula (9.58) yields that the uncertain proposition has a
truth value
T (“almost all students are young”) = 0.9. (9.101)
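The "kω-max" operator of (9.80) returns the k-th largest element of a finite set. A minimal sketch, applying it to the membership values of the ages (9.97) under a trapezoidal "young" membership assumed to be that of (9.124); the choice k = 13 is illustrative, not the book's computation:

```python
# Sketch of the k-max operator used in Eq. (9.80): the k-th largest value.
# Ages are those of Eq. (9.97); the membership function is the trapezoid
# (15, 20, 35, 45) of Eq. (9.124). The value k = 13 is hypothetical.

def k_max(values, k):
    """Return the k-th largest element of values (k = 1 gives the maximum)."""
    return sorted(values, reverse=True)[k - 1]

def nu_young(y):
    if y <= 15 or y >= 45:
        return 0.0
    if y <= 20:
        return (y - 15) / 5
    if y <= 35:
        return 1.0
    return (45 - y) / 10

ages = [21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40]
memberships = [nu_young(a) for a in ages]
print(k_max(memberships, 13))  # 0.9 (twelve students have membership 1)
```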
Example 9.37: Assume that in a team there are 16 sportsmen whose heights
are
175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196 (9.102)
in centimeters. Consider an uncertain proposition
The truth value formula (9.58) yields that the uncertain proposition has a
truth value
T (“about 70% of sportsmen are tall”) = 0.8. (9.106)
Example 9.38: Assume that in a class there are 18 students whose ages
and heights are
(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (9.107)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)
Note that each individual is described by a feature data (y, z), where y rep-
resents ages and z represents heights. In this case, the uncertain subject
The truth value formula (9.58) yields that the uncertain proposition has a
truth value
T (“most young students are tall”) = 0.8. (9.112)
A = {a1 , a2 , · · · , an }. (9.113)
Next, we should have some linguistic terms to represent quantifiers, for exam-
ple, “most” and “all”. Denote them by a collection of uncertain quantifiers,
Q = {Q1 , Q2 , · · · , Qm }. (9.114)
Then, we should have some linguistic terms to represent subjects, for exam-
ple, “young students” and “old students”. Denote them by a collection of
uncertain subjects,
S = {S1 , S2 , · · · , Sn }. (9.115)
Section 9.7 - Linguistic Summarizer 257
Last, we should have some linguistic terms to represent predicates, for exam-
ple, “short” and “tall”. Denote them by a collection of uncertain predicates,
P = {P1 , P2 , · · · , Pk }. (9.116)
Find Q, S and P
subject to:
Q ∈ Q
S ∈ S
P ∈ P
T(Q, S, P) ≥ β. (9.118)
Example 9.39: Assume that in a class there are 18 students whose ages
and heights are
(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (9.119)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)
λall (x) = 1, if x = 1; 0, if 0 ≤ x < 1, (9.122)
respectively. Denote the collection of uncertain quantifiers by
Q = {“about half ”, “most”,“all”}. (9.123)
We also have three linguistic terms “young students”, “middle-aged students”
and “old students” as uncertain subjects whose membership functions are
νyoung (y) = 0, if y ≤ 15; (y − 15)/5, if 15 ≤ y ≤ 20; 1, if 20 ≤ y ≤ 35; (45 − y)/10, if 35 ≤ y ≤ 45; 0, if y ≥ 45, (9.124)
νmiddle (y) = 0, if y ≤ 40; (y − 40)/5, if 40 ≤ y ≤ 45; 1, if 45 ≤ y ≤ 55; (60 − y)/5, if 55 ≤ y ≤ 60; 0, if y ≥ 60, (9.125)
νold (y) = 0, if y ≤ 55; (y − 55)/5, if 55 ≤ y ≤ 60; 1, if 60 ≤ y ≤ 80; (85 − y)/5, if 80 ≤ y ≤ 85; 0, if y ≥ 85, (9.126)
and then extracts a linguistic summary “most young students are tall”.
Uncertain Inference
Let X and Y be two concepts. It is assumed that we only have a single if-then
rule,
“if X is ξ then Y is η” (10.1)
where ξ and η are two uncertain sets. We first introduce the following infer-
ence rule.
Inference Rule 10.1 (Liu [82]) Let X and Y be two concepts. Assume a
rule “if X is an uncertain set ξ then Y is an uncertain set η”. From X is a
constant a we infer that Y is an uncertain set
η ∗ = η|a∈ξ (10.2)
262 Chapter 10 - Uncertain Inference
Theorem 10.1 (Liu [82]) In Inference Rule 10.1, if ξ and η are independent uncertain sets with membership functions µ and ν, respectively, then η∗ has a membership function
ν∗ (y) = ν(y)/µ(a), if ν(y) < µ(a)/2; (ν(y) + µ(a) − 1)/µ(a), if ν(y) > 1 − µ(a)/2; 0.5, otherwise. (10.4)
Proof: It follows from Inference Rule 10.1 that η∗ is the conditional uncertain set η given a ∈ ξ. By applying Theorem 8.46, the membership function of η∗ is just ν∗ .
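As an illustrative sketch, the conditional membership function (10.4) can be coded directly; the numeric values of µ(a) and ν(y) below are hypothetical:

```python
# Sketch of the conditional membership function nu* of Eq. (10.4).
# mu_a is the matching degree mu(a) of the observed input with the
# rule's antecedent; all numbers below are hypothetical.

def nu_star(nu_y, mu_a):
    """Membership of eta* = eta | (a in xi), per Theorem 10.1."""
    if nu_y < mu_a / 2:
        return nu_y / mu_a              # low membership is scaled up
    if nu_y > 1 - mu_a / 2:
        return (nu_y + mu_a - 1) / mu_a  # high membership is scaled down
    return 0.5                           # middle range collapses to 0.5

mu_a = 0.8
for nu_y in (0.1, 0.5, 0.9):
    print(nu_y, nu_star(nu_y, mu_a))
```

With µ(a) = 0.8, the three branches give ν∗(y) = 0.125, 0.5 and 0.875 for ν(y) = 0.1, 0.5 and 0.9, respectively.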
τ ∗ = τ |(a∈ξ)∩(b∈η) (10.5)
Rule 1: If X is ξ1 then Y is η1
Rule 2: If X is ξ2 then Y is η2
From: X is a constant a (10.9)
Infer: Y is η∗ determined by (10.8)
η∗ = (µ1 (a)/(µ1 (a) + µ2 (a))) η1∗ + (µ2 (a)/(µ1 (a) + µ2 (a))) η2∗ (10.10)
where η1∗ and η2∗ are uncertain sets whose membership functions are respectively given by
ν1∗ (y) = ν1 (y)/µ1 (a), if ν1 (y) < µ1 (a)/2; (ν1 (y) + µ1 (a) − 1)/µ1 (a), if ν1 (y) > 1 − µ1 (a)/2; 0.5, otherwise, (10.11)
ν2∗ (y) = ν2 (y)/µ2 (a), if ν2 (y) < µ2 (a)/2; (ν2 (y) + µ2 (a) − 1)/µ2 (a), if ν2 (y) > 1 − µ2 (a)/2; 0.5, otherwise. (10.12)
Proof: It follows from Inference Rule 10.3 that the uncertain set η∗ is just
η∗ = ( Σ_{i=1}^{k} ci · ηi∗ ) / (c1 + c2 + · · · + ck ) (10.16)
where ηi∗ are uncertain sets whose membership functions are given by
νi∗ (y) = νi (y)/ci , if νi (y) < ci /2; (νi (y) + ci − 1)/ci , if νi (y) > 1 − ci /2; 0.5, otherwise (10.17)
for i = 1, 2, · · · , k, respectively.
Proof: For each i, since {a1 ∈ ξi1 }, {a2 ∈ ξi2 }, · · · , {am ∈ ξim } are independent events, we immediately have
M{ ∩_{j=1}^{m} (aj ∈ ξij ) } = min_{1≤j≤m} M{aj ∈ ξij } = min_{1≤j≤m} µij (aj )
1. inputs that are crisp data to be fed into the uncertain system;
5. outputs that are crisp data yielded from the expected value operator.
Now let us consider an uncertain system in which there are m crisp inputs
α1 , α2 , · · · , αm , and n crisp outputs β1 , β2 , · · · , βn . At first, we infer n un-
certain sets η1∗ , η2∗ , · · · , ηn∗ from the m crisp inputs by the rule-base (i.e., a set
of if-then rules),
If ξ11 and ξ12 and· · · and ξ1m then η11 and η12 and· · · and η1n
If ξ21 and ξ22 and· · · and ξ2m then η21 and η22 and· · · and η2n
(10.19)
···
If ξk1 and ξk2 and· · · and ξkm then ηk1 and ηk2 and· · · and ηkn
βj = E[ηj∗ ] (10.22)
Theorem 10.5 Assume ξi1 , ξi2 , · · · , ξim , ηi1 , ηi2 , · · · , ηin are independent un-
certain sets with membership functions µi1 , µi2 , · · · , µim , νi1 , νi2 , · · · , νin , i =
1, 2, · · · , k, respectively. Then the uncertain system from (α1 , α2 , · · · , αm ) to
(β1 , β2 , · · · , βn ) is
βj = Σ_{i=1}^{k} ci · E[ηij∗ ] / (c1 + c2 + · · · + ck ) (10.24)
for j = 1, 2, · · · , n, where ηij∗ are uncertain sets whose membership functions are given by
νij∗ (y) = νij (y)/ci , if νij (y) < ci /2; (νij (y) + ci − 1)/ci , if νij (y) > 1 − ci /2; 0.5, otherwise (10.25)
for i = 1, 2, · · · , k, j = 1, 2, · · · , n, respectively.
Proof: It follows from Inference Rule 10.4 that the uncertain sets ηj∗ are
ηj∗ = Σ_{i=1}^{k} ci · ηij∗ / (c1 + c2 + · · · + ck )
for j = 1, 2, · · · , n. Since ηij∗ , i = 1, 2, · · · , k, j = 1, 2, · · · , n are independent uncertain sets, we get the theorem immediately by the linearity of the expected value operator.
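The weighted-average structure of (10.24) can be sketched as follows, assuming the expected values E[ηij∗] have already been computed elsewhere; all rules, membership functions, and numbers below are hypothetical:

```python
# Hypothetical sketch of the uncertain system output (10.24): a weighted
# average of pre-computed expected values, with matching degrees
# c_i = min_j mu_ij(alpha_j) taken over the rule's antecedents.

def matching_degree(mu_row, alpha):
    """c_i: minimum of the antecedent memberships mu_ij(alpha_j)."""
    return min(mu(a) for mu, a in zip(mu_row, alpha))

def system_output(cs, expected_values):
    """beta = sum_i c_i * E_i / (c_1 + ... + c_k), per Eq. (10.24)."""
    total = sum(cs)
    return sum(c * e for c, e in zip(cs, expected_values)) / total

# Two rules, one crisp input; triangular antecedent memberships (made up).
mu_rows = [
    [lambda a: max(0.0, 1 - abs(a - 0.0))],  # rule 1: "input near 0"
    [lambda a: max(0.0, 1 - abs(a - 1.0))],  # rule 2: "input near 1"
]
alpha = [0.25]
cs = [matching_degree(row, alpha) for row in mu_rows]
print(cs)                               # [0.75, 0.25]
print(system_output(cs, [10.0, 20.0]))  # 12.5
```

The output interpolates between the two rules' consequents according to how well each antecedent matches the input.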
Remark 10.1: The uncertain system allows the uncertain sets ηij in the rule-base (10.19) to become constants bij , i.e.,
for j = 1, 2, · · · , n.
Remark 10.2: The uncertain system allows the uncertain sets ηij in the rule-base (10.19) to become functions hij of the inputs α1 , α2 , · · · , αm , i.e.,
for j = 1, 2, · · · , n.
The theorem is thus verified. Hence uncertain systems are universal approx-
imators.
[Figure: an uncertain controller in closed loop with a process. The crisp inputs α1(t), α2(t), · · · , αm(t) pass through the inference rule and the rule base to produce uncertain sets η1∗(t), η2∗(t), · · · , ηn∗(t); their expected values βj(t) = E[ηj∗(t)] are the crisp outputs of the controller, which are the inputs of the process, and the outputs of the process are fed back as inputs of the controller.]
The uncertain controller has two inputs (“angle” and “angular velocity”) and one output (“force”). All three will be represented by uncertain sets labeled by
“negative large” NL
“negative small” NS
“zero” Z
“positive small” PS
“positive large” PL
The membership functions of those uncertain sets are shown in Figures 10.4,
10.5 and 10.6.
Intuitively, when the inverted pendulum has a large clockwise angle and
a large clockwise angular velocity, we should give it a large force to the right.
Thus we have an if-then rule,
If the angle is negative large
and the angular velocity is negative large,
then the force is positive large.
Similarly, when the inverted pendulum has a large counterclockwise angle and a large counterclockwise angular velocity, we should give it a large force to the left.
[Figure 10.4: membership functions of the uncertain sets NL, NS, Z, PS, PL for the angle, triangles centered at −π/2, −π/4, 0, π/4, π/2 (rad).]
[Figure 10.5: membership functions of the uncertain sets NL, NS, Z, PS, PL for the angular velocity, triangles centered at −π/4, −π/8, 0, π/8, π/4 (rad/sec).]
Note that each input or output has 5 states and each state is represented by
an uncertain set. This implies that the rule-base contains 5 × 5 if-then rules.
In order to balance the inverted pendulum, the 25 if-then rules in Table 10.1
are accepted.
Many simulation results show that the uncertain controller can balance the inverted pendulum successfully.
[Figure 10.6: membership functions of the uncertain sets NL, NS, Z, PS, PL for the force, triangles centered at −60, −40, −20, 0, 20, 40, 60 (N).]
Uncertain Process
The study of uncertain process was started by Liu [78] in 2008 for modelling
the evolution of uncertain phenomena. This chapter will give the concept of
uncertain process, and introduce sample path, uncertainty distribution, in-
dependent increment process, extreme value, first hitting time, time integral,
and stationary increment process.
Definition 11.1 (Liu [78]) Let (Γ, L, M) be an uncertainty space and let T
be a totally ordered set (e.g. time). An uncertain process is a function Xt (γ)
from T × (Γ, L, M) to the set of real numbers such that {Xt ∈ B} is an event
for any Borel set B of real numbers at each time t.
is an uncertain process.
Xt (γ) = t − γ, ∀γ ∈ Γ (11.2)
274 Chapter 11 - Uncertain Process
is an uncertain process.
Sample Path
Definition 11.2 (Liu [78]) Let Xt be an uncertain process. Then for each
γ ∈ Γ, the function Xt (γ) is called a sample path of Xt .
[Figure 11.1: a sample path of an uncertain process Xt.]
Example 11.4: The linear uncertain process Xt ∼ L(at, bt) has an uncertainty distribution
Φt (x) = 0, if x ≤ at; (x − at)/((b − a)t), if at ≤ x ≤ bt; 1, if x ≥ bt. (11.5)
Example 11.5: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an uncertainty distribution
Φt (x) = 0, if x ≤ at; (x − at)/(2(b − a)t), if at ≤ x ≤ bt; (x + ct − 2bt)/(2(c − b)t), if bt ≤ x ≤ ct; 1, if x ≥ ct. (11.6)
Example 11.6: The normal uncertain process Xt ∼ N (et, σt) has an uncertainty distribution
Φt (x) = ( 1 + exp( π(et − x) / (√3 σt) ) )^{−1}. (11.7)
Example 11.7: The lognormal uncertain process Xt ∼ LOGN (et, σt) has an uncertainty distribution
Φt (x) = ( 1 + exp( π(et − ln x) / (√3 σt) ) )^{−1}. (11.8)
Note that at each time t, the inverse uncertainty distribution Φ_t^{-1}(α) is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via
Φ_t^{-1}(0) = lim_{α↓0} Φ_t^{-1}(α), Φ_t^{-1}(1) = lim_{α↑1} Φ_t^{-1}(α). (11.15)
[Figure 11.2: inverse uncertainty distribution Φ_t^{-1}(α) of an uncertain process, plotted against t for α = 0.1, 0.2, · · · , 0.9.]
Example 11.9: The linear uncertain process Xt ∼ L(at, bt) has an inverse uncertainty distribution
Φ_t^{-1}(α) = (1 − α)at + αbt. (11.16)
Example 11.10: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an inverse uncertainty distribution
Φ_t^{-1}(α) = (1 − 2α)at + 2αbt, if α < 0.5; (2 − 2α)bt + (2α − 1)ct, if α ≥ 0.5. (11.17)
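A minimal sketch of evaluating (11.17) numerically; the parameters a, b, c and the time t below are hypothetical:

```python
# Sketch of the zigzag inverse uncertainty distribution, Eq. (11.17).
# Parameters a < b < c and the time t are hypothetical.

def zigzag_inverse(alpha, a, b, c, t):
    """Inverse uncertainty distribution of X_t ~ Z(at, bt, ct)."""
    if alpha < 0.5:
        return (1 - 2 * alpha) * a * t + 2 * alpha * b * t
    return (2 - 2 * alpha) * b * t + (2 * alpha - 1) * c * t

a, b, c, t = 1.0, 2.0, 4.0, 3.0
print(zigzag_inverse(0.0, a, b, c, t))  # 3.0  = at
print(zigzag_inverse(0.5, a, b, c, t))  # 6.0  = bt
print(zigzag_inverse(1.0, a, b, c, t))  # 12.0 = ct
```

At α = 0, 0.5 and 1 the formula recovers the three corner values at, bt and ct, as expected from the zigzag shape.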
Example 11.12: The lognormal uncertain process Xt ∼ LOGN (et, σt) has an inverse uncertainty distribution
Φ_t^{-1}(α) = exp( et + (σt√3/π) ln(α/(1 − α)) ). (11.19)
Xt (γ) = t − γ, ∀γ ∈ Γ. (11.20)
Theorem 11.3 (Liu [94]) A function Φ_t^{-1}(α) : T × (0, 1) → ℜ is an inverse uncertainty distribution of an uncertain process if at each time t, it is a continuous and strictly increasing function with respect to α.
Proof: At each time t, since Φ_t^{-1}(α) is a continuous and strictly increasing function with respect to α, it follows from Theorem 2.5 that there exists an uncertain variable ξt whose inverse uncertainty distribution is just Φ_t^{-1}(α). Define
Xt = ξt , ∀t ∈ T.
Then Xt is an uncertain process and has the inverse uncertainty distribution Φ_t^{-1}(α). The theorem is proved.
Theorem 11.4 (Liu [94]) Uncertain processes X1t , X2t , · · · , Xnt are independent if and only if for any positive integer k, any times t1 , t2 , · · · , tk , and any Borel sets B1 , B2 , · · · , Bn of k-dimensional real vectors, we have
M{ ∪_{i=1}^{n} (ξ_i ∈ B_i ) } = ∨_{i=1}^{n} M{ξ_i ∈ B_i } (11.25)
where ξ_i = (X_{it1}, X_{it2}, · · · , X_{itk}) for i = 1, 2, · · · , n.
Theorem 11.5 (Liu [94], Operational Law) Let X1t , X2t , · · · , Xnt be inde-
pendent uncertain processes with regular uncertainty distributions Φ1t , Φ2t ,
· · · , Φnt , respectively. If the function f (x1 , x2 , · · · , xn ) is strictly increasing
with respect to x1 , x2 , · · · , xm and strictly decreasing with respect to xm+1 ,
xm+2 , · · · , xn , then
Xt = f (X1t , X2t , · · · , Xnt ) (11.26)
has an inverse uncertainty distribution
Φ_t^{-1}(α) = f(Φ_{1t}^{-1}(α), · · · , Φ_{mt}^{-1}(α), Φ_{m+1,t}^{-1}(1 − α), · · · , Φ_{nt}^{-1}(1 − α)). (11.27)
Proof: At any time t, it is clear that X1t , X2t , · · · , Xnt are independent uncertain variables with inverse uncertainty distributions Φ_{1t}^{-1}(α), Φ_{2t}^{-1}(α), · · · , Φ_{nt}^{-1}(α), respectively. The theorem follows from the operational law of uncertain variables immediately.
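As an illustrative sketch of the operational law (11.27), consider Xt = X1t − X2t for two independent linear uncertain processes: f is strictly increasing in x1 and strictly decreasing in x2, so the second inverse distribution is evaluated at 1 − α. All parameters below are hypothetical:

```python
# Sketch of Eq. (11.27) for f(x1, x2) = x1 - x2, with
# X_1t ~ L(a1*t, b1*t) and X_2t ~ L(a2*t, b2*t).
# All parameters are hypothetical.

def linear_inverse(alpha, a, b, t):
    """Inverse distribution of L(at, bt), Eq. (11.16)."""
    return (1 - alpha) * a * t + alpha * b * t

def difference_inverse(alpha, t):
    # Phi_t^{-1}(alpha) = Phi_{1t}^{-1}(alpha) - Phi_{2t}^{-1}(1 - alpha),
    # since f is decreasing in its second argument.
    return linear_inverse(alpha, 1.0, 3.0, t) - linear_inverse(1 - alpha, 0.0, 2.0, t)

t = 2.0
print(difference_inverse(0.0, t))  # -2.0 (worst case: 1*t - 2*t)
print(difference_inverse(1.0, t))  # 6.0  (best case: 3*t - 0*t)
```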
Theorem 11.6 (Operational Law) Let X1t , X2t , · · · , Xnt be independent un-
certain processes with continuous uncertainty distributions Φ1t , Φ2t , · · · , Φnt ,
respectively. If f (x1 , x2 , · · · , xn ) is continuous, strictly increasing with respect
to x1 , x2 , · · · , xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn ,
then
Xt = f (X1t , X2t , · · · , Xnt ) (11.28)
has an uncertainty distribution
Φ_t(x) = sup_{f(x1 ,x2 ,··· ,xn )=x} ( min_{1≤i≤m} Φ_{it}(x_i) ∧ min_{m+1≤i≤n} (1 − Φ_{it}(x_i)) ). (11.29)
Proof: At any time t, it is clear that X1t , X2t , · · · , Xnt are independent un-
certain variables. The theorem follows from the operational law of uncertain
variables immediately.
That is, an independent increment process means that its increments are
independent uncertain variables whenever the time intervals do not overlap.
Please note that the increments are also independent of the initial state.
Theorem 11.7 (Liu [94]) Let Φ_t^{-1}(α) be the inverse uncertainty distribution of an independent increment process. Then (i) Φ_t^{-1}(α) is a continuous and strictly increasing function with respect to α at each time t, and (ii) Φ_t^{-1}(α) − Φ_s^{-1}(α) is a monotone increasing function with respect to α for any times s < t.
Φ_t^{-1}(β) − Φ_t^{-1}(α) ≥ Φ_s^{-1}(β) − Φ_s^{-1}(α).
That is,
Φ_t^{-1}(β) − Φ_s^{-1}(β) ≥ Φ_t^{-1}(α) − Φ_s^{-1}(α).
Section 11.4 - Independent Increment Process 281
Hence Φ_t^{-1}(α) − Φ_s^{-1}(α) is a monotone increasing function with respect to α. The theorem is verified.
Remark 11.2: It follows from Theorem 11.7 that the uncertainty distribution of an independent increment process has a horn-like shape. See Figure 11.3.
[Figure 11.3: inverse uncertainty distribution Φ_t^{-1}(α) of an independent increment process; the α-curves (α = 0.1, · · · , 0.9) fan out in a horn-like shape.]
Theorem 11.8 (Liu [94]) Let Φ_t^{-1}(α) : T × (0, 1) → ℜ be a function. If (i) Φ_t^{-1}(α) is a continuous and strictly increasing function with respect to α at each time t, and (ii) Φ_t^{-1}(α) − Φ_s^{-1}(α) is a monotone increasing function with respect to α for any times s < t, then there exists an independent increment process whose inverse uncertainty distribution is just Φ_t^{-1}(α).
Proof: Without loss of generality, we only consider the range of t ∈ [0, 1]. Let n be a positive integer. Since Φ_t^{-1}(α) is a continuous and strictly increasing function and Φ_t^{-1}(α) − Φ_s^{-1}(α) is a monotone increasing function with respect to α, there exist independent uncertain variables ξ0n , ξ1n , · · · , ξnn such that ξ0n has an inverse uncertainty distribution
Υ_{0n}^{-1}(α) = Φ_0^{-1}(α)
Remark 11.3: It can also be shown that for any α ∈ (0, 1), the following two equations are true,
M{Xt < Φ_t^{-1}(α), ∀t} = α, (11.33)
M{Xt ≥ Φ_t^{-1}(α), ∀t} = 1 − α. (11.34)
Please note that {Xt < Φ_t^{-1}(α), ∀t} and {Xt ≥ Φ_t^{-1}(α), ∀t} are disjoint events but not opposite. Although it is always true that
are independent uncertain variables, it follows from Theorem 2.18 that the maximum
max_{1≤i≤n} X_{ti}
and
min_{1≤i≤n} Φ_{ti}(x) → inf_{0≤t≤s} Φ_t(x)
and
max_{1≤i≤n} Φ_{ti}(x) → sup_{0≤t≤s} Φ_t(x)
Thus
Ψ(x) ≠ inf_{0≤t≤1} Φ_t(x). (11.45)
[Figure: a sample path of Xt crossing the level z, illustrating the first hitting time τz.]
Proof: When $X_0 < z$, it follows from the definition of first hitting time that
$$\tau_z \le s \iff \sup_{0 \le t \le s} X_t \ge z.$$
When $X_0 > z$, it follows from the definition of first hitting time that
$$\tau_z \le s \iff \inf_{0 \le t \le s} X_t \le z.$$
Definition 11.10 (Liu [78]) Let $X_t$ be an uncertain process. For any partition of closed interval $[a,b]$ with $a = t_1 < t_2 < \cdots < t_{k+1} = b$, the mesh is written as
$$\Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|. \qquad (11.58)$$
Then the time integral of $X_t$ with respect to $t$ is
$$\int_a^b X_t \,\mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i)$$
provided that the limit exists almost surely and is finite. In this case, the uncertain process $X_t$ is said to be time integrable.
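For a fixed sample path, the time integral is just a limit of left Riemann sums. As an illustrative sketch (not from the book), the sum can be evaluated numerically; the path $X_t = t^2$ on $[0,1]$, whose time integral is $1/3$, is an arbitrary choice.

```python
# Left Riemann sum approximation of the time integral of one sample path.
# The sample path x(t) = t^2 on [0, 1] is an arbitrary illustrative choice.

def time_integral(path, a, b, k):
    """Approximate the time integral of `path` over [a, b] with k subintervals."""
    dt = (b - a) / k
    # left-endpoint sum: sum of X_{t_i} (t_{i+1} - t_i)
    return sum(path(a + i * dt) * dt for i in range(k))

approx = time_integral(lambda t: t * t, 0.0, 1.0, 100000)
print(approx)  # close to 1/3
```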
Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a,b]$. Since the uncertain process $X_t$ is sample-continuous, almost all sample paths are continuous functions with respect to $t$. Hence the limit
$$\lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i)$$
exists almost surely and is finite. On the other hand, since $X_t$ is an uncertain variable at each time $t$, the above limit is also a measurable function. Hence the limit is an uncertain variable and then $X_t$ is time integrable.
$$a = t_1 < \cdots < t_m = a' < t_{m+1} < \cdots < t_n = b' < t_{n+1} < \cdots < t_{k+1} = b,$$
the limit
$$\lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i)$$
exists almost surely and is finite. Hence $X_t$ is time integrable on the subinterval $[a', b']$. Next, for the partition
$$a = t_1 < t_2 < \cdots < t_m = c < t_{m+1} < \cdots < t_{k+1} = b,$$
we have
$$\sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i) = \sum_{i=1}^{m-1} X_{t_i}(t_{i+1} - t_i) + \sum_{i=m}^{k} X_{t_i}(t_{i+1} - t_i).$$
Note that
$$\int_a^b X_t \,\mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i),$$
$$\int_a^c X_t \,\mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{m-1} X_{t_i}(t_{i+1} - t_i),$$
$$\int_c^b X_t \,\mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=m}^{k} X_{t_i}(t_{i+1} - t_i).$$
Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a,b]$. It follows from the definition of time integral that
$$\int_a^b (\alpha X_t + \beta Y_t)\,\mathrm{d}t = \lim_{\Delta \to 0} \sum_{i=1}^{k} (\alpha X_{t_i} + \beta Y_{t_i})(t_{i+1} - t_i)$$
$$= \lim_{\Delta \to 0} \alpha \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i) + \lim_{\Delta \to 0} \beta \sum_{i=1}^{k} Y_{t_i}(t_{i+1} - t_i)$$
$$= \alpha \int_a^b X_t \,\mathrm{d}t + \beta \int_a^b Y_t \,\mathrm{d}t.$$

Thus the time integral $Y_s$ has the inverse uncertainty distribution $\Psi_s^{-1}(\alpha)$.
$$Y_t = aX_t + b \qquad (11.68)$$
$$Y_{s+t} - Y_s = a(X_{s+t} - X_s)$$
are also identically distributed uncertain variables for all $s > 0$, and $Y_t$ is a stationary increment process. Hence $Y_t$ is a stationary independent increment process.

It follows from (11.69) and (11.70) that $X_t - X_0$ and $t(X_1 - X_0)$ are identically distributed, and so are $X_t$ and $(1-t)X_0 + tX_1$.
$$\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha)t. \qquad (11.73)$$
[Figure: the inverse uncertainty distribution $\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha)t$ of a stationary independent increment process, plotted against $t$ for $\alpha = 0.1, 0.2, \cdots, 0.9$.]
Theorem 11.19 (Liu [94]) Let $\mu$ and $\nu$ be continuous and strictly increasing functions on $(0,1)$. Then there exists a stationary independent increment process $X_t$ whose inverse uncertainty distribution is
$$\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha)t. \qquad (11.74)$$
Proof: Without loss of generality, we only consider the range of $t \in [0,1]$. Let
$$\xi(r), \quad r \text{ represents rational numbers in } [0,1]$$
be a countable sequence of independent uncertain variables, where $\xi(0)$ has an inverse uncertainty distribution $\mu(\alpha)$ and $\xi(r)$ have a common inverse uncertainty distribution $\nu(\alpha)$ for all rational numbers $r$ in $(0,1]$. For each positive integer $n$, we define an uncertain process
$$X_t = \begin{cases} \xi(0) + \dfrac{1}{n}\displaystyle\sum_{i=1}^{k} \xi\!\left(\dfrac{i}{n}\right), & \text{if } t = \dfrac{k}{n} \ (k = 1, 2, \cdots, n) \\[2mm] \text{linear}, & \text{otherwise.} \end{cases}$$
$$E[X_t] = a + bt \qquad (11.75)$$
Proof: It follows from Theorem 11.20 that there exists a real number $b$ such that $E[X_t] = bt$ for any time $t \ge 0$. Hence

Proof: It follows from Theorem 11.17 that $X_t$ and $(1-t)X_0 + tX_1$ are identically distributed uncertain variables. Since $X_0$ is a constant, we have

Proof: It follows from Theorem 11.22 that there exists a real number $b$ such that $V[X_t] = bt^2$ for any time $t \ge 0$. Hence
$$\sqrt{V[X_{s+t}]} = \sqrt{b}\,(s+t) = \sqrt{b}\,s + \sqrt{b}\,t = \sqrt{V[X_s]} + \sqrt{V[X_t]}.$$

Uncertain Renewal Process
[Figure: a sample path of the renewal process $N_t$, which starts from 0 and jumps by one at each arrival time $S_1, S_2, S_3, S_4$, where $\xi_1, \xi_2, \xi_3, \xi_4$ are the interarrival times.]
Then we have
$$N_t \ge n \iff S_n \le t \qquad (12.2)$$
for any time $t$ and integer $n$. Furthermore, we also have
$$N_t \le n \iff S_{n+1} > t. \qquad (12.3)$$
Proof: Since $N_t$ is the largest $n$ such that $S_n \le t$, we have $S_{N_t} \le t < S_{N_t+1}$. If $N_t \ge n$, then $S_n \le S_{N_t} \le t$. Conversely, if $S_n \le t$, then $S_n < S_{N_t+1}$, which implies $N_t \ge n$. Thus (12.2) is verified. Similarly, if $N_t \le n$, then $N_t + 1 \le n + 1$ and $S_{n+1} \ge S_{N_t+1} > t$. Conversely, if $S_{n+1} > t$, then $S_{n+1} > S_{N_t}$, which implies $N_t \le n$. Thus (12.3) is verified.

Theorem 12.2 (Liu [84]) Let $N_t$ be a renewal process with iid uncertain interarrival times $\xi_1, \xi_2, \cdots$. If $\Phi$ is the common uncertainty distribution of those interarrival times, then $N_t$ has an uncertainty distribution
$$\Upsilon_t(x) = 1 - \Phi\left(\frac{t}{\lfloor x \rfloor + 1}\right), \quad \forall x \ge 0 \qquad (12.6)$$
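As a small numerical sketch (not from the book), the distribution (12.6) can be evaluated directly once $\Phi$ is given; the linear uncertainty distribution on $[1,3]$ used below is an arbitrary choice.

```python
import math

def renewal_distribution(t, x, Phi):
    """Uncertainty distribution of N_t per (12.6): 1 - Phi(t / (floor(x) + 1))."""
    return 1.0 - Phi(t / (math.floor(x) + 1))

# Arbitrary example: linear uncertainty distribution on [1, 3].
def Phi(x):
    return min(max((x - 1.0) / 2.0, 0.0), 1.0)

print(renewal_distribution(5.0, 1.0, Phi))  # Phi(2.5) = 0.75, so the result is 0.25
```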
is called a renewal reward process, where $N_t$ is the renewal process with uncertain interarrival times $\xi_1, \xi_2, \cdots$

A renewal reward process $R_t$ denotes the total reward earned by time $t$. In addition, if $\eta_i \equiv 1$, then $R_t$ degenerates to a renewal process $N_t$. Please also note that $R_t = 0$ whenever $N_t = 0$.

Theorem 12.6 (Liu [84]) Let $R_t$ be a renewal reward process with iid uncertain interarrival times $\xi_1, \xi_2, \cdots$ and iid uncertain rewards $\eta_1, \eta_2, \cdots$. Assume $(\xi_1, \xi_2, \cdots)$ and $(\eta_1, \eta_2, \cdots)$ are independent uncertain vectors, and those interarrival times and rewards have uncertainty distributions $\Phi$ and $\Psi$, respectively. Then $R_t$ has an uncertainty distribution
$$\Upsilon_t(x) = \max_{k \ge 0}\left(1 - \Phi\left(\frac{t}{k+1}\right)\right) \wedge \Psi\left(\frac{x}{k}\right). \qquad (12.19)$$
Here we set $x/k = +\infty$ and $\Psi(x/k) = 1$ when $k = 0$.
Proof: It follows from the definition of renewal reward process that the renewal process $N_t$ is independent of the uncertain rewards $\eta_1, \eta_2, \cdots$, and $R_t$ has an uncertainty distribution
$$\Upsilon_t(x) = M\left\{\sum_{i=1}^{N_t} \eta_i \le x\right\} = M\left\{\bigcup_{k=0}^{\infty} (N_t = k) \cap \left(\sum_{i=1}^{k} \eta_i \le x\right)\right\}$$
$$= M\left\{\bigcup_{k=0}^{\infty} (N_t \le k) \cap \left(\sum_{i=1}^{k} \eta_i \le x\right)\right\} \quad \text{(this is a polyrectangle)}$$
$$= \max_{k \ge 0}\, M\left\{(N_t \le k) \cap \left(\sum_{i=1}^{k} \eta_i \le x\right)\right\} \quad \text{(polyrectangular theorem)}$$
$$= \max_{k \ge 0}\, M\{N_t \le k\} \wedge M\left\{\sum_{i=1}^{k} \eta_i \le x\right\} \quad \text{(independence)}$$
$$= \max_{k \ge 0}\left(1 - \Phi\left(\frac{t}{k+1}\right)\right) \wedge \Psi\left(\frac{x}{k}\right).$$
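A hedged numerical sketch of (12.19), not part of the book: the maximum over $k$ can be truncated in practice, since $1 - \Phi(t/(k+1))$ approaches 1 while $\Psi(x/k)$ only decreases as $k$ grows. The linear distributions below are arbitrary choices.

```python
def reward_distribution(t, x, Phi, Psi, kmax=1000):
    """Evaluate (12.19): max over k >= 0 of min(1 - Phi(t/(k+1)), Psi(x/k))."""
    best = 1.0 - Phi(t / 1.0)  # k = 0 term, with Psi(x/0) taken as 1 by convention
    for k in range(1, kmax + 1):
        term = min(1.0 - Phi(t / (k + 1)), Psi(x / k))
        best = max(best, term)
    return best

# Arbitrary linear uncertainty distributions on [1, 3] and [2, 4].
Phi = lambda x: min(max((x - 1.0) / 2.0, 0.0), 1.0)
Psi = lambda x: min(max((x - 2.0) / 2.0, 0.0), 1.0)

print(reward_distribution(10.0, 30.0, Phi, Psi))  # attained near k = 7, value 0.875
```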
[Figure: the uncertainty distribution $\Upsilon_t(x)$ of a renewal reward process, a nondecreasing curve in $x$ built from the terms $k = 0, 1, 2, \cdots$ in (12.19).]
Proof: Assume those interarrival times and rewards have uncertainty distributions $\Phi$ and $\Psi$, respectively. It follows from Theorem 12.6 that the uncertainty distribution of $R_t$ is
$$\Upsilon_t(x) = \max_{k \ge 0}\left(1 - \Phi\left(\frac{t}{k+1}\right)\right) \wedge \Psi\left(\frac{x}{k}\right).$$
Then $R_t/t$ has an uncertainty distribution
$$F_t(x) = \max_{k \ge 0}\left(1 - \Phi\left(\frac{t}{k+1}\right)\right) \wedge \Psi\left(\frac{tx}{k}\right).$$
When $t \to \infty$, we have $F_t(x) \to G(x)$, where $G$ is the uncertainty distribution of $\eta_1/\xi_1$; moreover $F_t(x) \ge G(x)$. It follows from the Lebesgue dominated convergence theorem and the existence of $E[\eta_1/\xi_1]$ that
$$\lim_{t \to \infty} \frac{E[R_t]}{t} = \lim_{t \to \infty} \int_0^{+\infty} (1 - F_t(x))\,\mathrm{d}x = \int_0^{+\infty} (1 - G(x))\,\mathrm{d}x = E\left[\frac{\eta_1}{\xi_1}\right].$$
Since
$$G^{-1}(\alpha) = \frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1-\alpha)},$$
we get
$$E\left[\frac{\eta_1}{\xi_1}\right] = \int_0^1 \frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1-\alpha)}\,\mathrm{d}\alpha.$$
The theorem is proved.
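As a quick numerical sketch (not from the book), the final integral can be checked by quadrature. With the arbitrary linear distributions $\Phi$ on $[1,3]$ and $\Psi$ on $[2,4]$, the quantiles are $\Phi^{-1}(\alpha) = 1 + 2\alpha$ and $\Psi^{-1}(\alpha) = 2 + 2\alpha$, and the integral has the closed form $2.5\ln 3 - 1$.

```python
import math

def expected_ratio(Phi_inv, Psi_inv, n=200000):
    """Midpoint-rule quadrature of E[eta1/xi1] = int_0^1 Psi_inv(a)/Phi_inv(1-a) da."""
    h = 1.0 / n
    return sum(Psi_inv((i + 0.5) * h) / Phi_inv(1.0 - (i + 0.5) * h)
               for i in range(n)) * h

Phi_inv = lambda a: 1.0 + 2.0 * a   # interarrival times: linear distribution on [1, 3]
Psi_inv = lambda a: 2.0 + 2.0 * a   # rewards: linear distribution on [2, 4]

value = expected_ratio(Phi_inv, Psi_inv)
print(value)                        # close to 2.5*ln(3) - 1 ~ 1.7466
```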
with iid uncertain interarrival times $\xi_1, \xi_2, \cdots$ and iid uncertain claim amounts $\eta_1, \eta_2, \cdots$. Then the capital of the insurance company at time $t$ is
$$Z_t = a + bt - R_t. \qquad (12.24)$$
[Figure: a sample path of the insurance risk process $Z_t$, starting from the initial capital $a$, increasing at the premium rate $b$, and dropping by a claim amount at each arrival time $S_1, S_2, S_3, S_4$.]
Ruin Index

The ruin index is the uncertain measure that the capital of the insurance company becomes negative.

Definition 12.3 (Liu [90]) Let $Z_t$ be an insurance risk process. Then the ruin index is defined as the uncertain measure that $Z_t$ eventually becomes negative, i.e.,
$$\text{Ruin} = M\left\{\inf_{t \ge 0} Z_t < 0\right\}. \qquad (12.25)$$
It is clear that the ruin index is a special case of the risk index in the sense of Liu [83].

Proof: For each positive integer $k$, it is clear that the arrival time of the $k$th claim is
$$S_k = \xi_1 + \xi_2 + \cdots + \xi_k$$
and the capital of the insurance company at time $S_k$ is
$$Y_k = a + bS_k - (\eta_1 + \eta_2 + \cdots + \eta_k).$$

Ruin Time

Definition 12.4 (Liu [90]) Let $Z_t$ be an insurance risk process. Then the ruin time is defined as the first hitting time that the total capital $Z_t$ becomes negative, i.e.,
$$\tau = \inf\left\{t \ge 0 \mid Z_t < 0\right\}. \qquad (12.27)$$
$$\alpha_k = \sup_{x \le t} \Phi\left(\frac{x}{k}\right) \wedge \left(1 - \Psi\left(\frac{a + bx}{k}\right)\right).$$
Then
$$M\{\tau \le t\} = \bigvee_{k=1}^{\infty} \alpha_k.$$
On the one hand, it follows from the definition of the ruin time $\tau$ that for each $t$, we have $\tau \le t$ if and only if $\inf_{0 \le s \le t} Z_s < 0$. Thus
$$M\{\tau \le t\} = M\left\{\inf_{0 \le s \le t} Z_s < 0\right\} = M\left\{\bigcup_{k=1}^{\infty} (S_k \le t,\ Y_k < 0)\right\}$$
$$= M\left\{\bigcup_{k=1}^{\infty} \left(\sum_{i=1}^{k} \xi_i \le t,\ a + b\sum_{i=1}^{k} \xi_i - \sum_{i=1}^{k} \eta_i < 0\right)\right\}$$
$$\ge M\left\{\bigcup_{k=1}^{\infty} \bigcap_{i=1}^{k} \left(\xi_i \le \Phi^{-1}(\alpha_k)\right) \cap \left(\eta_i > \Psi^{-1}(1 - \alpha_k)\right)\right\}$$
$$\ge \bigvee_{k=1}^{\infty} M\left\{\bigcap_{i=1}^{k} \left(\xi_i \le \Phi^{-1}(\alpha_k)\right) \cap \left(\eta_i > \Psi^{-1}(1 - \alpha_k)\right)\right\}$$
$$= \bigvee_{k=1}^{\infty} \bigwedge_{i=1}^{k} M\left\{\left(\xi_i \le \Phi^{-1}(\alpha_k)\right) \cap \left(\eta_i > \Psi^{-1}(1 - \alpha_k)\right)\right\}$$
$$= \bigvee_{k=1}^{\infty} \bigwedge_{i=1}^{k} M\left\{\xi_i \le \Phi^{-1}(\alpha_k)\right\} \wedge M\left\{\eta_i > \Psi^{-1}(1 - \alpha_k)\right\}$$
$$= \bigvee_{k=1}^{\infty} \bigwedge_{i=1}^{k} \alpha_k \wedge \alpha_k = \bigvee_{k=1}^{\infty} \alpha_k.$$
Thus we obtain
$$M\{\tau \le t\} = \bigvee_{k=1}^{\infty} \alpha_k$$
$$\xi_1 \wedge s,\ \xi_2 \wedge s,\ \cdots \qquad (12.29)$$
Then $f(\xi_i \wedge s)$ is just the cost of replacing the $i$th element, and the average replacement cost before the time $t$ is
$$\frac{1}{t}\sum_{i=1}^{N_t} f(\xi_i \wedge s). \qquad (12.32)$$
Proof: At first, the average replacement cost before time $t$ may be rewritten as
$$\frac{1}{t}\sum_{i=1}^{N_t} f(\xi_i \wedge s) = \frac{\displaystyle\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\displaystyle\sum_{i=1}^{N_t} (\xi_i \wedge s)} \times \frac{\displaystyle\sum_{i=1}^{N_t} (\xi_i \wedge s)}{t}. \qquad (12.34)$$
and
$$M\left\{\frac{\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\sum_{i=1}^{N_t} (\xi_i \wedge s)} \le x\right\} \ge M\left\{\bigcap_{i=1}^{\infty}\left(\frac{f(\xi_i \wedge s)}{\xi_i \wedge s} \le x\right)\right\} = M\left\{\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x\right\}$$
and
$$M\left\{\frac{\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\sum_{i=1}^{N_t} (\xi_i \wedge s)} \le x\right\} \le M\left\{\bigcup_{i=1}^{\infty}\left(\frac{f(\xi_i \wedge s)}{\xi_i \wedge s} \le x\right)\right\} = M\left\{\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x\right\}.$$
Hence
$$\frac{\displaystyle\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\displaystyle\sum_{i=1}^{N_t} (\xi_i \wedge s)} \quad \text{and} \quad \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s}$$
are identically distributed uncertain variables. Since
$$\frac{1}{t}\sum_{i=1}^{N_t} (\xi_i \wedge s) \to 1$$
as $t \to \infty$, it follows from (12.34) that (12.33) holds. The theorem is verified.
and
$$E\left[\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s}\right] = \int_0^{+\infty} (1 - \Psi(x))\,\mathrm{d}x = \frac{b}{s} + \frac{a-b}{s}\Phi(s) + a\int_0^s \frac{\Phi(x)}{x^2}\,\mathrm{d}x.$$
Since
$$\frac{1}{t}\sum_{i=1}^{N_t} (\xi_i \wedge s) \le 1,$$
it follows from (12.34) that
$$M\left\{\frac{1}{t}\sum_{i=1}^{N_t} f(\xi_i \wedge s) \le x\right\} \ge M\left\{\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x\right\}$$
for any real number $x$. By using the Lebesgue dominated convergence theorem, we get
$$\lim_{t \to \infty} E\left[\frac{1}{t}\sum_{i=1}^{N_t} f(\xi_i \wedge s)\right] = \lim_{t \to \infty}\int_0^{+\infty}\left(1 - M\left\{\frac{1}{t}\sum_{i=1}^{N_t} f(\xi_i \wedge s) \le x\right\}\right)\mathrm{d}x$$
$$= \int_0^{+\infty}\left(1 - M\left\{\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x\right\}\right)\mathrm{d}x = E\left[\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s}\right].$$
Hence the theorem is proved.
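A small numerical sketch (not from the book) cross-checks the closed form above against direct quadrature of $E[f(\xi_1 \wedge s)/(\xi_1 \wedge s)]$. Assumed illustrative setup: failure cost $a = 5$, planned-replacement cost $b = 1$, replacement age $s = 2$, and a linear uncertainty distribution $\Phi$ on $[1,3]$, so $\Phi^{-1}(\alpha) = 1 + 2\alpha$.

```python
import math

# Assumed illustrative data: a = failure cost, b = planned-replacement cost,
# s = replacement age, Phi = linear uncertainty distribution on [1, 3].
a, b, s = 5.0, 1.0, 2.0
Phi = lambda x: min(max((x - 1.0) / 2.0, 0.0), 1.0)
Phi_inv = lambda al: 1.0 + 2.0 * al

def closed_form(n=200000):
    # b/s + (a-b)/s * Phi(s) + a * int_0^s Phi(x)/x^2 dx   (midpoint rule)
    h = s / n
    integral = sum(Phi((i + 0.5) * h) / ((i + 0.5) * h) ** 2 for i in range(n)) * h
    return b / s + (a - b) / s * Phi(s) + a * integral

def direct(n=200000):
    # h(x) = f(x ^ s)/(x ^ s) is monotone, so E[h(xi)] = int_0^1 h(Phi_inv(al)) d(al),
    # where f(x) = a if x < s (failure) and f(x) = b if x >= s (planned replacement).
    h = 1.0 / n
    cost_rate = lambda x: a / x if x < s else b / s
    return sum(cost_rate(Phi_inv((i + 0.5) * h)) for i in range(n)) * h

print(closed_form(), direct())  # both close to 2.5*ln(2) + 0.25 ~ 1.9829
```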
Note that the alternating renewal process $A_t$ is just the total time at which the system is on up to time $t$. It is clear that
$$\sum_{i=1}^{N_t} \xi_i \le A_t \le \sum_{i=1}^{N_t+1} \xi_i \qquad (12.38)$$
for each time $t$. We are interested in the limit property of the rate at which the system is on.
$$\frac{A_t}{t} \to \frac{\xi_1}{\xi_1 + \eta_1} \qquad (12.39)$$
Since
$$\frac{\xi_{k+1}}{t} \to 0 \quad \text{as } t \to \infty$$
and
$$\sum_{i=1}^{k+1} \eta_i \sim (k+1)\eta_1, \qquad \sum_{i=1}^{k} \xi_i \sim k\xi_1,$$
we have
$$\lim_{t \to \infty} M\left\{\frac{1}{t}\sum_{i=1}^{N_t} \xi_i \le x\right\} \le \lim_{t \to \infty} M\left\{\bigcup_{k=0}^{\infty}\left(\eta_1 > \frac{t(1-x)}{k+1}\right) \cap \left(\xi_1 \le \frac{tx}{k}\right)\right\}$$
$$= \lim_{t \to \infty}\sup_{k \ge 0} M\left\{\eta_1 > \frac{t(1-x)}{k+1}\right\} \wedge M\left\{\xi_1 \le \frac{tx}{k}\right\}$$
$$= \lim_{t \to \infty}\sup_{k \ge 0}\left(1 - \Psi\left(\frac{t(1-x)}{k+1}\right)\right) \wedge \Phi\left(\frac{tx}{k}\right)$$
$$= \sup_{y > 0} \Phi(xy) \wedge (1 - \Psi(y - xy)) = \Upsilon(x).$$
That is,
$$\lim_{t \to \infty} M\left\{\frac{1}{t}\sum_{i=1}^{N_t} \xi_i \le x\right\} \le \Upsilon(x). \qquad (12.41)$$
Since
$$\frac{\xi_{k+1}}{t} \to 0 \quad \text{as } t \to \infty$$
and
$$\sum_{i=1}^{k} \eta_i \sim k\eta_1, \qquad \sum_{i=1}^{k+1} \xi_i \sim (k+1)\xi_1,$$
we have
$$\lim_{t \to \infty} M\left\{\frac{1}{t}\sum_{i=1}^{N_t+1} \xi_i > x\right\} \le \lim_{t \to \infty} M\left\{\bigcup_{k=0}^{\infty}\left(\eta_1 \le \frac{t(1-x)}{k}\right) \cap \left(\xi_1 > \frac{tx}{k+1}\right)\right\}$$
$$= \lim_{t \to \infty}\sup_{k \ge 0} M\left\{\eta_1 \le \frac{t(1-x)}{k}\right\} \wedge M\left\{\xi_1 > \frac{tx}{k+1}\right\}$$
$$= \lim_{t \to \infty}\sup_{k \ge 0} \Psi\left(\frac{t(1-x)}{k}\right) \wedge \left(1 - \Phi\left(\frac{tx}{k+1}\right)\right)$$
$$= \sup_{y > 0} (1 - \Phi(xy)) \wedge \Psi(y - xy).$$
That is,
$$\lim_{t \to \infty} M\left\{\frac{1}{t}\sum_{i=1}^{N_t+1} \xi_i \le x\right\} \ge \Upsilon(x). \qquad (12.42)$$
Since
$$\frac{1}{t}\sum_{i=1}^{N_t} \xi_i \le \frac{A_t}{t} \le \frac{1}{t}\sum_{i=1}^{N_t+1} \xi_i,$$
we obtain
$$M\left\{\frac{1}{t}\sum_{i=1}^{N_t} \xi_i \le x\right\} \ge M\left\{\frac{A_t}{t} \le x\right\} \ge M\left\{\frac{1}{t}\sum_{i=1}^{N_t+1} \xi_i \le x\right\}.$$
It follows from (12.41) and (12.42) that for any real number $x$, we have
$$\lim_{t \to \infty} M\left\{\frac{A_t}{t} \le x\right\} = \Upsilon(x).$$
$$G^{-1}(\alpha) = \frac{\Phi^{-1}(\alpha)}{\Phi^{-1}(\alpha) + \Psi^{-1}(1-\alpha)}.$$
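As a numerical sketch (not from the book), the inverse distribution of the limiting availability can be tabulated directly from $\Phi^{-1}$ and $\Psi^{-1}$; the linear distributions below (on-times on $[1,3]$, off-times on $[0.5, 1.5]$) are arbitrary choices.

```python
def availability_quantile(alpha, Phi_inv, Psi_inv):
    """Inverse uncertainty distribution of the limit A_t/t -> xi1/(xi1 + eta1)."""
    return Phi_inv(alpha) / (Phi_inv(alpha) + Psi_inv(1.0 - alpha))

Phi_inv = lambda a: 1.0 + 2.0 * a   # on-times: linear distribution on [1, 3]
Psi_inv = lambda a: 0.5 + a         # off-times: linear distribution on [0.5, 1.5]

print(availability_quantile(0.5, Phi_inv, Psi_inv))  # 2/(2 + 1) = 0.666...
```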
Uncertain Calculus
that are homogeneous linear functions of time $t$ for any given $\alpha$. See Figure 13.1.

[Figure 13.1: the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$ of a Liu process, plotted against $t$ for $\alpha = 0.1, 0.2, \cdots, 0.9$.]
A Liu process is described by three properties in the above definition. Does such an uncertain process exist? The following theorem will answer this question.

Theorem 13.1 (Liu [84], Existence Theorem) There exists a Liu process.

Proof: It follows from Theorem 11.19 that there exists a stationary independent increment process $C_t$ whose inverse uncertainty distribution is
$$\Phi_t^{-1}(\alpha) = \frac{\sqrt{3}\,t}{\pi}\ln\frac{\alpha}{1-\alpha}.$$
Furthermore, $C_t$ has a Lipschitz continuous version. It is also easy to verify that every increment $C_{s+t} - C_s$ is a normal uncertain variable with expected value 0 and variance $t^2$. Hence there exists a Liu process.

Theorem 13.2 Let $C_t$ be a Liu process. Then for each time $t > 0$, the ratio $C_t/t$ is a normal uncertain variable with expected value 0 and variance 1. That is,
$$\frac{C_t}{t} \sim N(0, 1) \qquad (13.3)$$
for any $t > 0$.

Proof: Since $C_t$ is a normal uncertain variable $N(0, t)$, the operational law tells us that $C_t/t$ has an uncertainty distribution
$$\Psi(x) = \Phi_t(tx) = \left(1 + \exp\left(-\frac{\pi x}{\sqrt{3}}\right)\right)^{-1}.$$
Hence $C_t/t$ is a normal uncertain variable with expected value 0 and variance 1. The theorem is verified.
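A small sketch (not from the book): the uncertainty distribution of a normal uncertain variable $N(e, \sigma)$ is $\Phi(x) = (1 + \exp(\pi(e - x)/(\sqrt{3}\sigma)))^{-1}$, so the scaling identity $\Phi_t(tx) = \Phi_1(x)$ used in the proof can be checked numerically.

```python
import math

def normal_cdf(x, e=0.0, sigma=1.0):
    """Uncertainty distribution of a normal uncertain variable N(e, sigma)."""
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3.0) * sigma)))

# C_t ~ N(0, t), so C_t/t should follow N(0, 1): Phi_t(t*x) equals Phi_1(x).
t, x = 2.5, 0.7
print(normal_cdf(t * x, 0.0, t), normal_cdf(x))  # the two values coincide
```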
Theorem 13.3 (Liu [84]) Let $C_t$ be a Liu process. Then for each time $t$, we have
$$\frac{t^2}{2} \le E[C_t^2] \le t^2. \qquad (13.4)$$
$$E[C_t^2] \ge \int_0^{+\infty} M\left\{C_t \ge \sqrt{x}\right\}\mathrm{d}x = \int_0^{+\infty}\left(1 - \Phi_t(\sqrt{x})\right)\mathrm{d}x = \frac{t^2}{2}.$$
Theorem 13.4 (Iwamura-Xu [59]) Let $C_t$ be a Liu process. Then for each time $t$, we have
$$1.24t^4 < V[C_t^2] < 4.31t^4. \qquad (13.5)$$
Proof: Let $q$ be the expected value of $C_t^2$. On the one hand, it follows from the definition of variance that
$$V[C_t^2] = \int_0^{+\infty} M\left\{(C_t^2 - q)^2 \ge x\right\}\mathrm{d}x$$
$$\le \int_0^{+\infty} M\left\{C_t \ge \sqrt{q + \sqrt{x}}\right\}\mathrm{d}x + \int_0^{+\infty} M\left\{C_t \le -\sqrt{q + \sqrt{x}}\right\}\mathrm{d}x + \int_0^{+\infty} M\left\{-\sqrt{q - \sqrt{x}} \le C_t \le \sqrt{q - \sqrt{x}}\right\}\mathrm{d}x.$$
Since $t^2/2 \le q \le t^2$, we have
$$\text{First Term} = \int_0^{+\infty} M\left\{C_t \ge \sqrt{q + \sqrt{x}}\right\}\mathrm{d}x \le \int_0^{+\infty} M\left\{C_t \ge \sqrt{t^2/2 + \sqrt{x}}\right\}\mathrm{d}x$$
$$= \int_0^{+\infty}\left(1 - \left(1 + \exp\left(-\frac{\pi\sqrt{t^2/2 + \sqrt{x}}}{\sqrt{3}\,t}\right)\right)^{-1}\right)\mathrm{d}x \le 1.725t^4,$$
$$\text{Second Term} = \int_0^{+\infty} M\left\{C_t \le -\sqrt{q + \sqrt{x}}\right\}\mathrm{d}x \le \int_0^{+\infty} M\left\{C_t \le -\sqrt{t^2/2 + \sqrt{x}}\right\}\mathrm{d}x$$
$$= \int_0^{+\infty}\left(1 + \exp\left(\frac{\pi\sqrt{t^2/2 + \sqrt{x}}}{\sqrt{3}\,t}\right)\right)^{-1}\mathrm{d}x \le 1.725t^4,$$
$$\text{Third Term} = \int_0^{+\infty} M\left\{-\sqrt{q - \sqrt{x}} \le C_t \le \sqrt{q - \sqrt{x}}\right\}\mathrm{d}x \le \int_0^{q^2} M\left\{C_t \le \sqrt{q - \sqrt{x}}\right\}\mathrm{d}x$$
$$\le \int_0^{t^4} M\left\{C_t \le \sqrt{t^2 - \sqrt{x}}\right\}\mathrm{d}x = \int_0^{t^4}\left(1 + \exp\left(-\frac{\pi\sqrt{t^2 - \sqrt{x}}}{\sqrt{3}\,t}\right)\right)^{-1}\mathrm{d}x < 0.86t^4.$$
Adding the three terms yields $V[C_t^2] < 4.31t^4$. On the other hand, a similar estimate from below gives
$$V[C_t^2] > 1.24t^4.$$
Definition 13.2 Let $C_t$ be a Liu process. Then for any real numbers $e$ and $\sigma > 0$, the uncertain process
$$A_t = et + \sigma C_t \qquad (13.6)$$
is called an arithmetic Liu process, where $e$ is called the drift and $\sigma$ is called the diffusion.

Similarly, the uncertain process
$$G_t = \exp(et + \sigma C_t)$$
is called a geometric Liu process, where $e$ is called the log-drift and $\sigma$ is called the log-diffusion.

Note that the geometric Liu process $G_t$ has a lognormal uncertainty distribution, i.e.,
$$G_t \sim LOGN(et, \sigma t) \qquad (13.11)$$
$$\int_a^b X_t \,\mathrm{d}C_t = \lim_{\Delta \to 0}\sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) \qquad (13.16)$$
provided that the limit exists almost surely and is finite. In this case, the uncertain process $X_t$ is said to be integrable.

Since $X_t$ and $C_t$ are uncertain variables at each time $t$, the limit in (13.16) is also an uncertain variable provided that the limit exists almost surely and is finite. Hence an uncertain process $X_t$ is integrable with respect to $C_t$ if and only if the limit in (13.16) is an uncertain variable.

Example 13.1: For any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from (13.16) that
$$\int_0^s \mathrm{d}C_t = \lim_{\Delta \to 0}\sum_{i=1}^{k}\left(C_{t_{i+1}} - C_{t_i}\right) \equiv C_s - C_0 = C_s.$$
That is,
$$\int_0^s \mathrm{d}C_t = C_s. \qquad (13.17)$$
Example 13.2: For any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from (13.16) that
$$C_s^2 = \sum_{i=1}^{k}\left(C_{t_{i+1}}^2 - C_{t_i}^2\right) = \sum_{i=1}^{k}\left(C_{t_{i+1}} - C_{t_i}\right)^2 + 2\sum_{i=1}^{k} C_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) \to 0 + 2\int_0^s C_t \,\mathrm{d}C_t$$
as $\Delta \to 0$. That is,
$$\int_0^s C_t \,\mathrm{d}C_t = \frac{1}{2}C_s^2. \qquad (13.18)$$

Example 13.3: For any partition $0 = t_1 < t_2 < \cdots < t_{k+1} = s$, it follows from (13.16) that
$$sC_s = \sum_{i=1}^{k}\left(t_{i+1}C_{t_{i+1}} - t_i C_{t_i}\right) = \sum_{i=1}^{k} C_{t_{i+1}}(t_{i+1} - t_i) + \sum_{i=1}^{k} t_i\left(C_{t_{i+1}} - C_{t_i}\right) \to \int_0^s C_t \,\mathrm{d}t + \int_0^s t\,\mathrm{d}C_t$$
as $\Delta \to 0$. That is,
$$\int_0^s C_t \,\mathrm{d}t + \int_0^s t\,\mathrm{d}C_t = sC_s. \qquad (13.19)$$
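Because Liu process sample paths are Lipschitz continuous, these integrals behave path by path like ordinary Riemann-Stieltjes integrals: the quadratic variation term vanishes, so there is no Ito-type correction in (13.18). A sketch (not from the book) checks (13.18) along one smooth stand-in path $c(t) = \sin(3t)$, an arbitrary choice with $c(0) = 0$.

```python
import math

def liu_integral(f, path, s, k):
    """Left-endpoint Riemann-Stieltjes sum of int_0^s f(C_t) dC_t along one path."""
    total, dt = 0.0, s / k
    for i in range(k):
        t0, t1 = i * dt, (i + 1) * dt
        total += f(path(t0)) * (path(t1) - path(t0))
    return total

c = lambda t: math.sin(3.0 * t)  # stand-in Lipschitz sample path with c(0) = 0
approx = liu_integral(lambda v: v, c, 1.0, 100000)
print(approx, c(1.0) ** 2 / 2)   # the sum converges to C_s^2 / 2, with no Ito term
```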
Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a,b]$. Since the uncertain process $X_t$ is sample-continuous, almost all sample paths are continuous functions with respect to $t$. Hence the limit
$$\lim_{\Delta \to 0}\sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right)$$
exists almost surely and is finite. On the other hand, since $X_t$ and $C_t$ are uncertain variables at each time $t$, the above limit is also a measurable function. Hence the limit is an uncertain variable and then $X_t$ is integrable with respect to $C_t$.

$$a = t_1 < \cdots < t_m = a' < t_{m+1} < \cdots < t_n = b' < t_{n+1} < \cdots < t_{k+1} = b,$$
the limit
$$\lim_{\Delta \to 0}\sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right)$$
exists almost surely and is finite. Hence $X_t$ is integrable with respect to $C_t$ on the subinterval $[a', b']$. Next, for the partition
$$a = t_1 < t_2 < \cdots < t_m = c < t_{m+1} < \cdots < t_{k+1} = b,$$
we have
$$\sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) = \sum_{i=1}^{m-1} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) + \sum_{i=m}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right).$$
Note that
$$\int_a^b X_t \,\mathrm{d}C_t = \lim_{\Delta \to 0}\sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right),$$
$$\int_a^c X_t \,\mathrm{d}C_t = \lim_{\Delta \to 0}\sum_{i=1}^{m-1} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right),$$
$$\int_c^b X_t \,\mathrm{d}C_t = \lim_{\Delta \to 0}\sum_{i=m}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right).$$
Hence the equation (13.20) is proved.
Theorem 13.7 (Linearity of Liu Integral) Let $X_t$ and $Y_t$ be integrable uncertain processes on $[a,b]$, and let $\alpha$ and $\beta$ be real numbers. Then
$$\int_a^b (\alpha X_t + \beta Y_t)\,\mathrm{d}C_t = \alpha\int_a^b X_t \,\mathrm{d}C_t + \beta\int_a^b Y_t \,\mathrm{d}C_t. \qquad (13.21)$$
Proof: Let $a = t_1 < t_2 < \cdots < t_{k+1} = b$ be a partition of the closed interval $[a,b]$. It follows from the definition of Liu integral that
$$\int_a^b (\alpha X_t + \beta Y_t)\,\mathrm{d}C_t = \lim_{\Delta \to 0}\sum_{i=1}^{k}(\alpha X_{t_i} + \beta Y_{t_i})\left(C_{t_{i+1}} - C_{t_i}\right)$$
$$= \lim_{\Delta \to 0}\alpha\sum_{i=1}^{k} X_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right) + \lim_{\Delta \to 0}\beta\sum_{i=1}^{k} Y_{t_i}\left(C_{t_{i+1}} - C_{t_i}\right)$$
$$= \alpha\int_a^b X_t \,\mathrm{d}C_t + \beta\int_a^b Y_t \,\mathrm{d}C_t.$$
That is, the sum is also a normal uncertain variable. Since $f$ is an integrable function, we have
$$\sum_{i=1}^{k}|f(t_i)|(t_{i+1} - t_i) \to \int_0^s |f(t)|\,\mathrm{d}t$$

Exercise 13.1: Let $s$ be a given time with $s > 0$. Show that the Liu integral
$$\int_0^s t\,\mathrm{d}C_t \qquad (13.24)$$

Exercise 13.2: For any real number $\alpha$ with $0 < \alpha < 1$, the uncertain process
$$F_s = \int_0^s (s-t)^{-\alpha}\,\mathrm{d}C_t \qquad (13.26)$$
for any $t \ge 0$, then $Z_t$ is called a general Liu process with drift $\mu_t$ and diffusion $\sigma_t$. Furthermore, $Z_t$ has an uncertain differential
$$\mathrm{d}Z_t = \mu_t \mathrm{d}t + \sigma_t \mathrm{d}C_t.$$

Example 13.4: It follows from the equation (13.17) that the Liu process $C_t$ can be written as
$$C_t = \int_0^t \mathrm{d}C_s.$$
Thus $C_t$ is a general Liu process with drift 0 and diffusion 1, and has an uncertain differential $\mathrm{d}C_t$.
Example 13.5: It follows from the equation (13.18) that $C_t^2$ can be written as
$$C_t^2 = 2\int_0^t C_s \,\mathrm{d}C_s.$$
Thus $C_t^2$ is a general Liu process with drift 0 and diffusion $2C_t$, and has an uncertain differential
$$\mathrm{d}(C_t^2) = 2C_t \,\mathrm{d}C_t.$$

Example 13.6: It follows from the equation (13.19) that $tC_t$ can be written as
$$tC_t = \int_0^t C_s \,\mathrm{d}s + \int_0^t s\,\mathrm{d}C_s.$$
Thus $tC_t$ is a general Liu process with drift $C_t$ and diffusion $t$, and has an uncertain differential
$$\mathrm{d}(tC_t) = C_t \,\mathrm{d}t + t\,\mathrm{d}C_t.$$

Proof: Let $Z_t$ be a general Liu process with drift $\mu_t$ and diffusion $\sigma_t$. Then we immediately have
$$Z_t = Z_0 + \int_0^t \mu_s \,\mathrm{d}s + \int_0^t \sigma_s \,\mathrm{d}C_s.$$
$$\mathrm{d}Z_t = \frac{\partial h}{\partial t}(t, C_t)\,\mathrm{d}t + \frac{\partial h}{\partial c}(t, C_t)\,\mathrm{d}C_t. \qquad (13.31)$$
Thus $G_t$ is a general Liu process with drift $eG_t$ and diffusion $\sigma G_t$.

Theorem 13.11 (Liu [80], Chain Rule) Let $f(c)$ be a continuously differentiable function. Then $f(C_t)$ has an uncertain differential
$$\mathrm{d}f(C_t) = f'(C_t)\,\mathrm{d}C_t.$$
That is,
$$\int_0^s f'(C_t)\,\mathrm{d}C_t = f(C_s) - f(C_0). \qquad (13.41)$$

Proof: Note that $\Delta X_t$ and $\Delta Y_t$ are infinitesimals of the same order. Since the function $xy$ is a continuously differentiable function with respect to $x$ and $y$, by using the Taylor series expansion, the infinitesimal increment of $X_tY_t$ has a first-order approximation,

Example: In order to calculate the uncertain differential of
$$Z_t = \exp(t)C_t^2,$$
we define
$$X_t = \exp(t), \qquad Y_t = C_t^2.$$
Then
$$\mathrm{d}X_t = \exp(t)\,\mathrm{d}t, \qquad \mathrm{d}Y_t = 2C_t\,\mathrm{d}C_t.$$
It follows from the integration by parts that
$$\mathrm{d}Z_t = \exp(t)C_t^2\,\mathrm{d}t + 2\exp(t)C_t\,\mathrm{d}C_t.$$

Example 13.17: The integration by parts may also calculate the uncertain differential of
$$Z_t = \sin(t+1)\int_0^t s\,\mathrm{d}C_s.$$
In this case, we define
$$X_t = \sin(t+1), \qquad Y_t = \int_0^t s\,\mathrm{d}C_s.$$
Then
$$\mathrm{d}X_t = \cos(t+1)\,\mathrm{d}t, \qquad \mathrm{d}Y_t = t\,\mathrm{d}C_t.$$
It follows from the integration by parts that
$$\mathrm{d}Z_t = \left(\int_0^t s\,\mathrm{d}C_s\right)\cos(t+1)\,\mathrm{d}t + \sin(t+1)\,t\,\mathrm{d}C_t.$$
Uncertain Differential Equation
has a solution
$$X_t = X_0 + \int_0^t u_s \,\mathrm{d}s + \int_0^t v_s \,\mathrm{d}C_s. \qquad (14.4)$$

Example 14.1: Let $a$ and $b$ be real numbers. Consider the uncertain differential equation
$$\mathrm{d}X_t = a\,\mathrm{d}t + b\,\mathrm{d}C_t. \qquad (14.5)$$
It follows from Theorem 14.1 that the solution is
$$X_t = X_0 + \int_0^t a\,\mathrm{d}s + \int_0^t b\,\mathrm{d}C_s.$$
That is,
$$X_t = X_0 + at + bC_t. \qquad (14.6)$$

Theorem 14.2 Let $u_t$ and $v_t$ be two integrable uncertain processes. Then the uncertain differential equation
$$\mathrm{d}X_t = u_t X_t \,\mathrm{d}t + v_t X_t \,\mathrm{d}C_t \qquad (14.7)$$
has a solution
$$X_t = X_0 \exp\left(\int_0^t u_s \,\mathrm{d}s + \int_0^t v_s \,\mathrm{d}C_s\right). \qquad (14.8)$$

Example 14.2: Let $a$ and $b$ be real numbers. Consider the uncertain differential equation
$$\mathrm{d}X_t = aX_t \,\mathrm{d}t + bX_t \,\mathrm{d}C_t. \qquad (14.9)$$
It follows from Theorem 14.2 that the solution is
$$X_t = X_0 \exp\left(\int_0^t a\,\mathrm{d}s + \int_0^t b\,\mathrm{d}C_s\right).$$
That is,
$$X_t = X_0 \exp(at + bC_t). \qquad (14.10)$$
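Along any fixed Lipschitz sample path $c(t)$, equation (14.9) is an ordinary differential equation, so an Euler scheme can be compared with the closed form (14.10). This sketch (not from the book) uses an arbitrary stand-in path $c(t) = 0.2\sin t$ and $X_0 = 1$.

```python
import math

def euler_solve(a, b, x0, path, t_end, n):
    """Euler scheme for dX = a*X dt + b*X dC along one fixed sample path."""
    x, dt = x0, t_end / n
    for i in range(n):
        t0, t1 = i * dt, (i + 1) * dt
        x += a * x * dt + b * x * (path(t1) - path(t0))
    return x

a, b, t_end = 0.5, 1.0, 1.0
c = lambda t: 0.2 * math.sin(t)              # stand-in sample path of C_t
numeric = euler_solve(a, b, 1.0, c, t_end, 50000)
exact = math.exp(a * t_end + b * c(t_end))   # closed form (14.10) with X0 = 1
print(numeric, exact)                        # both close to exp(0.5 + 0.2*sin 1)
```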
has a solution
$$X_t = U_t\left(X_0 + \int_0^t \frac{u_{2s}}{U_s}\,\mathrm{d}s + \int_0^t \frac{v_{2s}}{U_s}\,\mathrm{d}C_s\right) \qquad (14.12)$$
where
$$U_t = \exp\left(\int_0^t u_{1s}\,\mathrm{d}s + \int_0^t v_{1s}\,\mathrm{d}C_s\right). \qquad (14.13)$$
At first, we have
$$U_t = \exp\left(\int_0^t (-a)\,\mathrm{d}s + \int_0^t 0\,\mathrm{d}C_s\right) = \exp(-at).$$
That is,
$$X_t = \frac{m}{a} + \exp(-at)\left(X_0 - \frac{m}{a}\right) + \sigma\exp(-at)\int_0^t \exp(as)\,\mathrm{d}C_s. \qquad (14.15)$$
That is,
$$X_t = \exp(\sigma C_t)\left(X_0 + m\int_0^t \exp(-\sigma C_s)\,\mathrm{d}s\right). \qquad (14.18)$$
and
$$\mathrm{d}X_t = \alpha_t X_t \,\mathrm{d}t + g(t, X_t)\,\mathrm{d}C_t. \qquad (14.20)$$

Theorem 14.4 (Liu [105]) Let $f$ be a function of two variables and let $\sigma_t$ be an integrable uncertain process. Then the uncertain differential equation
$$\mathrm{d}X_t = f(t, X_t)\,\mathrm{d}t + \sigma_t X_t \,\mathrm{d}C_t$$
has a solution
$$X_t = Y_t^{-1}Z_t \qquad (14.22)$$
where
$$Y_t = \exp\left(-\int_0^t \sigma_s \,\mathrm{d}C_s\right) \qquad (14.23)$$
and $Z_t$ is the solution of the uncertain differential equation
$$\mathrm{d}Z_t = Y_t f(t, Y_t^{-1}Z_t)\,\mathrm{d}t \qquad (14.24)$$
with initial value $Z_0 = X_0$.

Proof: At first, by using the chain rule, the uncertain process $Y_t$ has an uncertain differential
$$\mathrm{d}Y_t = -\exp\left(-\int_0^t \sigma_s \,\mathrm{d}C_s\right)\sigma_t \,\mathrm{d}C_t = -Y_t\sigma_t\,\mathrm{d}C_t.$$
Theorem 14.4 says the uncertain differential equation (14.25) has a solution $X_t = Y_t^{-1}Z_t$, i.e.,
$$X_t = \exp(\sigma C_t)\left(X_0^{1-\alpha} + (1-\alpha)\int_0^t \exp((\alpha-1)\sigma C_s)\,\mathrm{d}s\right)^{1/(1-\alpha)}.$$
Theorem 14.5 (Liu [105]) Let $g$ be a function of two variables and let $\alpha_t$ be an integrable uncertain process. Then the uncertain differential equation
$$\mathrm{d}X_t = \alpha_t X_t \,\mathrm{d}t + g(t, X_t)\,\mathrm{d}C_t$$
has a solution
$$X_t = Y_t^{-1}Z_t \qquad (14.27)$$
where
$$Y_t = \exp\left(-\int_0^t \alpha_s \,\mathrm{d}s\right) \qquad (14.28)$$
and $Z_t$ is the solution of the uncertain differential equation
$$\mathrm{d}Z_t = Y_t g(t, Y_t^{-1}Z_t)\,\mathrm{d}C_t$$
with initial value $Z_0 = X_0$.

Proof: At first, by using the chain rule, the uncertain process $Y_t$ has an uncertain differential
$$\mathrm{d}Y_t = -\exp\left(-\int_0^t \alpha_s \,\mathrm{d}s\right)\alpha_t \,\mathrm{d}t = -Y_t\alpha_t\,\mathrm{d}t.$$
That is,
$$\mathrm{d}(X_tY_t) = Y_t g(t, X_t)\,\mathrm{d}C_t.$$
Defining $Z_t = X_tY_t$, we obtain $X_t = Y_t^{-1}Z_t$ and $\mathrm{d}Z_t = Y_t g(t, Y_t^{-1}Z_t)\,\mathrm{d}C_t$. Furthermore, since $Y_0 = 1$, the initial value $Z_0$ is just $X_0$. The theorem is thus verified.

Since $\beta \ne 1$, Theorem 14.5 says the uncertain differential equation (14.30) has a solution $X_t = Y_t^{-1}Z_t$, i.e.,
$$X_t = \exp(\alpha t)\left(X_0^{1-\beta} + (1-\beta)\int_0^t \exp((\beta-1)\alpha s)\,\mathrm{d}C_s\right)^{1/(1-\beta)}.$$
and
$$\mathrm{d}X_t = \alpha_t \,\mathrm{d}t + g(t, X_t)\,\mathrm{d}C_t. \qquad (14.32)$$

Theorem 14.6 (Yao [174]) Let $f$ be a function of two variables and let $\sigma_t$ be an integrable uncertain process. Then the uncertain differential equation
$$\mathrm{d}X_t = f(t, X_t)\,\mathrm{d}t + \sigma_t \,\mathrm{d}C_t$$
has a solution
$$X_t = Y_t + Z_t \qquad (14.34)$$
where
$$Y_t = \int_0^t \sigma_s \,\mathrm{d}C_s \qquad (14.35)$$
and $Z_t$ is the solution of the uncertain differential equation $\mathrm{d}Z_t = f(t, Y_t + Z_t)\,\mathrm{d}t$ with initial value $Z_0 = X_0$.
Hence
$$X_t = X_0 + \sigma C_t - \ln\left(1 - \alpha\int_0^t \exp(X_0 + \sigma C_s)\,\mathrm{d}s\right).$$

Theorem 14.7 (Yao [174]) Let $g$ be a function of two variables and let $\alpha_t$ be an integrable uncertain process. Then the uncertain differential equation
$$\mathrm{d}X_t = \alpha_t \,\mathrm{d}t + g(t, X_t)\,\mathrm{d}C_t \qquad (14.38)$$
has a solution
$$X_t = Y_t + Z_t \qquad (14.39)$$
where
$$Y_t = \int_0^t \alpha_s \,\mathrm{d}s \qquad (14.40)$$
and $Z_t$ is the solution of the uncertain differential equation
$$\mathrm{d}Z_t = g(t, Y_t + Z_t)\,\mathrm{d}C_t \qquad (14.41)$$
with initial value $Z_0 = X_0$.

That is,
$$\mathrm{d}(X_t - Y_t) = g(t, X_t)\,\mathrm{d}C_t.$$
Defining $Z_t = X_t - Y_t$, we obtain $X_t = Y_t + Z_t$ and $\mathrm{d}Z_t = g(t, Y_t + Z_t)\,\mathrm{d}C_t$. Furthermore, since $Y_0 = 0$, the initial value $Z_0$ is just $X_0$. The theorem is proved.
Since $\sigma \ne 0$, we have
$$\mathrm{d}\exp(-Z_t) = -\sigma\exp(\alpha t)\,\mathrm{d}C_t.$$
Hence
$$X_t = X_0 + \alpha t - \ln\left(1 - \sigma\int_0^t \exp(X_0 + \alpha s)\,\mathrm{d}C_s\right).$$
has a unique solution if the coefficients $f(t,x)$ and $g(t,x)$ satisfy the linear growth condition
$$|f(t,x)| + |g(t,x)| \le L(1 + |x|), \quad \forall x \in \Re,\ t \ge 0$$
and the Lipschitz condition
$$|f(t,x) - f(t,y)| + |g(t,x) - g(t,y)| \le L|x - y|, \quad \forall x, y \in \Re,\ t \ge 0. \qquad (14.45)$$
For each $\gamma \in \Gamma$, it follows from the linear growth condition and Lipschitz condition that
$$D_t^{(0)}(\gamma) = \max_{0 \le s \le t}\left|\int_0^s f(v, X_0)\,\mathrm{d}v + \int_0^s g(v, X_0)\,\mathrm{d}C_v(\gamma)\right|$$
$$\le \int_0^t |f(v, X_0)|\,\mathrm{d}v + K_\gamma\int_0^t |g(v, X_0)|\,\mathrm{d}v \le (1 + |X_0|)L(1 + K_\gamma)t.$$
Next we prove that the solution is unique. Assume that both $X_t$ and $X_t^*$ are solutions of the uncertain differential equation. Then for each $\gamma \in \Gamma$, it follows from the linear growth condition and Lipschitz condition that
$$|X_t(\gamma) - X_t^*(\gamma)| \le L(1 + K_\gamma)\int_0^t |X_v(\gamma) - X_v^*(\gamma)|\,\mathrm{d}v.$$
It follows from the Gronwall inequality that $|X_t(\gamma) - X_t^*(\gamma)| = 0$. Hence $X_t = X_t^*$, and the solution is unique.
14.4 Stability

Definition 14.2 (Liu [80]) An uncertain differential equation is said to be stable if for any two solutions $X_t$ and $Y_t$, we have
$$\lim_{|X_0 - Y_0| \to 0} M\{|X_t - Y_t| < \varepsilon \text{ for all } t \ge 0\} = 1 \qquad (14.46)$$
for any given number $\varepsilon > 0$.

Example 14.10: Some uncertain differential equations are not stable. For example, consider
$$\mathrm{d}X_t = X_t \,\mathrm{d}t + b\,\mathrm{d}C_t. \qquad (14.48)$$
It is clear that two solutions with different initial values $X_0$ and $Y_0$ are
$$X_t = \exp(t)X_0 + b\exp(t)\int_0^t \exp(-s)\,\mathrm{d}C_s,$$
$$Y_t = \exp(t)Y_0 + b\exp(t)\int_0^t \exp(-s)\,\mathrm{d}C_s.$$
Then for any given number $\varepsilon > 0$, we have
$$\lim_{|X_0 - Y_0| \to 0} M\{|X_t - Y_t| < \varepsilon \text{ for all } t \ge 0\} = \lim_{|X_0 - Y_0| \to 0} M\{|X_0 - Y_0|\exp(t) < \varepsilon \text{ for all } t \ge 0\} = 0 \ne 1.$$
Hence the uncertain differential equation (14.48) is not stable.
$$|f(t,x) - f(t,y)| + |g(t,x) - g(t,y)| \le L(t)|x - y|, \quad \forall x, y \in \Re,\ t \ge 0 \qquad (14.51)$$
Proof: Since $L(t)$ is bounded on $[0, +\infty)$, there is a constant $R$ such that $L(t) \le R$ for any $t$. Then the strong Lipschitz condition (14.51) implies the following Lipschitz condition,
$$|f(t,x) - f(t,y)| + |g(t,x) - g(t,y)| \le R|x - y|, \quad \forall x, y \in \Re,\ t \ge 0. \qquad (14.52)$$
It follows from the linear growth condition (14.50), the Lipschitz condition (14.52) and the existence and uniqueness theorem that the uncertain differential equation (14.49) has a unique solution. Let $X_t$ and $Y_t$ be two solutions with initial values $X_0$ and $Y_0$, respectively. Then for each $\gamma$, we have
$$\mathrm{d}|X_t(\gamma) - Y_t(\gamma)| \le |f(t, X_t(\gamma)) - f(t, Y_t(\gamma))|\,\mathrm{d}t + |g(t, X_t(\gamma)) - g(t, Y_t(\gamma))|\,|\mathrm{d}C_t(\gamma)|$$
$$\le L(t)|X_t(\gamma) - Y_t(\gamma)|\,\mathrm{d}t + L(t)K(\gamma)|X_t(\gamma) - Y_t(\gamma)|\,\mathrm{d}t = L(t)(1 + K(\gamma))|X_t(\gamma) - Y_t(\gamma)|\,\mathrm{d}t$$
where $K(\gamma)$ is the Lipschitz constant of the sample path $C_t(\gamma)$. It follows that
$$|X_t(\gamma) - Y_t(\gamma)| \le |X_0 - Y_0|\exp\left((1 + K(\gamma))\int_0^{+\infty} L(s)\,\mathrm{d}s\right).$$
Since
$$M\left\{|X_0 - Y_0|\exp\left((1 + K(\gamma))\int_0^{+\infty} L(s)\,\mathrm{d}s\right) < \varepsilon\right\} \to 1$$
as $|X_0 - Y_0| \to 0$, we obtain
$$\lim_{|X_0 - Y_0| \to 0} M\{|X_t - Y_t| < \varepsilon \text{ for all } t \ge 0\} = 1.$$
Hence the uncertain differential equation is stable.
Exercise 14.1: Suppose u1t, u2t, v1t, v2t are bounded functions with respect to t such that
    ∫_0^{+∞} |u1t|dt < +∞, ∫_0^{+∞} |v1t|dt < +∞. (14.53)
Show that the linear uncertain differential equation
    dXt = (u1t Xt + u2t)dt + (v1t Xt + v2t)dCt
is stable.
14.5 α-Path
Definition 14.3 (Yao-Chen [173]) Let α be a number with 0 < α < 1. An uncertain differential equation
    dXt = f(t, Xt)dt + g(t, Xt)dCt
is said to have an α-path Xtα if it solves the corresponding ordinary differential equation
    dXtα = f(t, Xtα)dt + |g(t, Xtα)|Φ−1(α)dt
where Φ−1(α) is the inverse standard normal uncertainty distribution, i.e.,
    Φ−1(α) = (√3/π) ln(α/(1 − α)).
Remark 14.2: Note that each α-path Xtα is a real-valued function of time t, but is not necessarily one of the sample paths. Furthermore, almost all α-paths are continuous functions with respect to time t.
Example 14.11: The uncertain differential equation dXt = adt + bdCt with
X0 = 0 has an α-path
Xtα = at + |b|Φ−1 (α)t (14.58)
where Φ−1 is the inverse standard normal uncertainty distribution.
Example 14.12: The uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an α-path
    Xtα = exp(at + |b|Φ−1(α)t).
(Figure: sample α-paths Xtα of this equation for α = 0.7, 0.8, 0.9.)
Theorem 14.10 (Yao-Chen Formula [173]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation
    dXt = f(t, Xt)dt + g(t, Xt)dCt,
respectively. Then
    M{Xt ≤ Xtα, ∀t} = α, (14.61)
    M{Xt > Xtα, ∀t} = 1 − α. (14.62)
Proof: At first, for each α-path Xtα, we divide the time interval into two parts,
    T+ = {t | g(t, Xtα) ≥ 0}, T− = {t | g(t, Xtα) < 0},
and write
    Λ1+ = {γ | dCt(γ)/dt ≤ Φ−1(α) for any t ∈ T+},
    Λ1− = {γ | dCt(γ)/dt ≥ Φ−1(1 − α) for any t ∈ T−}
where Φ−1 is the inverse standard normal uncertainty distribution. Since T+ and T− are disjoint sets and Ct has independent increments, we get
    M{Λ1+} = α, M{Λ1−} = α, M{Λ1+ ∩ Λ1−} = α.
For any γ ∈ Λ1+ ∩ Λ1−, we always have
    g(t, Xt(γ)) dCt(γ)/dt ≤ |g(t, Xtα)|Φ−1(α), ∀t.
Hence Xt(γ) ≤ Xtα for all t and
    M{Xt ≤ Xtα, ∀t} ≥ M{Λ1+ ∩ Λ1−} = α. (14.63)
Similarly, write
    Λ2+ = {γ | dCt(γ)/dt > Φ−1(α) for any t ∈ T+},
    Λ2− = {γ | dCt(γ)/dt < Φ−1(1 − α) for any t ∈ T−}.
Then
    M{Λ2+} = 1 − α, M{Λ2−} = 1 − α, M{Λ2+ ∩ Λ2−} = 1 − α.
For any γ ∈ Λ2+ ∩ Λ2−, we always have
    g(t, Xt(γ)) dCt(γ)/dt > |g(t, Xtα)|Φ−1(α), ∀t.
Hence Xt(γ) > Xtα for all t and
    M{Xt > Xtα, ∀t} ≥ M{Λ2+ ∩ Λ2−} = 1 − α. (14.64)
Note that {Xt ≤ Xtα, ∀t} and its complement are opposite events with each other. By using the duality axiom, we obtain
    M{Xt ≤ Xtα, ∀t} + M{the complement of {Xt ≤ Xtα, ∀t}} = 1.
It follows from the inclusion of {Xt > Xtα, ∀t} in that complement and the monotonicity theorem that
    M{Xt ≤ Xtα, ∀t} + M{Xt > Xtα, ∀t} ≤ 1. (14.65)
Thus (14.61) and (14.62) follow from (14.63), (14.64) and (14.65) immediately.
346 Chapter 14 - Uncertain Differential Equation
Remark 14.3: It can also be shown that for any α ∈ (0, 1), the following two equations are true,
    M{Xt < Xtα, ∀t} = α, (14.66)
    M{Xt ≥ Xtα, ∀t} = 1 − α. (14.67)
Note that {Xt < Xtα, ∀t} and {Xt ≥ Xtα, ∀t} are disjoint events but not opposite. Although it is always true that
    M{Xt < Xtα, ∀t} + M{Xt ≥ Xtα, ∀t} ≡ 1, (14.68)
the union of {Xt < Xtα, ∀t} and {Xt ≥ Xtα, ∀t} does not make the universal set, and it is possible that
    M{(Xt < Xtα, ∀t) ∪ (Xt ≥ Xtα, ∀t)} < 1. (14.69)
Exercise 14.2: Show that the solution of the uncertain differential equation dXt = adt + bdCt with X0 = 0 has an inverse uncertainty distribution
    Ψt−1(α) = at + |b|Φ−1(α)t (14.75)
where Φ−1 is the inverse standard normal uncertainty distribution.
Exercise 14.3: Show that the solution of the uncertain differential equation dXt = aXt dt + bXt dCt with X0 = 1 has an inverse uncertainty distribution
    Ψt−1(α) = exp(at + |b|Φ−1(α)t) (14.76)
where Φ−1 is the inverse standard normal uncertainty distribution.
Section 14.6 - Yao-Chen Formula 347
If J(x) is a strictly increasing function, then J(Xt) has an inverse uncertainty distribution Υt−1(α) = J(Xtα). Thus we have
    E[J(Xt)] = ∫_0^1 Υt−1(α)dα = ∫_0^1 J(Xtα)dα.
If J(x) is a strictly decreasing function, then J(Xt) has an inverse uncertainty distribution
    Υt−1(α) = J(Xt^{1−α}).
Thus we have
    E[J(Xt)] = ∫_0^1 Υt−1(α)dα = ∫_0^1 J(Xt^{1−α})dα = ∫_0^1 J(Xtα)dα.
Exercise 14.4: Let Xt and Xtα be the solution and α-path of some uncertain differential equation. Show that
    E[Xt] = ∫_0^1 Xtα dα, (14.79)
    E[(Xt − K)+] = ∫_0^1 (Xtα − K)+ dα, (14.80)
    E[(K − Xt)+] = ∫_0^1 (K − Xtα)+ dα. (14.81)
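These expected value formulas are straightforward to evaluate by discretizing α. A minimal Python sketch for dXt = a dt + b dCt, whose α-path Xtα = at + |b|Φ−1(α)t is given in Example 14.11; the concrete values of a, b, t are arbitrary illustrations:

```python
import math

def inv_normal(alpha):
    """Inverse standard normal uncertainty distribution."""
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def alpha_path(a, b, t, alpha):
    """alpha-path of dXt = a dt + b dCt with X0 = 0 (Example 14.11)."""
    return a * t + abs(b) * inv_normal(alpha) * t

def expected_value(a, b, t, n=2000):
    """E[Xt] = integral of the alpha-path over alpha in (0, 1), midpoint rule (14.79)."""
    return sum(alpha_path(a, b, t, (i + 0.5) / n) for i in range(n)) / n

ev = expected_value(a=1.5, b=2.0, t=3.0)
print(ev)  # close to a*t = 4.5, since inv_normal integrates to 0 over (0, 1)
```

The diffusion term drops out of the expected value because Φ−1(α) is antisymmetric about α = 1/2, which the midpoint-rule sum reproduces almost exactly.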
Theorem 14.13 (Yao [171]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation, respectively. Then for any time s > 0 and strictly increasing function J(x), the supremum
    sup_{0≤t≤s} J(Xt) (14.83)
has an inverse uncertainty distribution
    Ψs−1(α) = sup_{0≤t≤s} J(Xtα); (14.84)
and the infimum inf_{0≤t≤s} J(Xt) has an inverse uncertainty distribution
    Ψs−1(α) = inf_{0≤t≤s} J(Xtα). (14.86)
Proof: Since J(x) is strictly increasing, it follows from the Yao-Chen formula that
    M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xtα)} ≥ M{Xt ≤ Xtα, ∀t} = α.
Similarly, we have
    M{sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xtα)} ≥ M{Xt > Xtα, ∀t} = 1 − α. (14.88)
Hence (14.84) holds. In the same way,
    M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xtα)} ≥ M{Xt ≤ Xtα, ∀t} = α;
similarly, we have
    M{inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xtα)} ≥ M{Xt > Xtα, ∀t} = 1 − α. (14.91)
Hence (14.86) holds.
Exercise 14.5: Let r and K be real numbers. Show that the supremum
    sup_{0≤t≤s} exp(−rt)(Xt − K)
has an inverse uncertainty distribution
    Ψs−1(α) = sup_{0≤t≤s} exp(−rt)(Xtα − K).
Theorem 14.14 (Yao [171]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation
    dXt = f(t, Xt)dt + g(t, Xt)dCt,
respectively. Then for any time s > 0 and strictly decreasing function J(x), the supremum
    sup_{0≤t≤s} J(Xt) (14.94)
has an inverse uncertainty distribution
    Ψs−1(α) = sup_{0≤t≤s} J(Xt^{1−α}); (14.95)
and the infimum inf_{0≤t≤s} J(Xt) has an inverse uncertainty distribution
    Ψs−1(α) = inf_{0≤t≤s} J(Xt^{1−α}). (14.97)
Proof: Since J(x) is strictly decreasing, it follows from the Yao-Chen formula that
    M{sup_{0≤t≤s} J(Xt) ≤ sup_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt ≥ Xt^{1−α}, ∀t} = α.
Similarly, we have
    M{sup_{0≤t≤s} J(Xt) > sup_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α. (14.99)
Hence (14.95) holds. In the same way,
    M{inf_{0≤t≤s} J(Xt) ≤ inf_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt ≥ Xt^{1−α}, ∀t} = α;
similarly, we have
    M{inf_{0≤t≤s} J(Xt) > inf_{0≤t≤s} J(Xt^{1−α})} ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α. (14.102)
Hence (14.97) holds.
Exercise 14.6: Let r and K be real numbers. Show that the supremum
    sup_{0≤t≤s} exp(−rt)(K − Xt)
has an inverse uncertainty distribution
    Ψs−1(α) = sup_{0≤t≤s} exp(−rt)(K − Xt^{1−α}).
Theorem 14.15 (Yao [171]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation with an initial value X0, respectively. Then for any given level z and strictly increasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution
    Ψ(s) = 1 − inf{α | sup_{0≤t≤s} J(Xtα) ≥ z}, if z > J(X0);
    Ψ(s) = sup{α | inf_{0≤t≤s} J(Xtα) ≤ z},     if z < J(X0). (14.105)
Proof: Suppose z > J(X0) and write α0 for the infimum in (14.105). Then we have
    sup_{0≤t≤s} J(Xt^{α0}) = z,
    {τz ≤ s} = {sup_{0≤t≤s} J(Xt) ≥ z} ⊃ {Xt ≥ Xt^{α0}, ∀t},
    {τz > s} = {sup_{0≤t≤s} J(Xt) < z} ⊃ {Xt < Xt^{α0}, ∀t}.
Hence M{τz ≤ s} ≥ 1 − α0 and M{τz > s} ≥ α0; by the duality axiom, Ψ(s) = M{τz ≤ s} = 1 − α0.
Suppose z < J(X0) and write α0 for the supremum in (14.105). Then we have
    inf_{0≤t≤s} J(Xt^{α0}) = z,
    {τz ≤ s} = {inf_{0≤t≤s} J(Xt) ≤ z} ⊃ {Xt ≤ Xt^{α0}, ∀t},
    {τz > s} = {inf_{0≤t≤s} J(Xt) > z} ⊃ {Xt > Xt^{α0}, ∀t}.
Hence M{τz ≤ s} ≥ α0 and M{τz > s} ≥ 1 − α0; by the duality axiom, Ψ(s) = M{τz ≤ s} = α0.
Theorem 14.16 (Yao [171]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation with an initial value X0, respectively. Then for any given level z and strictly decreasing function J(x), the first hitting time τz that J(Xt) reaches z has an uncertainty distribution
    Ψ(s) = sup{α | sup_{0≤t≤s} J(Xtα) ≥ z},    if z > J(X0);
    Ψ(s) = 1 − inf{α | inf_{0≤t≤s} J(Xtα) ≤ z}, if z < J(X0). (14.107)
Proof: Suppose z > J(X0) and write α0 for the supremum in (14.107). Then we have
    sup_{0≤t≤s} J(Xt^{α0}) = z,
    {τz ≤ s} = {sup_{0≤t≤s} J(Xt) ≥ z} ⊃ {Xt ≤ Xt^{α0}, ∀t},
    {τz > s} = {sup_{0≤t≤s} J(Xt) < z} ⊃ {Xt > Xt^{α0}, ∀t}.
Hence M{τz ≤ s} ≥ α0 and M{τz > s} ≥ 1 − α0; by the duality axiom, Ψ(s) = M{τz ≤ s} = α0.
Suppose z < J(X0) and write α0 for the infimum in (14.107). Then we have
    inf_{0≤t≤s} J(Xt^{α0}) = z,
    {τz ≤ s} = {inf_{0≤t≤s} J(Xt) ≤ z} ⊃ {Xt ≥ Xt^{α0}, ∀t},
    {τz > s} = {inf_{0≤t≤s} J(Xt) > z} ⊃ {Xt < Xt^{α0}, ∀t}.
Hence M{τz ≤ s} ≥ 1 − α0 and M{τz > s} ≥ α0; by the duality axiom, Ψ(s) = M{τz ≤ s} = 1 − α0.
Similarly, we have
    M{∫_0^s J(Xt)dt > ∫_0^s J(Xtα)dt} ≥ M{Xt > Xtα, ∀t} = 1 − α. (14.112)
Exercise 14.7: Let r and K be real numbers. Show that the time integral
    ∫_0^s exp(−rt)(Xt − K)dt
has an inverse uncertainty distribution
    Ψs−1(α) = ∫_0^s exp(−rt)(Xtα − K)dt.
Theorem 14.18 (Yao [171]) Let Xt and Xtα be the solution and α-path of the uncertain differential equation, respectively. Then for any time s > 0 and strictly decreasing function J(x), the time integral
    ∫_0^s J(Xt)dt (14.115)
has an inverse uncertainty distribution
    Ψs−1(α) = ∫_0^s J(Xt^{1−α})dt.
Similarly to the proof of Theorem 14.17, we have
    M{∫_0^s J(Xt)dt > ∫_0^s J(Xt^{1−α})dt} ≥ M{Xt < Xt^{1−α}, ∀t} = 1 − α. (14.118)
Exercise 14.8: Let r and K be real numbers. Show that the time integral
    ∫_0^s exp(−rt)(K − Xt)dt
has an inverse uncertainty distribution
    Ψs−1(α) = ∫_0^s exp(−rt)(K − Xt^{1−α})dt.
Step 2. Solve dXtα = f(t, Xtα)dt + |g(t, Xtα)|Φ−1(α)dt by any numerical method for ordinary differential equations and obtain the α-path Xtα, for example, by using the Euler recursion formula
    Xi+1α = Xiα + f(ti, Xiα)h + |g(ti, Xiα)|Φ−1(α)h (14.121)
or the Runge-Kutta recursion formula
    Xi+1α = Xiα + (h/6)(k1 + 2k2 + 2k3 + k4) (14.122)
where
    k1 = f(ti, Xiα) + |g(ti, Xiα)|Φ−1(α), (14.123)
    k2 = f(ti + h/2, Xiα + hk1/2) + |g(ti + h/2, Xiα + hk1/2)|Φ−1(α), (14.124)
    k3 = f(ti + h/2, Xiα + hk2/2) + |g(ti + h/2, Xiα + hk2/2)|Φ−1(α), (14.125)
    k4 = f(ti + h, Xiα + hk3) + |g(ti + h, Xiα + hk3)|Φ−1(α)
and h is the step length.
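The Euler recursion (14.121) is easy to implement. A minimal Python sketch for dXt = aXt dt + bXt dCt, checking the numerical α-path against the closed form Xtα = exp(at + |b|Φ−1(α)t) from Example 14.12 (X0 = 1; the step count and parameter values are arbitrary tuning choices):

```python
import math

def inv_normal(alpha):
    """Inverse standard normal uncertainty distribution."""
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def euler_alpha_path(f, g, x0, alpha, T, n):
    """Recursion (14.121): X_{i+1} = X_i + f(t_i, X_i) h + |g(t_i, X_i)| inv_normal(alpha) h."""
    h = T / n
    x, t = x0, 0.0
    phi = inv_normal(alpha)
    for _ in range(n):
        x = x + f(t, x) * h + abs(g(t, x)) * phi * h
        t += h
    return x

a, b, T, alpha = 0.5, 0.3, 1.0, 0.8
approx = euler_alpha_path(lambda t, x: a * x, lambda t, x: b * x, 1.0, alpha, T, n=20000)
exact = math.exp(a * T + abs(b) * inv_normal(alpha) * T)
print(approx, exact)  # the two values agree to several decimal places
```

The α-path is an ordinary (crisp) function of t, so any standard ODE solver applies; the Runge-Kutta variant (14.122) would only change the update inside the loop.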
Uncertain Finance
This chapter will introduce an uncertain stock model, an uncertain interest rate model, and an uncertain currency model by using the tool of uncertain differential equations.
In 2009, Liu [80] first supposed that the stock price follows an uncertain differential equation and presented an uncertain stock model in which the bond price Xt and the stock price Yt are determined by
    dXt = rXt dt,
    dYt = eYt dt + σYt dCt (15.1)
where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion, and Ct is a Liu process. This section will price European call and put options for the financial market determined by the uncertain stock model (15.1).
360 Chapter 15 - Uncertain Finance
Let fc represent the price of this contract. Then the investor pays fc for buying the contract at time 0, and receives a payoff (Ys − K)+ at time s since the option is rationally exercised if and only if Ys > K. Considering the time value of money resulting from the bond, the present value of the payoff is exp(−rs)(Ys − K)+. Thus the net return of the investor at time 0 is
    −fc + exp(−rs)(Ys − K)+.
On the other hand, the bank receives fc for selling the contract at time 0, and pays (Ys − K)+ at the expiration time s. Thus the net return of the bank at time 0 is
    fc − exp(−rs)(Ys − K)+.
The fair price of this contract should make the investor and the bank have an identical expected return (we will call it the fair price principle hereafter), i.e.,
    −fc + exp(−rs)E[(Ys − K)+] = fc − exp(−rs)E[(Ys − K)+].
Thus fc = exp(−rs)E[(Ys − K)+]. That is, the European call option price is just the expected present value of the payoff.
Definition 15.2 (Liu [80]) Assume a European call option has a strike price K and an expiration time s. Then the European call option price is
    fc = exp(−rs)E[(Ys − K)+].
(Figure: a sample path of the stock price Yt with initial value Y0, strike price K, and terminal price Ys at the expiration time s.)
Theorem 15.1 (Liu [80]) Assume a European call option for the uncertain stock model (15.1) has a strike price K and an expiration time s. Then the European call option price is
    fc = exp(−rs) ∫_0^1 (Y0 exp(es + (σs√3/π) ln(α/(1−α))) − K)+ dα. (15.8)
Proof: The stock price Ys has an inverse uncertainty distribution
    Ψs−1(α) = Y0 exp(es + (σs√3/π) ln(α/(1−α))).
It follows from Definition 15.2 that the European call option price formula is just (15.8).
Remark 15.1: It is clear that the European call option price is a decreasing function of the interest rate r. That is, the European call option will devaluate if the interest rate is raised, and will appreciate in value if the interest rate is reduced. In addition, the European call option price is also a decreasing function of the strike price K.
Let fp represent the price of this contract. Then the investor pays fp for buying the contract at time 0, and receives a payoff (K − Ys)+ at time s since the option is rationally exercised if and only if Ys < K. Considering the time value of money resulting from the bond, the present value of the payoff is exp(−rs)(K − Ys)+. Thus the net return of the investor at time 0 is
    −fp + exp(−rs)(K − Ys)+. (15.9)
On the other hand, the bank receives fp for selling the contract at time 0, and pays (K − Ys)+ at the expiration time s. Thus the net return of the bank at time 0 is
    fp − exp(−rs)(K − Ys)+. (15.10)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −fp + exp(−rs)E[(K − Ys)+] = fp − exp(−rs)E[(K − Ys)+].
Definition 15.4 (Liu [80]) Assume a European put option has a strike price K and an expiration time s. Then the European put option price is
    fp = exp(−rs)E[(K − Ys)+]. (15.12)
Theorem 15.2 (Liu [80]) Assume a European put option for the uncertain stock model (15.1) has a strike price K and an expiration time s. Then the European put option price is
    fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1−α))))+ dα. (15.13)
Proof: It follows from Definition 15.4 that the European put option price is
    fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln((1−α)/α)))+ dα
       = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1−α))))+ dα,
where the second equality follows from the substitution α → 1 − α.
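Both (15.8) and (15.13) are one-dimensional integrals over α and can be evaluated with a simple midpoint rule. A Python sketch (the parameter values are arbitrary illustrations, not from the text):

```python
import math

def inv_normal(alpha):
    """Inverse standard normal uncertainty distribution (sqrt(3)/pi) ln(alpha/(1-alpha))."""
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def y_alpha(y0, e, sigma, s, alpha):
    """alpha-path of the stock price: Y0 exp(es + |sigma| s inv_normal(alpha))."""
    return y0 * math.exp(e * s + abs(sigma) * s * inv_normal(alpha))

def euro_call(y0, e, sigma, r, K, s, n=4000):
    """European call price (15.8), midpoint rule in alpha."""
    total = sum(max(y_alpha(y0, e, sigma, s, (i + 0.5) / n) - K, 0.0) for i in range(n))
    return math.exp(-r * s) * total / n

def euro_put(y0, e, sigma, r, K, s, n=4000):
    """European put price (15.13)."""
    total = sum(max(K - y_alpha(y0, e, sigma, s, (i + 0.5) / n), 0.0) for i in range(n))
    return math.exp(-r * s) * total / n

fc = euro_call(y0=30.0, e=0.05, sigma=0.25, r=0.04, K=32.0, s=1.0)
fp = euro_put(y0=30.0, e=0.05, sigma=0.25, r=0.04, K=32.0, s=1.0)
print(fc, fp)  # both prices are positive; the call price falls as K or r rises
```

The monotonicity noted in Remark 15.1 (decreasing in r and in K) is immediate from the formula and can be checked by re-running with larger r or K.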
On the other hand, the bank receives fc for selling the contract at time 0, and pays
    sup_{0≤t≤s} exp(−rt)(Yt − K)+. (15.16)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −fc + E[sup_{0≤t≤s} exp(−rt)(Yt − K)+] = fc − E[sup_{0≤t≤s} exp(−rt)(Yt − K)+].
Thus the American call option price is just the expected present value of the payoff.
Definition 15.6 (Chen [6]) Assume an American call option has a strike price K and an expiration time s. Then the American call option price is
    fc = E[sup_{0≤t≤s} exp(−rt)(Yt − K)+]. (15.18)
Theorem 15.3 (Chen [6]) Assume an American call option for the uncertain stock model (15.1) has a strike price K and an expiration time s. Then the American call option price is
    fc = ∫_0^1 sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1−α))) − K)+ dα.
Proof: It follows from Theorem 14.13 that sup_{0≤t≤s} exp(−rt)(Yt − K)+ has an inverse uncertainty distribution
    Ψs−1(α) = sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1−α))) − K)+.
Hence the American call option price formula follows from Definition 15.6 immediately.
On the other hand, the bank receives fp for selling the contract at time 0, and pays
    sup_{0≤t≤s} exp(−rt)(K − Yt)+. (15.21)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −fp + E[sup_{0≤t≤s} exp(−rt)(K − Yt)+] = fp − E[sup_{0≤t≤s} exp(−rt)(K − Yt)+].
Thus the American put option price is just the expected present value of the payoff.
Section 15.4 - Asian Options 365
Definition 15.8 (Chen [6]) Assume an American put option has a strike price K and an expiration time s. Then the American put option price is
    fp = E[sup_{0≤t≤s} exp(−rt)(K − Yt)+]. (15.23)
Theorem 15.4 (Chen [6]) Assume an American put option for the uncertain stock model (15.1) has a strike price K and an expiration time s. Then the American put option price is
    fp = ∫_0^1 sup_{0≤t≤s} exp(−rt)(K − Y0 exp(et + (σt√3/π) ln(α/(1−α))))+ dα.
Proof: It follows from Theorem 14.14 that sup_{0≤t≤s} exp(−rt)(K − Yt)+ has an inverse uncertainty distribution
    Ψs−1(α) = sup_{0≤t≤s} exp(−rt)(K − Y0 exp(et + (σt√3/π) ln((1−α)/α)))+.
Hence the American put option price formula follows from Definition 15.8 immediately, after the substitution α → 1 − α.
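Numerically, the American formulas only replace the integrand with a supremum over a time grid. A Python sketch (grid sizes and parameters are arbitrary choices); since the supremum includes t = s, the American price must dominate the European one:

```python
import math

def inv_normal(alpha):
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def y_alpha(y0, e, sigma, t, alpha):
    """alpha-path of the stock price at time t."""
    return y0 * math.exp(e * t + abs(sigma) * t * inv_normal(alpha))

def american_call(y0, e, sigma, r, K, s, n_alpha=1000, n_t=200):
    """American call price: integrate sup_t exp(-rt)(Yt^alpha - K)+ over alpha."""
    total = 0.0
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        best = max(
            math.exp(-r * (j * s / n_t))
            * max(y_alpha(y0, e, sigma, j * s / n_t, alpha) - K, 0.0)
            for j in range(n_t + 1)
        )
        total += best
    return total / n_alpha

def european_call(y0, e, sigma, r, K, s, n_alpha=1000):
    total = sum(max(y_alpha(y0, e, sigma, s, (i + 0.5) / n_alpha) - K, 0.0)
                for i in range(n_alpha))
    return math.exp(-r * s) * total / n_alpha

ac = american_call(30.0, 0.05, 0.25, 0.04, 32.0, 1.0)
ec = european_call(30.0, 0.05, 0.25, 0.04, 32.0, 1.0)
print(ac, ec)  # ac >= ec, since the supremum includes the terminal time t = s
```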
Let fc represent the price of this contract. Then the investor pays fc for buying the contract at time 0, and has a payoff
    ((1/s) ∫_0^s Yt dt − K)+ (15.25)
at time s. Considering the time value of money resulting from the bond, the present value of the payoff is
    exp(−rs)((1/s) ∫_0^s Yt dt − K)+. (15.26)
Thus the net return of the investor at time 0 is
    −fc + exp(−rs)((1/s) ∫_0^s Yt dt − K)+. (15.27)
On the other hand, the bank receives fc for selling the contract at time 0, and pays
    ((1/s) ∫_0^s Yt dt − K)+ (15.28)
at the expiration time s. Thus the net return of the bank at time 0 is
    fc − exp(−rs)((1/s) ∫_0^s Yt dt − K)+. (15.29)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −fc + exp(−rs)E[((1/s) ∫_0^s Yt dt − K)+] = fc − exp(−rs)E[((1/s) ∫_0^s Yt dt − K)+]. (15.30)
Thus the Asian call option price is just the expected present value of the payoff.
Definition 15.10 (Sun-Chen [144]) Assume an Asian call option has a strike price K and an expiration time s. Then the Asian call option price is
    fc = exp(−rs)E[((1/s) ∫_0^s Yt dt − K)+]. (15.31)
Theorem 15.5 (Sun-Chen [144]) Assume an Asian call option for the uncertain stock model (15.1) has a strike price K and an expiration time s. Then the Asian call option price is
    fc = exp(−rs) ∫_0^1 ((Y0/s) ∫_0^s exp(et + (σt√3/π) ln(α/(1−α))) dt − K)+ dα.
Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral
    ∫_0^s Yt dt
is
    Ψs−1(α) = ∫_0^s Y0 exp(et + (σt√3/π) ln(α/(1−α))) dt.
Hence the Asian call option price formula follows from Definition 15.10 immediately.
Let fp represent the price of this contract. Then the investor pays fp for buying the contract at time 0, and has a payoff
    (K − (1/s) ∫_0^s Yt dt)+ (15.33)
at time s. Considering the time value of money resulting from the bond, the present value of the payoff is
    exp(−rs)(K − (1/s) ∫_0^s Yt dt)+. (15.34)
Thus the net return of the investor at time 0 is
    −fp + exp(−rs)(K − (1/s) ∫_0^s Yt dt)+. (15.35)
On the other hand, the bank receives fp for selling the contract at time 0, and pays
    (K − (1/s) ∫_0^s Yt dt)+ (15.36)
at the expiration time s. Thus the net return of the bank at time 0 is
    fp − exp(−rs)(K − (1/s) ∫_0^s Yt dt)+. (15.37)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −fp + exp(−rs)E[(K − (1/s) ∫_0^s Yt dt)+] = fp − exp(−rs)E[(K − (1/s) ∫_0^s Yt dt)+]. (15.38)
Thus the Asian put option price should be the expected present value of the payoff.
Definition 15.12 (Sun-Chen [144]) Assume an Asian put option has a strike price K and an expiration time s. Then the Asian put option price is
    fp = exp(−rs)E[(K − (1/s) ∫_0^s Yt dt)+]. (15.39)
Theorem 15.6 (Sun-Chen [144]) Assume an Asian put option for the uncertain stock model (15.1) has a strike price K and an expiration time s. Then the Asian put option price is
    fp = exp(−rs) ∫_0^1 (K − (Y0/s) ∫_0^s exp(et + (σt√3/π) ln(α/(1−α))) dt)+ dα.
Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral
    ∫_0^s Yt dt
is
    Ψs−1(α) = ∫_0^s Y0 exp(et + (σt√3/π) ln(α/(1−α))) dt.
Hence the Asian put option price formula follows from Definition 15.12 immediately.
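For the Asian formulas, the α-path is first integrated over time and only then pushed through the payoff. A Python sketch with trapezoidal time integration (all parameter values are arbitrary illustrations):

```python
import math

def inv_normal(alpha):
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def avg_alpha(y0, e, sigma, s, alpha, n_t=400):
    """(1/s) * integral_0^s Yt^alpha dt by the trapezoidal rule."""
    h = s / n_t
    vals = [y0 * math.exp(e * t + abs(sigma) * t * inv_normal(alpha))
            for t in (j * h for j in range(n_t + 1))]
    return (sum(vals) - 0.5 * (vals[0] + vals[-1])) * h / s

def asian_call(y0, e, sigma, r, K, s, n_alpha=1000):
    """Asian call price: exp(-rs) times the alpha-integral of (average - K)+."""
    total = sum(max(avg_alpha(y0, e, sigma, s, (i + 0.5) / n_alpha) - K, 0.0)
                for i in range(n_alpha))
    return math.exp(-r * s) * total / n_alpha

fc_low = asian_call(30.0, 0.05, 0.25, 0.04, K=28.0, s=1.0)
fc_high = asian_call(30.0, 0.05, 0.25, 0.04, K=34.0, s=1.0)
print(fc_low, fc_high)  # the price decreases as the strike rises
```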
Theorem 15.7 (Liu [95]) Assume a European call option for the uncertain stock model (15.40) has a strike price K and an expiration time s. Then the European call option price is
    fc = exp(−rs) ∫_0^1 (Ysα − K)+ dα (15.41)
where Ysα is the α-path of the stock price.
Theorem 15.9 (Liu [95]) Assume an Asian call option for the uncertain stock model (15.40) has a strike price K and an expiration time s. Then the Asian call option price is
    fc = exp(−rs) ∫_0^1 ((1/s) ∫_0^s Ytα dt − K)+ dα. (15.49)
Proof: It follows from the fair price principle that the Asian call option price is
    fc = exp(−rs)E[((1/s) ∫_0^s Yt dt − K)+]. (15.51)
By Theorem 14.17, the time average (1/s) ∫_0^s Yt dt has inverse uncertainty distribution (1/s) ∫_0^s Ytα dt, and (15.49) follows.
where r is the riskless interest rate, ei are the log-drifts, σij are the log-diffusions, and Cjt are independent Liu processes, i = 1, 2, ..., m, j = 1, 2, ..., n.
Section 15.6 - Multifactor Stock Model 371
Portfolio Selection
For the multifactor stock model (15.53), we have the choice of m + 1 different investments. At each time t we may choose a portfolio (βt, β1t, ..., βmt) (i.e., the investment fractions meeting βt + β1t + ... + βmt = 1). Then the wealth Zt at time t should follow the uncertain differential equation
    dZt = rβtZt dt + Σ_{i=1}^m ei βit Zt dt + Σ_{i=1}^m Σ_{j=1}^n σij βit Zt dCjt. (15.54)
That is,
    Zt = Z0 exp(rt) exp(∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs).
No-Arbitrage
The stock model (15.53) is said to be no-arbitrage if there is no portfolio (βt, β1t, ..., βmt) such that for some time s > 0, we have
    M{exp(−rs)Zs ≥ Z0} = 1 (15.55)
and
    M{exp(−rs)Zs > Z0} > 0 (15.56)
where Zt is determined by (15.54) and represents the wealth at time t.
Theorem 15.10 The multifactor stock model (15.53) is no-arbitrage if and only if the system of linear equations (15.57)
    Σ_{j=1}^n σij xj = ei − r, i = 1, 2, ..., m
has a solution, i.e., (e1 − r, e2 − r, ..., em − r) is a linear combination of the column vectors (σ11, σ21, ..., σm1), (σ12, σ22, ..., σm2), ..., (σ1n, σ2n, ..., σmn).
Proof: When the portfolio (βt, β1t, ..., βmt) is accepted, the wealth at each time t is
    Zt = Z0 exp(rt) exp(∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs).
Thus
    ln(exp(−rt)Zt) − ln Z0 = ∫_0^t Σ_{i=1}^m (ei − r)βis ds + Σ_{j=1}^n ∫_0^t Σ_{i=1}^m σij βis dCjs,
which is a normal uncertain variable with expected value ∫_0^t Σ_{i=1}^m (ei − r)βis ds and variance
    (Σ_{j=1}^n ∫_0^t |Σ_{i=1}^m σij βis| ds)².
Assume the system (15.57) has a solution. The argument breaks down into two cases. Case I: for any given time t and portfolio (βt, β1t, ..., βmt), suppose
    Σ_{j=1}^n ∫_0^t |Σ_{i=1}^m σij βis| ds = 0.
Then
    Σ_{i=1}^m σij βis = 0, j = 1, 2, ..., n, s ∈ (0, t]
and, since (e1 − r, ..., em − r) is a linear combination of the column vectors of the log-diffusion matrix,
    ∫_0^t Σ_{i=1}^m (ei − r)βis ds = 0.
It follows that
    ln(exp(−rt)Zt) − ln Z0 = 0
and
    M{exp(−rt)Zt > Z0} = 0.
Case II: for any given time t and portfolio (βt, β1t, ..., βmt), suppose
    Σ_{j=1}^n ∫_0^t |Σ_{i=1}^m σij βis| ds ≠ 0.
Then ln(exp(−rt)Zt) − ln Z0 is a normal uncertain variable with positive variance, so
    M{exp(−rt)Zt ≥ Z0} < 1.
In either case, (15.55) and (15.56) cannot hold simultaneously. That is, the stock model (15.53) is no-arbitrage.
Conversely, assume the system (15.57) has no solution. Then there exist real numbers α1, α2, ..., αm such that
    Σ_{i=1}^m σij αi = 0, j = 1, 2, ..., n
and
    Σ_{i=1}^m (ei − r)αi > 0.
Now we take a portfolio
    (βt, β1t, ..., βmt) ≡ (1 − (α1 + α2 + ... + αm), α1, α2, ..., αm).
Then
    ln(exp(−rt)Zt) − ln Z0 = ∫_0^t Σ_{i=1}^m (ei − r)αi ds > 0.
Thus we have
    M{exp(−rt)Zt > Z0} = 1.
Hence the multifactor stock model (15.53) admits arbitrage. The theorem is thus proved.
Theorem 15.11 The multifactor stock model (15.53) is no-arbitrage if its log-diffusion matrix
    ( σ11 σ12 ... σ1n )
    ( σ21 σ22 ... σ2n )
    (  .    .  ...  . )
    ( σm1 σm2 ... σmn )  (15.58)
has rank m, i.e., the row vectors are linearly independent.
Proof: If the log-diffusion matrix (15.58) has rank m, then the system of equations (15.57) has a solution. It follows from Theorem 15.10 that the multifactor stock model (15.53) is no-arbitrage.
Theorem 15.12 The multifactor stock model (15.53) is no-arbitrage if its
log-drifts are all equal to the interest rate r, i.e.,
ei = r, i = 1, 2, · · · , m. (15.59)
Proof: Since the log-drifts ei = r for any i = 1, 2, · · · , m, we immediately
have
(e1 − r, e2 − r, · · · , em − r) ≡ (0, 0, · · · , 0)
that is a linear combination of (σ11 , σ21 , · · · , σm1 ), (σ12 , σ22 , · · · , σm2 ), · · · ,
(σ1n , σ2n , · · · , σmn ). It follows from Theorem 15.10 that the multifactor stock
model (15.53) is no-arbitrage.
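The solvability test in Theorem 15.10, i.e., whether (e1 − r, ..., em − r) lies in the column space of the log-diffusion matrix, is a standard linear-algebra check: augmenting the matrix with the target vector must not raise its rank. A self-contained Python sketch (the numerical matrices are hypothetical examples):

```python
def rank(mat, tol=1e-9):
    """Row-echelon rank by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]))
        if abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(rows):
            if i != r and abs(m[i][c]) > tol:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == rows:
            break
    return r

def no_arbitrage(sigma, e, r0):
    """Theorem 15.10: no-arbitrage iff (e1-r, ..., em-r) lies in the column
    space of sigma, i.e., augmenting sigma with it does not raise the rank."""
    aug = [row + [ei - r0] for row, ei in zip(sigma, e)]
    return rank(aug) == rank(sigma)

# Rank-m log-diffusion matrix: always no-arbitrage (Theorem 15.11).
ok1 = no_arbitrage([[0.3, 0.0], [0.0, 0.2]], [0.06, 0.09], 0.04)
# Zero diffusion but log-drifts above r: the system is unsolvable, arbitrage exists.
ok2 = no_arbitrage([[0.0, 0.0], [0.0, 0.0]], [0.06, 0.09], 0.04)
print(ok1, ok2)  # True False
```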
Zero-Coupon Bond
A zero-coupon bond is a bond bought at a price lower than its face value, which is the amount it promises to pay at the maturity date. For simplicity, we assume the face value is always 1 dollar.
Let f represent the price of this zero-coupon bond. Then the investor pays f for buying it at time 0, and receives 1 dollar at the maturity date s. Since the interest rate is Xt, the present value of 1 dollar is
    exp(−∫_0^s Xt dt). (15.63)
Thus the net return of the investor at time 0 is
    −f + exp(−∫_0^s Xt dt). (15.64)
On the other hand, the bank receives f for selling the zero-coupon bond at time 0, and pays 1 dollar at the maturity date s. Thus the net return of the bank at time 0 is
    f − exp(−∫_0^s Xt dt). (15.65)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −f + E[exp(−∫_0^s Xt dt)] = f − E[exp(−∫_0^s Xt dt)]. (15.66)
Thus the price of the zero-coupon bond is just the expected present value of its face value.
Section 15.7 - Uncertain Interest Rate Model 375
Theorem 15.13 (Jiao-Yao [64]) Assume the uncertain interest rate Xt follows the uncertain differential equation (15.62). Then the price of a zero-coupon bond with maturity date s is
    f = ∫_0^1 exp(−∫_0^s Xtα dt) dα (15.68)
where Xtα is the α-path of (15.62).
Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral
    ∫_0^s Xt dt
is
    Ψs−1(α) = ∫_0^s Xtα dt.
Hence the price formula of the zero-coupon bond follows from Theorem 2.26 immediately.
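The bond price formula (15.68) only needs the α-paths of the interest rate. A Python sketch assuming, for illustration, a mean-reverting uncertain interest rate dXt = (m − aXt)dt + σdCt (a common choice; this excerpt does not show the concrete model (15.62)), whose α-path is obtained from the Euler recursion (14.121); all parameter values are arbitrary:

```python
import math

def inv_normal(alpha):
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def bond_price(x0, m, a, sigma, s, n_alpha=400, n_t=400):
    """f = integral_0^1 exp(-integral_0^s Xt^alpha dt) d alpha  (15.68),
    with the alpha-path advanced by the Euler recursion (14.121)."""
    h = s / n_t
    total = 0.0
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        drift_adj = abs(sigma) * inv_normal(alpha)  # |g| * inv_normal(alpha), g constant
        x, integral = x0, 0.0
        for _ in range(n_t):
            integral += x * h
            x += (m - a * x + drift_adj) * h
        total += math.exp(-integral)
    return total / n_alpha

f = bond_price(x0=0.05, m=0.04, a=0.8, sigma=0.02, s=5.0)
print(f)  # a price below the face value 1 when rates stay mostly positive
```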
at the maturity date s. Considering the time value of money, the present value of the payoff is
    exp(−∫_0^s Xt dt)(exp(∫_0^s Xt dt) − exp(∫_0^s Xt ∧ K dt))
    = 1 − exp(−∫_0^s Xt dt + ∫_0^s Xt ∧ K dt)
    = 1 − exp(−∫_0^s (Xt − K)+ dt).
Similarly, we may verify that the net return of the bank at time 0 is
    f − 1 + exp(−∫_0^s (Xt − K)+ dt). (15.71)
The fair price of this contract should make the borrower and the bank have an identical expected return, i.e.,
    −f + 1 − E[exp(−∫_0^s (Xt − K)+ dt)] = f − 1 + E[exp(−∫_0^s (Xt − K)+ dt)].
Thus we have the following definition of the price of an interest rate ceiling.
Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral
    ∫_0^s (Xt − K)+ dt
is
    Ψs−1(α) = ∫_0^s (Xtα − K)+ dt.
Hence the price formula of the interest rate ceiling follows from Theorem 2.26 immediately.
at the maturity date s. Considering the time value of money, the present value of the payoff is
    exp(−∫_0^s Xt dt)(exp(∫_0^s Xt ∨ K dt) − exp(∫_0^s Xt dt))
    = exp(−∫_0^s Xt dt + ∫_0^s Xt ∨ K dt) − 1
    = exp(∫_0^s (K − Xt)+ dt) − 1.
Similarly, we may verify that the net return of the bank at time 0 is
    f − exp(∫_0^s (K − Xt)+ dt) + 1. (15.76)
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −f + E[exp(∫_0^s (K − Xt)+ dt)] − 1 = f − E[exp(∫_0^s (K − Xt)+ dt)] + 1.
Thus we have the following definition of the price of an interest rate floor.
Proof: It follows from Theorem 14.18 that the inverse uncertainty distribution of the time integral
    ∫_0^s (K − Xt)+ dt
is
    Ψs−1(α) = ∫_0^s (K − Xt^{1−α})+ dt.
Hence the price formula of the interest rate floor follows from Theorem 2.26 immediately.
On the other hand, the bank receives f for selling the contract at time 0, and pays (1 − K/Zs)+ in foreign currency at the expiration time s. Thus the net return of the bank at time 0 is
    f − exp(−vs)Z0(1 − K/Zs)+.
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −f + exp(−us)E[(Zs − K)+] = f − exp(−vs)Z0E[(1 − K/Zs)+].
Thus the European currency option price is given by the definition below.
Proof: Since (Zs − K)+ and Z0(1 − K/Zs)+ are increasing functions with respect to Zs, they have inverse uncertainty distributions
    Ψs−1(α) = (Z0 exp(es + (σs√3/π) ln(α/(1−α))) − K)+,
    Υs−1(α) = (Z0 − K/exp(es + (σs√3/π) ln(α/(1−α))))+,
respectively. Thus the European currency option price formula follows from
Definition 15.17 immediately.
Remark 15.5: The European currency option price of the uncertain cur-
rency model (15.79) is a decreasing function of K, u and v.
Example 15.5: Assume the domestic interest rate u = 0.08, the foreign in-
terest rate v = 0.07, the log-drift e = 0.06, the log-diffusion σ = 0.32, the ini-
tial exchange rate Z0 = 5, the strike price K = 6 and the expiration time s =
2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm)
yields the European currency option price f = 0.977.
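The computation behind Example 15.5 is a single α-integral of the averaged investor/bank payoffs and can be sketched in Python (the discretization size is a tuning choice; the exact value depends on how the tail near α = 1 is resolved):

```python
import math

def inv_normal(alpha):
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def currency_option(z0, e, sigma, u, v, K, s, n=20000):
    """European currency option price: the average of the investor's and
    the bank's discounted expected payoffs (fair price principle)."""
    total = 0.0
    for i in range(n):
        z = z0 * math.exp(e * s + abs(sigma) * s * inv_normal((i + 0.5) / n))
        total += (math.exp(-u * s) * max(z - K, 0.0)
                  + math.exp(-v * s) * z0 * max(1.0 - K / z, 0.0))
    return 0.5 * total / n

f = currency_option(z0=5.0, e=0.06, sigma=0.32, u=0.08, v=0.07, K=6.0, s=2.0)
print(f)  # a positive price that decreases in the strike K (Remark 15.5)
```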
On the other hand, the bank receives f for selling the contract, and pays
    sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+.
The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,
    −f + E[sup_{0≤t≤s} exp(−ut)(Zt − K)+] = f − E[sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+]. (15.90)
Thus the American currency option price is given by the definition below.
Section 15.8 - Uncertain Currency Model 381
Proof: It follows from Theorem 14.13 that sup_{0≤t≤s} exp(−ut)(Zt − K)+ and sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+ have inverse uncertainty distributions
    Ψs−1(α) = sup_{0≤t≤s} exp(−ut)(Z0 exp(et + (σt√3/π) ln(α/(1−α))) − K)+,
    Υs−1(α) = sup_{0≤t≤s} exp(−vt)(Z0 − K/exp(et + (σt√3/π) ln(α/(1−α))))+,
respectively. Thus the American currency option price formula follows from Definition 15.19 immediately.
where u and v are interest rates, F and G are two functions, and Ct is a Liu process.
Theorem 15.18 (Liu [95]) Assume a European currency option for the uncertain currency model (15.91) has a strike price K and an expiration time s. Then the European currency option price is
    f = (1/2) ∫_0^1 (exp(−us)(Zsα − K)+ + exp(−vs)Z0(1 − K/Zsα)+) dα (15.92)
where Zsα is the α-path of the exchange rate.
Proof: It follows from the fair price principle that the European option price is
    f = (1/2) exp(−us)E[(Zs − K)+] + (1/2) exp(−vs)Z0 E[(1 − K/Zs)+]. (15.93)
By using Theorem 14.12, we get the equation (15.92).
Theorem 15.19 (Liu [95]) Assume an American currency option for the uncertain currency model (15.91) has a strike price K and an expiration time s. Then the American currency option price is
    f = (1/2) ∫_0^1 (sup_{0≤t≤s} exp(−ut)(Ztα − K)+ + sup_{0≤t≤s} exp(−vt)Z0(1 − K/Ztα)+) dα.
Proof: It follows from the fair price principle that the American option price is
    f = (1/2) E[sup_{0≤t≤s} exp(−ut)(Zt − K)+] + (1/2) E[sup_{0≤t≤s} exp(−vt)Z0(1 − K/Zt)+].
Uncertain Statistics
Denote the expert's belief degree that ξ is less than or equal to x by α (say 0.6). Note that the expert's belief degree of ξ being greater than x must be 1 − α due to the self-duality of uncertain measure. In this way, an expert's experimental data (x, α) is acquired.
(Figure: an expert's experimental data (x, α) with M{ξ ≤ x} = α and M{ξ ≥ x} = 1 − α.)
Repeating the above process, the following expert's experimental data are obtained by the questionnaire:
Q1: May I ask you how far it is from Beijing to Tianjin? What do you think is the minimum distance?
A1: 100km. (an expert's experimental data (100, 0) is acquired)
Q2: What do you think is the maximum distance?
A2: 150km. (an expert’s experimental data (150, 1) is acquired)
Q3: What do you think is a likely distance?
A3: 130km.
Q4: To what degree do you think that the real distance is less than 130km?
A4: 60%. (an expert’s experimental data (130, 0.6) is acquired)
Q5: Is there another number this distance may be? If yes, what is it?
A5: 140km.
Q6: To what degree do you think that the real distance is less than 140km?
A6: 90%. (an expert’s experimental data (140, 0.9) is acquired)
Section 16.3 - Determining Uncertainty Distribution 387
Q7: Is there another number this distance may be? If yes, what is it?
A7: 120km.
Q8: To what degree do you think that the real distance is less than 120km?
A8: 30%. (an expert's experimental data (120, 0.3) is acquired)
Q9: Is there another number this distance may be? If yes, what is it?
A9: No idea.
(Figure: an empirical uncertainty distribution Φ(x) obtained by linearly interpolating the expert's experimental data (x1, α1), (x2, α2), (x3, α3), (x4, α4), (x5, α5).)
Example 16.1: Recall that the five expert’s experimental data (100, 0),
(120, 0.3), (130, 0.6), (140, 0.9), (150, 1) of the travel distance between Beijing
and Tianjin have been acquired in Section 16.2. Based on those expert’s
experimental data, an empirical uncertainty distribution of travel distance is
shown in Figure 16.3.
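As an illustration (not part of the book), the empirical uncertainty distribution can be evaluated by linear interpolation between consecutive expert's experimental data:

```python
# Sketch: evaluating the empirical uncertainty distribution of Section 16.3
# by linear interpolation between the expert's experimental data (x_i, alpha_i).

data = [(100, 0.0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1.0)]

def empirical_distribution(x, data):
    """Piecewise-linear empirical uncertainty distribution Phi(x)."""
    if x <= data[0][0]:
        return data[0][1]           # alpha_1 below the smallest datum
    if x >= data[-1][0]:
        return data[-1][1]          # alpha_n above the largest datum
    for (x1, a1), (x2, a2) in zip(data, data[1:]):
        if x1 <= x <= x2:
            return a1 + (a2 - a1) * (x - x1) / (x2 - x1)

print(empirical_distribution(125, data))  # midway between 0.3 and 0.6 -> 0.45
```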
(Figure 16.3: the empirical uncertainty distribution of travel distance through the points (100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1).)
min_θ Σ_{i=1}^n (Φ(x_i | θ) − α_i)². (16.11)
(Figure: the uncertainty distribution Φ(x|θ) fitted to the expert's experimental data by the least squares principle.)
Φ(x|a, b) = 0 if x ≤ a; (x − a)/(b − a) if a ≤ x ≤ b; 1 if x ≥ b. (16.12)
(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95). (16.13)
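For instance, the least squares estimate (16.11) of the linear uncertainty distribution Φ(x|a, b) in (16.12) can be approximated for the data (16.13) by a coarse grid search (an illustrative sketch; the book does not prescribe a particular optimizer, and the grid bounds are assumptions):

```python
# Sketch: least squares estimation (16.11) of the linear uncertainty
# distribution Phi(x|a, b) from the data (16.13), via a coarse grid search.

data = [(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95)]

def phi(x, a, b):
    """Linear uncertainty distribution (16.12)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def sq_error(a, b):
    """Sum of squared deviations in (16.11)."""
    return sum((phi(x, a, b) - alpha) ** 2 for x, alpha in data)

# grid of candidate (a, b) with a < b, resolution 0.1 (assumed search range)
best = min(((a / 10, b / 10) for a in range(-20, 20) for b in range(21, 100)),
           key=lambda ab: sq_error(*ab))
print(best, sq_error(*best))
```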
(0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9). (16.16)
Method of Moments
Assume that a nonnegative uncertain variable has an uncertainty distribution
Φ(x | θ1 , θ2 , · · · , θp ) (16.18)
with unknown parameters θ1 , θ2 , · · · , θp . Given expert's experimental data (x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) with
0 ≤ x1 < x2 < · · · < xn , 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1, (16.20)
Wang-Peng [154] proposed a method of moments to estimate the unknown
parameters of the uncertainty distribution. At first, the kth empirical moment
of the expert's experimental data is defined as that of the corresponding
empirical uncertainty distribution, i.e.,
ξ̄^k = α1 x1^k + (1/(k + 1)) Σ_{i=1}^{n−1} Σ_{j=0}^{k} (α_{i+1} − α_i) x_i^j x_{i+1}^{k−j} + (1 − αn) xn^k. (16.21)
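A direct transcription of the empirical moment formula (16.21) (an illustrative Python sketch; the sanity-check data are assumptions, not from the text):

```python
# Sketch: the kth empirical moment (16.21) of expert's experimental data
# (x_i, alpha_i), i.e. the kth moment of the piecewise-linear empirical
# uncertainty distribution.

def empirical_moment(data, k):
    xs = [x for x, _ in data]
    alphas = [a for _, a in data]
    n = len(data)
    m = alphas[0] * xs[0] ** k + (1 - alphas[-1]) * xs[-1] ** k
    for i in range(n - 1):
        m += (alphas[i + 1] - alphas[i]) / (k + 1) * sum(
            xs[i] ** j * xs[i + 1] ** (k - j) for j in range(k + 1))
    return m

# sanity check: the linear ramp through (0, 0) and (1, 1) should have
# kth moment 1 / (k + 1)
ramp = [(0.0, 0.0), (1.0, 1.0)]
print(empirical_moment(ramp, 1))  # -> 0.5
```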
The moment estimates θ̂1 , θ̂2 , · · · , θ̂p are then obtained by equating the first
p moments of Φ(x|θ1 , θ2 , · · · , θp ) to the corresponding first p empirical mo-
ments. In other words, the moment estimates θ̂1 , θ̂2 , · · · , θ̂p should solve the
system of equations,
∫_0^{+∞} (1 − Φ(x^{1/k} | θ1 , θ2 , · · · , θp )) dx = ξ̄^k, k = 1, 2, · · · , p. (16.22)
From the expert's experimental data, we may believe that the unknown pa-
rameters must be positive numbers. For example, the first three moments of
the zigzag uncertainty distribution Φ(x|a, b, c) are
(a + 2b + c)/4,
(a² + ab + 2b² + bc + c²)/6,
(a³ + a²b + ab² + 2b³ + b²c + bc² + c³)/8.
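The three zigzag moment formulas above can be checked numerically by integrating the inverse zigzag uncertainty distribution, using E[ξ^k] = ∫_0^1 (Φ^{-1}(α))^k dα (an illustrative sketch; the particular values of a, b, c are assumptions):

```python
# Sketch: numerical check of the first three moments of the zigzag
# uncertainty distribution Z(a, b, c) via its inverse distribution.

def zigzag_inverse(alpha, a, b, c):
    """Inverse zigzag uncertainty distribution."""
    if alpha < 0.5:
        return a + 2 * alpha * (b - a)
    return 2 * b - c + 2 * alpha * (c - b)

def moment(k, a, b, c, steps=20000):
    # midpoint rule over alpha in (0, 1)
    h = 1.0 / steps
    return sum(zigzag_inverse((i + 0.5) * h, a, b, c) ** k
               for i in range(steps)) * h

a, b, c = 1.0, 2.0, 4.0
m1 = (a + 2 * b + c) / 4
m2 = (a * a + a * b + 2 * b * b + b * c + c * c) / 6
m3 = (a**3 + a**2 * b + a * b**2 + 2 * b**3 + b**2 * c + b * c**2 + c**3) / 8
print(moment(1, a, b, c), m1)
```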
Delphi Method
The Delphi method was originally developed in the 1950s by the RAND Corpo-
ration, based on the assumption that group experience is more valid than
individual experience. This method asks the domain experts to answer ques-
tionnaires in two or more rounds. After each round, a facilitator provides
an anonymous summary of the answers from the previous round as well as
the reasons that the domain experts provided for their opinions. Then the
domain experts are encouraged to revise their earlier answers in light of the
summary. It is believed that during this process the opinions of the domain
experts will converge to an appropriate answer. Wang-Gao-Guo [152] recast
the Delphi method as a process for determining uncertainty distributions. The main
steps are listed as follows:
Step 1. The m domain experts provide their expert's experimental data
        (xi1 , αi1 ), (xi2 , αi2 ), · · · , (xini , αini ), i = 1, 2, · · · , m.
Step 2. Use the i-th expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · ,
(xini , αini ) to generate the uncertainty distributions Φi of the i-
th domain experts, i = 1, 2, · · · , m, respectively.
Step 3. Compute Φ(x) = w1 Φ1 (x) + w2 Φ2 (x) + · · · + wm Φm (x) where
w1 , w2 , · · · , wm are convex combination coefficients representing
weights of the domain experts.
Step 4. If |αij − Φ(xij )| are less than a given level ε > 0 for all i and j, then
go to Step 5. Otherwise, the i-th domain experts receive the sum-
mary (for example, the function Φ obtained in the previous round
and the reasons of other experts), and then provide a set of revised
expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · , (xini , αini ) for
i = 1, 2, · · · , m. Go to Step 2.
Step 5. The last function Φ is the uncertainty distribution to be determined.
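The iteration of Steps 2–4 can be sketched as follows, representing each expert's distribution Φi by piecewise-linear interpolation of that expert's data; the weights, the tolerance ε, and the sample data are illustrative assumptions, not from the text:

```python
# Sketch of the Delphi aggregation (Steps 2-4 above).

def interp(x, data):
    """Step 2: piecewise-linear expert distribution Phi_i from (x_ij, alpha_ij)."""
    if x <= data[0][0]:
        return data[0][1]
    if x >= data[-1][0]:
        return data[-1][1]
    for (x1, a1), (x2, a2) in zip(data, data[1:]):
        if x1 <= x <= x2:
            return a1 + (a2 - a1) * (x - x1) / (x2 - x1)

def combine(expert_data, weights):
    """Step 3: Phi(x) = w_1 Phi_1(x) + ... + w_m Phi_m(x)."""
    def phi(x):
        return sum(w * interp(x, d) for w, d in zip(weights, expert_data))
    return phi

def converged(expert_data, phi, eps):
    """Step 4 stopping rule: |alpha_ij - Phi(x_ij)| < eps for all i, j."""
    return all(abs(a - phi(x)) < eps
               for data in expert_data for x, a in data)

experts = [[(100, 0.0), (130, 0.6), (150, 1.0)],
           [(100, 0.0), (125, 0.5), (150, 1.0)]]
phi = combine(experts, [0.5, 0.5])
print(converged(experts, phi, 0.1))
```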
(Figure: an empirical membership function µ(x) interpolating the expert's experimental data.)
(1, 0.15), (2, 0.45), (3, 0.90), (6, 0.85), (7, 0.60), (8, 0.20). (16.35)
Q1: May I ask you what distances belong to “about 100km”? What do you
think is the minimum such distance?
A1: 80km. (an expert's experimental data (80, 0) is acquired)
Q2: What do you think is the maximum such distance?
A2: 120km. (an expert's experimental data (120, 0) is acquired)
Q3: What do you think is a distance that completely belongs to “about 100km”?
A3: 95km.
Q4: To what degree do you think that 95km belongs to “about 100km”?
A4: 100%. (an expert's experimental data (95, 1) is acquired)
Q5: Is there another distance that belongs to “about 100km”? If yes, what
is it?
A5: 105km.
Q6: To what degree do you think that 105km belongs to “about 100km”?
A6: 100%. (an expert's experimental data (105, 1) is acquired)
Q7: Is there another distance that belongs to “about 100km”? If yes, what
is it?
A7: 90km.
Q8: To what degree do you think that 90km belongs to “about 100km”?
A8: 50%. (an expert's experimental data (90, 0.5) is acquired)
Q9: Is there another distance that belongs to “about 100km”? If yes, what
is it?
A9: 110km.
Q10: To what degree do you think that 110km belongs to “about 100km”?
A10: 50%. (an expert's experimental data (110, 0.5) is acquired)
Q11: Is there another distance that belongs to “about 100km”? If yes, what
is it?
A11: No idea.
Until now, six expert's experimental data (80, 0), (90, 0.5), (95, 1), (105, 1),
(110, 0.5), (120, 0) have been acquired from the domain expert. Based on those
expert's experimental data, an empirical membership function of “about
100km” is produced and shown in Figure 16.6.
(Figure 16.6: the empirical membership function of “about 100km” through the points (80, 0), (90, 0.5), (95, 1), (105, 1), (110, 0.5), (120, 0).)
y = β0 + β1 x1 + β2 x2 + · · · + βp xp + ε (16.37)
where x̃i1 , x̃i2 , · · · , x̃ip , ỹi are uncertain variables with uncertainty distribu-
tions Φi1 , Φi2 , · · · , Φip , Ψi , i = 1, 2, · · · , n, respectively.
Based on the imprecisely observed data (16.39), Yao-Liu [187] suggested
the least squares estimate of β in the regression model (16.37) as the solution
of the minimization problem (16.44),
where
Υij^{-1}(α, βj) = Φij^{-1}(1 − α) if βj ≥ 0; Φij^{-1}(α) if βj < 0, (16.45)
for i = 1, 2, · · · , n and j = 1, 2, · · · , p.
For each index i, the inverse uncertainty distribution of the uncertain variable
ỹi − β0 − Σ_{j=1}^p βj x̃ij
is just
Fi^{-1}(α) = Ψi^{-1}(α) − β0 − Σ_{j=1}^p βj Υij^{-1}(α, βj).
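For concreteness, the inverse distribution Fi^{-1}(α) can be evaluated when the observed regressors and response are linear uncertain variables L(a, b); the coefficients and bounds below are illustrative, not from the text:

```python
# Sketch: F_i^{-1}(alpha) for one imprecisely observed data point whose
# regressors x_ij and response y_i are linear uncertain variables L(a, b).

def linear_inv(alpha, a, b):
    """Inverse of the linear uncertainty distribution L(a, b)."""
    return a + alpha * (b - a)

def upsilon_inv(alpha, beta, a, b):
    """Equation (16.45): use the 1 - alpha quantile when beta >= 0."""
    return linear_inv(1 - alpha, a, b) if beta >= 0 else linear_inv(alpha, a, b)

def residual_inv(alpha, beta0, betas, x_bounds, y_bounds):
    """F_i^{-1}(alpha) = Psi_i^{-1}(alpha) - beta0 - sum_j beta_j Upsilon_ij^{-1}."""
    return (linear_inv(alpha, *y_bounds) - beta0
            - sum(b * upsilon_inv(alpha, b, *xb)
                  for b, xb in zip(betas, x_bounds)))

# one illustrative observation: y ~ L(33, 36), x1 ~ L(3, 4)
print(residual_inv(0.5, 20.0, [2.0], [(3, 4)], (33, 36)))  # -> 7.5
```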
Residual Analysis
Definition 16.1 (Lio-Liu [74]) Let (x̃i1 , x̃i2 , · · · , x̃ip , ỹi ), i = 1, 2, · · · , n be
a set of imprecisely observed data, and let the fitted regression model be
y = f (x1 , x2 , · · · , xp |β∗ ). (16.49)
Then for each index i (i = 1, 2, · · · , n), the term
ε̂i = ỹi − f (x̃i1 , x̃i2 , · · · , x̃ip |β∗ ) (16.50)
is called the i-th residual.
If the disturbance term ε is assumed to be an uncertain variable, then
its expected value can be estimated as the average of the expected values of
residuals, i.e.,
ê = (1/n) Σ_{i=1}^n E[ε̂i] (16.51)
and the variance can be estimated as
σ̂² = (1/n) Σ_{i=1}^n E[(ε̂i − ê)²] (16.52)
where
Υij^{-1}(α, βj*) = Φij^{-1}(1 − α) if βj* ≥ 0; Φij^{-1}(α) if βj* < 0, (16.56)
for i = 1, 2, · · · , n and j = 1, 2, · · · , p.
400 Chapter 16 - Uncertain Statistics
Proof: For each index i, the inverse uncertainty distribution of the uncertain
variable
ỹi − β0* − Σ_{j=1}^p βj* x̃ij
is just
Fi^{-1}(α) = Ψi^{-1}(α) − β0* − Σ_{j=1}^p βj* Υij^{-1}(α, βj*).
It follows from Theorems 2.25 and 2.42 that (16.54) and (16.55) hold.
and (ii) the disturbance term ε has expected value ê and variance σ̂ 2 , and
is independent of x̃1 , x̃2 , · · · , x̃p . Lio-Liu [74] suggested that the forecast
uncertain variable of response variable y with respect to x̃1 , x̃2 , · · · , x̃p is
determined by
ŷ = β0* + Σ_{j=1}^p βj* x̃j + ε, (16.61)
and the forecast value is defined as the expected value of the forecast uncertain
variable ŷ, i.e.,
µ = β0* + Σ_{j=1}^p βj* E[x̃j] + ê. (16.62)
where
Υj^{-1}(α, βj*) = Φj^{-1}(α) if βj* ≥ 0; Φj^{-1}(1 − α) if βj* < 0, (16.64)
Exercise 16.4: Let (x̃1 , x̃2 , · · · , x̃p ) be a new explanatory vector, where
x̃1 , x̃2 , · · · , x̃p are independent uncertain variables with regular uncertainty
distributions Φ1 , Φ2 , · · · , Φp , respectively. Assume (i) the fitted linear re-
gression model is
y = β0* + Σ_{j=1}^p βj* xj, (16.68)
and (ii) the disturbance term ε follows a linear uncertainty distribution with ex-
pected value ê and variance σ̂², and is independent of x̃1 , x̃2 , · · · , x̃p . What is
the α confidence interval of the response variable y? (Hint: The linear uncertain
variable L(ê − √3 σ̂, ê + √3 σ̂) has expected value ê and variance σ̂².)
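The hint can be verified numerically via E[ξ] = ∫_0^1 Φ^{-1}(α) dα and V[ξ] = ∫_0^1 (Φ^{-1}(α) − e)² dα (a sketch with illustrative values ê = 2 and σ̂ = 0.5; these numbers are assumptions):

```python
# Sketch: the linear uncertain variable L(e - sqrt(3)*s, e + sqrt(3)*s)
# has expected value e and variance s^2.

import math

def linear_inv(alpha, a, b):
    """Inverse of the linear uncertainty distribution L(a, b)."""
    return a + alpha * (b - a)

def ev_and_var(a, b, steps=50000):
    # midpoint rule for both moment integrals over alpha in (0, 1)
    h = 1.0 / steps
    vals = [linear_inv((i + 0.5) * h, a, b) for i in range(steps)]
    e = sum(vals) * h
    v = sum((x - e) ** 2 for x in vals) * h
    return e, v

e_hat, s_hat = 2.0, 0.5
a = e_hat - math.sqrt(3) * s_hat
b = e_hat + math.sqrt(3) * s_hat
print(ev_and_var(a, b))  # approximately (2.0, 0.25)
```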
and (ii) the disturbance term ε follows a normal uncertainty distribution with
expected value ê and variance σ̂², and is independent of x̃. What are the
forecast value and α confidence interval of the response variable y?
No. x1 x2 x3 y
1 L(3, 4) L(9, 10) L(6, 7) L(33, 36)
2 L(5, 6) L(20, 22) L(6, 7) L(40, 43)
3 L(5, 6) L(18, 20) L(7, 8) L(38, 41)
4 L(5, 6) L(33, 36) L(6, 7) L(46, 49)
5 L(4, 5) L(31, 34) L(7, 8) L(41, 44)
6 L(6, 7) L(13, 15) L(5, 6) L(37, 40)
7 L(6, 7) L(25, 28) L(6, 7) L(39, 42)
8 L(5, 6) L(30, 33) L(4, 5) L(40, 43)
9 L(3, 4) L(5, 6) L(5, 6) L(30, 33)
10 L(7, 8) L(47, 50) L(8, 9) L(52, 55)
11 L(4, 5) L(25, 28) L(5, 6) L(38, 41)
12 L(4, 5) L(11, 13) L(6, 7) L(31, 34)
13 L(8, 9) L(23, 26) L(7, 8) L(43, 46)
14 L(6, 7) L(35, 38) L(7, 8) L(44, 47)
15 L(6, 7) L(39, 44) L(5, 6) L(42, 45)
16 L(3, 4) L(21, 24) L(4, 5) L(33, 36)
17 L(6, 7) L(7, 8) L(5, 6) L(34, 37)
18 L(7, 8) L(40, 43) L(7, 8) L(48, 51)
19 L(4, 5) L(35, 38) L(6, 7) L(38, 41)
20 L(4, 5) L(23, 26) L(3, 4) L(35, 38)
21 L(5, 6) L(33, 36) L(4, 5) L(40, 43)
22 L(5, 6) L(27, 30) L(4, 5) L(36, 39)
23 L(4, 5) L(34, 37) L(8, 9) L(45, 48)
24 L(3, 4) L(15, 17) L(5, 6) L(35, 38)
y = β0 + β1 x1 + β2 x2 + β3 x3 + ε. (16.70)
By solving the minimization problem (16.44), we get the least squares esti-
Section 16.6 - Uncertain Time Series Analysis 403
mate
(β0∗ , β1∗ , β2∗ , β3∗ ) = (21.5196, 0.8678, 0.3110, 1.0053). (16.71)
Thus the fitted linear regression model is
y = 21.5196 + 0.8678 x1 + 0.3110 x2 + 1.0053 x3 .
By using the formulas (16.54) and (16.55), we get the expected value ê and
variance σ̂² of the disturbance term ε. Let
(x̃1 , x̃2 , x̃3 ) ∼ (L(5, 6), L(28, 30), L(6, 7)) (16.74)
be a new uncertain explanatory vector. When x̃1 , x̃2 , x̃3 , ε are independent,
the formula (16.62) gives the forecast value of the response variable y,
µ = 41.8460. (16.75)
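The forecast value (16.75) can be reproduced from formula (16.62) using E[L(a, b)] = (a + b)/2 for the new linear uncertain regressors (16.74); the small estimated expected value ê of the disturbance term is omitted in this sketch, so the result matches (16.75) only approximately:

```python
# Sketch: forecast value (16.62) for the fitted model (16.71) at the new
# explanatory vector (16.74), neglecting the small e_hat term.

beta = (21.5196, 0.8678, 0.3110, 1.0053)   # least squares estimate (16.71)
new_x = [(5, 6), (28, 30), (6, 7)]         # linear uncertain regressors (16.74)

mu = beta[0] + sum(b * (lo + hi) / 2
                   for b, (lo, hi) in zip(beta[1:], new_x))
print(mu)  # approximately 41.846
```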
Taking the confidence level α = 95%, if the disturbance term ε is assumed to
follow a normal uncertainty distribution, then
b = 5.9780 (16.76)
is the minimum value such that (16.66) holds. Therefore, the 95% confidence
interval of the response variable y is µ ± b, i.e., approximately [35.8680, 47.8240].
X = {X1 , X2 , · · · , Xn } (16.78)
If the minimization solution is a0*, a1*, · · · , ak*, then the fitted autoregressive
model is
Xt = a0* + Σ_{i=1}^k ai* X_{t−i} . (16.81)
where
Υ_{t−i}^{-1}(α, ai) = Φ_{t−i}^{-1}(1 − α) if ai ≥ 0; Φ_{t−i}^{-1}(α) if ai < 0, (16.84)
for i = 1, 2, · · · , k.
Proof: For each index t, the inverse uncertainty distribution of the uncertain
variable
Xt − a0 − Σ_{i=1}^k ai X_{t−i}
is just
Ft^{-1}(α) = Φt^{-1}(α) − a0 − Σ_{i=1}^k ai Υ_{t−i}^{-1}(α, ai).
It follows from Theorem 2.42 that
E[(Xt − a0 − Σ_{i=1}^k ai X_{t−i})²] = ∫_0^1 (Φt^{-1}(α) − a0 − Σ_{i=1}^k ai Υ_{t−i}^{-1}(α, ai))² dα.
Residual Analysis
Definition 16.2 (Yang-Liu [166]) Let X1 , X2 , · · · , Xn be imprecisely ob-
served values, and let the fitted autoregressive model be
Xt = a0* + Σ_{i=1}^k ai* X_{t−i} . (16.85)
Then the estimated expected value of disturbance terms under the iid hypoth-
esis is
ê = (1/(n − k)) Σ_{t=k+1}^n ∫_0^1 (Φt^{-1}(α) − a0* − Σ_{i=1}^k ai* Υ_{t−i}^{-1}(α, ai*)) dα (16.90)
where
Υ_{t−i}^{-1}(α, ai*) = Φ_{t−i}^{-1}(1 − α) if ai* ≥ 0; Φ_{t−i}^{-1}(α) if ai* < 0, (16.92)
for i = 1, 2, · · · , k.
Proof: For each index t, the inverse uncertainty distribution of the uncertain
variable
Xt − a0* − Σ_{i=1}^k ai* X_{t−i}
is just
Ft^{-1}(α) = Φt^{-1}(α) − a0* − Σ_{i=1}^k ai* Υ_{t−i}^{-1}(α, ai*).
It follows from Theorems 2.25 and 2.42 that (16.90) and (16.91) hold.
Xt = a0* + Σ_{i=1}^k ai* X_{t−i} , (16.93)
and (ii) the disturbance term εn+1 has expected value ê and variance σ̂ 2 , and
is independent of X1 , X2 , · · · , Xn . Yang-Liu [166] suggested that the forecast
uncertain variable of Xn+1 based on X1 , X2 , · · · , Xn is determined by
X̂_{n+1} = a0* + Σ_{i=1}^k ai* X_{n+1−i} + ε_{n+1} , (16.94)
and the forecast value is defined as the expected value of the forecast uncertain
variable X̂n+1 , i.e.,
µ = a0* + Σ_{i=1}^k ai* E[X_{n+1−i}] + ê. (16.95)
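The forecast value (16.95) of an autoregressive model of order k can be sketched as follows, with imprecisely observed values represented as linear uncertain variables L(a, b) so that E[Xt] = (a + b)/2; the last two observations of the carbon-emission example and the estimate (16.103) are used for illustration, and the ê term is left to the caller:

```python
# Sketch: autoregressive forecast value (16.95), without the e_hat term.

def ar_forecast(a_star, observations):
    """mu = a_0^* + sum_{i=1}^k a_i^* E[X_{n+1-i}] (e_hat handled by caller)."""
    k = len(a_star) - 1
    recent = observations[-k:]                  # X_{n-k+1}, ..., X_n as L(a, b)
    means = [(lo + hi) / 2 for lo, hi in recent]
    # X_{n+1-i} for i = 1..k walks backwards from X_n
    return a_star[0] + sum(a * m for a, m in zip(a_star[1:], reversed(means)))

obs = [(388, 410), (390, 415)]          # X_19, X_20 as linear uncertain variables
coeff = (28.4715, 0.2367, 0.7018)       # (a0*, a1*, a2*) from (16.103)
print(ar_forecast(coeff, obs))          # close to (16.106) up to e_hat
```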
If we suppose further that the disturbance term ε_{n+1} follows a normal un-
certainty distribution, then the inverse uncertainty distribution of the forecast
uncertain variable X̂_{n+1} is
Φ̂_{n+1}^{-1}(α) = a0* + Σ_{i=1}^k ai* Υ_{n+1−i}^{-1}(α, ai*) + Φ^{-1}(α) (16.96)
where
Υ_{n+1−i}^{-1}(α, ai*) = Φ_{n+1−i}^{-1}(α) if ai* ≥ 0; Φ_{n+1−i}^{-1}(1 − α) if ai* < 0, (16.97)
and (ii) the disturbance term ε_{n+1} follows a linear uncertainty distribution with
expected value ê and variance σ̂², and is independent of X1 , X2 , · · · , Xn .
What is the α confidence interval of X_{n+1} ? (Hint: The linear uncertain
variable L(ê − √3 σ̂, ê + √3 σ̂) has expected value ê and variance σ̂².)
X1 X2 X3 X4 X5
L(330, 341) L(333, 346) L(335, 347) L(338, 350) L(340, 354)
X6 X7 X8 X9 X10
L(343, 359) L(344, 364) L(346, 366) L(350, 366) L(355, 369)
X11 X12 X13 X14 X15
L(360, 372) L(362, 376) L(365, 381) L(370, 384) L(373, 390)
X16 X17 X18 X19 X20
L(379, 391) L(380, 398) L(384, 402) L(388, 410) L(390, 415)
By solving the minimization problem (16.83), we get the least squares esti-
mate
(a∗0 , a∗1 , a∗2 ) = (28.4715, 0.2367, 0.7018). (16.103)
Thus the fitted autoregressive model is
Xt = 28.4715 + 0.2367 X_{t−1} + 0.7018 X_{t−2} .
By using the formulas (16.90) and (16.91), we get the expected value ê and
variance σ̂² of the disturbance term ε21 , and the formula (16.95) gives the
forecast value
µ = 403.7361. (16.106)
Taking the confidence level α = 95%, if the disturbance term ε21 is assumed
to follow a normal uncertainty distribution, then
b = 28.7376 (16.107)
is the minimum value such that (16.99) holds. Therefore, the 95% confidence
interval of carbon emission in the 21st year (i.e., X21 ) is µ ± b, i.e., approximately [374.9985, 432.4737].
Appendix A: Uncertain Random Variable
(Γ × Ω, L × A, M × Pr) (A.1)
Γ × Ω = {(γ, ω) | γ ∈ Γ, ω ∈ Ω} . (A.2)
Θω = {γ ∈ Γ | (γ, ω) ∈ Θ} (A.3)
(Figure: an event Θ ⊂ Γ × Ω and its cross section Θω = {γ ∈ Γ | (γ, ω) ∈ Θ}.)
Definition A.1 (Liu [106]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space, and
let Θ ∈ L × A be an event. Then the chance measure of Θ is defined as
Ch{Θ} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ} ≥ x} dx. (A.5)
Θ = {(γ, ω) ∈ Γ × Ω | γ + ω ≤ 1} (A.6)
Ch{Θ} = 1/2. (A.7)
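The value (A.7) can be checked numerically, assuming Γ = Ω = [0, 1] with M the Lebesgue uncertain measure and Pr the uniform probability measure (a standard reading of this example; the book's exact spaces are assumed here):

```python
# Numerical check of Ch{Theta} = 1/2 for Theta = {(gamma, omega) :
# gamma + omega <= 1}, under the assumed chance space above.

def chance(steps=10000):
    # M{gamma : gamma + omega <= 1} = 1 - omega, so
    # Pr{omega : 1 - omega >= x} = 1 - x, and Ch = \int_0^1 (1 - x) dx.
    h = 1.0 / steps
    return sum(1 - (i + 0.5) * h for i in range(steps)) * h

print(chance())  # approximately 0.5
```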
Proof: Let us first prove the identity (A.10). When A is nonempty, we have
{γ ∈ Γ | (γ, ω) ∈ Λ × A} = Λ
and
M{γ ∈ Γ | (γ, ω) ∈ Λ × A} = M{Λ}.
For any real number x, if M{Λ} ≥ x, then
Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} = Pr{A};
otherwise the probability is zero. Thus
Ch{Λ × A} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} dx
= ∫_0^{M{Λ}} Pr{A} dx + ∫_{M{Λ}}^1 0 dx = M{Λ} × Pr{A}.
{γ ∈ Γ | (γ, ω) ∈ Θ1 } ⊂ {γ ∈ Γ | (γ, ω) ∈ Θ2 }
and
M{γ ∈ Γ | (γ, ω) ∈ Θ1 } ≤ M{γ ∈ Γ | (γ, ω) ∈ Θ2 }.
Thus for any real number x, we have
Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ1 } ≥ x}
≤ Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ2 } ≥ x} .
Theorem A.3 (Liu [106], Duality Theorem) The chance measure is self-
dual. That is, for any event Θ, we have Ch{Θ} + Ch{Θc} = 1.
Proof: Since both uncertain measure and probability measure are self-dual,
we have
Ch{Θ} = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ} ≥ x} dx
= ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θc} ≤ 1 − x} dx
= ∫_0^1 (1 − Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θc} > 1 − x}) dx
= 1 − ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θc} > x} dx
= 1 − Ch{Θc}.
= Σ_{i=1}^∞ Ch{Θi}.
Theorem A.9 (Liu [106], Sufficient and Necessary Condition for Chance
Distribution) A function Φ : < → [0, 1] is a chance distribution if and only if
it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.
Proof: Assume Φ is a chance distribution of uncertain random variable ξ.
Let x1 and x2 be two real numbers with x1 < x2 . It follows from Theorem A.7
that
Φ(x1 ) = Ch{ξ ≤ x1 } ≤ Ch{ξ ≤ x2 } = Φ(x2 ).
Hence the chance distribution Φ is a monotone increasing function. Further-
more, if Φ(x) ≡ 0, then
Z 1
Pr {ω ∈ Ω | M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≥ r} dr ≡ 0.
0
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn )? (A.29)
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.30)
Proof: It follows from Theorem A.6 that the uncertain random variable ξ
has a chance distribution
Φ(x) = ∫_0^1 Pr{ω ∈ Ω | M{γ ∈ Γ | ξ(γ, ω) ≤ x} ≥ r} dr
= ∫_0^1 Pr{ω ∈ Ω | M{f(η1(ω), · · · , ηm(ω), τ1 , · · · , τn) ≤ x} ≥ r} dr
= ∫_{ℜ^m} M{f(y1 , y2 , · · · , ym , τ1 , τ2 , · · · , τn) ≤ x} dΨ1(y1) · · · dΨm(ym)
= ∫_{ℜ^m} F(x; y1 , y2 , · · · , ym) dΨ1(y1) dΨ2(y2) · · · dΨm(ym)
ξ = η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn (A.32)
420 Appendix A - Uncertain Random Variable
where
Ψ(y) = ∫_{y1+y2+···+ym ≤ y} dΨ1(y1) dΨ2(y2) · · · dΨm(ym) (A.34)
ξ = η1 η2 · · · ηm τ1 τ2 · · · τn (A.36)
where
Ψ(y) = ∫_{y1 y2 ··· ym ≤ y} dΨ1(y1) dΨ2(y2) · · · dΨm(ym) (A.38)
ξ = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn (A.40)
where
ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (A.44)
where
Ψ(x) = Ψ1 (x)Ψ2 (x) · · · Ψm (x) (A.46)
is the probability distribution of η1 ∨ η2 ∨ · · · ∨ ηm , and
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.48)
where
F(x; y1 , · · · , ym) = sup_{f(y1 ,··· ,ym ,x1 ,··· ,xn )=x} ( min_{1≤i≤k} Υi(xi) ∧ min_{k+1≤i≤n} (1 − Υi(xi)) ).
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.50)
f(y1 , y2 , · · · , ym , Υ1^{-1}(α), · · · , Υk^{-1}(α), Υ_{k+1}^{-1}(1 − α), · · · , Υn^{-1}(1 − α)) = x.
Remark A.5: Sometimes, the equation in the above theorem may not have
a root. In this case, if
f(y1 , y2 , · · · , ym , Υ1^{-1}(α), · · · , Υk^{-1}(α), Υ_{k+1}^{-1}(1 − α), · · · , Υn^{-1}(1 − α)) < x
for all α, then we set the root α = 1; and if
f(y1 , y2 , · · · , ym , Υ1^{-1}(α), · · · , Υk^{-1}(α), Υ_{k+1}^{-1}(1 − α), · · · , Υn^{-1}(1 − α)) > x
for all α, then we set the root α = 0. The root α may be estimated by the
bisection method because
f(y1 , y2 , · · · , ym , Υ1^{-1}(α), · · · , Υk^{-1}(α), Υ_{k+1}^{-1}(1 − α), · · · , Υn^{-1}(1 − α))
is a strictly increasing function with respect to α.
Order Statistics
Definition A.4 (Gao-Sun-Ralescu [38], Order Statistic) Let ξ1 , ξ2 , · · · , ξn
be uncertain random variables, and let k be an index with 1 ≤ k ≤ n. Then
ξ = k-min[ξ1 , ξ2 , · · · , ξn] (A.52)
is called the kth order statistic of ξ1 , ξ2 , · · · , ξn .
Proof: For each index i and each real number yi , since fi is a strictly increas-
ing function, the uncertain variable fi (yi , τi ) has an uncertainty distribution
Fi(x; yi) = sup_{fi(yi ,zi )=x} Υi(zi).
Theorem 2.17 states that the kth order statistic of f1(y1 , τ1), f2(y2 , τ2), · · · ,
fn(yn , τn) has an uncertainty distribution
F(x; y1 , y2 , · · · , yn) = k-max[ sup_{f1(y1 ,z1 )=x} Υ1(z1), · · · , sup_{fn(yn ,zn )=x} Υn(zn) ].
Thus the theorem follows from the operational law of uncertain random vari-
ables immediately.
ξ = f (η1 , · · · , ηm , τ1 , · · · , τn ) (A.56)
where
f*(x1 , · · · , xm) = sup_{f(x1 ,··· ,xm ,y1 ,··· ,yn )=1} min_{1≤j≤n} νj(yj) if that supremum is < 0.5; and
f*(x1 , · · · , xm) = 1 − sup_{f(x1 ,··· ,xm ,y1 ,··· ,yn )=0} min_{1≤j≤n} νj(yj) if sup_{f(x1 ,··· ,xm ,y1 ,··· ,yn )=1} min_{1≤j≤n} νj(yj) ≥ 0.5, (A.58)
µi(xi) = ai if xi = 1; 1 − ai if xi = 0 (i = 1, 2, · · · , m), (A.59)
νj(yj) = bj if yj = 1; 1 − bj if yj = 0 (j = 1, 2, · · · , n). (A.60)
Remark A.6: When the uncertain variables disappear, the operational law
becomes
Pr{ξ = 1} = Σ_{(x1 ,x2 ,··· ,xm )∈{0,1}^m} ( Π_{i=1}^m µi(xi) ) f(x1 , x2 , · · · , xm). (A.61)
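Formula (A.61) can be evaluated by enumerating {0, 1}^m. In this sketch f is taken to be the 2-out-of-3 majority function and the reliabilities ai are illustrative assumptions:

```python
# Sketch: evaluating (A.61) for a Boolean system of random elements.

from itertools import product

def pr_system_works(f, a):
    """Pr{xi = 1} = sum over x in {0,1}^m of (prod_i mu_i(x_i)) f(x)."""
    total = 0.0
    for x in product((0, 1), repeat=len(a)):
        weight = 1.0
        for ai, xi in zip(a, x):
            weight *= ai if xi == 1 else 1 - ai   # mu_i(x_i) as in (A.59)
        total += weight * f(x)
    return total

majority = lambda x: 1 if sum(x) >= 2 else 0
print(pr_system_works(majority, [0.9, 0.9, 0.9]))
```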
Remark A.7: When the random variables disappear, the operational law
becomes
M{ξ = 1} = sup_{f(y1 ,y2 ,··· ,yn )=1} min_{1≤j≤n} νj(yj) if that supremum is < 0.5; and
M{ξ = 1} = 1 − sup_{f(y1 ,y2 ,··· ,yn )=0} min_{1≤j≤n} νj(yj) if sup_{f(y1 ,y2 ,··· ,yn )=1} min_{1≤j≤n} νj(yj) ≥ 0.5. (A.62)
where
µi(xi) = ai if xi = 1; 1 − ai if xi = 0 (i = 1, 2, · · · , m). (A.68)
where
µi(xi) = ai if xi = 1; 1 − ai if xi = 0 (i = 1, 2, · · · , m). (A.70)
Proof: It follows from the chance inversion theorem that for almost all
numbers x, we have Ch{ξ ≥ x} = 1 − Φ(x) and Ch{ξ ≤ x} = Φ(x). By using
the definition of expected value operator, we obtain
E[ξ] = ∫_0^{+∞} Ch{ξ ≥ x} dx − ∫_{−∞}^0 Ch{ξ ≤ x} dx
= ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx.
Proof: It follows from the change of variables of integral and Theorem A.16
that the expected value is
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
= ∫_0^{+∞} x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).
Proof: It follows from the change of variables of integral and Theorem A.16
that the expected value is
E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx
= ∫_{Φ(0)}^1 Φ^{-1}(α) dα + ∫_0^{Φ(0)} Φ^{-1}(α) dα = ∫_0^1 Φ^{-1}(α) dα.
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.75)
and
E[ητ ] = E[η]E[τ ]. (A.77)
and
E[η ∧ τ] = ∫_ℜ ∫_0^1 (y ∧ Υ^{-1}(α)) dα dΨ(y). (A.79)
Proof: Since τ1 and τ2 are independent uncertain variables, for any real
numbers y1 and y2 , the functions f1 (y1 , τ1 ) and f2 (y2 , τ2 ) are also independent
uncertain variables. Thus
Exercise A.15: Assume η1 and η2 are random variables, and τ1 and τ2 are
independent uncertain variables. Show that
Theorem A.21 (Liu [106]) Let ξ be an uncertain random variable, and let f
be a nonnegative even function. If f is decreasing on (−∞, 0] and increasing
on [0, ∞), then for any given number t > 0, we have
Ch{|ξ| ≥ t} ≤ E[f(ξ)] / f(t). (A.82)
Proof: It is clear that Ch{|ξ| ≥ f −1 (r)} is a monotone decreasing function
of r on [0, ∞). It follows from the nonnegativity of f (ξ) that
E[f(ξ)] = ∫_0^{+∞} Ch{f(ξ) ≥ x} dx = ∫_0^{+∞} Ch{|ξ| ≥ f^{-1}(x)} dx
≥ ∫_0^{f(t)} Ch{|ξ| ≥ f^{-1}(x)} dx ≥ ∫_0^{f(t)} Ch{|ξ| ≥ f^{-1}(f(t))} dx
= ∫_0^{f(t)} Ch{|ξ| ≥ t} dx = f(t) · Ch{|ξ| ≥ t}
A.6 Variance
Definition A.6 (Liu [106]) Let ξ be an uncertain random variable with finite
expected value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (A.84)
Since (ξ − e)² is a nonnegative uncertain random variable, we also have
V[ξ] = ∫_0^{+∞} Ch{(ξ − e)² ≥ x} dx. (A.85)
Theorem A.24 (Liu [106]) Let ξ be an uncertain random variable with ex-
pected value e. Then V [ξ] = 0 if and only if Ch{ξ = e} = 1.
Proof: We first assume V[ξ] = 0. It follows from the equation (A.85) that
∫_0^{+∞} Ch{(ξ − e)² ≥ x} dx = 0,
which implies
Ch{(ξ − e)² = 0} = 1.
Proof: This theorem is based on Stipulation A.1 that says the variance of ξ
is
V[ξ] = ∫_0^{+∞} (1 − Φ(e + √y)) dy + ∫_0^{+∞} Φ(e − √y) dy.
Substituting e + √y with x and y with (x − e)², the change of variables and
integration by parts produce
∫_0^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)² = ∫_e^{+∞} (x − e)² dΦ(x).
Similarly, substituting e − √y with x and y with (x − e)², we obtain
∫_0^{+∞} Φ(e − √y) dy = ∫_e^{−∞} Φ(x) d(x − e)² = ∫_{−∞}^e (x − e)² dΦ(x).
Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem A.26 that the variance is
V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫_0^1 (Φ^{-1}(α) − e)² dα.
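The identity V[ξ] = ∫_0^1 (Φ^{-1}(α) − e)² dα can be checked numerically for a concrete distribution. This sketch uses the normal uncertainty distribution N(e, σ), whose inverse is Φ^{-1}(α) = e + (σ√3/π) ln(α/(1 − α)) (a standard form in uncertainty theory); the integral should recover σ²:

```python
# Sketch: variance via the inverse distribution, for N(e, sigma).

import math

def normal_inv(alpha, e, sigma):
    """Inverse normal uncertainty distribution."""
    return e + sigma * math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))

def variance(e, sigma, steps=200000):
    # midpoint rule over alpha in (0, 1); the integrand is integrable
    # despite the logarithmic blow-up at the endpoints
    h = 1.0 / steps
    return sum((normal_inv((i + 0.5) * h, e, sigma) - e) ** 2
               for i in range(steps)) * h

print(variance(0.0, 1.0))  # close to 1
```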
ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.91)
has a variance
V[ξ] = ∫_{ℜ^m} ∫_0^{+∞} (1 − F(e + √x; y1 , y2 , · · · , ym) + F(e − √x; y1 , y2 , · · · , ym)) dx dΨ1(y1) dΨ2(y2) · · · dΨm(ym)
f(y1 , y2 , · · · , ym , Υ1^{-1}(α), · · · , Υk^{-1}(α), Υ_{k+1}^{-1}(1 − α), · · · , Υn^{-1}(1 − α)) = x.
The argument breaks into two cases. Case 1: Assume f(y, z) is strictly
increasing with respect to z. Let Υ denote the common uncertainty distri-
bution of τ1 , τ2 , · · · . It is clear that
M{f (y, τ1 ) ≤ f (y, z)} = Υ(z)
for any real numbers y and z. Thus we have
M{ ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y) } = Υ(z). (A.97)
In addition, since f (η1 , z), f (η2 , z), · · · are a sequence of iid random variables,
the law of large numbers for random variables tells us that
(f(η1 , z) + f(η2 , z) + · · · + f(ηn , z)) / n → ∫_{−∞}^{+∞} f(y, z) dΨ(y), a.s.
as n → ∞. Thus
lim_{n→∞} Ch{ Sn/n ≤ ∫_{−∞}^{+∞} f(y, z) dΨ(y) } = Υ(z). (A.98)
It follows from (A.97) and (A.98) that (A.96) holds. Case 2: Assume f (y, z)
is strictly decreasing with respect to z. Then −f (y, z) is strictly increasing
with respect to z. By using Case 1 we obtain
lim_{n→∞} Ch{ −Sn/n < −z } = M{ −∫_{−∞}^{+∞} f(y, τ1) dΨ(y) < −z }.
That is,
lim_{n→∞} Ch{ Sn/n > z } = M{ ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) > z }.
Show that
Sn/n → E[η1] + τ1 (A.100)
in the sense of convergence in distribution as n → ∞.
Sn = η1 τ1 + η2 τ2 + · · · + ηn τn . (A.101)
Show that
Sn/n → E[η1] τ1 (A.102)
in the sense of convergence in distribution as n → ∞.
for j = 1, 2, · · · , p.
f(x, y1 , · · · , ym , Υ1^{-1}(α), · · · , Υk^{-1}(α), Υ_{k+1}^{-1}(1 − α), · · · , Υn^{-1}(1 − α)).
gj(x, y1 , · · · , ym , Υ1^{-1}(α), · · · , Υn^{-1}(α)) = 0. (A.111)
Proof: It follows from Theorem A.6 that the left side of the chance constraint
(A.109) is
Ch{gj(x, η1 , · · · , ηm , τ1 , · · · , τn) ≤ 0}
= ∫_0^1 Pr{ω ∈ Ω | M{gj(x, η1(ω), · · · , ηm(ω), τ1 , · · · , τn) ≤ 0} ≥ r} dr
= ∫_{ℜ^m} M{gj(x, y1 , · · · , ym , τ1 , · · · , τn) ≤ 0} dΨ1(y1) · · · dΨm(ym)
= ∫_{ℜ^m} Gj(x; y1 , · · · , ym) dΨ1(y1) · · · dΨm(ym)
Remark A.9: Sometimes, the equation (A.111) may not have a root. In
this case, if
gj(x, y1 , · · · , ym , Υ1^{-1}(α), · · · , Υn^{-1}(α)) < 0 (A.112)
for all α, then we set the root α = 1; and if
gj(x, y1 , · · · , ym , Υ1^{-1}(α), · · · , Υn^{-1}(α)) > 0 (A.113)
for all α, then we set the root α = 0.
Remark A.10: The root α may be estimated by the bisection method be-
cause gj(x, y1 , · · · , ym , Υ1^{-1}(α), · · · , Υn^{-1}(α)) is a strictly increasing function
with respect to α.
If all uncertain random factors degenerate to random ones, then the risk
index is the probability measure that the system is loss-positive (Roy [131]).
If all uncertain random factors degenerate to uncertain ones, then the risk
index is the uncertain measure that the system is loss-positive (Liu [83]).
Proof: It follows from the definition of risk index and self-duality of chance
measure that
Risk = Ch{f (ξ1 , ξ2 , · · · , ξn ) > 0}
= 1 − Ch{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0}
= 1 − Φ(0).
The theorem is proved.
f(y1 , · · · , ym , Υ1^{-1}(1 − α), · · · , Υk^{-1}(1 − α), Υ_{k+1}^{-1}(α), · · · , Υn^{-1}(α)) = 0.
Proof: It follows from the definition of risk index and Theorem A.6 that
Risk = Ch{f(η1 , · · · , ηm , τ1 , · · · , τn) > 0}
= ∫_0^1 Pr{ω ∈ Ω | M{f(η1(ω), · · · , ηm(ω), τ1 , · · · , τn) > 0} ≥ r} dr
= ∫_{ℜ^m} M{f(y1 , · · · , ym , τ1 , · · · , τn) > 0} dΨ1(y1) · · · dΨm(ym)
= ∫_{ℜ^m} G(y1 , · · · , ym) dΨ1(y1) · · · dΨm(ym)
f(y1 , · · · , ym , Υ1^{-1}(1 − α), · · · , Υk^{-1}(1 − α), Υ_{k+1}^{-1}(α), · · · , Υn^{-1}(α)) = 0.
Remark A.12: Sometimes, the equation may not have a root. In this case,
if
f(y1 , · · · , ym , Υ1^{-1}(1 − α), · · · , Υk^{-1}(1 − α), Υ_{k+1}^{-1}(α), · · · , Υn^{-1}(α)) < 0
for all α, then we set the root α = 0; and if
f(y1 , · · · , ym , Υ1^{-1}(1 − α), · · · , Υk^{-1}(1 − α), Υ_{k+1}^{-1}(α), · · · , Υn^{-1}(α)) > 0
for all α, then we set the root α = 1.
Exercise A.19: (Series System) Consider a series system in which there are
m elements whose lifetimes are independent random variables η1 , η2 , · · · , ηm
with continuous probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements
whose lifetimes are independent uncertain variables τ1 , τ2 , · · · , τn with con-
tinuous uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is
understood as the case that the system fails before the time T , then the loss
function is
f = T − η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (A.118)
Risk = a + b − ab (A.119)
where
a = 1 − (1 − Ψ1(T))(1 − Ψ2(T)) · · · (1 − Ψm(T)), (A.120)
b = Υ1(T) ∨ Υ2(T) ∨ · · · ∨ Υn(T). (A.121)
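The series-system risk index Risk = a + b − ab can be sketched directly, where a is taken to be the probability that some random element fails before T, i.e. a = 1 − Π_i (1 − Ψi(T)) (an assumption consistent with the series structure), and b = max_i Υi(T) as in (A.121); the distribution values below are illustrative:

```python
# Sketch: risk index of a series system of m random and n uncertain elements.

def series_risk(psi_at_T, upsilon_at_T):
    """Risk = a + b - ab with a = 1 - prod(1 - Psi_i(T)), b = max Upsilon_i(T)."""
    survive = 1.0
    for p in psi_at_T:
        survive *= 1 - p
    a = 1 - survive
    b = max(upsilon_at_T)
    return a + b - a * b

print(series_risk([0.1, 0.2], [0.3, 0.4]))
```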
f = T − η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (A.122)
where
a = Ψ1 (T )Ψ2 (T ) · · · Ψm (T ), (A.124)
b = Υ1 (T ) ∧ Υ2 (T ) ∧ · · · ∧ Υn (T ). (A.125)
f = T − k-max[η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ]. (A.126)
k-max[y1 , y2 , · · · , ym , Υ1^{-1}(α), Υ2^{-1}(α), · · · , Υn^{-1}(α)] = T. (A.128)
f = T − (η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn ). (A.129)
Υ1^{-1}(α) + Υ2^{-1}(α) + · · · + Υn^{-1}(α) = T − (y1 + y2 + · · · + ym). (A.131)
Note that VaR(α) represents the maximum possible loss when α percent
of the right tail distribution is ignored. In other words, the loss will ex-
ceed VaR(α) with chance measure α. If the chance distribution Φ(x) of
f (ξ1 , ξ2 , · · · , ξn ) is continuous, then
If its inverse chance distribution Φ−1 (α) exists, then the expected loss is
L = ∫_0^1 (Φ^{-1}(α))⁺ dα. (A.137)
where

    f∗(x1, · · · , xm) =
      sup_{f(x1,··· ,xm,y1,··· ,yn)=1} min_{1≤j≤n} νj(yj),
          if sup_{f(x1,··· ,xm,y1,··· ,yn)=1} min_{1≤j≤n} νj(yj) < 0.5;
      1 − sup_{f(x1,··· ,xm,y1,··· ,yn)=0} min_{1≤j≤n} νj(yj),
          if sup_{f(x1,··· ,xm,y1,··· ,yn)=1} min_{1≤j≤n} νj(yj) ≥ 0.5,   (A.140)

    μi(xi) = { ai, if xi = 1;  1 − ai, if xi = 0 }   (i = 1, 2, · · · , m),   (A.141)

    νj(yj) = { bj, if yj = 1;  1 − bj, if yj = 0 }   (j = 1, 2, · · · , n).   (A.142)
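For small systems, (A.140)–(A.142) can be evaluated by brute force: average f∗ over all random-element states x weighted by their probabilities μi(xi). The sketch below is our own illustration (not the book's code), and the structure function `series` and the reliabilities are hypothetical.

```python
from itertools import product

# Sketch: reliability index of a Boolean system with m random elements
# (reliabilities a_i) and n uncertain elements (reliabilities b_j),
# by exhaustive enumeration of element states.

def reliability(f, a, b):
    m, n = len(a), len(b)
    total = 0.0
    for x in product((0, 1), repeat=m):
        # probability weight of this random configuration: prod mu_i(x_i)
        w = 1.0
        for ai, xi in zip(a, x):
            w *= ai if xi == 1 else 1.0 - ai

        def sup_min(value):
            # sup over uncertain states y with f(x, y) = value of min nu_j(y_j)
            best = 0.0
            for y in product((0, 1), repeat=n):
                if f(x + y) == value:
                    nu = min(bj if yj == 1 else 1.0 - bj
                             for bj, yj in zip(b, y))
                    best = max(best, nu)
            return best

        s1 = sup_min(1)
        fstar = s1 if s1 < 0.5 else 1.0 - sup_min(0)  # (A.140)
        total += w * fstar
    return total

series = lambda z: int(all(z))  # series system works iff every element works
print(reliability(series, [0.9, 0.8], [0.7]))  # 0.9*0.8*0.7 = 0.504
```

For a series system this reproduces the product-times-minimum rule: (0.9 × 0.8) × 0.7 = 0.504.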
Exercise A.23: (Series System) Consider a series system in which there are
m independent random elements η1 , η2 , · · ·, ηm with reliabilities a1 , a2 , · · ·, am ,
and n independent uncertain elements τ1 , τ2 , · · ·, τn with reliabilities b1 , b2 , · · · ,
bn, respectively. Note that the structure function of the series system is

    f = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn.              (A.143)

For a parallel system, the structure function is

    f = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn.              (A.145)
where
    μi(xi) = { ai, if xi = 1;  1 − ai, if xi = 0 }   (i = 1, 2, · · · , m).   (A.148)
        ⎡ α11  α12  · · ·  α1n ⎤
    T = ⎢ α21  α22  · · ·  α2n ⎥ .                               (A.152)
        ⎢  ·    ·   · · ·   ·  ⎥
        ⎣ αn1  αn2  · · ·  αnn ⎦
Section A.11 - Uncertain Random Graph 445
Please note that the uncertain random graph becomes a random graph
(Erdős-Rényi [29], Gilbert [51]) if the collection U of uncertain edges vanishes;
and becomes an uncertain graph (Gao-Gao [43]) if the collection R of random
edges vanishes.
In order to deal with uncertain random graphs, let us introduce some symbols. Write

        ⎡ x11  x12  · · ·  x1n ⎤
    X = ⎢ x21  x22  · · ·  x2n ⎥                                 (A.153)
        ⎢  ·    ·   · · ·   ·  ⎥
        ⎣ xn1  xn2  · · ·  xnn ⎦

and

    X = { X | xij = 0 or 1 if (i, j) ∈ R;  xij = 0 if (i, j) ∈ U;
              xij = xji, i, j = 1, 2, · · · , n;  xii = 0, i = 1, 2, · · · , n }.   (A.154)
where

    f∗(Y) =
      sup_{X∈Y∗, f(X)=1} min_{(i,j)∈U} νij(X),
          if sup_{X∈Y∗, f(X)=1} min_{(i,j)∈U} νij(X) < 0.5;
      1 − sup_{X∈Y∗, f(X)=0} min_{(i,j)∈U} νij(X),
          if sup_{X∈Y∗, f(X)=1} min_{(i,j)∈U} νij(X) ≥ 0.5,

    νij(X) = { αij, if xij = 1;  1 − αij, if xij = 0 }   ((i, j) ∈ U),   (A.158)

    f(X) = { 1, if I + X + X² + · · · + Xⁿ⁻¹ > 0;  0, otherwise },        (A.159)

and X and Y∗ are defined by (A.154) and (A.156), respectively.
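The connectivity test (A.159) — a graph is connected iff every entry of I + X + · · · + Xⁿ⁻¹ is positive — can be checked directly with integer matrix arithmetic. The sketch below is our own pure-Python illustration, and the two example adjacency matrices are made up.

```python
# Sketch of the structure function f(X) in (A.159): a symmetric 0-1
# adjacency matrix X describes a connected graph iff every entry of
# I + X + X^2 + ... + X^(n-1) is positive.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def connected(X):
    n = len(X)
    S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # running sum, starts at I
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # current power X^k
    for _ in range(1, n):
        P = matmul(P, X)
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return int(all(S[i][j] > 0 for i in range(n) for j in range(n)))

path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]    # path 1-2-3: connected
split = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]   # vertex 3 isolated: disconnected
print(connected(path), connected(split))    # 1 0
```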
where

    X = { X | xij = 0 or 1, i, j = 1, 2, · · · , n;
              xij = xji, i, j = 1, 2, · · · , n;
              xii = 0, i = 1, 2, · · · , n }.                     (A.161)
Section A.12 - Uncertain Random Network 447
where X becomes

    X = { X | xij = 0 or 1, i, j = 1, 2, · · · , n;
              xij = xji, i, j = 1, 2, · · · , n;
              xii = 0, i = 1, 2, · · · , n }.                     (A.162)
Please note that the uncertain random network becomes a random net-
work (Frank-Hakimi [30]) if all weights are random variables; and becomes
an uncertain network (Liu [84]) if all weights are uncertain variables.
(Figure: an uncertain random network with nodes 1, 2, · · · , 6.)
U = {(1, 2), (1, 3), (2, 4), (2, 5), (3, 4), (3, 5)}, (A.168)
    f(Υij⁻¹(α), (i, j) ∈ U; yij, (i, j) ∈ R) = x                  (A.172)

and f is the length of the shortest path, which may be calculated by the Dijkstra algorithm (Dijkstra [25]) when the weights are yij for (i, j) ∈ R and Υij⁻¹(α) for (i, j) ∈ U, respectively.
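As a minimal sketch of this evaluation (our own toy 3-node graph, not the network in the text): at a fixed level α, each uncertain edge carries the weight Υij⁻¹(α) and each random edge its realized weight yij, after which an ordinary Dijkstra run gives f. The linear inverse distribution Υ⁻¹(α) = lo + α(hi − lo) is an illustrative assumption.

```python
import heapq

# Sketch: shortest path at level alpha for a mixed uncertain/random network.

def dijkstra(n, weights, src, dst):
    """weights: dict {(i, j): w} on an undirected graph with nodes 0..n-1."""
    adj = {v: [] for v in range(n)}
    for (i, j), w in weights.items():
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue                      # stale heap entry
        for u, w in adj[v]:
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return float("inf")

alpha = 0.5
w = {(0, 1): 2.0,                  # random edge: realized weight y_01
     (1, 2): 1.0 + alpha * 2.0,    # uncertain edge: inverse distribution at alpha
     (0, 2): 5.0}                  # random edge: realized weight y_02
print(dijkstra(3, w, 0, 2))        # min(5.0, 2.0 + 2.0) = 4.0
```

Sweeping α over (0, 1) then traces out the chance distribution of the shortest-path length.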
Section A.13 - Uncertain Random Process 449
Definition A.13 (Gao-Yao [31]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space
and let T be a totally ordered set (e.g. time). An uncertain random process is
a function Xt (γ, ω) from T × (Γ, L, M) × (Ω, A, Pr) to the set of real numbers
such that {Xt ∈ B} is an event in L × A for any Borel set B of real numbers
at each time t.
Xt = f (Yt , Zt ) (A.175)
for n ≥ 1. Then

    Nt = max_{n≥0} { n | Sn ≤ t }                                (A.177)
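As a quick illustration of (A.177) on a single realized sample path (the interarrival times below are made-up numbers), Nt is simply the count of partial sums Sn that do not exceed t:

```python
import bisect

# Sketch: the renewal counting process N_t = max{ n >= 0 : S_n <= t }
# for one sample path, where S_n = xi_1 + ... + xi_n.

def renewal_count(interarrivals, t):
    s, arrivals = 0.0, []
    for x in interarrivals:
        s += x
        arrivals.append(s)                    # S_1, S_2, ...
    return bisect.bisect_right(arrivals, t)   # number of S_n <= t

print(renewal_count([1.0, 2.0, 0.5, 3.0], 3.6))  # S = 1.0, 3.0, 3.5, 6.5 -> 3
```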
where ⌊tx⌋ represents the maximal integer less than or equal to tx. Since ⌊tx⌋ ≤ tx < ⌊tx⌋ + 1, we immediately have

    (⌊tx⌋ / (⌊tx⌋ + 1)) · (1/x) ≤ t / (⌊tx⌋ + 1) < 1/x

and then

    Ch{ S_{⌊tx⌋+1} / (⌊tx⌋ + 1) > 1/x }
      ≤ Ch{ S_{⌊tx⌋+1} / (⌊tx⌋ + 1) > t / (⌊tx⌋ + 1) }
      ≤ Ch{ S_{⌊tx⌋+1} / ⌊tx⌋ > 1/x }.
It follows from the law of large numbers for uncertain random variables that

    lim_{t→∞} Ch{ S_{⌊tx⌋+1} / (⌊tx⌋ + 1) > 1/x }
      = 1 − lim_{t→∞} Ch{ S_{⌊tx⌋+1} / (⌊tx⌋ + 1) ≤ 1/x }
      = 1 − M{ ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ 1/x }
      = M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )⁻¹ ≤ x }

and

    lim_{t→∞} Ch{ S_{⌊tx⌋+1} / ⌊tx⌋ > 1/x }
      = 1 − lim_{t→∞} Ch{ ((⌊tx⌋ + 1)/⌊tx⌋) · S_{⌊tx⌋+1} / (⌊tx⌋ + 1) ≤ 1/x }
      = 1 − M{ ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ 1/x }
      = M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )⁻¹ ≤ x },

and then

    lim_{t→∞} Ch{ Nt/t ≤ x } = M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )⁻¹ ≤ x }.
    Rt/t → E[η1]/τ1                                              (A.184)

in the sense of convergence in distribution as t → ∞.
that is just the uncertainty distribution of E[η1 ]/τ1 . The theorem is thus
proved.
    At =
      t − Σ_{i=1}^{Nt} τi,
          if Σ_{i=1}^{Nt} (ηi + τi) ≤ t < Σ_{i=1}^{Nt} (ηi + τi) + η_{Nt+1};
      Σ_{i=1}^{Nt+1} ηi,
          if Σ_{i=1}^{Nt} (ηi + τi) + η_{Nt+1} ≤ t < Σ_{i=1}^{Nt+1} (ηi + τi).   (A.185)

Then

    At/t → E[η1] / (E[η1] + τ1)                                  (A.186)

in the sense of convergence in distribution as t → ∞, and the limit has uncertainty distribution

    Υ(x) = M{ E[η1] / (E[η1] + τ1) ≤ x } = M{ τ1 ≥ E[η1](1 − x)/x }
         = 1 − M{ τ1 < E[η1](1 − x)/x } = 1 − Φ( E[η1](1 − x)/x ).
On the one hand, by the Lebesgue dominated convergence theorem and the
continuity of probability measure, we have
    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x }
      = lim_{t→∞} ∫₀¹ Pr{ M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≥ r } dr
      = ∫₀¹ lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≥ r } dr
      = ∫₀¹ Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≥ r } dr.
Note that

    M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x }
      = M{ ⋃_{k=0}^{∞} ( (1/t) Σ_{i=1}^{k} ηi ≤ x ) ∩ (Nt = k) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k} ηi ≤ tx ) ∩ ( Σ_{i=1}^{k+1} (ηi + τi) > t ) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k} ηi ≤ tx ) ∩ ( tx + η_{k+1} + Σ_{i=1}^{k+1} τi > t ) }
      = M{ ⋃_{k=0}^{∞} ( k ≤ N*_{tx} ) ∩ ( η_{k+1}/t + (1/t) Σ_{i=1}^{k+1} τi > 1 − x ) }
we have
    lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x }
      ≤ lim_{t→∞} M{ ⋃_{k=0}^{∞} ( k ≤ N*_{tx} ) ∩ ( τ1 > (t − tx)/(k + 1) ) }
      = lim_{t→∞} M{ ⋃_{k=0}^{N*_{tx}} ( τ1 > (t − tx)/(k + 1) ) }
      = lim_{t→∞} M{ τ1 > (t − tx)/(N*_{tx} + 1) }
      = 1 − lim_{t→∞} Φ( (t − tx)/(N*_{tx} + 1) ).

By the elementary renewal theorem, N*_{tx}/(tx) → 1/E[η1] almost surely, so (t − tx)/(N*_{tx} + 1) → E[η1](1 − x)/x as t → ∞, and then

    lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≤ 1 − Φ( E[η1](1 − x)/x ) = Υ(x).
Thus

    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≤ ∫₀¹ Pr{ Υ(x) ≥ r } dr = Υ(x).   (A.187)
On the other hand, by the Lebesgue dominated convergence theorem and the
continuity of probability measure, we have
    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi > x }
      = lim_{t→∞} ∫₀¹ Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≥ r } dr
      = ∫₀¹ lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≥ r } dr
      = ∫₀¹ Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≥ r } dr.
Note that

    M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x }
      = M{ ⋃_{k=0}^{∞} ( (1/t) Σ_{i=1}^{k+1} ηi > x ) ∩ (Nt = k) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k+1} ηi > tx ) ∩ ( Σ_{i=1}^{k} (ηi + τi) ≤ t ) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k+1} ηi > tx ) ∩ ( tx − η_{k+1} + Σ_{i=1}^{k} τi ≤ t ) }
      = M{ ⋃_{k=0}^{∞} ( N*_{tx} ≤ k ) ∩ ( (1/t) Σ_{i=1}^{k} τi − η_{k+1}/t ≤ 1 − x ) }.
Since

    Σ_{i=1}^{k} τi ∼ k τ1

and η_{k+1}/t → 0 as t → ∞,
we have

    lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x }
      ≤ lim_{t→∞} M{ ⋃_{k=0}^{∞} ( N*_{tx} ≤ k ) ∩ ( τ1 ≤ (t − tx)/k ) }
      = lim_{t→∞} M{ ⋃_{k=N*_{tx}}^{∞} ( τ1 ≤ (t − tx)/k ) }
      = lim_{t→∞} M{ τ1 ≤ (t − tx)/N*_{tx} }
      = lim_{t→∞} Φ( (t − tx)/N*_{tx} ).

By the elementary renewal theorem, we have

    N*_{tx}/(tx) → 1/E[η1],  a.s.

as t → ∞, and then

    lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≤ Φ( E[η1](1 − x)/x ) = 1 − Υ(x).

Thus

    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≤ ∫₀¹ Pr{ 1 − Υ(x) ≥ r } dr = 1 − Υ(x).
Since

    (1/t) Σ_{i=1}^{Nt} ηi ≤ At/t ≤ (1/t) Σ_{i=1}^{Nt+1} ηi,

we obtain

    Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi ≤ x } ≤ Ch{ At/t ≤ x } ≤ Ch{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x }.
    At =
      t − Σ_{i=1}^{Nt} ηi,
          if Σ_{i=1}^{Nt} (τi + ηi) ≤ t < Σ_{i=1}^{Nt} (τi + ηi) + τ_{Nt+1};
      Σ_{i=1}^{Nt+1} τi,
          if Σ_{i=1}^{Nt} (τi + ηi) + τ_{Nt+1} ≤ t < Σ_{i=1}^{Nt+1} (τi + ηi).   (A.189)

Then

    At/t → τ1 / (τ1 + E[η1])                                     (A.190)

in the sense of convergence in distribution as t → ∞, and the limit has uncertainty distribution

    Υ(x) = M{ τ1 / (τ1 + E[η1]) ≤ x } = M{ τ1 ≤ E[η1]x/(1 − x) } = Φ( E[η1]x/(1 − x) ).
On the one hand, by the Lebesgue dominated convergence theorem and the
continuity of probability measure, we have
    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} τi ≤ x }
      = lim_{t→∞} ∫₀¹ Pr{ M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≥ r } dr
      = ∫₀¹ lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≥ r } dr
      = ∫₀¹ Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≥ r } dr.
Note that

    M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x }
      = M{ ⋃_{k=0}^{∞} ( (1/t) Σ_{i=1}^{k} τi ≤ x ) ∩ (Nt = k) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k} τi ≤ tx ) ∩ ( Σ_{i=1}^{k+1} (τi + ηi) > t ) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k} τi ≤ tx ) ∩ ( tx + τ_{k+1} + Σ_{i=1}^{k+1} ηi > t ) }
      = M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k} τi ≤ tx ) ∩ ( τ_{k+1}/t + (1/t) Σ_{i=1}^{k+1} ηi > 1 − x ) }.
Since

    Σ_{i=1}^{k} τi ∼ k τ1

and τ_{k+1}/t → 0 as t → ∞,
we have

    lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x }
      ≤ lim_{t→∞} M{ ⋃_{k=0}^{∞} ( τ1 ≤ tx/k ) ∩ ( (1/t) Σ_{i=1}^{k+1} ηi > 1 − x ) }
      = lim_{t→∞} M{ ⋃_{k=0}^{∞} ( τ1 ≤ tx/k ) ∩ ( N*_{t−tx} ≤ k ) }
      = lim_{t→∞} M{ ⋃_{k=N*_{t−tx}}^{∞} ( τ1 ≤ tx/k ) }
      = lim_{t→∞} M{ τ1 ≤ tx/N*_{t−tx} }
      = lim_{t→∞} Φ( tx/N*_{t−tx} ),

which equals Φ( E[η1]x/(1 − x) ) = Υ(x) by the elementary renewal theorem.
Thus

    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≤ ∫₀¹ Pr{ Υ(x) ≥ r } dr = Υ(x).   (A.191)
On the other hand, by the Lebesgue dominated convergence theorem and the
continuity of probability measure, we have
    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} τi > x }
      = lim_{t→∞} ∫₀¹ Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≥ r } dr
      = ∫₀¹ lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≥ r } dr
      = ∫₀¹ Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≥ r } dr.
Note that

    M{ (1/t) Σ_{i=1}^{Nt+1} τi > x }
      = M{ ⋃_{k=0}^{∞} ( (1/t) Σ_{i=1}^{k+1} τi > x ) ∩ (Nt = k) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k+1} τi > tx ) ∩ ( Σ_{i=1}^{k} (τi + ηi) ≤ t ) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k+1} τi > tx ) ∩ ( tx − τ_{k+1} + Σ_{i=1}^{k} ηi ≤ t ) }
      ≤ M{ ⋃_{k=0}^{∞} ( Σ_{i=1}^{k+1} τi > tx ) ∩ ( (1/t) Σ_{i=1}^{k} ηi − τ_{k+1}/t ≤ 1 − x ) }.
Since

    Σ_{i=1}^{k+1} τi ∼ (k + 1) τ1

and τ_{k+1}/t → 0 as t → ∞,
we have

    lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} τi > x }
      ≤ lim_{t→∞} M{ ⋃_{k=0}^{∞} ( τ1 > tx/(k + 1) ) ∩ ( (1/t) Σ_{i=1}^{k} ηi ≤ 1 − x ) }
      = lim_{t→∞} M{ ⋃_{k=0}^{∞} ( τ1 > tx/(k + 1) ) ∩ ( N*_{t−tx} ≥ k ) }
      = lim_{t→∞} M{ ⋃_{k=0}^{N*_{t−tx}} ( τ1 > tx/(k + 1) ) }
      = lim_{t→∞} M{ τ1 > tx/(N*_{t−tx} + 1) }
      = 1 − lim_{t→∞} Φ( tx/(N*_{t−tx} + 1) )
      = 1 − Φ( E[η1]x/(1 − x) ) = 1 − Υ(x),

where the last step follows from the elementary renewal theorem. Thus

    lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≤ ∫₀¹ Pr{ 1 − Υ(x) ≥ r } dr = 1 − Υ(x).
Since

    (1/t) Σ_{i=1}^{Nt} τi ≤ At/t ≤ (1/t) Σ_{i=1}^{Nt+1} τi,

we obtain

    Ch{ (1/t) Σ_{i=1}^{Nt+1} τi ≤ x } ≤ Ch{ At/t ≤ x } ≤ Ch{ (1/t) Σ_{i=1}^{Nt} τi ≤ x }.
Urn Problems
The basic urn problem is to determine the probability of drawing a ball of a given color from an urn containing differently colored balls. This appendix designs some new urn problems in order to illustrate probability theory, uncertainty theory and chance theory.
Assume I have filled 100 urns each with 100 balls that are either red or black.
You are told that the compositions (red versus black) in those urns are iid,
but the distribution function is completely unknown to you.
(i) How many balls do you think are red in the first urn?
(ii) How many balls do you think are red in the 100 urns?
(iii) How likely do you think the number of red balls is 10,000?
Let us first consider those problems by probability theory. Since you do not know the compositions at all, the Laplace criterion makes you assign equal probabilities to the possible outcomes 0, 1, 2, · · · , 100. Thus, for each i with 1 ≤ i ≤ 100, the number of red balls in the ith urn is a random variable

    ξi = k with probability 1/101,  k = 0, 1, 2, · · · , 100.

Especially, since the total number of red balls is 10,000 if and only if the 100 urns each contain 100 red balls, the probability measure of the total number of red balls being 10,000 is (1/101)¹⁰⁰ ≈ 3.6 × 10⁻²⁰¹.

Next let us consider those problems by uncertainty theory, under which the number of red balls in each urn is an uncertain variable and the total number of red balls has the uncertainty distribution

    Ψ(x) =
      0,            if x < 0;
      (k + 1)/101,  if 100k ≤ x < 100(k + 1),  k = 0, 1, 2, · · · , 99;
      1,            if x ≥ 10000.
Section B.1 - Urn Problems 467
Especially, since the total number of red balls is 10,000 if and only if the 100 urns each contain 100 red balls, the uncertain measure of the total number of red balls being 10,000 is 1/101.

Now consider a bet:

A: You lose $1,000,000 if the total number of red balls is 10,000, and receive $1 otherwise;

B: Don't bet.
What is your choice between A and B? If probability theory is used, then the probability of the total number of red balls being 10,000 is 3.6 × 10⁻²⁰¹, and the expected income of A is

    A = 1 × (1 − 3.6 × 10⁻²⁰¹) − 1,000,000 × 3.6 × 10⁻²⁰¹ ≈ $1.

Thus

    A > B.
That is, probability theory makes you choose A. If uncertainty theory is used,
then the uncertain measure of the total number of red balls being 10,000 is
1/101, and the expected income of A is
    A = 1 × (1 − 1/101) − 1,000,000 × (1/101) ≈ −$9,900.

Thus
A < B.
That is, uncertainty theory makes you choose B. Who do you believe?
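The two expected incomes above are a one-line calculation. The sketch below is our own arithmetic check (the function name is ours, not the book's); it plugs the probabilistic answer (1/101)¹⁰⁰ and the uncertain answer 1/101 into the same payoff.

```python
# Sketch: expected income of option A under the two theories' answers for
# the chance that the total number of red balls is 10,000.

def expected_income_A(chance_of_10000):
    return 1 * (1 - chance_of_10000) - 1_000_000 * chance_of_10000

p_prob = (1 / 101) ** 100     # probability theory: about 3.6e-201
p_unc = 1 / 101               # uncertainty theory

print(expected_income_A(p_prob))  # about $1, so A > B
print(expected_income_A(p_unc))   # about -$9,900, so A < B
```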
Now I would like to show you how I filled the 100 urns. First I took a
distribution function (please recognize that I have the option to choose my
preferred distribution function),
    Υ(x) =
      0, if x < 100;
      1, if x ≥ 100,
that is just the constant 100. Next I generated a random number k from
the distribution function Υ, and filled the first urn with k red balls and
100 − k black balls. Then I generated a new random number k from Υ, and
filled the second urn with k red balls and 100 − k black balls. I repeated this process until all 100 urns were filled. Note that 100, 100, · · · , 100 are indeed iid, and the total number of red balls happens to be 10,000. You would lose $1,000,000 if you used probability theory. Do you now believe that uncertainty theory is better than probability theory for dealing with the unknown-composition urn problem?
Frequently Asked Questions
This appendix will answer some frequently asked questions related to prob-
ability theory and uncertainty theory as well as their applications. This
appendix will also show why fuzzy set is a wrong model in both theory and
practice. Finally, I will clarify what uncertainty is.
It is easy for us to understand why we need to justify that the object meets the three axioms. However, some readers may wonder why we also need to justify that the object meets the product probability theorem. The reason is that the product probability theorem cannot be deduced from Kolmogorov's axioms unless we presuppose that the product probability meets the three axioms. In other words, an object does not necessarily satisfy the product probability theorem if it is only justified to meet the three axioms. Would that surprise you?
Please keep in mind that “an object follows the laws of probability the-
ory” is equivalent to “an object meets the three axioms plus the product
probability theorem”. This assertion is stronger than “an object meets the
three axioms of Kolmogorov”. In other words, the three axioms do not ensure
that an object follows the laws of probability theory.
There exist two broad categories of interpretations of probability, one is
frequency interpretation and the other is belief interpretation. The frequency
interpretation takes the probability to be the frequency with which an event
happens (Venn [146], Reichenbach [129], von Mises [147]), while the belief
interpretation takes the probability to be the degree to which we believe an
event will happen (Ramsey [128], de Finetti [23], Savage [133]).
The debate between the two interpretations has lasted since the nineteenth century. Personally, I agree with the frequency interpretation, but strongly oppose the belief interpretation of probability, because frequency follows the laws of probability theory while belief degree does not. The detailed reasons will be given in the following few sections.
Figure C.1: Let A and B be two events from different probability spaces
(essentially they come from two different experiments). If A happens α times
and B happens β times, then the product A × B happens α × β times, where
α and β are understood as percentage numbers.
Footnote: A Dutch book is a collection of bets whose prices guarantee you a loss regardless of the outcome of the gamble. For example, let A be a bet that offers $1 if A happens, let B be a bet that offers $1 if B happens, and let A ∨ B be a bet that offers $1 if either A or B happens. If the prices of A, B and A ∨ B are 30¢, 40¢ and 80¢, respectively, and you (i) sell A, (ii) sell B, and (iii) buy A ∨ B, then you are guaranteed to lose 10¢ no matter what happens. Thus there exists a Dutch book, and the prices are considered to be irrational.
belief degree is irrational if there exists a book that guarantees you a loss. For the moment, let us agree with this criterion.
Let A1 be a bet that offers $1 if A1 happens, and let A2 be a bet that offers $1 if A2 happens, where A1 and A2 are disjoint events. Assume the belief degrees of A1 and A2 are α1 and α2, respectively. This means the prices of A1 and A2 are $α1 and $α2, respectively. Now we consider the bet A1 ∪ A2 that offers $1 if either A1 or A2 happens, and write the belief degree of A1 ∪ A2 as α. This means the
price of A1 ∪ A2 is $α. If α > α1 + α2 , then you (i) sell A1 , (ii) sell A2 , and
(iii) buy A1 ∪ A2 . It is clear that you are guaranteed to lose α − α1 − α2 > 0.
Thus there exists a Dutch book and the assumption α > α1 + α2 is irrational.
If α < α1 + α2 , then you (i) buy A1 , (ii) buy A2 , and (iii) sell A1 ∪ A2 . It is
clear that you are guaranteed to lose α1 + α2 − α > 0. Thus there exists a
Dutch book and the assumption α < α1 + α2 is irrational. Hence we have to
assume α = α1 + α2 and the belief degree meets the additivity axiom (but
this assertion is questionable because you cannot reverse “buy” and “sell”
arbitrarily due to the unequal status of the decision maker and the domain
expert).
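The Dutch-book bookkeeping in this argument can be checked mechanically. Below is our own sketch (not from the book) for disjoint A1 and A2 in the case α < α1 + α2: whatever happens, the combination (buy A1, buy A2, sell A1 ∪ A2) nets α − α1 − α2, a guaranteed loss. The specific prices are hypothetical.

```python
# Sketch: net payoff of (buy A1, buy A2, sell A1 ∪ A2) at prices a1, a2, a,
# enumerated over every possible outcome for disjoint events A1, A2.

def net_payoff(a1, a2, a, A1_happens, A2_happens):
    union = A1_happens or A2_happens
    cash = -a1 - a2 + a                  # pay for A1 and A2; collect for the union
    cash += (1 if A1_happens else 0)     # bet A1 pays $1
    cash += (1 if A2_happens else 0)     # bet A2 pays $1
    cash -= (1 if union else 0)          # we owe $1 on the sold union bet
    return cash

a1, a2, a = 0.3, 0.4, 0.5                # a < a1 + a2: irrational prices
for outcome in [(False, False), (True, False), (False, True)]:  # disjoint: not both
    print(net_payoff(a1, a2, a, *outcome))  # always a - a1 - a2 = -0.2
```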
Until now we have verified that the belief degree meets the three axioms
of probability theory. Almost all subjectivists stop here and assert that belief
degree follows the laws of probability theory. Unfortunately, the evidence is
not enough for this conclusion because we have not verified whether belief
degree meets the product probability theorem or not. In fact, it is impossible
for us to prove belief degree meets the product probability theorem through
the Dutch book argument.
Recall the example of truck-cross-over-bridge on Page 6. Let Ai represent that the ith bridge strength is greater than 90 tons, i = 1, 2, · · · , 50. For each i, since your belief degree for Ai is 75%, you are willing to pay 75¢ for the bet that offers $1 if Ai happens. If the belief degree did follow the laws of probability theory, then it would be fair to pay

    0.75⁵⁰ ≈ $0.00000057

for a bet that offers $1 if A1 × A2 × · · · × A50 happens. Notice that the odds are over a million to one, and A1 × A2 × · · · × A50 definitely happens because the real strengths of the 50 bridges range from 95 to 110 tons. All of us will be happy to bet on it. But who is willing to offer such a bet? It seems that no one does, and then the belief degree of A1 × A2 × · · · × A50 is not the product of the individual belief degrees. Hence the belief degree does not follow the laws of probability theory.
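The numbers in this argument are a one-liner to verify: if belief degrees multiplied like probabilities, the fair price for the 50-bridge bet would be 0.75⁵⁰, and the implied odds exceed a million to one.

```python
# Sketch: the fair price and implied odds if 75% belief degrees multiplied
# like probabilities across 50 bridges.

price = 0.75 ** 50
print(price)        # ≈ 5.7e-7 dollars
print(1 / price)    # ≈ 1.8 million to one
```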
It is thus concluded that the belief interpretation of probability is unacceptable. The main mistake of subjectivists is that they only justify that the
Section C.5 - Probability Theory vs Uncertainty Theory 473
belief degree meets the three axioms of probability theory, but do not check
if it meets the product probability theorem.
    T(P ∧ Q) = f(T(P), T(Q))                                     (C.4)

and then excludes uncertain measure from the start, because the function f(x, y) = x ∧ y used in uncertainty theory is not differentiable with respect to x and y. In fact, there does not exist any evidence that the truth value of a conjunction is completely determined by the truth values of the individual propositions, let alone through a twice differentiable function.
On the one hand, it is recognized that probability theory is a legitimate approach to deal with frequencies. On the other hand, probability theory cannot possibly be the unique tool for modelling indeterminacy. In fact, it has been demonstrated in this book that uncertainty theory is successful in dealing with belief degrees.
while the product uncertain measure is the minimum of the uncertain measures of the individual events, i.e.,

    M{Λ1 × Λ2} = M{Λ1} ∧ M{Λ2}.
be completely rewritten. Perhaps some fuzzists insist that {x ∈ ξ} and {x ∈ ξᶜ} are not opposite events. Here I would like to advise them not to think so, because that would contradict the fact that ξᶜ has the membership function λ(x) = 1 − μ(x).
Section C.6 - Fuzzy set theory is bad mathematics 475
Hence

    μ(x) = 0 or 1.                                               (C.17)

This result shows that the membership function μ can only be the indicator function of a crisp set. In other words, only crisp sets can simultaneously satisfy (C.7)∼(C.11). In this sense, fuzzy set theory collapses mathematically to classical set theory. That is, fuzzy set theory is nothing but classical set theory.
Furthermore, it seems that, both in theory and practice, an inclusion relation between fuzzy sets is needed. Thus fuzzy set theory also assumes a formula (Zadeh [193]),

    Pos{ξ ⊂ B} = sup_{x∈B} μ(x)                                  (C.18)

for any crisp set B. Now consider two crisp intervals [1, 2] and [2, 3]. It is completely unacceptable in the mathematical community that [1, 2] is included in [2, 3], i.e., the inclusion relation

    [1, 2] ⊂ [2, 3]

is 100% wrong. Note that [1, 2] is a special fuzzy set whose membership function is

    μ(x) = { 1, if 1 ≤ x ≤ 2;  0, otherwise. }                   (C.20)

It follows from the formula (C.18) that

    Pos{[1, 2] ⊂ [2, 3]} = sup_{2≤x≤3} μ(x) = μ(2) = 1.

That is, fuzzy set theory says that [1, 2] ⊂ [2, 3] is 100% right. Are you willing to accept this result? If not, then (C.18) is in conflict with the inclusion relation in classical set theory. In other words, nothing can simultaneously satisfy (C.7)∼(C.11) and (C.18). Therefore, fuzzy set theory is not self-consistent in mathematics and may lead to wrong results in practice.
Perhaps some fuzzists may argue that they never use possibility measure
in fuzzy set theory. Here I would like to remind them that the membership
degree µ(x) is just the possibility measure that the fuzzy set ξ contains the
point x (i.e., x belongs to ξ). Please also keep in mind that we cannot
distinguish fuzzy set from random set (Robbins [130] and Matheron [113])
and uncertain set (Liu [82]) if the underlying measures are not available.
From the above discussion, we can see that fuzzy set theory is not self-
consistent in mathematics and may lead to wrong results in practice. There-
fore, I would like to conclude that fuzzy set theory is bad mathematics. To
express this more frankly, fuzzy set theory cannot be called mathematics.
Can we improve fuzzy set theory? Yes, we can. But the change is so big
that I have to give the revision a new name called uncertain set theory. See
Chapter 8.
that is just the trapezoidal fuzzy variable (80, 90, 110, 120). Please do not argue about why I chose such a membership function, because it is not important for the focus of the debate. Based on the membership function μ and the definition of possibility measure

    Pos{ξ ∈ B} = sup_{x∈B} μ(x),                                 (C.23)
The first proposition says we are 100% sure that the bridge strength is "exactly 100 tons", neither less nor more. What a coincidence that would be! It is doubtless that the belief degree of "exactly 100 tons" is almost zero, and nobody is so naive as to expect that "exactly 100 tons" is the true bridge strength. The second proposition sounds good. The third proposition says "exactly 100 tons" and "not 100 tons" have the same possibility measure. Thus we have to regard them as "equally likely". Consider a bet: you get $1 if the bridge strength is "exactly 100 tons", and pay $1 if the bridge strength is "not 100 tons". Do you think the bet is fair? It seems that no one thinks so. Hence the conclusion (c) is unacceptable because "exactly 100 tons" is almost impossible compared with "not 100 tons". This paradox shows that indeterminate quantities like the bridge strength cannot be quantified by possibility measure, and then they are not fuzzy concepts.
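The paradox is easy to reproduce numerically from (C.23) and the trapezoidal membership (80, 90, 110, 120). In the sketch below (our own code; the grid is only a stand-in for the supremum over the uncountable set {x ≠ 100}), both "exactly 100 tons" and "not 100 tons" come out with possibility 1.

```python
# Sketch: Pos{xi = 100} and Pos{xi != 100} for the trapezoidal membership
# function (80, 90, 110, 120), using Pos{xi in B} = sup over B of mu(x).

def mu(x, a=80.0, b=90.0, c=110.0, d=120.0):
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

pos_exactly_100 = mu(100.0)
# grid approximation of sup over {x != 100}; any x in (90, 110) \ {100} gives 1
pos_not_100 = max(mu(x / 10) for x in range(700, 1300) if x != 1000)
print(pos_exactly_100, pos_not_100)  # both 1.0: "equally likely"
```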
for any events Λ1 and Λ2, no matter whether they are independent or not. Many surveys have shown that the belief degree of a union of events is usually greater than the maximum of the belief degrees of the individual events when they are not independent. This fact indicates that human beliefs do not behave like fuzziness.

Both uncertainty theory and possibility theory attempt to model belief degrees, where the former uses the tool of uncertain measure and the latter uses the tool of possibility measure. Thus they are complete competitors.
Traditionally, stochastic finance theory presumes that the stock price (in-
cluding interest rate and currency exchange rate) follows Ito’s stochastic dif-
ferential equation. Is it really reasonable? In fact, this widely accepted
presumption was challenged among others by Liu [89] via some paradoxes.
First Paradox: As an example, let us assume that the stock price Xt follows the differential equation

    dXt/dt = eXt + σXt · "noise"                                 (C.28)

where e is the log-drift, σ is the log-diffusion, and "noise" is a stochastic process. Now we take the mathematical interpretation of the "noise" term as

    "noise" = dWt/dt                                             (C.29)

where Wt is a Wiener process³. Thus the stock price Xt follows the stochastic differential equation

    dXt/dt = eXt + σXt · dWt/dt.                                 (C.30)

Note that the "noise" term

    dWt/dt ∼ N(0, 1/dt)                                          (C.31)

has an unbounded variance as dt → 0, and is therefore a questionable candidate to serve
as the “noise” term. In addition, since the right-hand part of (C.30) has
an infinite variance at any time t, the left-hand part (i.e., the instantaneous
growth rate dXt /dt of stock price) has to take an infinite variance at every
time. However, the growth rate usually has a finite variance in practice, or
at least, it is impossible to have infinite variance at every time. Thus it is
impossible that the real stock price Xt follows Ito’s stochastic differential
equation.
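The blow-up claimed in the first paradox is visible in simulation: since ΔWt ∼ N(0, Δt), the sampled "noise" ΔWt/Δt has variance Δt/Δt² = 1/Δt, which grows without bound as Δt shrinks. The sketch below is our own illustration, not the book's.

```python
import random

# Sketch: empirical variance of the sampled "noise" dW/dt, which should
# be about 1/dt and therefore blow up as dt -> 0.

def noise_variance(dt, n=200_000, seed=1):
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, dt ** 0.5) / dt for _ in range(n)]
    m = sum(samples) / n
    return sum((s - m) ** 2 for s in samples) / n

for dt in (0.1, 0.01, 0.001):
    print(dt, noise_variance(dt))   # roughly 10, 100, 1000
```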
Second Paradox: Roughly speaking, the sample path of a stochastic differ-
ential equation (C.30) is increasing with probability 0.5 and decreasing with
probability 0.5 at each time no matter what happened before. However, in
practice, when the stock price is greatly increasing at the moment, usually it
will continue to increase; when the stock price is greatly decreasing, usually
³ A stochastic process Wt is said to be a Wiener process if (i) W0 = 0 and almost all sample paths are continuous (but non-Lipschitz), (ii) Wt has stationary and independent increments, and (iii) every increment W_{s+t} − W_s is a normal random variable with expected value 0 and variance t.
it will continue to decrease. This means that the stock price in the real world
does not behave like Ito’s stochastic differential equation.
Third Paradox: It follows from the stochastic differential equation (C.30) that Xt is a geometric Wiener process. Consider 10,000 observed samples of the increment ΔWt of the form

    A, A, · · · , A (9,900 times), B, C, · · · , Z (100 samples).    (C.37)

Nobody can believe that those 10,000 samples follow a normal probability distribution with expected value 0 and variance Δt. This fact is in contradiction with the property of the Wiener process that the increment ΔWt is a normal random variable. Therefore, the real stock price Xt does not follow the stochastic differential equation.
Perhaps some people think that the stock price does behave like a geometric Wiener process (or an Ornstein-Uhlenbeck process) in macroscopy although they recognize the paradox in microscopy. However, as the very core of stochastic finance theory, Ito's calculus is built on the microscopic structure (i.e., the differential dWt) of the Wiener process rather than its macroscopic structure. More precisely, Ito's calculus depends on the presumption that dWt is a normal random variable with expected value 0 and variance dt. This unreasonable presumption is what causes the second-order term in Ito's formula,

    dXt = (∂h/∂t)(t, Wt) dt + (∂h/∂w)(t, Wt) dWt + (1/2)(∂²h/∂w²)(t, Wt) dt.   (C.38)
Figure C.2: There does not exist any continuous probability distribution (curve) that can approximate the frequency (histogram) of ΔWt. Hence it is impossible that the real stock price Xt follows any Ito stochastic differential equation.
[15] Chen XW, Li XF, and Ralescu DA, A note on uncertain sequence, Inter-
national Journal of Uncertainty, Fuzziness and Knowledge-Based Systems,
Vol.22, No.2, 305-314, 2014.
[16] Chen XW, Uncertain calculus with finite variation processes, Soft Computing,
Vol.19, No.10, 2905-2912, 2015.
[17] Chen XW, and Gao J, Two-factor term structure model with uncertain
volatility risk, Soft Computing, to be published.
[18] Chen XW, Theory of Uncertain Finance, http://orsc.edu.cn/chen/tuf.pdf.
[19] Cox RT, Probability, frequency and reasonable expectation, American Jour-
nal of Physics, Vol.14, 1-13, 1946.
[20] Dai W, and Chen XW, Entropy of function of uncertain variables, Mathe-
matical and Computer Modelling, Vol.55, Nos.3-4, 754-760, 2012.
[21] Dai W, Quadratic entropy of uncertain variables, Soft Computing, to be pub-
lished.
[22] Dantzig GB, Linear programming under uncertainty, Management Science,
Vol.1, 197-206, 1955.
[23] de Finetti B, La prévision: ses lois logiques, ses sources subjectives, Annales
de l’Institut Henri Poincaré, Vol.7, 1-68, 1937.
[24] de Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[25] Dijkstra EW, A note on two problems in connexion with graphs, Numerische Mathematik, Vol.1, No.1, 269-271, 1959.
[26] Ding SB, Uncertain minimum cost flow problem, Soft Computing, Vol.18,
No.11, 2201-2207, 2014.
[27] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[28] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.
[29] Erdős P, and Rényi A, On random graphs, Publicationes Mathematicae, Vol.6,
290-297, 1959.
[30] Frank H, and Hakimi SL, Probabilistic flows through a communication net-
work, IEEE Transactions on Circuit Theory, Vol.12, 413-414, 1965.
[31] Gao J, and Yao K, Some concepts and theorems of uncertain random process,
International Journal of Intelligent Systems, Vol.30, No.1, 52-65, 2015.
[32] Gao R, Milne method for solving uncertain differential equations, Applied
Mathematics and Computation, Vol.274, 774-785, 2016.
[33] Gao R, and Sheng YH, Law of large numbers for uncertain random variables
with different chance distributions, Journal of Intelligent & Fuzzy Systems,
Vol.31, No.3, 1227-1234, 2016.
[34] Gao R, and Yao K, Importance index of component in uncertain reliability
system, Journal of Uncertainty Analysis and Applications, Vol.4, Article 7,
2016.
Bibliography 485
[53] Guo HY, Wang XS, Wang LL, and Chen D, Delphi method for estimating
membership function of uncertain set, Journal of Uncertainty Analysis and
Applications, Vol.4, Article 3, 2016.
[54] Han SW, Peng ZX, and Wang SQ, The maximum flow problem of uncertain
network, Information Sciences, Vol.265, 167-175, 2014.
[55] Hou YC, Subadditivity of chance measure, Journal of Uncertainty Analysis
and Applications, Vol.2, Article 14, 2014.
[56] Ito K, Stochastic integral, Proceedings of the Imperial Academy, Tokyo,
Vol.20, No.8, 519-524, 1944.
[57] Ito K, On stochastic differential equations, Memoirs of the American Math-
ematical Society, No.4, 1-51, 1951.
[58] Iwamura K, and Kageyama M, Exact construction of Liu process, Applied
Mathematical Sciences, Vol.6, No.58, 2871-2880, 2012.
[59] Iwamura K, and Xu YL, Estimating the variance of the square of canonical
process, Applied Mathematical Sciences, Vol.7, No.75, 3731-3738, 2013.
[60] Jaynes ET, Information theory and statistical mechanics, Physical Review,
Vol.106, No.4, 620-630, 1957.
[61] Jaynes ET, Probability Theory: The Logic of Science, Cambridge University
Press, 2003.
[62] Ji XY, and Zhou J, Option pricing for an uncertain stock model with jumps,
Soft Computing, Vol.19, No.11, 3323-3329, 2015.
[63] Jia LF, and Dai W, Uncertain forced vibration equation of spring mass sys-
tem, Technical Report, 2017.
[64] Jiao DY, and Yao K, An interest rate model in uncertain environment, Soft
Computing, Vol.19, No.3, 775-780, 2015.
[65] Kahneman D, and Tversky A, Prospect theory: An analysis of decision under
risk, Econometrica, Vol.47, No.2, 263-292, 1979.
[66] Ke H, Su TY, and Ni YD, Uncertain random multilevel programming with
application to product control problem, Soft Computing, Vol.19, No.6, 1739-
1746, 2015.
[67] Ke H, and Yao K, Block replacement policy in uncertain environment, Reli-
ability Engineering & System Safety, Vol.148, 119-124, 2016.
[68] Keynes JM, The General Theory of Employment, Interest, and Money, Har-
court, New York, 1936.
[69] Knight FH, Risk, Uncertainty, and Profit, Houghton Mifflin, Boston, 1921.
[70] Kolmogorov AN, Grundbegriffe der Wahrscheinlichkeitsrechnung, Julius
Springer, Berlin, 1933.
[71] Li SG, Peng J, and Zhang B, Multifactor uncertain differential equation,
Journal of Uncertainty Analysis and Applications, Vol.3, Article 7, 2015.
[72] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[73] Lio W, and Liu B, Uncertain data envelopment analysis with imprecisely
observed inputs and outputs, Fuzzy Optimization and Decision Making, to
be published.
[74] Lio W, and Liu B, Residual and confidence interval for uncertain regression
model with imprecise observations, Technical Report, 2017.
[75] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Hei-
delberg, 2002.
[76] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[77] Liu B, Uncertainty Theory, 2nd edn, Springer-Verlag, Berlin, 2007.
[78] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Un-
certain Systems, Vol.2, No.1, 3-16, 2008.
[79] Liu B, Theory and Practice of Uncertain Programming, 2nd edn, Springer-
Verlag, Berlin, 2009.
[80] Liu B, Some research problems in uncertainty theory, Journal of Uncertain
Systems, Vol.3, No.1, 3-10, 2009.
[81] Liu B, Uncertain entailment and modus ponens in the framework of uncertain
logic, Journal of Uncertain Systems, Vol.3, No.4, 243-251, 2009.
[82] Liu B, Uncertain set theory and uncertain inference rule with application to
uncertain control, Journal of Uncertain Systems, Vol.4, No.2, 83-98, 2010.
[83] Liu B, Uncertain risk analysis and uncertain reliability analysis, Journal of
Uncertain Systems, Vol.4, No.3, 163-170, 2010.
[84] Liu B, Uncertainty Theory: A Branch of Mathematics for Modeling Human
Uncertainty, Springer-Verlag, Berlin, 2010.
[85] Liu B, Uncertain logic for modeling human language, Journal of Uncertain
Systems, Vol.5, No.1, 3-20, 2011.
[86] Liu B, Why is there a need for uncertainty theory? Journal of Uncertain
Systems, Vol.6, No.1, 3-10, 2012.
[87] Liu B, and Yao K, Uncertain integral with respect to multiple canonical
processes, Journal of Uncertain Systems, Vol.6, No.4, 250-255, 2012.
[88] Liu B, Membership functions and operational law of uncertain sets, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 387-410, 2012.
[89] Liu B, Toward uncertain finance theory, Journal of Uncertainty Analysis and
Applications, Vol.1, Article 1, 2013.
[90] Liu B, Extreme value theorems of uncertain process with application to in-
surance risk model, Soft Computing, Vol.17, No.4, 549-556, 2013.
[91] Liu B, A new definition of independence of uncertain sets, Fuzzy Optimization
and Decision Making, Vol.12, No.4, 451-461, 2013.
[92] Liu B, Polyrectangular theorem and independence of uncertain vectors, Jour-
nal of Uncertainty Analysis and Applications, Vol.1, Article 9, 2013.
[93] Liu B, Uncertain random graph and uncertain random network, Journal of
Uncertain Systems, Vol.8, No.1, 3-12, 2014.
[112] Liu YH, and Ralescu DA, Expected loss of uncertain random systems, Soft
Computing, to be published.
[113] Matheron G, Random Sets and Integral Geometry, Wiley, New York, 1975.
[114] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[115] Morgan JP, RiskMetrics – Technical Document, 4th edn, Morgan Guaranty
Trust Companies, New York, 1996.
[116] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[117] Nejad ZM, and Ghaffari-Hadigheh A, A novel DEA model based on uncer-
tainty theory, Annals of Operations Research, to be published.
[118] Nilsson NJ, Probabilistic logic, Artificial Intelligence, Vol.28, 71-87, 1986.
[119] Ning YF, Ke H, and Fu ZF, Triangular entropy of uncertain variables with
application to portfolio selection, Soft Computing, Vol.19, No.8, 2203-2209,
2015.
[120] Peng J, and Yao K, A new option pricing model for stocks in uncertainty
markets, International Journal of Operations Research, Vol.8, No.2, 18-26,
2011.
[121] Peng J, Risk metrics of loss function for uncertain system, Fuzzy Optimization
and Decision Making, Vol.12, No.1, 53-64, 2013.
[122] Peng ZX, and Iwamura K, A sufficient and necessary condition of uncertainty
distribution, Journal of Interdisciplinary Mathematics, Vol.13, No.3, 277-285,
2010.
[123] Peng ZX, and Iwamura K, Some properties of product uncertain measure,
Journal of Uncertain Systems, Vol.6, No.4, 263-269, 2012.
[124] Peng ZX, and Chen XW, Uncertain systems are universal approximators,
Journal of Uncertainty Analysis and Applications, Vol.2, Article 13, 2014.
[125] Pugsley AG, A philosophy of strength factors, Aircraft Engineering and
Aerospace Technology, Vol.16, No.1, 18-19, 1944.
[126] Qin ZF, and Gao X, Fractional Liu process with application to finance, Math-
ematical and Computer Modelling, Vol.50, Nos.9-10, 1538-1543, 2009.
[127] Qin ZF, Uncertain random goal programming, Fuzzy Optimization and De-
cision Making, to be published.
[128] Ramsey FP, Truth and probability, In Foundations of Mathematics and Other
Logical Essays, Humanities Press, New York, 1931.
[129] Reichenbach H, The Theory of Probability, University of California Press,
Berkeley, 1948.
[130] Robbins HE, On the measure of a random set, Annals of Mathematical Statis-
tics, Vol.15, No.1, 70-74, 1944.
[131] Roy AD, Safety-first and the holding of assets, Econometrica, Vol.20, 431-449,
1952.
[132] Samuelson PA, Rational theory of warrant pricing, Industrial Management
Review, Vol.6, 13-31, 1965.
[133] Savage LJ, The Foundations of Statistics, Wiley, New York, 1954.
[134] Savage LJ, The Foundations of Statistical Inference, Methuen, London, 1962.
[135] Shannon CE, The Mathematical Theory of Communication, The University
of Illinois Press, Urbana, 1949.
[136] Shen YY, and Yao K, A mean-reverting currency model in an uncertain
environment, Soft Computing, Vol.20, No.10, 4131-4138, 2016.
[137] Sheng YH, and Wang CG, Stability in the p-th moment for uncertain differen-
tial equation, Journal of Intelligent & Fuzzy Systems, Vol.26, No.3, 1263-1271,
2014.
[138] Sheng YH, and Yao K, Some formulas of variance of uncertain random vari-
able, Journal of Uncertainty Analysis and Applications, Vol.2, Article 12,
2014.
[139] Sheng YH, and Gao J, Chance distribution of the maximum flow of uncertain
random network, Journal of Uncertainty Analysis and Applications, Vol.2,
Article 15, 2014.
[140] Sheng YH, and Kar S, Some results of moments of uncertain variable through
inverse uncertainty distribution, Fuzzy Optimization and Decision Making,
Vol.14, No.1, 57-76, 2015.
[141] Sheng YH, and Gao J, Exponential stability of uncertain differential equation,
Soft Computing, Vol.20, No.9, 3673-3678, 2016.
[142] Sheng YH, Qin ZF, and Shi G, Minimum spanning tree problem of uncertain
random network, Journal of Intelligent Manufacturing, Vol.28, No.3, 565-574,
2017.
[143] Sheng YH, Gao R, and Zhang ZQ, Uncertain population model with age-
structure, Journal of Intelligent & Fuzzy Systems, Vol.33, No.2, 853-858,
2017.
[144] Sun JJ, and Chen XW, Asian option pricing formula for uncertain financial
market, Journal of Uncertainty Analysis and Applications, Vol.3, Article 11,
2015.
[145] Tian JF, Inequalities and mathematical properties of uncertain variables,
Fuzzy Optimization and Decision Making, Vol.10, No.4, 357-368, 2011.
[146] Venn J, The Logic of Chance, Macmillan, London, 1866.
[147] von Mises R, Wahrscheinlichkeit, Statistik und Wahrheit, Springer, Berlin,
1928.
[148] von Mises R, Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statistik
und Theoretischen Physik, Franz Deuticke, Leipzig and Vienna, 1931.
[149] Wang X, Ning YF, Moughal TA, and Chen XM, Adams-Simpson method for
solving uncertain differential equation, Applied Mathematics and Computa-
tion, Vol.271, 209-219, 2015.
[150] Wang X, and Ning YF, An uncertain currency model with floating interest
rates, Soft Computing, Vol.21, No.22, 6739-6754, 2017.
[151] Wang XS, Gao ZC, and Guo HY, Uncertain hypothesis testing for two ex-
perts’ empirical data, Mathematical and Computer Modelling, Vol.55, 1478-
1482, 2012.
[152] Wang XS, Gao ZC, and Guo HY, Delphi method for estimating uncer-
tainty distributions, Information: An International Interdisciplinary Journal,
Vol.15, No.2, 449-460, 2012.
[153] Wang XS, and Ha MH, Quadratic entropy of uncertain sets, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 99-109, 2013.
[154] Wang XS, and Peng ZX, Method of moments for estimating uncertainty dis-
tributions, Journal of Uncertainty Analysis and Applications, Vol.2, Article
5, 2014.
[155] Wen ML, and Kang R, Reliability analysis in uncertain random system, Fuzzy
Optimization and Decision Making, Vol.15, No.4, 491-506, 2016.
[156] Wen ML, Zhang QY, Kang R, and Yang Y, Some new ranking criteria in data
envelopment analysis under uncertain environment, Computers & Industrial
Engineering, Vol.110, 498-504, 2017.
[157] Wiener N, Differential space, Journal of Mathematical Physics, Vol.2, 131-
174, 1923.
[158] Yang XF, and Gao J, Uncertain differential games with application to capi-
talism, Journal of Uncertainty Analysis and Applications, Vol.1, Article 17,
2013.
[159] Yang XF, and Gao J, Some results of moments of uncertain set, Journal of
Intelligent & Fuzzy Systems, Vol.28, No.6, 2433-2442, 2015.
[160] Yang XF, and Ralescu DA, Adams method for solving uncertain differential
equations, Applied Mathematics and Computation, Vol.270, 993-1003, 2015.
[161] Yang XF, and Shen YY, Runge-Kutta method for solving uncertain differ-
ential equations, Journal of Uncertainty Analysis and Applications, Vol.3,
Article 17, 2015.
[162] Yang XF, and Gao J, Linear-quadratic uncertain differential game with appli-
cation to resource extraction problem, IEEE Transactions on Fuzzy Systems,
Vol.24, No.4, 819-826, 2016.
[163] Yang XF, Ni YD, and Zhang YS, Stability in inverse distribution for uncertain
differential equations, Journal of Intelligent & Fuzzy Systems, Vol.32, No.3,
2051-2059, 2017.
[164] Yang XF, and Yao K, Uncertain partial differential equation with application
to heat conduction, Fuzzy Optimization and Decision Making, Vol.16, No.3,
379-403, 2017.
[165] Yang XF, Gao J, and Ni YD, Resolution principle in uncertain random envi-
ronment, IEEE Transactions on Fuzzy Systems, to be published.
[166] Yang XF, and Liu B, Uncertain time series analysis with imprecise observa-
tions, Technical Report, 2017.
[167] Yang XH, On comonotonic functions of uncertain variables, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 89-98, 2013.
[168] Yao K, Uncertain calculus with renewal process, Fuzzy Optimization and
Decision Making, Vol.11, No.3, 285-297, 2012.
[169] Yao K, and Li X, Uncertain alternating renewal process and its application,
IEEE Transactions on Fuzzy Systems, Vol.20, No.6, 1154-1160, 2012.
[170] Yao K, Gao J, and Gao Y, Some stability theorems of uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.12, No.1, 3-13, 2013.
[171] Yao K, Extreme values and integral of solution of uncertain differential equa-
tion, Journal of Uncertainty Analysis and Applications, Vol.1, Article 2, 2013.
[172] Yao K, and Ralescu DA, Age replacement policy in uncertain environment,
Iranian Journal of Fuzzy Systems, Vol.10, No.2, 29-39, 2013.
[173] Yao K, and Chen XW, A numerical method for solving uncertain differential
equations, Journal of Intelligent & Fuzzy Systems, Vol.25, No.3, 825-832,
2013.
[174] Yao K, A type of nonlinear uncertain differential equations with analytic
solution, Journal of Uncertainty Analysis and Applications, Vol.1, Article 8,
2013.
[175] Yao K, and Ke H, Entropy operator for membership function of uncertain
set, Applied Mathematics and Computation, Vol.242, 898-906, 2014.
[176] Yao K, A no-arbitrage theorem for uncertain stock model, Fuzzy Optimization
and Decision Making, Vol.14, No.2, 227-242, 2015.
[177] Yao K, Ke H, and Sheng YH, Stability in mean for uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.14, No.3, 365-379,
2015.
[178] Yao K, A formula to calculate the variance of uncertain variable, Soft Com-
puting, Vol.19, No.10, 2947-2953, 2015.
[179] Yao K, and Gao J, Uncertain random alternating renewal process with appli-
cation to interval availability, IEEE Transactions on Fuzzy Systems, Vol.23,
No.5, 1333-1342, 2015.
[180] Yao K, Inclusion relationship of uncertain sets, Journal of Uncertainty Anal-
ysis and Applications, Vol.3, Article 13, 2015.
[181] Yao K, Uncertain contour process and its application in stock model with
floating interest rate, Fuzzy Optimization and Decision Making, Vol.14, No.4,
399-424, 2015.
[182] Yao K, and Gao J, Law of large numbers for uncertain random variables,
IEEE Transactions on Fuzzy Systems, Vol.24, No.3, 615-621, 2016.
[183] Yao K, and Zhou J, Uncertain random renewal reward process with appli-
cation to block replacement policy, IEEE Transactions on Fuzzy Systems,
Vol.24, No.6, 1637-1647, 2016.
[184] Yao K, Uncertain Differential Equations, Springer-Verlag, Berlin, 2016.
[185] Yao K, Ruin time of uncertain insurance risk process, IEEE Transactions on
Fuzzy Systems, to be published.
[186] Yao K, Conditional uncertain set and conditional membership function, Fuzzy
Optimization and Decision Making, to be published.
[187] Yao K, and Liu B, Uncertain regression analysis: An approach for imprecise
observations, Soft Computing, to be published.
[188] Yao K, and Zhou J, Renewal reward process with uncertain interarrival times
and random rewards, IEEE Transactions on Fuzzy Systems, to be published.
[189] Yao K, Extreme value and time integral of uncertain independent increment
process, http://orsc.edu.cn/online/130302.pdf.
[190] You C, Some convergence theorems of uncertain sequences, Mathematical and
Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
[191] Yu XC, A stock model with jumps for uncertain markets, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.20, No.3, 421-
432, 2012.
[192] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[193] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[194] Zadeh LA, A theory of approximate reasoning, In: J Hayes, D Michie and
RM Thrall, eds., Mathematical Frontiers of the Social and Policy Sciences,
Westview Press, Boulder, Colorado, 69-129, 1979.
[195] Zeng ZG, Wen ML, and Kang R, Belief reliability: A new metrics for products'
reliability, Fuzzy Optimization and Decision Making, Vol.12, No.1, 15-27, 2013.
[196] Zeng ZG, Kang R, Wen ML, and Zio E, Uncertainty theory as a basis for
belief reliability, Information Sciences, Vol.429, 26-36, 2018.
[197] Zhang B, and Peng J, Euler index in uncertain graph, Applied Mathematics
and Computation, Vol.218, No.20, 10279-10288, 2012.
[198] Zhang B, Peng J, and Li SG, Euler index of uncertain random graph, Inter-
national Journal of Computer Mathematics, Vol.94, No.2, 217-229, 2017.
[199] Zhang CX, and Guo CR, Uncertain block replacement policy with no re-
placement at failure, Journal of Intelligent & Fuzzy Systems, Vol.27, No.4,
1991-1997, 2014.
[200] Zhang XF, Ning YF, and Meng GW, Delayed renewal process with uncertain
interarrival times, Fuzzy Optimization and Decision Making, Vol.12, No.1,
79-87, 2013.
[201] Zhang XF, and Li X, A semantic study of the first-order predicate logic with
uncertainty involved, Fuzzy Optimization and Decision Making, Vol.13, No.4,
357-367, 2014.
[202] Zhang Y, Gao J, and Huang ZY, Hamming method for solving uncertain
differential equations, Applied Mathematics and Computation, Vol.313, 331-
341, 2017.
[203] Zhang ZM, Some discussions on uncertain measure, Fuzzy Optimization and
Decision Making, Vol.10, No.1, 31-43, 2011.
[204] Zhang ZQ, and Liu WQ, Geometric average Asian option pricing for uncertain
financial market, Journal of Uncertain Systems, Vol.8, No.4, 317-320, 2014.
[205] Zhang ZQ, Ralescu DA, and Liu WQ, Valuation of interest rate ceiling and
floor in uncertain financial market, Fuzzy Optimization and Decision Making,
Vol.15, No.2, 139-154, 2016.
[206] Zhou J, Yang F, and Wang K, Multi-objective optimization in uncertain ran-
dom environments, Fuzzy Optimization and Decision Making, Vol.13, No.4,
397-413, 2014.
[207] Zhu Y, Uncertain optimal control with application to a portfolio selection
model, Cybernetics and Systems, Vol.41, No.7, 535-547, 2010.
List of Frequently Used Symbols
M uncertain measure
(Γ, L, M) uncertainty space
ξ, η, τ uncertain variables
Φ, Ψ, Υ uncertainty distributions
Φ−1 , Ψ−1 , Υ−1 inverse uncertainty distributions
µ, ν, λ membership functions
µ−1, ν−1, λ−1 inverse membership functions
L(a, b) linear uncertain variable
Z(a, b, c) zigzag uncertain variable
N (e, σ) normal uncertain variable
LOGN (e, σ) lognormal uncertain variable
(a, b, c) triangular uncertain set
(a, b, c, d) trapezoidal uncertain set
E expected value
V variance
H entropy
Xt , Yt , Zt uncertain processes
Ct Liu process
Nt renewal process
Q uncertain quantifier
(Q, S, P ) uncertain proposition
∀ universal quantifier
∃ existential quantifier
∨ maximum operator
∧ minimum operator
¬ negation symbol
Pr probability measure
(Ω, A, Pr) probability space
Ch chance measure
k-max the kth largest value
k-min the kth smallest value
∅ the empty set
ℜ the set of real numbers
iid independent and identically distributed
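The symbol list names four parametric uncertain variables: linear L(a, b), zigzag Z(a, b, c), normal N(e, σ), and lognormal LOGN(e, σ). As a quick computational reference, the following is a minimal Python sketch of their uncertainty distributions Φ(x), following Liu's standard formulas; the function names are illustrative, not from the book.

```python
import math

def linear_dist(x, a, b):
    """Uncertainty distribution of a linear uncertain variable L(a, b)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def zigzag_dist(x, a, b, c):
    """Uncertainty distribution of a zigzag uncertain variable Z(a, b, c)."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2 * (b - a))
    if x <= c:
        return (x + c - 2 * b) / (2 * (c - b))
    return 1.0

def normal_dist(x, e, sigma):
    """Uncertainty distribution of a normal uncertain variable N(e, sigma)."""
    return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3) * sigma)))

def lognormal_dist(x, e, sigma):
    """Uncertainty distribution of a lognormal uncertain variable LOGN(e, sigma)."""
    if x <= 0:
        return 0.0
    return 1.0 / (1.0 + math.exp(math.pi * (e - math.log(x)) / (math.sqrt(3) * sigma)))
```

Each distribution equals 0.5 at its center of symmetry: the midpoint (a + b)/2 for L(a, b), the kink b for Z(a, b, c), and x = e (respectively x = e^e for the lognormal) for the normal family.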
Index
[Figure: branching diagram with two limbs labeled "Probability" and "Uncertainty"]