
Uncertainty Theory

Fifth Edition

Baoding Liu
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
liu@tsinghua.edu.cn
http://orsc.edu.cn/liu

http://orsc.edu.cn/liu/ut.pdf
5th Edition © 2017 by Uncertainty Theory Laboratory
4th Edition © 2015 by Springer-Verlag Berlin
3rd Edition © 2010 by Springer-Verlag Berlin
2nd Edition © 2007 by Springer-Verlag Berlin
1st Edition © 2004 by Springer-Verlag Berlin
Contents

Preface

0 Introduction
  0.1 Indeterminacy
  0.2 Frequency
  0.3 Belief Degree
  0.4 Summary

1 Uncertain Measure
  1.1 Measurable Space
  1.2 Uncertain Measure
  1.3 Uncertainty Space
  1.4 Product Uncertain Measure
  1.5 Independence
  1.6 Polyrectangular Theorem
  1.7 Conditional Uncertain Measure
  1.8 Bibliographic Notes

2 Uncertain Variable
  2.1 Uncertain Variable
  2.2 Uncertainty Distribution
  2.3 Independence
  2.4 Operational Law: Inverse Distribution
  2.5 Operational Law: Distribution
  2.6 Operational Law: Boolean System
  2.7 Expected Value
  2.8 Variance
  2.9 Moment
  2.10 Distance
  2.11 Entropy
  2.12 Conditional Uncertainty Distribution
  2.13 Uncertain Sequence
  2.14 Uncertain Vector
  2.15 Uncertain Matrix
  2.16 Bibliographic Notes

3 Uncertain Programming
  3.1 Uncertain Programming
  3.2 Numerical Method
  3.3 Machine Scheduling Problem
  3.4 Vehicle Routing Problem
  3.5 Project Scheduling Problem
  3.6 Uncertain Multiobjective Programming
  3.7 Uncertain Goal Programming
  3.8 Uncertain Multilevel Programming
  3.9 Bibliographic Notes

4 Uncertain Risk Analysis
  4.1 Loss Function
  4.2 Risk Index
  4.3 Series System
  4.4 Parallel System
  4.5 k-out-of-n System
  4.6 Standby System
  4.7 Structural Risk Analysis
  4.8 Investment Risk Analysis
  4.9 Value-at-Risk
  4.10 Expected Loss
  4.11 Hazard Distribution
  4.12 Bibliographic Notes

5 Uncertain Reliability Analysis
  5.1 Structure Function
  5.2 Reliability Index
  5.3 Series System
  5.4 Parallel System
  5.5 k-out-of-n System
  5.6 General System
  5.7 Bibliographic Notes

6 Uncertain Propositional Logic
  6.1 Uncertain Proposition
  6.2 Truth Value
  6.3 Chen-Ralescu Theorem
  6.4 Boolean System Calculator
  6.5 Uncertain Predicate Logic
  6.6 Bibliographic Notes

7 Uncertain Entailment
  7.1 Uncertain Entailment Model
  7.2 Uncertain Modus Ponens
  7.3 Uncertain Modus Tollens
  7.4 Uncertain Hypothetical Syllogism
  7.5 Bibliographic Notes

8 Uncertain Set
  8.1 Uncertain Set
  8.2 Membership Function
  8.3 Independence
  8.4 Set Operational Law
  8.5 Arithmetic Operational Law
  8.6 Inclusion Relation
  8.7 Expected Value
  8.8 Variance
  8.9 Distance
  8.10 Entropy
  8.11 Conditional Membership Function
  8.12 Bibliographic Notes

9 Uncertain Logic
  9.1 Individual Feature Data
  9.2 Uncertain Quantifier
  9.3 Uncertain Subject
  9.4 Uncertain Predicate
  9.5 Uncertain Proposition
  9.6 Truth Value
  9.7 Linguistic Summarizer
  9.8 Bibliographic Notes

10 Uncertain Inference
  10.1 Uncertain Inference Rule
  10.2 Uncertain System
  10.3 Uncertain Control
  10.4 Inverted Pendulum
  10.5 Bibliographic Notes

11 Uncertain Process
  11.1 Uncertain Process
  11.2 Uncertainty Distribution
  11.3 Independence and Operational Law
  11.4 Independent Increment Process
  11.5 Extreme Value Theorem
  11.6 First Hitting Time
  11.7 Time Integral
  11.8 Stationary Increment Process
  11.9 Bibliographic Notes

12 Uncertain Renewal Process
  12.1 Uncertain Renewal Process
  12.2 Block Replacement Policy
  12.3 Renewal Reward Process
  12.4 Uncertain Insurance Model
  12.5 Age Replacement Policy
  12.6 Alternating Renewal Process
  12.7 Bibliographic Notes

13 Uncertain Calculus
  13.1 Liu Process
  13.2 Liu Integral
  13.3 Fundamental Theorem
  13.4 Chain Rule
  13.5 Change of Variables
  13.6 Integration by Parts
  13.7 Bibliographic Notes

14 Uncertain Differential Equation
  14.1 Uncertain Differential Equation
  14.2 Analytic Methods
  14.3 Existence and Uniqueness
  14.4 Stability
  14.5 α-Path
  14.6 Yao-Chen Formula
  14.7 Numerical Methods
  14.8 Bibliographic Notes

15 Uncertain Finance
  15.1 Uncertain Stock Model
  15.2 European Options
  15.3 American Options
  15.4 Asian Options
  15.5 General Stock Model
  15.6 Multifactor Stock Model
  15.7 Uncertain Interest Rate Model
  15.8 Uncertain Currency Model
  15.9 Bibliographic Notes

16 Uncertain Statistics
  16.1 Expert’s Experimental Data
  16.2 Questionnaire Survey
  16.3 Determining Uncertainty Distribution
  16.4 Determining Membership Function
  16.5 Uncertain Regression Analysis
  16.6 Uncertain Time Series Analysis
  16.7 Bibliographic Notes

A Uncertain Random Variable
  A.1 Chance Measure
  A.2 Uncertain Random Variable
  A.3 Chance Distribution
  A.4 Operational Law
  A.5 Expected Value
  A.6 Variance
  A.7 Law of Large Numbers
  A.8 Uncertain Random Programming
  A.9 Uncertain Random Risk Analysis
  A.10 Uncertain Random Reliability Analysis
  A.11 Uncertain Random Graph
  A.12 Uncertain Random Network
  A.13 Uncertain Random Process
  A.14 Bibliographic Notes

B Urn Problems
  B.1 Urn Problems
  B.2 Ellsberg Experiment

C Frequently Asked Questions
  C.1 What is the meaning that an object follows the laws of probability theory?
  C.2 Why does frequency follow the laws of probability theory?
  C.3 Why is probability theory not suitable for modelling belief degree?
  C.4 What goes wrong with Cox’s theorem?
  C.5 What is the difference between probability theory and uncertainty theory?
  C.6 Why do I think fuzzy set theory is bad mathematics?
  C.7 Why is fuzzy variable not suitable for modelling indeterminate quantity?
  C.8 What is the difference between uncertainty theory and possibility theory?
  C.9 Why is stochastic differential equation not suitable for modelling stock price?
  C.10 In what situations should we use uncertainty theory?
  C.11 How did “uncertainty” evolve over the past 100 years?
  C.12 How can we distinguish between randomness and uncertainty in practice?

Bibliography

List of Frequently Used Symbols

Index
Preface

When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that belief degrees should be modeled by subjective probability or fuzzy set theory. However, this is usually inappropriate because both may lead to counterintuitive results in this case. In order to rationally deal with personal belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of mathematics.

Uncertain Measure

The most fundamental concept is uncertain measure, which is a type of set function satisfying the axioms of uncertainty theory. It is used to indicate the belief degree that an uncertain event may happen. Chapter 1 will introduce the normality, duality, subadditivity and product axioms. From those four axioms, this chapter will also present uncertain measure, product uncertain measure, and conditional uncertain measure.

Uncertain Variable

An uncertain variable is a measurable function from an uncertainty space to the set of real numbers. It is used to represent quantities with uncertainty. Chapter 2 is devoted to uncertain variable, uncertainty distribution, independence, operational law, expected value, variance, moments, distance, entropy, conditional uncertainty distribution, uncertain sequence, uncertain vector, and uncertain matrix.

Uncertain Programming

Uncertain programming is a type of mathematical programming involving uncertain variables. Chapter 3 will provide a type of uncertain programming model with applications to the machine scheduling problem, vehicle routing problem, and project scheduling problem. In addition, uncertain multiobjective programming, uncertain goal programming and uncertain multilevel programming are also documented.

Uncertain Risk Analysis


The term risk has been used in different ways in the literature. In this book the risk is defined as the accidental loss plus the uncertain measure of such loss, and a risk index is defined as the uncertain measure that some specified loss occurs. Chapter 4 will introduce uncertain risk analysis, a tool to quantify risk via uncertainty theory. As applications of uncertain risk analysis, Chapter 4 will also discuss structural risk analysis and investment risk analysis.

Uncertain Reliability Analysis


The reliability index is defined as the uncertain measure that some system is working. Chapter 5 will introduce uncertain reliability analysis, a tool to deal with system reliability via uncertainty theory.

Uncertain Propositional Logic


Uncertain propositional logic is a generalization of propositional logic in which every proposition is abstracted into a Boolean uncertain variable and the truth value is defined as the uncertain measure that the proposition is true. Chapter 6 will present uncertain propositional logic and uncertain predicate logic. In addition, uncertain entailment is a methodology for determining the truth value of an uncertain proposition via the maximum uncertainty principle when the truth values of other uncertain propositions are given. Chapter 7 will discuss an uncertain entailment model from which uncertain modus ponens, uncertain modus tollens and uncertain hypothetical syllogism are deduced.

Uncertain Set
An uncertain set is a set-valued function on an uncertainty space, and attempts to model unsharp concepts like “young”, “tall”, “warm”, and “most”. The main difference between uncertain set and uncertain variable is that the former takes set values while the latter takes point values. Uncertain set theory will be introduced in Chapter 8.

Uncertain Logic
Some knowledge in the human brain is actually an uncertain set. This fact encourages us to design an uncertain logic, a methodology for calculating the truth values of uncertain propositions via uncertain set theory. Uncertain logic may provide a flexible means for extracting a linguistic summary from a collection of raw data. Chapter 9 will be devoted to uncertain logic and the linguistic summarizer.

Uncertain Inference

Uncertain inference is a process of deriving consequences from human knowledge via uncertain set theory. Chapter 10 will present a set of uncertain inference rules, uncertain system, and uncertain control with an application to an inverted pendulum system.

Uncertain Process

An uncertain process is essentially a sequence of uncertain variables indexed by time. Thus an uncertain process is usually used to model uncertain phenomena that vary with time. Chapter 11 is devoted to the basic concepts of uncertain process and uncertainty distribution. In addition, the extreme value theorem, first hitting time and time integral of uncertain processes are also introduced. Chapter 12 deals with uncertain renewal process, renewal reward process, and alternating renewal process. Chapter 12 also provides block replacement policy, age replacement policy, and an uncertain insurance model.

Uncertain Calculus

Uncertain calculus is a branch of mathematics that deals with differentiation and integration of uncertain processes. Chapter 13 will introduce the Liu process, a stationary independent increment process whose increments are normal uncertain variables, and discuss the Liu integral, a type of uncertain integral with respect to the Liu process. In addition, the fundamental theorem of uncertain calculus will be proved in this chapter, from which the techniques of chain rule, change of variables, and integration by parts are also derived.

Uncertain Differential Equation

An uncertain differential equation is a type of differential equation involving uncertain processes. Chapter 14 will discuss the existence, uniqueness and stability of solutions of uncertain differential equations, and will introduce the Yao-Chen formula, which represents the solution of an uncertain differential equation by a family of solutions of ordinary differential equations. On the basis of this formula, some formulas to calculate the extreme value, first hitting time, and time integral of the solution are provided. Furthermore, some numerical methods for solving general uncertain differential equations are designed.

Uncertain Finance

As applications of uncertain differential equations, Chapter 15 will discuss the uncertain stock model, uncertain interest rate model, and uncertain currency model.

Uncertain Statistics
Uncertain statistics is a methodology for collecting and interpreting expert’s experimental data by uncertainty theory. Chapter 16 will present a questionnaire survey for collecting expert’s experimental data. In order to determine uncertainty distributions and membership functions from those expert’s experimental data, Chapter 16 will also introduce the linear interpolation method, the principle of least squares, the method of moments, and the Delphi method. In addition, uncertain regression analysis and uncertain time series analysis are also introduced, in which imprecise observations are characterized in terms of uncertain variables.

Law of Truth Conservation


The law of excluded middle tells us that a proposition is either true or false, and the law of contradiction tells us that a proposition cannot be both true and false. In the state of indeterminacy, some people have said, the law of excluded middle and the law of contradiction are no longer valid because the truth value of a proposition is no longer 0 or 1. To a certain extent I cannot gainsay this viewpoint. But it does not mean that “anything goes”: the truth values of a proposition and its negation should still sum to unity. This is the law of truth conservation, which is weaker than the law of excluded middle and the law of contradiction. Furthermore, the law of truth conservation agrees with those two laws when the uncertainty vanishes.

Maximum Uncertainty Principle


An event has no uncertainty if its uncertain measure is 1, because we may believe that the event happens. Likewise, an event has no uncertainty if its uncertain measure is 0, because we may believe that the event does not happen. An event is the most uncertain if its uncertain measure is 0.5, because the event and its complement may then be regarded as “equally likely”. In practice, if there is no information about the uncertain measure of an event, we should assign 0.5 to it. Sometimes only partial information is available, and the value of the uncertain measure may then be restricted to some range. What value should the uncertain measure take? For any event, if there are multiple reasonable values that the uncertain measure may take, then the value as close to 0.5 as possible is assigned to the event. This is the maximum uncertainty principle.
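
As a minimal sketch of this rule (assuming the partial information confines the measure to an interval; the helper name is hypothetical, not from the book):

    def assign_measure(lower, upper):
        """Pick the value in the feasible range [lower, upper] that is
        closest to 0.5, per the maximum uncertainty principle.
        (Illustrative helper, not part of the book.)"""
        if lower <= 0.5 <= upper:
            return 0.5          # 0.5 is feasible: maximum uncertainty
        # otherwise take the endpoint nearest to 0.5
        return lower if abs(lower - 0.5) < abs(upper - 0.5) else upper

    print(assign_measure(0.0, 1.0))   # 0.5: no information at all
    print(assign_measure(0.7, 0.9))   # 0.7: partial information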

Matlab Uncertainty Toolbox


Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) is a collection of functions built on Matlab for many methods of uncertainty theory, including uncertain programming, uncertain risk analysis, uncertain reliability analysis, uncertain logic, uncertain inference, uncertain differential equation, uncertain statistics, scheduling, logistics, data mining, control, and finance.

Lecture Slides
If you need lecture slides for uncertainty theory, please download them from
the website at http://orsc.edu.cn/liu/resources.htm.

Uncertainty Theory Online


If you want to read more books, dissertations and papers related to uncertainty theory, please visit the website at http://orsc.edu.cn/online.

Purpose
The purpose of this book is to equip readers with a branch of mathematics to deal with belief degrees. The textbook is suitable for researchers, engineers, and students in the fields of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, automation, economics, and management science.

A Guide for the Readers


Readers are not required to read the book from cover to cover. The logical dependence of the chapters is illustrated by the figure below.
[Figure: logical dependence among Chapters 1-16]

Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant No. 61573210).

Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
December 4, 2017
Chapter 0

Introduction

Real decisions are usually made in the state of indeterminacy. To rationally deal with indeterminacy, there exist two mathematical systems: one is probability theory (Kolmogorov, 1933) and the other is uncertainty theory (Liu, 2007). Probability theory is a branch of mathematics for modelling frequencies, while uncertainty theory is a branch of mathematics for modelling belief degrees.
What is indeterminacy? What is frequency? What is belief degree? This chapter will answer these questions, and show in which situations we should use probability theory and in which situations we should use uncertainty theory. Finally, it is concluded that a rational man behaves as if he used uncertainty theory.

0.1 Indeterminacy
By indeterminacy we mean phenomena whose outcomes cannot be exactly predicted in advance. For example, we cannot exactly predict which face will appear before we toss dice. Thus “tossing dice” is a type of indeterminate phenomenon. As another example, we cannot exactly predict tomorrow’s stock price. That is, “stock price” is also a type of indeterminate phenomenon. Some other instances of indeterminacy include “roulette wheel”, “product lifetime”, “market demand”, “bridge strength”, “travel distance”, etc.
Indeterminacy is absolute, while determinacy is relative. This is the reason why we say real decisions are usually made in the state of indeterminacy. How to model indeterminacy is thus an important research subject in not only mathematics but also science and engineering.
In order to describe an indeterminate quantity (e.g. stock price), what we need is a “distribution function” representing the degree that the quantity falls into the left side of the current point. Such a function takes bigger values as the current point moves from left to right. See Figure 1.
If the distribution function takes value 0, then it is completely impossible that the quantity falls into the left side of the current point; if the distribution function takes value 1, then it is completely impossible that the quantity falls into the right side; if the distribution function takes value 0.6, then we are 60% sure that the quantity falls into the left side and 40% sure that the quantity falls into the right side.
[Figure 1: Distribution function]

In order to find a distribution function for some indeterminate quantity, personally I think there exist only two ways: one is the frequency generated by samples (i.e., historical data), and the other is the belief degree evaluated by domain experts. Could you imagine a third way?

0.2 Frequency
Assume we have collected a set of samples for some indeterminate quantity
(e.g. stock price). By cumulative frequency we mean a function representing
the percentage of all samples that fall into the left side of the current point.
It is clear that the cumulative frequency looks like a step function in Figure 2.
[Figure 2: Cumulative frequency histogram]
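
As a concrete illustration (not from the book), the cumulative frequency at a point x can be computed directly from samples; the sample values below are made up:

    def cumulative_frequency(samples, x):
        """Fraction of samples falling into the left side of x,
        i.e., the empirical distribution function at x."""
        return sum(s <= x for s in samples) / len(samples)

    # Hypothetical stock-price samples
    samples = [9.2, 9.8, 10.1, 10.4, 11.0]
    print(cumulative_frequency(samples, 10.0))  # 0.4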

Frequency is a factual property of an indeterminate quantity, and does not change with our state of knowledge or preference. In other words, the long-run frequency exists and is relatively invariant, whether or not it is observed by us.

Probability theory is applicable when samples are available


The study of probability theory was started by Pascal and Fermat in the
17th century when they succeeded in deriving the exact probabilities for
certain gambling problems. After that, probability theory was studied by
many researchers. Particularly, a complete axiomatic foundation of proba-
bility theory was successfully given by Kolmogorov [70] in 1933. Since then,
probability theory has been developed steadily and widely applied in science
and engineering.
Keep in mind that a fundamental premise of applying probability theory
is that the estimated probability distribution is close enough to the long-run
cumulative frequency. Otherwise, the law of large numbers is no longer valid
and probability theory is no longer applicable.
When the sample size is large enough, it is possible for us to believe the
estimated probability distribution is close enough to the long-run cumulative
frequency. In this case, there is no doubt that probability theory is the only
legitimate approach to deal with our problems on the basis of the estimated
probability distributions.
However, in many cases, no samples are available to estimate a probability
distribution. What can we do in this situation? Perhaps we have no choice
but to invite some domain experts to evaluate the belief degree that each
event will happen.

0.3 Belief Degree


Belief degrees are familiar to all of us. The object of belief is an event (i.e.,
a proposition). For example, “the sun will rise tomorrow”, “it will be sunny
next week”, and “John is a young man” are all instances of object of belief.
A belief degree represents the strength with which we believe the event will
happen. If we completely believe the event will happen, then the belief degree
is 1 (complete belief). If we think it is completely impossible, then the belief
degree is 0 (complete disbelief). If the event and its complementary event
are equally likely, then the belief degree for the event is 0.5, and that for the
complementary event is also 0.5. Generally, we will assign a number between
0 and 1 to the belief degree for each event. The higher the belief degree is,
the more strongly we believe the event will happen.
Assume a box contains 100 balls, each of which is known to be either red or black, but we do not know how many of the balls are red and how many are black. In this case, it is impossible for us to determine the probability of drawing a red ball. However, the belief degree can be evaluated by us. For example, the belief degree for drawing a red ball is 0.5 because “drawing a red ball” and “drawing a black ball” are equally likely. Likewise, the belief degree for drawing a black ball is also 0.5.
The belief degree depends heavily on the personal knowledge (even including preference) concerning the event. When the personal knowledge changes, the belief degree changes too.

Belief Degree Function


How do we describe an indeterminate quantity (e.g. bridge strength)? It is
clear that a single belief degree is absolutely not enough. Do we need to know
the belief degrees for all possible events? The answer is negative. In fact,
what we need is a belief degree function that represents the degree with which
we believe the indeterminate quantity falls into the left side of the current
point.
For example, if we believe the indeterminate quantity completely falls
into the left side of the current point, then the belief degree function takes
value 1; if we think it completely falls into the right side, then the belief
degree function takes value 0. Generally, a belief degree function takes values
between 0 and 1, and has bigger values as the current point moves from the
left to right. See Figure 3.
[Figure 3: Belief degree function]

How to obtain belief degrees


Consider a bridge and its strength. At first, we have to admit that no destruc-
tive experiment is allowed for the bridge. Thus we have no samples about
the bridge strength. In this case, there do not exist any statistical methods
to estimate its probability distribution. How do we deal with it? It seems
that we have no choice but to invite some bridge engineers to evaluate the
belief degrees about the bridge strength. In practice, it is almost impossible
for the bridge engineers to give a perfect description of the belief degrees of
all possible events. Instead, they can only provide some subjective judgments
about the bridge strength. As a simple example, we assume a consultation process is as follows:
(Q) What do you think is the bridge strength?
(A) I think the bridge strength is between 80 and 120 tons.
What belief degrees can we derive from the answer of the bridge engineer?
First, we may have an inference:
(i) I am 100% sure that the bridge strength is less than 120 tons.
This means the belief degree of “the bridge strength being less than 120 tons”
is 1. Thus we have an expert’s experimental data (120, 1). Furthermore, we
may have another inference:
(ii) I am 100% sure that the bridge strength is greater than 80 tons.
This statement gives a belief degree that the bridge strength falls into the right side of 80 tons. We need to translate it into a statement about the belief degree that the bridge strength falls into the left side of 80 tons:
(ii′) I am 0% sure that the bridge strength is less than 80 tons.
Although the statement (ii′) sounds strange to us, it is indeed equivalent to the statement (ii). Thus we have another expert’s experimental data (80, 0).
So far we have acquired two expert’s experimental data points, (80, 0) and (120, 1), about the bridge strength. Could we infer the belief degree Φ(x) that the bridge strength falls into the left side of the point x? The answer is affirmative. For example, a reasonable choice is


         { 0,            if x < 80
  Φ(x) = { (x − 80)/40,  if 80 ≤ x ≤ 120                          (1)
         { 1,            if x > 120.

See Figure 4. From the function Φ(x), we may infer that the belief degree
of “the bridge strength being less than 90 tons” is 0.25. In other words, it is
reasonable to infer that “I am 25% sure that the bridge strength is less than
90 tons”, or equivalently “I am 75% sure that the bridge strength is greater
than 90 tons”.
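
For illustration, the linear interpolation between the two expert’s experimental data (80, 0) and (120, 1) is easy to code; this sketch (the function name is ours, not from the book) reproduces the 25% and 75% figures above:

    def belief_degree(x, lo=80.0, hi=120.0):
        """Belief degree Phi(x) of 'the bridge strength is less than x',
        linearly interpolated between the data (80, 0) and (120, 1)."""
        if x < lo:
            return 0.0
        if x > hi:
            return 1.0
        return (x - lo) / (hi - lo)

    print(belief_degree(90))   # 0.25: 25% sure the strength is below 90 tons
    print(belief_degree(110))  # 0.75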

All belief degrees are wrong, but some are useful


Different people may hold different belief degrees. Perhaps some readers may ask which belief degree is correct. Liu [95] answered that all belief degrees are wrong, but some are useful. A belief degree becomes “correct” only when it is close enough to the frequency of the indeterminate quantity. However, usually we cannot achieve that.
Through a lot of surveys, Kahneman and Tversky [65] showed that human beings usually overweight unlikely events. On the other hand, Liu [95] showed that human beings usually estimate a much wider range of values than the object actually takes. This conservatism makes belief degrees deviate far from the frequency. Thus all belief degrees are wrong compared with the frequency. However, it cannot be denied that those belief degrees are indeed helpful for decision making.

[Figure 4: Belief degree function of “the bridge strength”]

Belief degrees cannot be treated as subjective probability


Can we deal with belief degrees by probability theory? Some people do think
so and call it subjective probability. However, Liu [86] declared that it is
inappropriate to model belief degrees by probability theory because it may
lead to counterintuitive results.

[Figure 5: A truck weighing exactly 90 tons is crossing over a bridge of unknown strength]

Consider a counterexample presented by Liu [86]. Assume there is one truck and 50 bridges in an experiment. Also assume the weight of the truck is 90 tons and the 50 bridge strengths are iid uniform random variables on [95, 110] in tons. For simplicity, suppose a bridge collapses whenever its real strength is less than the weight of the truck. Now let us have the truck cross over the 50 bridges one by one. It is easy to verify that

Pr{“the truck can cross over the 50 bridges”} = 1. (2)

That is to say, we are 100% sure that the truck can cross over the 50 bridges
successfully.
[Figure 6: Belief degree function, “true” probability distribution, and cumulative frequency histogram of “the bridge strength”]

However, when no observed samples of the bridge strength exist, we have to invite some bridge engineers to evaluate the belief degrees about it. As we stated before, human beings usually estimate a much wider range of values than the bridge strength actually takes because of this conservatism. Assume the belief degree function is

         { 0,            if x < 80
  Φ(x) = { (x − 80)/40,  if 80 ≤ x ≤ 120                          (3)
         { 1,            if x > 120.

See Figure 6. Let us imagine what will happen if the belief degree function
is treated as a probability distribution. At first, we have to regard the 50
bridge strengths as iid uniform random variables on [80, 120] in tons. If we
have the truck cross over the 50 bridges one by one, then we immediately
have

Pr{“the truck can cross over the 50 bridges”} = 0.75⁵⁰ ≈ 0. (4)

Thus it is almost impossible that the truck crosses over the 50 bridges successfully. Unfortunately, the results (2) and (4) are at opposite poles. This example shows that, by inappropriately using probability theory, a sure event becomes an impossible one. Such an error is intolerable. Hence belief degrees cannot be treated as subjective probability.
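
For readers who want to check this numerically, the following Monte Carlo sketch (our own illustration; the trial count is arbitrary) reproduces both (2) and (4):

    import random

    WEIGHT = 90.0
    N_BRIDGES = 50

    def crossing_succeeds(low, high):
        """One experiment: the truck crosses 50 bridges whose strengths
        are iid uniform on [low, high]; success means no bridge collapses."""
        return all(random.uniform(low, high) > WEIGHT for _ in range(N_BRIDGES))

    def success_rate(low, high, trials=10_000):
        return sum(crossing_succeeds(low, high) for _ in range(trials)) / trials

    # True strengths on [95, 110]: every bridge exceeds 90 tons, so Pr = 1.
    print(success_rate(95, 110))   # 1.0, matching (2)
    # Belief degrees misread as a probability distribution on [80, 120]:
    print(success_rate(80, 120))   # essentially 0.0, matching (4)
    print(0.75 ** 50)              # about 5.7e-07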

A possible proposition cannot be judged impossible


During information processing, we should follow such a basic principle that a
possible proposition cannot be judged impossible (Liu [86]). In other words,
if a proposition is possibly true, then its truth value should not be zero.
Likewise, if a proposition is possibly false, then its truth value should not be
unity.
In the example of truck-cross-over-bridge, a completely true proposition is
judged completely false by probability theory. This means using probability
theory violates the above-mentioned principle, and therefore probability the-
ory is not appropriate to model belief degrees. In other words, belief degrees
do not follow the laws of probability theory.

Uncertainty theory is able to model belief degrees


In order to rationally deal with personal belief degrees, uncertainty theory was
founded by Liu [77] in 2007 and subsequently studied by many researchers.
Nowadays, uncertainty theory has become a branch of mathematics for mod-
elling belief degrees.
Liu [86] declared that uncertainty theory is the only legitimate approach
when only belief degrees are available. If we believe the estimated uncertainty
distribution is close enough to the belief degrees hidden in the mind of the
domain experts, then we may use uncertainty theory to deal with our own
problems on the basis of the estimated uncertainty distributions.
Let us reconsider the example of truck-cross-over-bridge by uncertainty
theory. If the belief degree function is regarded as a linear uncertainty dis-
tribution on [80, 120] in tons, then we immediately have

M{“the truck can cross over the 50 bridges”} = 0.75. (5)

That is to say, we are 75% sure that the truck can cross over the 50 bridges successfully. Here the degree 75% does not reach the true value 100%. But the error is caused by the difference between belief degree and frequency, and is not further magnified by uncertainty theory.
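
The reason the error stays bounded is that, for independent events, uncertainty theory combines measures by taking the minimum (a consequence of the product axiom introduced in Chapter 1), rather than by multiplying as probability theory does. A minimal sketch, with our own helper name:

    def measure_all_hold(measures):
        """Uncertain measure that all independent events happen:
        the minimum of the individual measures, not their product."""
        return min(measures)

    # M{strength > 90} = 1 - Phi(90) = 0.75 under the linear
    # uncertainty distribution on [80, 120] in tons
    m_single = 1 - (90 - 80) / (120 - 80)
    print(measure_all_hold([m_single] * 50))   # 0.75, matching (5)
    print(m_single ** 50)                      # what a product rule would give: ~0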

0.4 Summary
In order to model indeterminacy, many theories have been invented. What
theories are considered acceptable? Personally I think an acceptable theory
should be not only theoretically self-consistent but also the best among others
for solving at least one practical problem. On the basis of this principle, I
may conclude that there exist two mathematical systems, one is probability
theory and the other is uncertainty theory. It is emphasized that probability
theory is only applicable to modelling frequencies, and uncertainty theory
is only applicable to modelling belief degrees. In other words, frequency is
the empirical basis of probability theory, while belief degree is the empirical
basis of uncertainty theory. Keep in mind that using uncertainty theory to model frequency may produce a crude result, while using probability theory to model belief degree may produce a big disaster.

Figure 7: When the sample size is large enough, the estimated probability
distribution (left curve) may be close enough to the cumulative frequency (left
histogram). In this case, probability theory is the only legitimate approach.
When the belief degrees are available (no samples), the estimated uncertainty
distribution (right curve) usually deviates far from the cumulative frequency
(right histogram but unknown). In this case, uncertainty theory is the only
legitimate approach.

However, a single-variable system is an exception. When there exists one and only one indeterminate variable in a real system, probability theory and uncertainty theory will produce the same result because the product measure is not used. In this case, frequency may be modeled by uncertainty theory while belief degree may be modeled by probability theory; both are indifferent.
Since belief degrees are usually wrong compared with frequency, the gap between belief degree and frequency always exists. Such an error is likely to be further magnified if the belief degree is regarded as subjective probability. Fortunately, uncertainty theory can successfully avoid turning small errors into large ones.
Savage [134] said a rational man behaves as if he used subjective probabilities. However, usually we cannot achieve that. Liu [95] said a rational man behaves as if he used uncertainty theory. In other words, a rational man is expected to hold belief degrees that follow the laws of uncertainty theory rather than probability theory.
Chapter 1

Uncertain Measure

Uncertainty theory was founded by Liu [77] in 2007 and subsequently studied
by many researchers. Nowadays uncertainty theory has become a branch of
mathematics for modelling belief degrees. This chapter will provide normal-
ity, duality, subadditivity and product axioms of uncertainty theory. From
those four axioms, this chapter will also introduce an uncertain measure that
is a fundamental concept in uncertainty theory. In addition, product uncer-
tain measure and conditional uncertain measure will be explored at the end
of this chapter.

1.1 Measurable Space


From the mathematical viewpoint, uncertainty theory is essentially an al-
ternative theory of measure. Thus uncertainty theory should begin with a
measurable space. In order to learn it, let us introduce algebra, σ-algebra,
measurable set, Borel algebra, Borel set, and measurable function. The main
results in this section are well-known. For this reason the credit references
are not provided. You may skip this section if you are familiar with them.
Definition 1.1 Let Γ be a nonempty set (sometimes called universal set).
A collection L consisting of subsets of Γ is called an algebra over Γ if the
following three conditions hold: (a) Γ ∈ L; (b) if Λ ∈ L, then Λc ∈ L; and
(c) if Λ1, Λ2, · · ·, Λn ∈ L, then

Λ1 ∪ Λ2 ∪ · · · ∪ Λn ∈ L. (1.1)

The collection L is called a σ-algebra over Γ if the condition (c) is replaced with closure under countable union, i.e., when Λ1, Λ2, · · · ∈ L, we have

Λ1 ∪ Λ2 ∪ · · · ∈ L. (1.2)

Example 1.1: The collection {∅, Γ} is the smallest σ-algebra over Γ, and
the power set (i.e., all subsets of Γ) is the largest σ-algebra.

Example 1.2: Let Λ be a proper nonempty subset of Γ. Then {∅, Λ, Λc , Γ}


is a σ-algebra over Γ.

Example 1.3: Let L be the collection of all finite disjoint unions of all
intervals of the form

(−∞, a], (a, b], (b, ∞), ∅. (1.3)

Then L is an algebra over ℜ (the set of real numbers), but not a σ-algebra because Λi = (0, (i − 1)/i] ∈ L for all i but

Λ1 ∪ Λ2 ∪ · · · = (0, 1) ∉ L. (1.4)

Example 1.4: A σ-algebra L is closed under countable union, countable intersection, difference, and limit. That is, if Λ1, Λ2, · · · ∈ L, then

Λ1 ∪ Λ2 ∪ · · · ∈ L;  Λ1 ∩ Λ2 ∩ · · · ∈ L;  Λ1 \ Λ2 ∈ L;  lim_{i→∞} Λi ∈ L. (1.5)

Definition 1.2 Let Γ be a nonempty set, and let L be a σ-algebra over Γ. Then (Γ, L) is called a measurable space, and any element in L is called a measurable set.

Example 1.5: Let ℜ be the set of real numbers. Then L = {∅, ℜ} is a σ-algebra over ℜ. Thus (ℜ, L) is a measurable space. Note that there exist only two measurable sets in this space: one is ∅ and the other is ℜ. Keep in mind that intervals like [0, 1] and (0, +∞) are not measurable in this space!

Example 1.6: Let Γ = {a, b, c}. Then L = {∅, {a}, {b, c}, Γ} is a σ-algebra
over Γ. Thus (Γ, L) is a measurable space. Furthermore, {a} and {b, c} are
measurable sets in this space, but {b}, {c}, {a, b}, {a, c} are not.
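
On a finite universal set, the closure conditions of Definition 1.1 can be checked mechanically (and over a finite Γ an algebra is automatically a σ-algebra). The following sketch is our own illustration, not from the book:

    from itertools import combinations

    def is_algebra(universe, collection):
        """Check (a) Γ ∈ L, (b) closure under complement, and
        (c) closure under finite union, for a collection of sets."""
        U = frozenset(universe)
        L = {frozenset(s) for s in collection}
        if U not in L:
            return False
        if any(U - s not in L for s in L):
            return False
        return all(a | b in L for a, b in combinations(L, 2))

    G = {'a', 'b', 'c'}
    print(is_algebra(G, [set(), {'a'}, {'b', 'c'}, G]))  # True, as in Example 1.6
    print(is_algebra(G, [set(), {'a'}, G]))              # False: no complement of {'a'}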

Definition 1.3 The smallest σ-algebra B containing all open intervals is called the Borel algebra over the set of real numbers, and any element in B is called a Borel set.

Example 1.7: It has been proved that intervals, open sets, closed sets,
rational numbers, and irrational numbers are all Borel sets.

Example 1.8: There exists a non-Borel set over ℜ. Let [a] represent the set of all rational numbers shifted by a, i.e., {a + q | q is a rational number}. Note that if a1 − a2 is not a rational number, then [a1] and [a2] are disjoint sets. Thus ℜ is divided into an infinite number of those disjoint sets. Let A be a new set containing precisely one element from each of them. Then A is not a Borel set.

Definition 1.4 A function ξ from a measurable space (Γ, L) to the set of real numbers is said to be measurable if

ξ⁻¹(B) = {γ ∈ Γ | ξ(γ) ∈ B} ∈ L (1.6)

for any Borel set B of real numbers.

Continuous functions and monotone functions are instances of measurable functions. Let ξ1, ξ2, · · · be a sequence of measurable functions. Then the following functions are also measurable:

sup_{1≤i<∞} ξi(γ);  inf_{1≤i<∞} ξi(γ);  lim sup_{i→∞} ξi(γ);  lim inf_{i→∞} ξi(γ). (1.7)

In particular, if lim_{i→∞} ξi(γ) exists for each γ, then the limit is also a measurable function.

1.2 Uncertain Measure


Let (Γ, L) be a measurable space. Recall that each element Λ in L is called a
measurable set. The first action we take is to rename measurable set as event
in uncertainty theory. The second action is to define an uncertain measure M
on the σ-algebra L. That is, a number M{Λ} will be assigned to each event
Λ to indicate the belief degree with which we believe Λ will happen. There is
no doubt that the assignment is not arbitrary, and the uncertain measure M
must have certain mathematical properties. In order to rationally deal with
belief degrees, Liu [77] suggested the following three axioms:
Axiom 1. (Normality Axiom) M{Γ} = 1 for the universal set Γ.
Axiom 2. (Duality Axiom) M{Λ} + M{Λc } = 1 for any event Λ.
Axiom 3. (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2, · · ·, we have

M{Λ1 ∪ Λ2 ∪ · · ·} ≤ M{Λ1} + M{Λ2} + · · ·. (1.8)

Remark 1.1: Uncertain measure is interpreted as the personal belief degree (not frequency) of an uncertain event that may happen. Thus uncertain measure and belief degree are synonymous, and will be used interchangeably in this book.

Remark 1.2: Uncertain measure depends on the personal knowledge concerning the event. It will change if the state of knowledge changes.

Remark 1.3: Since “1” means “complete belief” and we cannot have more belief than “complete belief”, the belief degree of any event cannot exceed 1. Furthermore, the belief degree of the universal set takes value 1 because it is completely believable. Thus the belief degree meets the normality axiom.

Remark 1.4: The duality axiom is in fact an application of the law of truth conservation in uncertainty theory. The property ensures that uncertainty theory is consistent with the law of excluded middle and the law of contradiction. In addition, human thinking is always dominated by duality. For example, if someone tells us that a proposition is true with belief degree 0.6, then all of us will think that the proposition is false with belief degree 0.4.

Remark 1.5: Given two events with known belief degrees, it is frequently asked how the belief degree of their union is generated from the individual ones. Personally, I do not think there exists any rule to do so. A lot of surveys showed that, generally speaking, the belief degree of a union of events is neither the sum of the belief degrees of the individual events (e.g. probability measure) nor their maximum (e.g. possibility measure). It seems that there is no explicit relation between the union and the individuals except for the subadditivity axiom.

Remark 1.6: Pathology occurs if the subadditivity axiom is not assumed. For example, suppose that a universal set contains 3 elements. We define a set function that takes value 0 for the empty set and each singleton, and 1 for each event with at least 2 elements. Then such a set function satisfies all axioms but subadditivity. Do you think it is strange if such a set function serves as a measure?

Remark 1.7: Although probability measure satisfies the above three axioms, probability theory is not a special case of uncertainty theory because the product probability measure does not satisfy the fourth axiom, namely the product axiom introduced in Section 1.4.

Definition 1.5 (Liu [77]) The set function M is called an uncertain measure if it satisfies the normality, duality, and subadditivity axioms.

Exercise 1.1: Let Γ be a nonempty set. For each subset Λ of Γ, we define

    M{Λ} = 0,   if Λ = ∅
           1,   if Λ = Γ
           0.5, otherwise.    (1.9)

Show that M is an uncertain measure. (Hint: Verify that M meets the three axioms.)

Exercise 1.2: Let Γ = {γ1, γ2}. It is clear that there exist 4 events in the power set,

    L = {∅, {γ1}, {γ2}, Γ}.    (1.10)

Assume c is a real number with 0 < c < 1, and define

    M{∅} = 0, M{γ1} = c, M{γ2} = 1 − c, M{Γ} = 1.

Show that M is an uncertain measure.

Exercise 1.3: Let Γ = {γ1, γ2, γ3}. It is clear that there exist 8 events in the power set,

    L = {∅, {γ1}, {γ2}, {γ3}, {γ1, γ2}, {γ1, γ3}, {γ2, γ3}, Γ}.    (1.11)

Assume c1, c2, c3 are nonnegative numbers satisfying the consistency condition

    ci + cj ≤ 1 ≤ c1 + c2 + c3, ∀i ≠ j.    (1.12)

Define

    M{γ1} = c1, M{γ2} = c2, M{γ3} = c3,
    M{γ1, γ2} = 1 − c3, M{γ1, γ3} = 1 − c2, M{γ2, γ3} = 1 − c1,
    M{∅} = 0, M{Γ} = 1.

Show that M is an uncertain measure.
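The verification asked for in Exercise 1.3 can also be carried out mechanically. The following Python sketch (an illustration added here, not part of the original text) encodes the measure with the hypothetical values c1 = 0.6, c2 = 0.3, c3 = 0.2 and checks the three axioms; on a finite space, countable subadditivity reduces by induction to subadditivity over pairs of events.

    from itertools import combinations

    GAMMA = frozenset({1, 2, 3})
    c = {1: 0.6, 2: 0.3, 3: 0.2}   # hypothetical values satisfying (1.12)

    def M(event):
        # the measure of Exercise 1.3
        event = frozenset(event)
        if len(event) == 0:
            return 0.0
        if len(event) == 1:
            (g,) = event
            return c[g]
        if len(event) == 2:
            (missing,) = GAMMA - event
            return 1.0 - c[missing]
        return 1.0                 # event == GAMMA

    all_events = [frozenset(t) for r in range(4)
                  for t in combinations(sorted(GAMMA), r)]

    assert M(GAMMA) == 1.0                                            # normality
    assert all(abs(M(A) + M(GAMMA - A) - 1.0) < 1e-9 for A in all_events)  # duality
    assert all(M(A | B) <= M(A) + M(B) + 1e-9                         # subadditivity
               for A in all_events for B in all_events)
    print("normality, duality and subadditivity verified")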

Exercise 1.4: Let Γ = {γ1, γ2, γ3, γ4}, and let c be a real number with 0.5 ≤ c < 1. It is clear that there exist 16 events in the power set. For each subset Λ, define

    M{Λ} = 0,     if Λ = ∅
           1,     if Λ = Γ
           c,     if γ1 ∈ Λ ≠ Γ
           1 − c, if γ1 ∉ Λ ≠ ∅.    (1.13)

Show that M is an uncertain measure.

Exercise 1.5: Let Γ = {γ1, γ2, · · · }, and let c1, c2, · · · be nonnegative numbers such that c1 + c2 + · · · = 1. For each subset Λ, define

    M{Λ} = ∑_{γi∈Λ} ci.    (1.14)

Show that M is an uncertain measure.

Exercise 1.6: Lebesgue measure, named after the French mathematician Henri Lebesgue, is the standard way of assigning a length, area or volume to subsets of Euclidean space. For example, the Lebesgue measure of the interval [a, b] of real numbers is the length b − a. Let Γ = [0, 1], and let M be the Lebesgue measure. Show that M is an uncertain measure.

Exercise 1.7: Let Γ be the set of real numbers, and let c be a real number with 0 < c ≤ 0.5. For each subset Λ, define

    M{Λ} = 0,     if Λ = ∅
           c,     if Λ is upper bounded and Λ ≠ ∅
           0.5,   if both Λ and Λc are upper unbounded
           1 − c, if Λc is upper bounded and Λ ≠ Γ
           1,     if Λ = Γ.    (1.15)

Show that M is an uncertain measure.

Exercise 1.8: Suppose that λ(x) is a nonnegative function on ℜ (the set of real numbers) such that

    sup_{x∈ℜ} λ(x) = 0.5.    (1.16)

Define a set function

    M{Λ} = sup_{x∈Λ} λ(x),       if sup_{x∈Λ} λ(x) < 0.5
           1 − sup_{x∈Λc} λ(x),  if sup_{x∈Λ} λ(x) = 0.5    (1.17)

for each subset Λ. Show that M is an uncertain measure.
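As a rough numerical illustration (an added sketch, not from the original text), the set function (1.17) can be approximated in Python by taking the suprema over a fine grid, here with the hypothetical choice λ(x) = 0.5 e^{−|x|}, which attains sup λ = 0.5 at x = 0. Duality is visible immediately: the measures of Λ = (1, ∞) and its complement sum to 1.

    import math

    xs = [i / 100.0 for i in range(-1000, 1001)]   # grid approximating the real line
    lam = lambda x: 0.5 * math.exp(-abs(x))        # sup lambda = 0.5, attained at 0

    def M(pred):
        # approximate (1.17) for Lambda = {x : pred(x)}
        sup_in = max((lam(x) for x in xs if pred(x)), default=0.0)
        if sup_in < 0.5:
            return sup_in
        return 1.0 - max((lam(x) for x in xs if not pred(x)), default=0.0)

    A = lambda x: x > 1.0
    print(M(A) + M(lambda x: not A(x)))   # duality: prints 1.0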

Exercise 1.9: Suppose ρ(x) is a nonnegative and integrable function on ℜ (the set of real numbers) such that

    ∫_ℜ ρ(x)dx ≥ 1.    (1.18)

Define a set function

    M{Λ} = ∫_Λ ρ(x)dx,         if ∫_Λ ρ(x)dx < 0.5
           1 − ∫_{Λc} ρ(x)dx,  if ∫_{Λc} ρ(x)dx < 0.5
           0.5,                otherwise    (1.19)

for each Borel set Λ. Show that M is an uncertain measure.

Theorem 1.1 (Monotonicity Theorem) The uncertain measure is a monotone increasing set function. That is, for any events Λ1 and Λ2 with Λ1 ⊂ Λ2, we have

    M{Λ1} ≤ M{Λ2}.    (1.20)

Proof: The normality axiom says M{Γ} = 1, and the duality axiom says M{Λ1c} = 1 − M{Λ1}. Since Λ1 ⊂ Λ2, we have Γ = Λ1c ∪ Λ2. By using the subadditivity axiom, we obtain

    1 = M{Γ} ≤ M{Λ1c} + M{Λ2} = 1 − M{Λ1} + M{Λ2}.

Thus M{Λ1} ≤ M{Λ2}.

Theorem 1.2 The empty set ∅ always has uncertain measure zero. That is,

    M{∅} = 0.    (1.21)

Proof: Since ∅ = Γc and M{Γ} = 1, it follows from the duality axiom that

M{∅} = 1 − M{Γ} = 1 − 1 = 0.

Theorem 1.3 The uncertain measure takes values between 0 and 1. That
is, for any event Λ, we have

0 ≤ M{Λ} ≤ 1. (1.22)

Proof: It follows from the monotonicity theorem that 0 ≤ M{Λ} ≤ 1 because ∅ ⊂ Λ ⊂ Γ and M{∅} = 0, M{Γ} = 1.

Theorem 1.4 Let Λ1, Λ2, · · · be a sequence of events with M{Λi} → 0 as i → ∞. Then for any event Λ, we have

    lim_{i→∞} M{Λ ∪ Λi} = lim_{i→∞} M{Λ\Λi} = M{Λ}.    (1.23)

Especially, an uncertain measure remains unchanged if the event is enlarged or reduced by an event with uncertain measure zero.

Proof: It follows from the monotonicity theorem and the subadditivity axiom that

    M{Λ} ≤ M{Λ ∪ Λi} ≤ M{Λ} + M{Λi}

for each i. Thus we get M{Λ ∪ Λi} → M{Λ} by using M{Λi} → 0. Since (Λ\Λi) ⊂ Λ ⊂ ((Λ\Λi) ∪ Λi), we have

    M{Λ\Λi} ≤ M{Λ} ≤ M{Λ\Λi} + M{Λi}.

Hence M{Λ\Λi} → M{Λ} by using M{Λi} → 0.

Theorem 1.5 (Asymptotic Theorem) For any events Λ1, Λ2, · · ·, we have

    lim_{i→∞} M{Λi} > 0, if Λi ↑ Γ,    (1.24)

    lim_{i→∞} M{Λi} < 1, if Λi ↓ ∅.    (1.25)

Proof: Assume Λi ↑ Γ. Since Γ = ∪i Λi, it follows from the subadditivity axiom that

    1 = M{Γ} ≤ ∑_{i=1}^∞ M{Λi}.

Since M{Λi} is increasing with respect to i, we have lim_{i→∞} M{Λi} > 0. If Λi ↓ ∅, then Λic ↑ Γ. It follows from the first inequality and the duality axiom that

    lim_{i→∞} M{Λi} = 1 − lim_{i→∞} M{Λic} < 1.

The theorem is proved.

Example 1.9: Assume Γ is the set of real numbers. Let α be a number with 0 < α ≤ 0.5. Define an uncertain measure as follows,

    M{Λ} = 0,     if Λ = ∅
           α,     if Λ is upper bounded and Λ ≠ ∅
           0.5,   if both Λ and Λc are upper unbounded
           1 − α, if Λc is upper bounded and Λ ≠ Γ
           1,     if Λ = Γ.    (1.26)

(i) Write Λi = (−∞, i] for i = 1, 2, · · · Then Λi ↑ Γ and lim_{i→∞} M{Λi} = α. (ii) Write Λi = [i, +∞) for i = 1, 2, · · · Then Λi ↓ ∅ and lim_{i→∞} M{Λi} = 1 − α.

1.3 Uncertainty Space

Definition 1.6 (Liu [77]) Let Γ be a nonempty set, let L be a σ-algebra over Γ, and let M be an uncertain measure. Then the triplet (Γ, L, M) is called an uncertainty space.

Example 1.10: Let Γ be a two-point set {γ1 , γ2 }, let L be the power set
of {γ1 , γ2 }, and let M be an uncertain measure determined by M{γ1 } = 0.6
and M{γ2 } = 0.4. Then (Γ, L, M) is an uncertainty space.

Example 1.11: Let Γ be a three-point set {γ1 , γ2 , γ3 }, let L be the power set
of {γ1 , γ2 , γ3 }, and let M be an uncertain measure determined by M{γ1 } =
0.6, M{γ2 } = 0.3 and M{γ3 } = 0.2. Then (Γ, L, M) is an uncertainty space.

Example 1.12: Let Γ be the interval [0, 1], let L be the Borel algebra over
[0, 1], and let M be the Lebesgue measure. Then (Γ, L, M) is an uncertainty
space.
For practical purposes, the study of uncertainty spaces is sometimes re-
stricted to complete uncertainty spaces.

Definition 1.7 (Liu [95]) An uncertainty space (Γ, L, M) is called complete if for any Λ1, Λ2 ∈ L with M{Λ1} = M{Λ2} and any subset A with Λ1 ⊂ A ⊂ Λ2, one has A ∈ L. In this case, we also have

    M{A} = M{Λ1} = M{Λ2}.    (1.27)

Exercise 1.10: Let (Γ, L, M) be a complete uncertainty space, and let Λ be an event with M{Λ} = 0. Show that A is an event and M{A} = 0 whenever A ⊂ Λ.

Exercise 1.11: Let (Γ, L, M) be a complete uncertainty space, and let Λ be an event with M{Λ} = 1. Show that A is an event and M{A} = 1 whenever A ⊃ Λ.

Definition 1.8 (Gao [41]) An uncertainty space (Γ, L, M) is called continuous if for any events Λ1, Λ2, · · ·, we have

    M{lim_{i→∞} Λi} = lim_{i→∞} M{Λi}    (1.28)

provided that lim_{i→∞} Λi exists.

Exercise 1.12: Show that an uncertainty space (Γ, L, M) is always continuous if Γ consists of a finite number of points.

Exercise 1.13: Let Γ = [0, 1], let L be the Borel algebra over Γ, and let M
be the Lebesgue measure. Show that (Γ, L, M) is a continuous uncertainty
space.

Exercise 1.14: Let Γ = [0, 1], and let L be the power set over Γ. For each subset Λ of Γ, define

    M{Λ} = 0,   if Λ = ∅
           1,   if Λ = Γ
           0.5, otherwise.    (1.29)

Show that (Γ, L, M) is a discontinuous uncertainty space.

1.4 Product Uncertain Measure

Product uncertain measure was defined by Liu [80] in 2009, thus producing the fourth axiom of uncertainty theory. Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, · · · Write

    Γ = Γ1 × Γ2 × · · ·    (1.30)

that is the set of all ordered tuples of the form (γ1, γ2, · · · ), where γk ∈ Γk for k = 1, 2, · · · A measurable rectangle in Γ is a set

    Λ = Λ1 × Λ2 × · · ·    (1.31)

where Λk ∈ Lk for k = 1, 2, · · · The smallest σ-algebra containing all measurable rectangles of Γ is called the product σ-algebra, denoted by

    L = L1 × L2 × · · ·    (1.32)

Then the product uncertain measure M on the product σ-algebra L is defined by the following product axiom (Liu [80]).

Axiom 4. (Product Axiom) Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, · · · The product uncertain measure M is an uncertain measure satisfying

    M{∏_{k=1}^∞ Λk} = ⋀_{k=1}^∞ Mk{Λk}    (1.33)

where Λk are arbitrarily chosen events from Lk for k = 1, 2, · · ·, respectively.

Remark 1.8: Note that (1.33) defines a product uncertain measure only for rectangles. How do we extend the uncertain measure M from the class of rectangles to the product σ-algebra L? For each event Λ ∈ L, we have

    M{Λ} = sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},      if sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5
           1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk}, if sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} > 0.5
           0.5,                                        otherwise.    (1.34)
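To make (1.34) concrete, the following Python sketch (an added illustration under assumed two-point spaces, not from the original text) enumerates all rectangles inside an event of a finite product space, computes the two suprema, and applies (1.34). The L-shaped event below is in fact a polyrectangle, so the answer 0.7 also agrees with the polyrectangular theorem of Section 1.6.

    from itertools import combinations, product

    # assumed data: Gamma1 = {a, b} with M1{a} = 0.6, Gamma2 = {c, d}
    # with M2{c} = 0.7, both measures built as in Exercise 1.2
    def two_point_measure(weights, universe):
        def M(ev):
            ev = frozenset(ev)
            if not ev:
                return 0.0
            if ev == universe:
                return 1.0
            return sum(weights[g] for g in ev)
        return M

    G1, G2 = frozenset("ab"), frozenset("cd")
    M1 = two_point_measure({"a": 0.6, "b": 0.4}, G1)
    M2 = two_point_measure({"c": 0.7, "d": 0.3}, G2)

    def subsets(s):
        s = sorted(s)
        return [frozenset(t) for r in range(len(s) + 1) for t in combinations(s, r)]

    def best_rectangle(event):
        # sup over rectangles L1 x L2 contained in `event` of M1{L1} ^ M2{L2}
        best = 0.0
        for L1 in subsets(G1):
            for L2 in subsets(G2):
                if all((g1, g2) in event for g1 in L1 for g2 in L2):
                    best = max(best, min(M1(L1), M2(L2)))
        return best

    def M(event):
        event = frozenset(event)
        inner = best_rectangle(event)
        if inner > 0.5:
            return inner
        outer = best_rectangle(frozenset(product(G1, G2)) - event)
        if outer > 0.5:
            return 1.0 - outer
        return 0.5

    # the L-shaped event: its best inner rectangle is {a,b} x {c}, measure 0.7
    print(M({("a", "c"), ("a", "d"), ("b", "c")}))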

Remark 1.9: The sum of the uncertain measures of the maximum rectangles in Λ and Λc is always less than or equal to 1, i.e.,

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≤ 1.

This means that at most one of

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk}  and  sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk}

is greater than 0.5. Thus the expression (1.34) is reasonable.


Figure 1.1: Extension from Rectangles to Product σ-Algebra. The uncertain measure of Λ (the disk) is essentially the acreage of its inscribed rectangle Λ1 × Λ2 if it is greater than 0.5. Otherwise, we have to examine its complement Λc. If the inscribed rectangle of Λc is greater than 0.5, then M{Λc} is just its inscribed rectangle and M{Λ} = 1 − M{Λc}. If there does not exist an inscribed rectangle of Λ or Λc greater than 0.5, then we set M{Λ} = 0.5.

Remark 1.10: It is clear that for each Λ ∈ L, the uncertain measure M{Λ} defined by (1.34) takes possible values on the interval

    [ sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},  1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ].

Thus (1.34) coincides with the maximum uncertainty principle (Liu [77]), that is, M{Λ} takes the value as close to 0.5 as possible within the above interval.

Remark 1.11: If the sum of the uncertain measures of the maximum rectangles in Λ and Λc is just 1, i.e.,

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} + sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} = 1,

then the product uncertain measure (1.34) is simplified as

    M{Λ} = sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk}.    (1.35)

Exercise 1.15: Let (Γ1, L1, M1) be the interval [0, 1] with Borel algebra and Lebesgue measure, and let (Γ2, L2, M2) be also the interval [0, 1] with Borel algebra and Lebesgue measure. Then

    Λ = {(γ1, γ2) ∈ Γ1 × Γ2 | γ1 + γ2 ≤ 1}    (1.36)

is an event on the product uncertainty space (Γ1, L1, M1) × (Γ2, L2, M2). Show that

    M{Λ} = 1/2.    (1.37)

Exercise 1.16: Let (Γ1, L1, M1) be the interval [0, 1] with Borel algebra and Lebesgue measure, and let (Γ2, L2, M2) be also the interval [0, 1] with Borel algebra and Lebesgue measure. Then

    Λ = {(γ1, γ2) ∈ Γ1 × Γ2 | (γ1 − 0.5)² + (γ2 − 0.5)² ≤ 0.5²}    (1.38)

is an event on the product uncertainty space (Γ1, L1, M1) × (Γ2, L2, M2). (i) Show that

    M{Λ} = 1/√2.    (1.39)

(ii) From the above result we derive M{Λc} = 1 − 1/√2. Please find a rectangle Λ1 × Λ2 in Λc such that M{Λ1 × Λ2} = 1 − 1/√2.
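Part (i) of Exercise 1.16 can be checked numerically: by symmetry, the best inscribed rectangle of the disk is centered, so it suffices to scan its half-width. The sketch below (an added illustration, not from the original text) recovers M{Λ} = 1/√2 ≈ 0.7071.

    import math

    best = 0.0
    for i in range(1, 400):
        u = 0.5 * i / 400                      # half-width of a centered rectangle
        v = math.sqrt(max(0.25 - u * u, 0.0))  # largest half-height still inside the disk
        best = max(best, min(2 * u, 2 * v))    # Lebesgue measure of the shorter side
    print(best, 1 / math.sqrt(2))              # both approximately 0.7071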

Theorem 1.6 (Peng-Iwamura [123]) The product uncertain measure defined by (1.34) is an uncertain measure.

Proof: In order to prove that the product uncertain measure (1.34) is indeed an uncertain measure, we should verify that the product uncertain measure satisfies the normality, duality and subadditivity axioms.
Step 1: The product uncertain measure is clearly normal, i.e., M{Γ} = 1.

Step 2: We prove the duality, i.e., M{Λ} + M{Λc} = 1. The argument breaks down into three cases. Case 1: Assume

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then we immediately have

    sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} < 0.5.

It follows from (1.34) that

    M{Λ} = sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk},

    M{Λc} = 1 − sup_{Λ1×Λ2×···⊂(Λc)c} min_{1≤k<∞} Mk{Λk} = 1 − M{Λ}.

The duality is proved. Case 2: Assume

    sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} > 0.5.

This case may be proved by a similar process. Case 3: Assume

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

    sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≤ 0.5.

It follows from (1.34) that M{Λ} = M{Λc} = 0.5, which proves the duality.
Step 3: Let us prove that M is an increasing set function. Suppose Λ and ∆ are two events in L with Λ ⊂ ∆. The argument breaks down into three cases. Case 1: Assume

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

Then

    sup_{∆1×∆2×···⊂∆} min_{1≤k<∞} Mk{∆k} ≥ sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} > 0.5.

It follows from (1.34) that M{Λ} ≤ M{∆}. Case 2: Assume

    sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} > 0.5.

Then

    sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≥ sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} > 0.5.

Thus

    M{Λ} = 1 − sup_{Λ1×Λ2×···⊂Λc} min_{1≤k<∞} Mk{Λk} ≤ 1 − sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} = M{∆}.

Case 3: Assume

    sup_{Λ1×Λ2×···⊂Λ} min_{1≤k<∞} Mk{Λk} ≤ 0.5

and

    sup_{∆1×∆2×···⊂∆c} min_{1≤k<∞} Mk{∆k} ≤ 0.5.

Then

    M{Λ} ≤ 0.5 ≤ 1 − M{∆c} = M{∆}.

Step 4: Finally, we prove the subadditivity of M. For simplicity, we only prove the case of two events Λ and ∆. The argument breaks down into three cases. Case 1: Assume M{Λ} < 0.5 and M{∆} < 0.5. For any given ε > 0, there are two rectangles

    Λ1 × Λ2 × · · · ⊂ Λc,  ∆1 × ∆2 × · · · ⊂ ∆c

such that

    1 − min_{1≤k<∞} Mk{Λk} ≤ M{Λ} + ε/2,

    1 − min_{1≤k<∞} Mk{∆k} ≤ M{∆} + ε/2.

Note that

    (Λ1 ∩ ∆1) × (Λ2 ∩ ∆2) × · · · ⊂ (Λ ∪ ∆)c.

It follows from the duality and subadditivity axioms that

    Mk{Λk ∩ ∆k} = 1 − Mk{(Λk ∩ ∆k)c} = 1 − Mk{Λkc ∪ ∆kc}
                ≥ 1 − (Mk{Λkc} + Mk{∆kc})
                = 1 − (1 − Mk{Λk}) − (1 − Mk{∆k})
                = Mk{Λk} + Mk{∆k} − 1

for any k. Thus

    M{Λ ∪ ∆} ≤ 1 − min_{1≤k<∞} Mk{Λk ∩ ∆k}
             ≤ (1 − min_{1≤k<∞} Mk{Λk}) + (1 − min_{1≤k<∞} Mk{∆k})
             ≤ M{Λ} + M{∆} + ε.

Letting ε → 0, we obtain

    M{Λ ∪ ∆} ≤ M{Λ} + M{∆}.

Case 2: Assume M{Λ} ≥ 0.5 and M{∆} < 0.5. When M{Λ ∪ ∆} = 0.5, the subadditivity is obvious. Now we consider the case M{Λ ∪ ∆} > 0.5, i.e., M{Λc ∩ ∆c} < 0.5. By using Λc ∪ ∆ = (Λc ∩ ∆c) ∪ ∆ and Case 1, we get

    M{Λc ∪ ∆} ≤ M{Λc ∩ ∆c} + M{∆}.

Thus

    M{Λ ∪ ∆} = 1 − M{Λc ∩ ∆c} ≤ 1 − M{Λc ∪ ∆} + M{∆}
             ≤ 1 − M{Λc} + M{∆} = M{Λ} + M{∆}.

Case 3: If both M{Λ} ≥ 0.5 and M{∆} ≥ 0.5, then the subadditivity is obvious because M{Λ} + M{∆} ≥ 1. The theorem is proved.

Definition 1.9 Assume (Γk, Lk, Mk) are uncertainty spaces for k = 1, 2, · · · Let Γ = Γ1 × Γ2 × · · ·, L = L1 × L2 × · · · and M = M1 ∧ M2 ∧ · · · Then the triplet (Γ, L, M) is called a product uncertainty space.

1.5 Independence

Definition 1.10 (Liu [84]) The events Λ1, Λ2, · · ·, Λn are said to be independent if

    M{∩_{i=1}^n Λi*} = ⋀_{i=1}^n M{Λi*}    (1.40)

where Λi* are arbitrarily chosen from {Λi, Λic, Γ}, i = 1, 2, · · ·, n, respectively, and Γ is the universal set.

Remark 1.12: Especially, two events Λ1 and Λ2 are independent if and only if

    M{Λ1* ∩ Λ2*} = M{Λ1*} ∧ M{Λ2*}    (1.41)

where Λi* are arbitrarily chosen from {Λi, Λic}, i = 1, 2, respectively. That is, the following four equations hold:

    M{Λ1 ∩ Λ2} = M{Λ1} ∧ M{Λ2},
    M{Λ1c ∩ Λ2} = M{Λ1c} ∧ M{Λ2},
    M{Λ1 ∩ Λ2c} = M{Λ1} ∧ M{Λ2c},
    M{Λ1c ∩ Λ2c} = M{Λ1c} ∧ M{Λ2c}.

Example 1.13: The impossible event ∅ is independent of any event Λ because the following four equations hold:

    M{∅ ∩ Λ} = M{∅} = M{∅} ∧ M{Λ},
    M{∅c ∩ Λ} = M{Λ} = M{∅c} ∧ M{Λ},
    M{∅ ∩ Λc} = M{∅} = M{∅} ∧ M{Λc},
    M{∅c ∩ Λc} = M{Λc} = M{∅c} ∧ M{Λc}.

Example 1.14: The sure event Γ is independent of any event Λ because the following four equations hold:

    M{Γ ∩ Λ} = M{Λ} = M{Γ} ∧ M{Λ},
    M{Γc ∩ Λ} = M{Γc} = M{Γc} ∧ M{Λ},
    M{Γ ∩ Λc} = M{Λc} = M{Γ} ∧ M{Λc},
    M{Γc ∩ Λc} = M{Γc} = M{Γc} ∧ M{Λc}.

Example 1.15: Generally speaking, an event Λ is not independent of itself because

    M{Λ ∩ Λc} ≠ M{Λ} ∧ M{Λc}

whenever M{Λ} is neither 1 nor 0.
Theorem 1.7 (Liu [84]) The events Λ1, Λ2, · · ·, Λn are independent if and only if

    M{∪_{i=1}^n Λi*} = ⋁_{i=1}^n M{Λi*}    (1.42)

where Λi* are arbitrarily chosen from {Λi, Λic, ∅}, i = 1, 2, · · ·, n, respectively, and ∅ is the impossible event.

Proof: Assume Λ1, Λ2, · · ·, Λn are independent events. It follows from the duality of uncertain measure that

    M{∪_{i=1}^n Λi*} = 1 − M{∩_{i=1}^n Λi*c} = 1 − ⋀_{i=1}^n M{Λi*c} = ⋁_{i=1}^n M{Λi*}

where Λi* are arbitrarily chosen from {Λi, Λic, ∅}, i = 1, 2, · · ·, n, respectively. The equation (1.42) is proved. Conversely, if the equation (1.42) holds, then

    M{∩_{i=1}^n Λi*} = 1 − M{∪_{i=1}^n Λi*c} = 1 − ⋁_{i=1}^n M{Λi*c} = ⋀_{i=1}^n M{Λi*}

where Λi* are arbitrarily chosen from {Λi, Λic, Γ}, i = 1, 2, · · ·, n, respectively. The equation (1.40) is true. The theorem is proved.

Figure 1.2: (Λ1 × Γ2) ∩ (Γ1 × Λ2) = Λ1 × Λ2

Theorem 1.8 (Liu [92]) Let (Γk, Lk, Mk) be uncertainty spaces and Λk ∈ Lk for k = 1, 2, · · ·, n. Then the events

    Γ1 × · · · × Γk−1 × Λk × Γk+1 × · · · × Γn, k = 1, 2, · · ·, n    (1.43)

are always independent in the product uncertainty space. That is, the events

    Λ1, Λ2, · · ·, Λn    (1.44)

are always independent if they are from different uncertainty spaces.

Proof: For simplicity, we only prove the case of n = 2. It follows from the product axiom that the product uncertain measure of the intersection is

    M{(Λ1 × Γ2) ∩ (Γ1 × Λ2)} = M{Λ1 × Λ2} = M1{Λ1} ∧ M2{Λ2}.

By using M{Λ1 × Γ2} = M1{Λ1} and M{Γ1 × Λ2} = M2{Λ2}, we obtain

    M{(Λ1 × Γ2) ∩ (Γ1 × Λ2)} = M{Λ1 × Γ2} ∧ M{Γ1 × Λ2}.

Similarly, we may prove that

    M{(Λ1 × Γ2)c ∩ (Γ1 × Λ2)} = M{(Λ1 × Γ2)c} ∧ M{Γ1 × Λ2},
    M{(Λ1 × Γ2) ∩ (Γ1 × Λ2)c} = M{Λ1 × Γ2} ∧ M{(Γ1 × Λ2)c},
    M{(Λ1 × Γ2)c ∩ (Γ1 × Λ2)c} = M{(Λ1 × Γ2)c} ∧ M{(Γ1 × Λ2)c}.

Thus Λ1 × Γ2 and Γ1 × Λ2 are independent events. Furthermore, since Λ1 and Λ2 are understood as Λ1 × Γ2 and Γ1 × Λ2 in the product uncertainty space, respectively, the two events Λ1 and Λ2 are also independent.

1.6 Polyrectangular Theorem

Definition 1.11 (Liu [92]) Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces. A set on Γ1 × Γ2 is called a polyrectangle if it has the form

    Λ = ∪_{i=1}^m (Λ1i × Λ2i)    (1.45)

where Λ1i ∈ L1 and Λ2i ∈ L2 for i = 1, 2, · · ·, m, and

    Λ11 ⊂ Λ12 ⊂ · · · ⊂ Λ1m,    (1.46)

    Λ21 ⊃ Λ22 ⊃ · · · ⊃ Λ2m.    (1.47)

A rectangle Λ1 × Λ2 is clearly a polyrectangle. In addition, a “cross”-like set is also a polyrectangle. See Figure 1.3.

Theorem 1.9 (Liu [92], Polyrectangular Theorem) Let (Γ1, L1, M1) and (Γ2, L2, M2) be two uncertainty spaces. Then the polyrectangle

    Λ = ∪_{i=1}^m (Λ1i × Λ2i)    (1.48)

on the product uncertainty space (Γ1, L1, M1) × (Γ2, L2, M2) has an uncertain measure

    M{Λ} = ⋁_{i=1}^m M{Λ1i × Λ2i}.    (1.49)
Figure 1.3: Three Polyrectangles

Proof: It is clear that the maximum rectangle in the polyrectangle Λ is one of Λ1i × Λ2i, i = 1, 2, · · ·, m. Denote the maximum rectangle by Λ1k × Λ2k. Case I: If

    M{Λ1k × Λ2k} = M1{Λ1k},

then the maximum rectangle in Λc is Λ1kc × Λ2,k+1c, and

    M{Λ1kc × Λ2,k+1c} = M1{Λ1kc} = 1 − M1{Λ1k}.

Thus

    M{Λ1k × Λ2k} + M{Λ1kc × Λ2,k+1c} = 1.

Case II: If

    M{Λ1k × Λ2k} = M2{Λ2k},

then the maximum rectangle in Λc is Λ1,k−1c × Λ2kc, and

    M{Λ1,k−1c × Λ2kc} = M2{Λ2kc} = 1 − M2{Λ2k}.

Thus

    M{Λ1k × Λ2k} + M{Λ1,k−1c × Λ2kc} = 1.

No matter which case happens, the sum of the uncertain measures of the maximum rectangles in Λ and Λc is always 1. It follows from the product axiom that (1.49) holds.

Remark 1.13: Since M{Λ1i × Λ2i} = M1{Λ1i} ∧ M2{Λ2i} for each index i, we also have

    M{Λ} = ⋁_{i=1}^m M1{Λ1i} ∧ M2{Λ2i}.    (1.50)
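A minimal computational sketch of (1.50) (added here as an illustration, assuming the events Λ1i and Λ2i are subintervals of [0, 1] measured by the Lebesgue measure):

    def length(interval):
        a, b = interval
        return max(b - a, 0.0)

    def polyrectangle_measure(rects):
        # rects = [((a1,b1), (c1,d1)), ...] obeying the nesting (1.46)-(1.47)
        return max(min(length(L1), length(L2)) for L1, L2 in rects)

    # a "cross": a thin tall rectangle united with a wide short one
    cross = [((0.4, 0.6), (0.0, 1.0)),    # L11 x L21
             ((0.0, 1.0), (0.4, 0.6))]    # L12 x L22, with L11 c L12, L21 ) L22
    print(polyrectangle_measure(cross))   # 0.2 v 0.2 = 0.2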

Remark 1.14: Note that the polyrectangular theorem is also applicable to the polyrectangles that are unions of infinitely many rectangles. In this case, the polyrectangles may become the shapes in Figure 1.4.
Figure 1.4: Three Deformed Polyrectangles

1.7 Conditional Uncertain Measure

We consider the uncertain measure of an event Λ after it has been learned that some other event A has occurred. This new uncertain measure of Λ is called the conditional uncertain measure of Λ given A.

In order to define a conditional uncertain measure M{Λ|A}, at first we have to enlarge M{Λ ∩ A} because M{Λ ∩ A} < 1 for all events whenever M{A} < 1. It seems that we have no alternative but to divide M{Λ ∩ A} by M{A}. Unfortunately, M{Λ ∩ A}/M{A} is not always an uncertain measure. However, the value M{Λ|A} should not be greater than M{Λ ∩ A}/M{A} (otherwise the normality will be lost), i.e.,

    M{Λ|A} ≤ M{Λ ∩ A}/M{A}.    (1.51)

On the other hand, in order to preserve the duality, we should have

    M{Λ|A} = 1 − M{Λc|A} ≥ 1 − M{Λc ∩ A}/M{A}.    (1.52)

Furthermore, since (Λ ∩ A) ∪ (Λc ∩ A) = A, we have M{A} ≤ M{Λ ∩ A} + M{Λc ∩ A} by using the subadditivity axiom. Thus

    0 ≤ 1 − M{Λc ∩ A}/M{A} ≤ M{Λ ∩ A}/M{A} ≤ 1.    (1.53)

Hence any number between 1 − M{Λc ∩ A}/M{A} and M{Λ ∩ A}/M{A} is a reasonable value that the conditional uncertain measure may take. Based on the maximum uncertainty principle (Liu [77]), we have the following conditional uncertain measure.

Definition 1.12 (Liu [77]) Let (Γ, L, M) be an uncertainty space, and Λ, A ∈ L. Then the conditional uncertain measure of Λ given A is defined by

    M{Λ|A} = M{Λ ∩ A}/M{A},      if M{Λ ∩ A}/M{A} < 0.5
             1 − M{Λc ∩ A}/M{A}, if M{Λc ∩ A}/M{A} < 0.5
             0.5,                otherwise    (1.54)

provided that M{A} > 0.
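A minimal Python sketch of Definition 1.12 (an added illustration; the three input numbers below are hypothetical, chosen so that M{A} ≤ M{Λ ∩ A} + M{Λc ∩ A} as required by subadditivity):

    def conditional(m_event_and_a, m_complement_and_a, m_a):
        # M{Lambda | A} computed from (1.54), assuming M{A} > 0
        if m_event_and_a / m_a < 0.5:
            return m_event_and_a / m_a
        if m_complement_and_a / m_a < 0.5:
            return 1.0 - m_complement_and_a / m_a
        return 0.5

    # M{Lambda & A} = 0.2, M{Lambda^c & A} = 0.5, M{A} = 0.6:
    print(conditional(0.2, 0.5, 0.6))   # 0.2/0.6 < 0.5, so the answer is 1/3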

Remark 1.15: It follows immediately from the definition of conditional uncertain measure that

    1 − M{Λc ∩ A}/M{A} ≤ M{Λ|A} ≤ M{Λ ∩ A}/M{A}.    (1.55)

Remark 1.16: The conditional uncertain measure M{Λ|A} yields the posterior uncertain measure of Λ after the occurrence of event A.

Theorem 1.10 (Liu [77]) Let (Γ, L, M) be an uncertainty space, and let A be an event with M{A} > 0. Then M{·|A} defined by (1.54) is an uncertain measure, and (Γ, L, M{·|A}) is an uncertainty space.

Proof: It is sufficient to prove that M{·|A} satisfies the normality, duality and subadditivity axioms. At first, it satisfies the normality axiom, i.e.,

    M{Γ|A} = 1 − M{Γc ∩ A}/M{A} = 1 − M{∅}/M{A} = 1.

For any event Λ, if

    M{Λ ∩ A}/M{A} ≥ 0.5  and  M{Λc ∩ A}/M{A} ≥ 0.5,

then we have M{Λ|A} + M{Λc|A} = 0.5 + 0.5 = 1 immediately. Otherwise, without loss of generality, suppose

    M{Λ ∩ A}/M{A} < 0.5 < M{Λc ∩ A}/M{A},

then we have

    M{Λ|A} + M{Λc|A} = M{Λ ∩ A}/M{A} + (1 − M{Λ ∩ A}/M{A}) = 1.
That is, M{·|A} satisfies the duality axiom. Finally, for any countable sequence {Λi} of events, if M{Λi|A} < 0.5 for all i, it follows from (1.55) and the subadditivity axiom that

    M{∪_{i=1}^∞ Λi | A} ≤ M{(∪_{i=1}^∞ Λi) ∩ A}/M{A} ≤ ∑_{i=1}^∞ M{Λi ∩ A}/M{A} = ∑_{i=1}^∞ M{Λi|A}.

Suppose there is one term greater than 0.5, say

    M{Λ1|A} ≥ 0.5,  M{Λi|A} < 0.5, i = 2, 3, · · ·

If M{∪i Λi|A} = 0.5, then we immediately have

    M{∪_{i=1}^∞ Λi | A} ≤ ∑_{i=1}^∞ M{Λi|A}.

If M{∪i Λi|A} > 0.5, we may prove the above inequality by the following facts:

    Λ1c ∩ A ⊂ ∪_{i=2}^∞ (Λi ∩ A) ∪ (∩_{i=1}^∞ Λic ∩ A),

    M{Λ1c ∩ A} ≤ ∑_{i=2}^∞ M{Λi ∩ A} + M{∩_{i=1}^∞ Λic ∩ A},

    M{∪_{i=1}^∞ Λi | A} = 1 − M{∩_{i=1}^∞ Λic ∩ A}/M{A},

    ∑_{i=1}^∞ M{Λi|A} ≥ 1 − M{Λ1c ∩ A}/M{A} + ∑_{i=2}^∞ M{Λi ∩ A}/M{A}.

If there are at least two terms greater than 0.5, then the subadditivity is clearly true. Thus M{·|A} satisfies the subadditivity axiom. Hence M{·|A} is an uncertain measure. Furthermore, (Γ, L, M{·|A}) is an uncertainty space.

1.8 Bibliographic Notes

When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree is subjective probability or a fuzzy concept. However, Liu [86] declared that this is usually inappropriate because both probability theory and fuzzy set theory may lead to counterintuitive results in this case.

In order to rationally deal with belief degrees, uncertainty theory was founded by Liu [77] in 2007 and perfected by Liu [80] in 2009. The core of uncertainty theory is the uncertain measure defined by the normality axiom, duality axiom, subadditivity axiom, and product axiom. In practice, uncertain measure is interpreted as the personal belief degree of an uncertain event that may happen.

Uncertain measure was also actively investigated by Gao [41], Liu [84], Zhang [203], Peng-Iwamura [123], and Liu [92], among others. Since then, the tool of uncertain measure has been well developed and has become a rigorous cornerstone of uncertainty theory.
Chapter 2

Uncertain Variable

Uncertain variable is a fundamental concept in uncertainty theory. It is used to represent quantities with uncertainty. The emphasis in this chapter is mainly on uncertain variable, uncertainty distribution, independence, operational law, expected value, variance, moments, distance, entropy, conditional uncertainty distribution, uncertain sequence, uncertain vector, and uncertain matrix.

2.1 Uncertain Variable

Roughly speaking, an uncertain variable is a measurable function on an uncertainty space. A formal definition is given as follows.

Definition 2.1 (Liu [77]) An uncertain variable is a function ξ from an uncertainty space (Γ, L, M) to the set of real numbers such that {ξ ∈ B} is an event for any Borel set B of real numbers.

Figure 2.1: An Uncertain Variable



Remark 2.1: Note that the event {ξ ∈ B} is a subset of the universal set
Γ, i.e.,
{ξ ∈ B} = {γ ∈ Γ | ξ(γ) ∈ B}. (2.1)

Example 2.1: Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with power set and M{γ1} = 0.6, M{γ2} = 0.4. Then

    ξ(γ) = 0, if γ = γ1
           1, if γ = γ2    (2.2)

is an uncertain variable. Furthermore, we have

    M{ξ = 0} = M{γ | ξ(γ) = 0} = M{γ1} = 0.6,    (2.3)

    M{ξ = 1} = M{γ | ξ(γ) = 1} = M{γ2} = 0.4.    (2.4)

Example 2.2: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. Then

    ξ(γ) = 3γ, ∀γ ∈ Γ    (2.5)

is an uncertain variable. Furthermore, we have

    M{ξ = 1} = M{γ | ξ(γ) = 1} = M{1/3} = 0,    (2.6)
    M{ξ ∈ [0, 2]} = M{γ | ξ(γ) ∈ [0, 2]} = M{[0, 2/3]} = 2/3,    (2.7)
    M{ξ > 2} = M{γ | ξ(γ) > 2} = M{(2/3, 1]} = 1/3.    (2.8)

Example 2.3: A real number c may be regarded as a special uncertain variable. In fact, it is the constant function

    ξ(γ) ≡ c    (2.9)

on the uncertainty space (Γ, L, M). Furthermore, for any Borel set B of real numbers, we have

    M{ξ ∈ B} = M{γ | ξ(γ) ∈ B} = M{Γ} = 1, if c ∈ B,    (2.10)
    M{ξ ∈ B} = M{γ | ξ(γ) ∈ B} = M{∅} = 0, if c ∉ B.    (2.11)

Example 2.4: Let ξ be an uncertain variable and let b be a real number. Then

    {ξ = b}c = {γ | ξ(γ) = b}c = {γ | ξ(γ) ≠ b} = {ξ ≠ b}.

Thus {ξ = b} and {ξ ≠ b} are opposite events. Furthermore, by the duality axiom, we obtain

    M{ξ = b} + M{ξ ≠ b} = 1.    (2.12)

Exercise 2.1: Let ξ be an uncertain variable and let B be a Borel set of real numbers. Show that {ξ ∈ B} and {ξ ∈ Bc} are opposite events, and

    M{ξ ∈ B} + M{ξ ∈ Bc} = 1.    (2.13)

Exercise 2.2: Let ξ and η be two uncertain variables. Show that {ξ ≥ η} and {ξ < η} are opposite events, and

    M{ξ ≥ η} + M{ξ < η} = 1.    (2.14)

Definition 2.2 An uncertain variable ξ on the uncertainty space (Γ, L, M) is said to be (a) nonnegative if M{ξ < 0} = 0; and (b) positive if M{ξ ≤ 0} = 0.

Definition 2.3 Let ξ and η be uncertain variables defined on the uncertainty space (Γ, L, M). We say ξ = η if ξ(γ) = η(γ) for almost all γ ∈ Γ.

Definition 2.4 Let ξ1, ξ2, · · ·, ξn be uncertain variables, and let f be a real-valued measurable function. Then ξ = f(ξ1, ξ2, · · ·, ξn) is an uncertain variable defined by

    ξ(γ) = f(ξ1(γ), ξ2(γ), · · ·, ξn(γ)), ∀γ ∈ Γ.    (2.15)

Example 2.5: Let ξ1 and ξ2 be two uncertain variables. Then the sum
ξ = ξ1 + ξ2 is an uncertain variable defined by

ξ(γ) = ξ1 (γ) + ξ2 (γ), ∀γ ∈ Γ.

The product ξ = ξ1 ξ2 is also an uncertain variable defined by

ξ(γ) = ξ1 (γ) · ξ2 (γ), ∀γ ∈ Γ.

The reader may wonder whether ξ(γ) defined by (2.15) is an uncertain variable. The following theorem answers this question.

Theorem 2.1 Let ξ1, ξ2, · · ·, ξn be uncertain variables, and let f be a real-valued measurable function. Then f(ξ1, ξ2, · · ·, ξn) is an uncertain variable.

Proof: Since ξ1, ξ2, · · ·, ξn are uncertain variables, they are measurable functions from an uncertainty space (Γ, L, M) to the set of real numbers. Thus f(ξ1, ξ2, · · ·, ξn) is also a measurable function from the uncertainty space (Γ, L, M) to the set of real numbers. Hence f(ξ1, ξ2, · · ·, ξn) is an uncertain variable.

2.2 Uncertainty Distribution

This section introduces the concept of uncertainty distribution in order to describe uncertain variables. Note that an uncertainty distribution is a carrier of incomplete information about an uncertain variable. However, in many cases, it is sufficient to know the uncertainty distribution rather than the uncertain variable itself.

Definition 2.5 (Liu [77]) The uncertainty distribution Φ of an uncertain variable ξ is defined by

    Φ(x) = M{ξ ≤ x}    (2.16)

for any real number x.

Figure 2.2: An Uncertainty Distribution

Exercise 2.3: A real number c is a special uncertain variable ξ(γ) ≡ c. Show that such an uncertain variable has an uncertainty distribution

    Φ(x) = 0, if x < c
           1, if x ≥ c.

Exercise 2.4: Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with power set and M{γ1} = 0.7, M{γ2} = 0.3. Show that the uncertain variable

    ξ(γ) = 0, if γ = γ1
           1, if γ = γ2

has an uncertainty distribution

    Φ(x) = 0,   if x < 0
           0.7, if 0 ≤ x < 1
           1,   if x ≥ 1.


Exercise 2.5: Take an uncertainty space (Γ, L, M) to be {γ1, γ2, γ3} with power set and M{γ1} = 0.6, M{γ2} = 0.3, M{γ3} = 0.2. Show that the uncertain variable

    ξ(γ) = 1, if γ = γ1
           2, if γ = γ2
           3, if γ = γ3

has an uncertainty distribution

    Φ(x) = 0,   if x < 1
           0.6, if 1 ≤ x < 2
           0.8, if 2 ≤ x < 3
           1,   if x ≥ 3.
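The distribution of Exercise 2.5 can be computed directly from the measure of Exercise 1.3. The following Python sketch (an added illustration, not from the original text) evaluates Φ(x) = M{ξ ≤ x} and reproduces the four branches above.

    def M(event):
        # the measure of Exercise 1.3 with c = (0.6, 0.3, 0.2)
        c = {1: 0.6, 2: 0.3, 3: 0.2}
        event = frozenset(event)
        if len(event) == 0:
            return 0.0
        if len(event) == 1:
            (g,) = event
            return c[g]
        if len(event) == 2:
            (missing,) = frozenset({1, 2, 3}) - event
            return 1.0 - c[missing]
        return 1.0

    xi = {1: 1.0, 2: 2.0, 3: 3.0}     # xi(gamma_i) = i

    def Phi(x):
        return M({g for g, v in xi.items() if v <= x})

    print([Phi(x) for x in (0.5, 1.0, 2.5, 3.0)])   # [0.0, 0.6, 0.8, 1.0]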

Exercise 2.6: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. (i) Show that the uncertain variable

    ξ(γ) = γ, ∀γ ∈ [0, 1]    (2.17)

has an uncertainty distribution

    Φ(x) = 0, if x ≤ 0
           x, if 0 < x ≤ 1
           1, if x > 1.    (2.18)

(ii) What is the uncertainty distribution of ξ(γ) = 1 − γ? (iii) What do those two uncertain variables make you think about? (iv) Design a third uncertain variable whose uncertainty distribution is also (2.18).

Exercise 2.7: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. (i) Show that the uncertain variable ξ(γ) = γ² has an uncertainty distribution

    Φ(x) = 0,  if x < 0
           √x, if 0 ≤ x ≤ 1
           1,  if x > 1.    (2.19)

(ii) What is the uncertainty distribution of ξ(γ) = √γ? (iii) What is the uncertainty distribution of ξ(γ) = 1/γ?

Definition 2.6 Uncertain variables are said to be identically distributed if they have the same uncertainty distribution.

It is clear that uncertain variables ξ and η are identically distributed if ξ = η. However, identical distribution does not imply ξ = η. For example, let (Γ, L, M) be {γ1, γ2} with power set and M{γ1} = M{γ2} = 0.5. Define

    ξ(γ) = 1,  if γ = γ1        η(γ) = −1, if γ = γ1
           −1, if γ = γ2,              1,  if γ = γ2.

Then ξ and η have the same uncertainty distribution,

    Φ(x) = 0,   if x < −1
           0.5, if −1 ≤ x < 1
           1,   if x ≥ 1.

Thus the two uncertain variables ξ and η are identically distributed but ξ ≠ η.

What is a “completely unknown number”?

A “completely unknown number” may be regarded as an uncertain variable whose uncertainty distribution is

    Φ(x) = 0.5    (2.20)

for any real number x.

How old is John?

Someone thinks John is neither younger than 24 nor older than 28, and presents an uncertainty distribution of John’s age as follows,

    Φ(x) = 0,          if x ≤ 24
           (x − 24)/4, if 24 ≤ x ≤ 28
           1,          if x ≥ 28.    (2.21)

How tall is James?

Someone thinks James’ height is between 180 and 185 centimeters, and presents the following uncertainty distribution,

    Φ(x) = 0,           if x ≤ 180
           (x − 180)/5, if 180 ≤ x ≤ 185
           1,           if x ≥ 185.    (2.22)

Sufficient and Necessary Condition

Theorem 2.2 (Peng-Iwamura Theorem [122]) A function Φ(x): ℜ → [0, 1] is an uncertainty distribution if and only if it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.

Proof: It is obvious that an uncertainty distribution Φ is a monotone increasing function. In addition, both Φ(x) ≢ 0 and Φ(x) ≢ 1 follow from the asymptotic theorem immediately. Conversely, suppose that Φ is a monotone increasing function but Φ(x) ≢ 0 and Φ(x) ≢ 1. We will prove that there is an uncertain variable whose uncertainty distribution is just Φ. Let C be a collection of all intervals of the form (−∞, a], (b, ∞), ∅ and ℜ. We define a set function on ℜ as follows,

    M{(−∞, a]} = Φ(a),
    M{(b, +∞)} = 1 − Φ(b),
    M{∅} = 0, M{ℜ} = 1.

For an arbitrary Borel set B of real numbers, there exists a sequence {Ai} in C such that

    B ⊂ ∪_{i=1}^∞ Ai.

Note that such a sequence is not unique. We define a set function M{B} by

    M{B} = inf_{B⊂∪Ai} ∑_{i=1}^∞ M{Ai},      if inf_{B⊂∪Ai} ∑_{i=1}^∞ M{Ai} < 0.5
           1 − inf_{Bc⊂∪Ai} ∑_{i=1}^∞ M{Ai}, if inf_{Bc⊂∪Ai} ∑_{i=1}^∞ M{Ai} < 0.5
           0.5,                               otherwise.

Then the set function M is indeed an uncertain measure on ℜ, and the uncertain variable defined by the identity function ξ(γ) = γ has the uncertainty distribution Φ.

Example 2.6: It follows from the sufficient and necessary condition that the function

    Φ(x) ≡ 0.5    (2.23)

is an uncertainty distribution. Take an uncertainty space (Γ, L, M) to be ℜ with power set and

    M{Λ} = 0,   if Λ = ∅
           1,   if Λ = ℜ
           0.5, otherwise.    (2.24)

Then the uncertain variable ξ(γ) = γ has the uncertainty distribution (2.23).

Exercise 2.8: (i) Design an uncertain variable whose uncertainty distribution is

    Φ(x) = 0.4    (2.25)

for any real number x. (ii) Design an uncertain variable whose uncertainty distribution is

    Φ(x) = 0.6    (2.26)

for any real number x.

Exercise 2.9: Design an uncertain variable whose uncertainty distribution is

    Φ(x) = (1 + exp(−x))⁻¹    (2.27)

for any real number x.

Some Uncertainty Distributions

Definition 2.7 An uncertain variable ξ is called linear if it has a linear uncertainty distribution

    Φ(x) = 0,               if x ≤ a
           (x − a)/(b − a), if a ≤ x ≤ b
           1,               if x ≥ b    (2.28)

denoted by L(a, b) where a and b are real numbers with a < b.

Figure 2.3: Linear Uncertainty Distribution

Example 2.7: John’s age (2.21) is a linear uncertain variable L(24, 28), and
James’ height (2.22) is another linear uncertain variable L(180, 185).

Definition 2.8 An uncertain variable ξ is called zigzag if it has a zigzag uncertainty distribution

    Φ(x) = 0,                       if x ≤ a
           (x − a)/(2(b − a)),      if a ≤ x ≤ b
           (x + c − 2b)/(2(c − b)), if b ≤ x ≤ c
           1,                       if x ≥ c    (2.29)

denoted by Z(a, b, c) where a, b, c are real numbers with a < b < c.

Figure 2.4: Zigzag Uncertainty Distribution

Definition 2.9 An uncertain variable ξ is called normal if it has a normal uncertainty distribution

    Φ(x) = (1 + exp(π(e − x)/(√3 σ)))⁻¹,  x ∈ ℜ    (2.30)

denoted by N(e, σ) where e and σ are real numbers with σ > 0.

Definition 2.10 An uncertain variable ξ is called lognormal if ln ξ is a normal uncertain variable N(e, σ). In other words, a lognormal uncertain variable has an uncertainty distribution

    Φ(x) = (1 + exp(π(e − ln x)/(√3 σ)))⁻¹,  x ≥ 0    (2.31)

denoted by LOGN(e, σ), where e and σ are real numbers with σ > 0.
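For reference, the four standard uncertainty distributions (2.28)-(2.31) are easy to evaluate numerically. The following Python sketch (an added illustration, not part of the original text) implements them directly:

    import math

    def linear(x, a, b):
        # linear uncertainty distribution (2.28)
        if x <= a: return 0.0
        if x >= b: return 1.0
        return (x - a) / (b - a)

    def zigzag(x, a, b, c):
        # zigzag uncertainty distribution (2.29)
        if x <= a: return 0.0
        if x <= b: return (x - a) / (2 * (b - a))
        if x <= c: return (x + c - 2 * b) / (2 * (c - b))
        return 1.0

    def normal(x, e, sigma):
        # normal uncertainty distribution (2.30)
        return 1.0 / (1.0 + math.exp(math.pi * (e - x) / (math.sqrt(3) * sigma)))

    def lognormal(x, e, sigma):
        # lognormal uncertainty distribution (2.31)
        if x <= 0: return 0.0
        return normal(math.log(x), e, sigma)

    print(linear(26, 24, 28))   # John's age (2.21): 0.5
    print(normal(0, 0, 1))      # 0.5 at x = e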
Figure 2.5: Normal Uncertainty Distribution

Figure 2.6: Lognormal Uncertainty Distribution

Definition 2.11 An uncertain variable ξ is called empirical if it has an empirical uncertainty distribution

    Φ(x) = 0,                                        if x < x1
           αi + (αi+1 − αi)(x − xi)/(xi+1 − xi),     if xi ≤ x ≤ xi+1, 1 ≤ i < n
           1,                                        if x > xn    (2.32)

where x1 < x2 < · · · < xn and 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1.

Figure 2.7: Empirical Uncertainty Distribution
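A minimal Python sketch of (2.32) (an added illustration; the expert data below are hypothetical):

    def empirical(x, xs, alphas):
        # empirical uncertainty distribution built from (x_i, alpha_i) pairs
        if x < xs[0]:
            return 0.0
        if x > xs[-1]:
            return 1.0
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return alphas[i] + (alphas[i + 1] - alphas[i]) * t
        return alphas[-1]   # safe fallback for a single data point

    print(empirical(1.5, [1.0, 2.0, 3.0], [0.1, 0.6, 0.9]))   # 0.35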

Measure Inversion Theorem

Theorem 2.3 (Liu [84], Measure Inversion Theorem) Let ξ be an uncertain variable with uncertainty distribution Φ. Then for any real number x, we have

    M{ξ ≤ x} = Φ(x),  M{ξ > x} = 1 − Φ(x).    (2.33)

Proof: The equation M{ξ ≤ x} = Φ(x) follows from the definition of uncertainty distribution immediately. By using the duality of uncertain measure, we get

    M{ξ > x} = 1 − M{ξ ≤ x} = 1 − Φ(x).

The theorem is verified.

Remark 2.2: When the uncertainty distribution Φ is a continuous function, we also have

    M{ξ < x} = Φ(x),  M{ξ ≥ x} = 1 − Φ(x).    (2.34)

Remark 2.3: Perhaps some readers would like to get an exact scalar value of the uncertain measure M{a ≤ ξ ≤ b}. Generally speaking, it is an impossible job (except a = −∞ or b = +∞) if only an uncertainty distribution is available. I would like to ask if there is a need to know it. In fact, it is not necessary for practical purposes. Would you believe? I hope so!

Regular Uncertainty Distribution

Definition 2.12 (Liu [84]) An uncertainty distribution Φ(x) is said to be regular if it is a continuous and strictly increasing function with respect to x at which 0 < Φ(x) < 1, and

    lim_{x→−∞} Φ(x) = 0,  lim_{x→+∞} Φ(x) = 1.    (2.35)

For example, the linear uncertainty distribution, zigzag uncertainty distribution, normal uncertainty distribution, and lognormal uncertainty distribution are all regular.

Inverse Uncertainty Distribution

It is clear that a regular uncertainty distribution Φ(x) has an inverse function on the range of x with 0 < Φ(x) < 1, and the inverse function Φ⁻¹(α) exists on the open interval (0, 1).

Definition 2.13 (Liu [84]) Let ξ be an uncertain variable with regular uncertainty distribution Φ(x). Then the inverse function Φ⁻¹(α) is called the inverse uncertainty distribution of ξ.

Note that the inverse uncertainty distribution Φ⁻¹(α) is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via

    Φ⁻¹(0) = lim_{α↓0} Φ⁻¹(α),  Φ⁻¹(1) = lim_{α↑1} Φ⁻¹(α).    (2.36)

Example 2.8: The inverse uncertainty distribution of the linear uncertain variable L(a, b) is

    Φ⁻¹(α) = (1 − α)a + αb.    (2.37)

Figure 2.8: Inverse Linear Uncertainty Distribution

Example 2.9: The inverse uncertainty distribution of the zigzag uncertain variable Z(a, b, c) is

    Φ⁻¹(α) = (1 − 2α)a + 2αb,       if α < 0.5
             (2 − 2α)b + (2α − 1)c, if α ≥ 0.5.    (2.38)

Example 2.10: The inverse uncertainty distribution of the normal uncertain variable N(e, σ) is

    Φ⁻¹(α) = e + (σ√3/π) ln(α/(1 − α)).    (2.39)

Figure 2.9: Inverse Zigzag Uncertainty Distribution

Figure 2.10: Inverse Normal Uncertainty Distribution

Example 2.11: The inverse uncertainty distribution of the lognormal uncertain variable LOGN(e, σ) is

    Φ⁻¹(α) = exp(e + (σ√3/π) ln(α/(1 − α))).    (2.40)

Figure 2.11: Inverse Lognormal Uncertainty Distribution
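The inverse uncertainty distributions (2.37)-(2.40) are equally direct to implement. The following Python sketch (an added illustration, not part of the original text) also verifies Φ(Φ⁻¹(α)) = α for the normal case, in line with Theorem 2.4 below.

    import math

    def inv_linear(alpha, a, b):
        return (1 - alpha) * a + alpha * b                      # (2.37)

    def inv_zigzag(alpha, a, b, c):
        if alpha < 0.5:
            return (1 - 2 * alpha) * a + 2 * alpha * b          # (2.38), first branch
        return (2 - 2 * alpha) * b + (2 * alpha - 1) * c        # (2.38), second branch

    def inv_normal(alpha, e, sigma):
        return e + sigma * math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))  # (2.39)

    def inv_lognormal(alpha, e, sigma):
        return math.exp(inv_normal(alpha, e, sigma))            # (2.40)

    # consistency check against (2.30): Phi(Phi^{-1}(alpha)) = alpha
    Phi = lambda x, e, s: 1 / (1 + math.exp(math.pi * (e - x) / (math.sqrt(3) * s)))
    print(Phi(inv_normal(0.3, 1.0, 2.0), 1.0, 2.0))             # 0.3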

Theorem 2.4 A function Φ⁻¹ is an inverse uncertainty distribution of an uncertain variable ξ if and only if

    M{ξ ≤ Φ⁻¹(α)} = α    (2.41)

for all α ∈ (0, 1).

Proof: Suppose Φ⁻¹ is the inverse uncertainty distribution of ξ. Then for any α, we have

    M{ξ ≤ Φ⁻¹(α)} = Φ(Φ⁻¹(α)) = α.

Conversely, suppose Φ⁻¹ meets (2.41). Write x = Φ⁻¹(α). Then α = Φ(x) and

    M{ξ ≤ x} = α = Φ(x).

That is, Φ is the uncertainty distribution of ξ and Φ⁻¹ is its inverse uncertainty distribution. The theorem is verified.

Theorem 2.5 (Liu [89], Sufficient and Necessary Condition) A function Φ⁻¹(α): (0, 1) → ℜ is an inverse uncertainty distribution if and only if it is a continuous and strictly increasing function with respect to α.

Proof: Suppose Φ⁻¹(α) is an inverse uncertainty distribution. It follows from the definition of inverse uncertainty distribution that Φ⁻¹(α) is a continuous and strictly increasing function with respect to α ∈ (0, 1).

Conversely, suppose Φ⁻¹(α) is a continuous and strictly increasing function on (0, 1). Define

    Φ(x) = 0, if x ≤ lim_{α↓0} Φ⁻¹(α)
           α, if x = Φ⁻¹(α)
           1, if x ≥ lim_{α↑1} Φ⁻¹(α).

It follows from the Peng-Iwamura theorem that Φ(x) is an uncertainty distribution of some uncertain variable ξ. Then for each α ∈ (0, 1), we have

    M{ξ ≤ Φ⁻¹(α)} = Φ(Φ⁻¹(α)) = α.

Thus Φ⁻¹(α) is just the inverse uncertainty distribution of the uncertain variable ξ. The theorem is verified.

2.3 Independence

Note that an uncertain variable is a measurable function from an uncertainty space to the set of real numbers. The independence of two functions means that knowing the value of one does not change our estimation of the value of the other. What uncertain variables meet this condition? A typical case is that they are defined on different uncertainty spaces. For example, let ξ1(γ1) and ξ2(γ2) be uncertain variables on the uncertainty spaces (Γ1, L1, M1) and (Γ2, L2, M2), respectively. It is clear that they are also uncertain variables on the product uncertainty space (Γ1, L1, M1) × (Γ2, L2, M2). Then for any Borel sets B1 and B2 of real numbers, we have

    M{(ξ1 ∈ B1) ∩ (ξ2 ∈ B2)}
      = M{(γ1, γ2) | ξ1(γ1) ∈ B1, ξ2(γ2) ∈ B2}
      = M{(γ1 | ξ1(γ1) ∈ B1) × (γ2 | ξ2(γ2) ∈ B2)}
      = M1{γ1 | ξ1(γ1) ∈ B1} ∧ M2{γ2 | ξ2(γ2) ∈ B2}
      = M{ξ1 ∈ B1} ∧ M{ξ2 ∈ B2}.

That is,

    M{(ξ1 ∈ B1) ∩ (ξ2 ∈ B2)} = M{ξ1 ∈ B1} ∧ M{ξ2 ∈ B2}.    (2.42)

Thus we say two uncertain variables are independent if the equation (2.42) holds. Generally, we may define independence in the following form.
Definition 2.14 (Liu [80]) The uncertain variables ξ1 , ξ2 , · · · , ξn are said to
be independent if
M{ ⋂_{i=1}^{n} (ξᵢ ∈ Bᵢ) } = ⋀_{i=1}^{n} M{ξᵢ ∈ Bᵢ}   (2.43)

for any Borel sets B1 , B2 , · · · , Bn of real numbers.

Exercise 2.10: Show that a constant (a special uncertain variable) is always


independent of any uncertain variable.

Exercise 2.11: John gives Tom 2 dollars. Thus John gets “−2 dollars”
and Tom “+2 dollars”. Are John’s “−2 dollars” and Tom’s “+2 dollars”
independent? Why?

Exercise 2.12: Let ξ be an uncertain variable. Are ξ and 1−ξ independent?


Please justify your answer.

Exercise 2.13: Construct n independent uncertain variables. (Hint: Define


them on the product uncertainty space (Γ1 , L1 , M1 ) × (Γ2 , L2 , M2 ) × · · · ×
(Γn , Ln , Mn ).)
Theorem 2.6 (Liu [80]) The uncertain variables ξ1 , ξ2 , · · · , ξn are indepen-
dent if and only if
M{ ⋃_{i=1}^{n} (ξᵢ ∈ Bᵢ) } = ⋁_{i=1}^{n} M{ξᵢ ∈ Bᵢ}   (2.44)

for any Borel sets B1 , B2 , · · · , Bn of real numbers.



Proof: It follows from the duality of uncertain measure that ξ₁, ξ₂, · · · , ξₙ are independent if and only if

M{ ⋃_{i=1}^{n} (ξᵢ ∈ Bᵢ) } = 1 − M{ ⋂_{i=1}^{n} (ξᵢ ∈ Bᵢᶜ) } = 1 − ⋀_{i=1}^{n} M{ξᵢ ∈ Bᵢᶜ} = ⋁_{i=1}^{n} M{ξᵢ ∈ Bᵢ}.

Thus the proof is complete.


Theorem 2.7 Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables, and let
f1 , f2 , · · · , fn be measurable functions. Then f1 (ξ1 ), f2 (ξ2 ), · · · , fn (ξn ) are
independent uncertain variables.
Proof: For any Borel sets B₁, B₂, · · · , Bₙ of real numbers, it follows from the definition of independence that

M{ ⋂_{i=1}^{n} (fᵢ(ξᵢ) ∈ Bᵢ) } = M{ ⋂_{i=1}^{n} (ξᵢ ∈ fᵢ⁻¹(Bᵢ)) } = ⋀_{i=1}^{n} M{ξᵢ ∈ fᵢ⁻¹(Bᵢ)} = ⋀_{i=1}^{n} M{fᵢ(ξᵢ) ∈ Bᵢ}.

Thus f1 (ξ1 ), f2 (ξ2 ), · · · , fn (ξn ) are independent uncertain variables.

2.4 Operational Law: Inverse Distribution


This section provides some operational laws for calculating the inverse uncertainty distributions of strictly increasing functions, strictly decreasing functions, and strictly monotone functions of uncertain variables.

Strictly Increasing Function of Uncertain Variables


A real-valued function f (x1 , x2 , · · · , xn ) is said to be strictly increasing if
f (x1 , x2 , · · · , xn ) ≤ f (y1 , y2 , · · · , yn ) (2.45)
whenever xi ≤ yi for i = 1, 2, · · · , n, and
f (x1 , x2 , · · · , xn ) < f (y1 , y2 , · · · , yn ) (2.46)
whenever xi < yi for i = 1, 2, · · · , n. The following are strictly increasing
functions,
f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn ,
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn ,
f (x1 , x2 , · · · , xn ) = x1 + x2 + · · · + xn ,
f (x1 , x2 , · · · , xn ) = x1 x2 · · · xn , x1 , x2 , · · · , xn ≥ 0.

Theorem 2.8 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f
is a strictly increasing function, then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.47)

has an inverse uncertainty distribution

Ψ⁻¹(α) = f(Φ₁⁻¹(α), Φ₂⁻¹(α), · · · , Φₙ⁻¹(α)).   (2.48)

Proof: For simplicity, we only prove the case n = 2. At first, we always have

{ξ ≤ Ψ⁻¹(α)} ≡ {f(ξ₁, ξ₂) ≤ f(Φ₁⁻¹(α), Φ₂⁻¹(α))}.

On the one hand, since f is a strictly increasing function, we obtain

{ξ ≤ Ψ⁻¹(α)} ⊃ {ξ₁ ≤ Φ₁⁻¹(α)} ∩ {ξ₂ ≤ Φ₂⁻¹(α)}.

By using the independence of ξ₁ and ξ₂, we get

M{ξ ≤ Ψ⁻¹(α)} ≥ M{(ξ₁ ≤ Φ₁⁻¹(α)) ∩ (ξ₂ ≤ Φ₂⁻¹(α))} = M{ξ₁ ≤ Φ₁⁻¹(α)} ∧ M{ξ₂ ≤ Φ₂⁻¹(α)} = α ∧ α = α.

On the other hand, since f is a strictly increasing function, we obtain

{ξ ≤ Ψ⁻¹(α)} ⊂ {ξ₁ ≤ Φ₁⁻¹(α)} ∪ {ξ₂ ≤ Φ₂⁻¹(α)}.

By using the independence of ξ₁ and ξ₂, we get

M{ξ ≤ Ψ⁻¹(α)} ≤ M{(ξ₁ ≤ Φ₁⁻¹(α)) ∪ (ξ₂ ≤ Φ₂⁻¹(α))} = M{ξ₁ ≤ Φ₁⁻¹(α)} ∨ M{ξ₂ ≤ Φ₂⁻¹(α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ⁻¹(α)} = α. That is, Ψ⁻¹ is just the inverse uncertainty distribution of ξ. The theorem is proved.
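The following is a minimal Python sketch of Theorem 2.8 (not from the book; the helper names and the parameter values L(1, 3) and L(2, 5) are illustrative): it builds Ψ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φₙ⁻¹(α)) directly from the inverse distributions.

def linear_inv(a, b):
    """Inverse uncertainty distribution of the linear variable L(a, b)."""
    return lambda alpha: (1 - alpha) * a + alpha * b

def increasing_law(f, *invs):
    """Theorem 2.8: Psi^{-1}(alpha) = f(Phi_1^{-1}(alpha), ..., Phi_n^{-1}(alpha))
    for a strictly increasing f of independent uncertain variables."""
    return lambda alpha: f(*(inv(alpha) for inv in invs))

psi = increasing_law(lambda x, y: x + y, linear_inv(1, 3), linear_inv(2, 5))
print(psi(0.5))   # 5.5, the inverse distribution of L(3, 8) at alpha = 0.5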

Exercise 2.14: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the
sum
ξ = ξ1 + ξ2 + · · · + ξn (2.49)
has an inverse uncertainty distribution

Ψ⁻¹(α) = Φ₁⁻¹(α) + Φ₂⁻¹(α) + · · · + Φₙ⁻¹(α).   (2.50)

Exercise 2.15: Let ξ1 , ξ2 , · · · , ξn be independent and positive uncertain


variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
Show that the product

ξ = ξ1 × ξ2 × · · · × ξn (2.51)

has an inverse uncertainty distribution

Ψ⁻¹(α) = Φ₁⁻¹(α) × Φ₂⁻¹(α) × · · · × Φₙ⁻¹(α).   (2.52)

Exercise 2.16: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the
minimum
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.53)
has an inverse uncertainty distribution

Ψ⁻¹(α) = Φ₁⁻¹(α) ∧ Φ₂⁻¹(α) ∧ · · · ∧ Φₙ⁻¹(α).   (2.54)

Exercise 2.17: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the
maximum
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.55)
has an inverse uncertainty distribution

Ψ⁻¹(α) = Φ₁⁻¹(α) ∨ Φ₂⁻¹(α) ∨ · · · ∨ Φₙ⁻¹(α).   (2.56)

Example 2.12: The independence condition in Theorem 2.8 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then ξ1 (γ) = γ is a linear uncertain
variable with inverse uncertainty distribution

Φ₁⁻¹(α) = α,   (2.57)

and ξ2 (γ) = 1 − γ is also a linear uncertain variable with inverse uncertainty


distribution
Φ₂⁻¹(α) = α.   (2.58)
Note that ξ1 and ξ2 are not independent, and ξ1 + ξ2 ≡ 1 whose inverse
uncertainty distribution is Ψ−1 (α) ≡ 1. Thus

Ψ⁻¹(α) ≠ Φ₁⁻¹(α) + Φ₂⁻¹(α).   (2.59)

Therefore, the independence condition cannot be removed.



Theorem 2.9 Assume that ξ1 and ξ2 are independent linear uncertain vari-
ables L(a1 , b1 ) and L(a2 , b2 ), respectively. Then the sum ξ1 + ξ2 is also a
linear uncertain variable L(a1 + a2 , b1 + b2 ), i.e.,

L(a1 , b1 ) + L(a2 , b2 ) = L(a1 + a2 , b1 + b2 ). (2.60)

The product of a linear uncertain variable L(a, b) and a scalar number k > 0
is also a linear uncertain variable L(ka, kb), i.e.,

k · L(a, b) = L(ka, kb). (2.61)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty


distributions Φ1 and Φ2 , respectively. Then

Φ₁⁻¹(α) = (1 − α)a₁ + αb₁,
Φ₂⁻¹(α) = (1 − α)a₂ + αb₂.

It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is

Ψ⁻¹(α) = Φ₁⁻¹(α) + Φ₂⁻¹(α) = (1 − α)(a₁ + a₂) + α(b₁ + b₂).

Hence the sum is also a linear uncertain variable L(a1 + a2 , b1 + b2 ). The


first part is verified. Next, suppose that the uncertainty distribution of the
uncertain variable ξ ∼ L(a, b) is Φ. It follows from the operational law that
when k > 0, the inverse uncertainty distribution of kξ is

Ψ−1 (α) = kΦ−1 (α) = (1 − α)(ka) + α(kb).

Hence kξ is just a linear uncertain variable L(ka, kb).

Theorem 2.10 Assume that ξ1 and ξ2 are independent zigzag uncertain


variables Z(a1 , b1 , c1 ) and Z(a2 , b2 , c2 ), respectively. Then the sum ξ1 + ξ2 is
also a zigzag uncertain variable Z(a1 + a2 , b1 + b2 , c1 + c2 ), i.e.,

Z(a1 , b1 , c1 ) + Z(a2 , b2 , c2 ) = Z(a1 + a2 , b1 + b2 , c1 + c2 ). (2.62)

The product of a zigzag uncertain variable Z(a, b, c) and a scalar number


k > 0 is also a zigzag uncertain variable Z(ka, kb, kc), i.e.,

k · Z(a, b, c) = Z(ka, kb, kc). (2.63)

Proof: Assume that the uncertain variables ξ1 and ξ2 have uncertainty


distributions Φ1 and Φ2 , respectively. Then
Φ₁⁻¹(α) = (1 − 2α)a₁ + 2αb₁ if α < 0.5, and (2 − 2α)b₁ + (2α − 1)c₁ if α ≥ 0.5;
Φ₂⁻¹(α) = (1 − 2α)a₂ + 2αb₂ if α < 0.5, and (2 − 2α)b₂ + (2α − 1)c₂ if α ≥ 0.5.
It follows from the operational law that the inverse uncertainty distribution
of ξ1 + ξ2 is
Ψ⁻¹(α) = (1 − 2α)(a₁ + a₂) + 2α(b₁ + b₂) if α < 0.5, and (2 − 2α)(b₁ + b₂) + (2α − 1)(c₁ + c₂) if α ≥ 0.5.

Hence the sum is also a zigzag uncertain variable Z(a1 + a2 , b1 + b2 , c1 + c2 ).


The first part is verified. Next, suppose that the uncertainty distribution of
the uncertain variable ξ ∼ Z(a, b, c) is Φ. It follows from the operational law
that when k > 0, the inverse uncertainty distribution of kξ is
Ψ⁻¹(α) = kΦ⁻¹(α) = (1 − 2α)(ka) + 2α(kb) if α < 0.5, and (2 − 2α)(kb) + (2α − 1)(kc) if α ≥ 0.5.

Hence kξ is just a zigzag uncertain variable Z(ka, kb, kc).

Theorem 2.11 Let ξ1 and ξ2 be independent normal uncertain variables


N (e1 , σ1 ) and N (e2 , σ2 ), respectively. Then the sum ξ1 + ξ2 is also a normal
uncertain variable N (e1 + e2 , σ1 + σ2 ), i.e.,

N (e1 , σ1 ) + N (e2 , σ2 ) = N (e1 + e2 , σ1 + σ2 ). (2.64)

The product of a normal uncertain variable N (e, σ) and a scalar number


k > 0 is also a normal uncertain variable N (ke, kσ), i.e.,

k · N (e, σ) = N (ke, kσ). (2.65)

Proof: Assume that the uncertain variables ξ₁ and ξ₂ have uncertainty distributions Φ₁ and Φ₂, respectively. Then

Φ₁⁻¹(α) = e₁ + (σ₁√3/π) ln(α/(1 − α)),
Φ₂⁻¹(α) = e₂ + (σ₂√3/π) ln(α/(1 − α)).
It follows from the operational law that the inverse uncertainty distribution of ξ₁ + ξ₂ is

Ψ⁻¹(α) = Φ₁⁻¹(α) + Φ₂⁻¹(α) = (e₁ + e₂) + ((σ₁ + σ₂)√3/π) ln(α/(1 − α)).
π 1−α
Hence the sum is also a normal uncertain variable N (e1 + e2 , σ1 + σ2 ). The
first part is verified. Next, suppose that the uncertainty distribution of the

uncertain variable ξ ∼ N(e, σ) is Φ. It follows from the operational law that, when k > 0, the inverse uncertainty distribution of kξ is

Ψ⁻¹(α) = kΦ⁻¹(α) = (ke) + ((kσ)√3/π) ln(α/(1 − α)).
Hence kξ is just a normal uncertain variable N (ke, kσ).
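As a numeric sanity check of Theorem 2.11 (a sketch with hypothetical parameters, not toolbox code), the inverse distributions of N(1, 2) and N(0.5, 1) sum to that of N(1.5, 3) at every α:

import math

def normal_inv(e, sigma):
    """Inverse uncertainty distribution of N(e, sigma)."""
    return lambda a: e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a))

inv1, inv2 = normal_inv(1.0, 2.0), normal_inv(0.5, 1.0)
inv_sum = normal_inv(1.5, 3.0)          # claimed law N(e1 + e2, sigma1 + sigma2)
for a in (0.1, 0.5, 0.9):
    assert abs(inv1(a) + inv2(a) - inv_sum(a)) < 1e-12
print("consistent with N(1.5, 3)")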

Theorem 2.12 Assume that ξ1 and ξ2 are independent lognormal uncertain


variables LOGN (e1 , σ1 ) and LOGN (e2 , σ2 ), respectively. Then the product
ξ1 · ξ2 is also a lognormal uncertain variable LOGN (e1 + e2 , σ1 + σ2 ), i.e.,

LOGN (e1 , σ1 ) · LOGN (e2 , σ2 ) = LOGN (e1 + e2 , σ1 + σ2 ). (2.66)

The product of a lognormal uncertain variable LOGN (e, σ) and a scalar num-
ber k > 0 is also a lognormal uncertain variable LOGN (e + ln k, σ), i.e.,

k · LOGN (e, σ) = LOGN (e + ln k, σ). (2.67)

Proof: Assume that the uncertain variables ξ₁ and ξ₂ have uncertainty distributions Φ₁ and Φ₂, respectively. Then

Φ₁⁻¹(α) = exp(e₁ + (σ₁√3/π) ln(α/(1 − α))),
Φ₂⁻¹(α) = exp(e₂ + (σ₂√3/π) ln(α/(1 − α))).
It follows from the operational law that the inverse uncertainty distribution
of ξ1 · ξ2 is
Ψ⁻¹(α) = Φ₁⁻¹(α) · Φ₂⁻¹(α) = exp((e₁ + e₂) + ((σ₁ + σ₂)√3/π) ln(α/(1 − α))).

Hence the product is a lognormal uncertain variable LOGN (e1 + e2 , σ1 + σ2 ).


The first part is verified. Next, suppose that the uncertainty distribution of
the uncertain variable ξ ∼ LOGN (e, σ) is Φ. It follows from the operational
law that, when k > 0, the inverse uncertainty distribution of kξ is
Ψ⁻¹(α) = kΦ⁻¹(α) = exp((e + ln k) + (σ√3/π) ln(α/(1 − α))).

Hence kξ is just a lognormal uncertain variable LOGN (e + ln k, σ).

Remark 2.4: Keep in mind that the sum of lognormal uncertain variables
is no longer lognormal.

Strictly Decreasing Function of Uncertain Variables


A real-valued function f (x1 , x2 , · · · , xn ) is said to be strictly decreasing if

f (x1 , x2 , · · · , xn ) ≥ f (y1 , y2 , · · · , yn ) (2.68)

whenever xi ≤ yi for i = 1, 2, · · · , n, and

f (x1 , x2 , · · · , xn ) > f (y1 , y2 , · · · , yn ) (2.69)

whenever xi < yi for i = 1, 2, · · · , n. If f (x1 , x2 , · · · , xn ) is a strictly increas-


ing function, then −f (x1 , x2 , · · · , xn ) is a strictly decreasing function. Fur-
thermore, 1/f (x1 , x2 , · · · , xn ) is also a strictly decreasing function provided
that f is positive. Especially, the following are strictly decreasing functions,

f(x) = −x,
f(x) = exp(−x),
f(x) = 1/x, x > 0.
Theorem 2.13 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f
is a strictly decreasing function, then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.70)

has an inverse uncertainty distribution

Ψ⁻¹(α) = f(Φ₁⁻¹(1 − α), Φ₂⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)).   (2.71)

Proof: For simplicity, we only prove the case n = 2. At first, we always have

{ξ ≤ Ψ⁻¹(α)} ≡ {f(ξ₁, ξ₂) ≤ f(Φ₁⁻¹(1 − α), Φ₂⁻¹(1 − α))}.

On the one hand, since f is a strictly decreasing function, we obtain

{ξ ≤ Ψ⁻¹(α)} ⊃ {ξ₁ ≥ Φ₁⁻¹(1 − α)} ∩ {ξ₂ ≥ Φ₂⁻¹(1 − α)}.

By using the independence of ξ₁ and ξ₂, we get

M{ξ ≤ Ψ⁻¹(α)} ≥ M{(ξ₁ ≥ Φ₁⁻¹(1 − α)) ∩ (ξ₂ ≥ Φ₂⁻¹(1 − α))} = M{ξ₁ ≥ Φ₁⁻¹(1 − α)} ∧ M{ξ₂ ≥ Φ₂⁻¹(1 − α)} = α ∧ α = α.

On the other hand, since f is a strictly decreasing function, we obtain

{ξ ≤ Ψ⁻¹(α)} ⊂ {ξ₁ ≥ Φ₁⁻¹(1 − α)} ∪ {ξ₂ ≥ Φ₂⁻¹(1 − α)}.

By using the independence of ξ₁ and ξ₂, we get

M{ξ ≤ Ψ⁻¹(α)} ≤ M{(ξ₁ ≥ Φ₁⁻¹(1 − α)) ∪ (ξ₂ ≥ Φ₂⁻¹(1 − α))} = M{ξ₁ ≥ Φ₁⁻¹(1 − α)} ∨ M{ξ₂ ≥ Φ₂⁻¹(1 − α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ⁻¹(α)} = α. That is, Ψ⁻¹ is just the inverse uncertainty distribution of ξ. The theorem is proved.

Exercise 2.18: Let ξ be a positive uncertain variable with regular uncer-


tainty distribution Φ. Show that the reciprocal 1/ξ has an inverse uncertainty
distribution
Ψ⁻¹(α) = 1/Φ⁻¹(1 − α).   (2.72)

Exercise 2.19: Let ξ be an uncertain variable with regular uncertainty


distribution Φ. Show that exp(−ξ) has an inverse uncertainty distribution

Ψ⁻¹(α) = exp(−Φ⁻¹(1 − α)).   (2.73)

Exercise 2.20: Show that the independence condition in Theorem 2.13


cannot be removed.

Strictly Monotone Function of Uncertain Variables


A real-valued function f (x1 , x2 , · · · , xn ) is said to be strictly monotone if it
is strictly increasing with respect to x1 , x2 , · · · , xm and strictly decreasing
with respect to xm+1 , xm+2 , · · · , xn , that is,

f (x1 , · · · , xm , xm+1 , · · · , xn ) ≤ f (y1 , · · · , ym , ym+1 , · · · , yn ) (2.74)

whenever xi ≤ yi for i = 1, 2, · · · , m and xi ≥ yi for i = m + 1, m + 2, · · · , n,


and

f (x1 , · · · , xm , xm+1 , · · · , xn ) < f (y1 , · · · , ym , ym+1 , · · · , yn ) (2.75)

whenever xi < yi for i = 1, 2, · · · , m and xi > yi for i = m + 1, m + 2, · · · , n.


The following are strictly monotone functions,

f (x1 , x2 ) = x1 − x2 ,
f (x1 , x2 ) = x1 /x2 , x1 , x2 > 0,
f (x1 , x2 ) = x1 /(x1 + x2 ), x1 , x2 > 0.

Note that both strictly increasing function and strictly decreasing function
are special cases of strictly monotone function.

Theorem 2.14 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If
f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly
decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.76)

has an inverse uncertainty distribution

Ψ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)).   (2.77)

Proof: We only prove the case of m = 1 and n = 2. At first, we always have

{ξ ≤ Ψ⁻¹(α)} ≡ {f(ξ₁, ξ₂) ≤ f(Φ₁⁻¹(α), Φ₂⁻¹(1 − α))}.

On the one hand, since the function f(x₁, x₂) is strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, we obtain

{ξ ≤ Ψ⁻¹(α)} ⊃ {ξ₁ ≤ Φ₁⁻¹(α)} ∩ {ξ₂ ≥ Φ₂⁻¹(1 − α)}.

By using the independence of ξ₁ and ξ₂, we get

M{ξ ≤ Ψ⁻¹(α)} ≥ M{(ξ₁ ≤ Φ₁⁻¹(α)) ∩ (ξ₂ ≥ Φ₂⁻¹(1 − α))} = M{ξ₁ ≤ Φ₁⁻¹(α)} ∧ M{ξ₂ ≥ Φ₂⁻¹(1 − α)} = α ∧ α = α.

On the other hand, since the function f(x₁, x₂) is strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, we obtain

{ξ ≤ Ψ⁻¹(α)} ⊂ {ξ₁ ≤ Φ₁⁻¹(α)} ∪ {ξ₂ ≥ Φ₂⁻¹(1 − α)}.

By using the independence of ξ₁ and ξ₂, we get

M{ξ ≤ Ψ⁻¹(α)} ≤ M{(ξ₁ ≤ Φ₁⁻¹(α)) ∪ (ξ₂ ≥ Φ₂⁻¹(1 − α))} = M{ξ₁ ≤ Φ₁⁻¹(α)} ∨ M{ξ₂ ≥ Φ₂⁻¹(1 − α)} = α ∨ α = α.

It follows that M{ξ ≤ Ψ⁻¹(α)} = α. That is, Ψ⁻¹ is just the inverse uncertainty distribution of ξ. The theorem is proved.
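A short Python sketch of Theorem 2.14 (illustrative names and values) for the difference f(x₁, x₂) = x₁ − x₂, which is increasing in the first argument and decreasing in the second:

def linear_inv(a, b):
    """Inverse uncertainty distribution of L(a, b)."""
    return lambda alpha: (1 - alpha) * a + alpha * b

def monotone_law_diff(inv1, inv2):
    """Theorem 2.14 with f(x1, x2) = x1 - x2:
    Psi^{-1}(alpha) = Phi1^{-1}(alpha) - Phi2^{-1}(1 - alpha)."""
    return lambda alpha: inv1(alpha) - inv2(1 - alpha)

psi = monotone_law_diff(linear_inv(1, 3), linear_inv(0, 2))
print(psi(0.1), psi(0.5), psi(0.9))   # -0.6 1.0 2.6, strictly increasing in alpha

Note that the output is continuous and strictly increasing in α, exactly the property that Theorem 2.5 demands of an inverse uncertainty distribution.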

Exercise 2.21: Let ξ1 and ξ2 be independent uncertain variables with regu-


lar uncertainty distributions Φ1 and Φ2 , respectively. Show that the inverse
uncertainty distribution of the difference ξ1 − ξ2 is

Ψ⁻¹(α) = Φ₁⁻¹(α) − Φ₂⁻¹(1 − α).   (2.78)

Exercise 2.22: Let ξ1 and ξ2 be independent and positive uncertain vari-


ables with regular uncertainty distributions Φ1 and Φ2 , respectively. Show
that the inverse uncertainty distribution of the quotient ξ1 /ξ2 is

Ψ⁻¹(α) = Φ₁⁻¹(α) / Φ₂⁻¹(1 − α).   (2.79)

Exercise 2.23: Assume ξ1 and ξ2 are independent and positive uncer-


tain variables with regular uncertainty distributions Φ1 and Φ2 , respectively.
Show that the inverse uncertainty distribution of ξ1 /(ξ1 + ξ2 ) is

Ψ⁻¹(α) = Φ₁⁻¹(α) / (Φ₁⁻¹(α) + Φ₂⁻¹(1 − α)).   (2.80)

Exercise 2.24: Show that the independence condition in Theorem 2.14


cannot be removed.

A Useful Theorem
In many cases, it is required to calculate M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0}. Perhaps
the first idea is to find the uncertainty distribution Ψ(x) of f (ξ1 , ξ2 , · · ·, ξn ),
and then the uncertain measure is just Ψ(0). However, for convenience, we
may use the following theorem.

Theorem 2.15 (Liu [83]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-


ables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If
f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly
decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then

M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0} (2.81)

is the root α of the equation

f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)) = 0.   (2.82)

Proof: It follows from Theorem 2.14 that f (ξ1 , ξ2 , · · · , ξn ) is an uncertain


variable whose inverse uncertainty distribution is

Ψ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)).

Since M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0} = Ψ(0), it is the solution α of the equation


Ψ−1 (α) = 0. The theorem is proved.

Remark 2.5: Keep in mind that sometimes the equation (2.82) may not
have a root. In this case, if

f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)) < 0   (2.83)

for all α, then we set the root α = 1; and if

f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)) > 0   (2.84)
for all α, then we set the root α = 0.

Remark 2.6: Since f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to


ξ1 , ξ2 , · · · , ξm and strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , the
function
f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α))

is strictly increasing with respect to α. See Figure 2.12. Thus its root α may
be estimated by the bisection method:

Step 1. Set a = 0, b = 1 and c = (a + b)/2.
Step 2. If f(Φ₁⁻¹(c), · · · , Φₘ⁻¹(c), Φₘ₊₁⁻¹(1 − c), · · · , Φₙ⁻¹(1 − c)) ≤ 0, then set a = c. Otherwise, set b = c.
Step 3. If |b − a| > ε (a predetermined precision), then set c = (a + b)/2 and go to Step 2. Otherwise, output c as the root.
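The bisection procedure above takes only a few lines of Python (a sketch; the helper name root_alpha and the example variables are illustrative). For ξ₁ ∼ L(1, 3) and ξ₂ ∼ L(0, 2), the equation Φ₁⁻¹(α) − Φ₂⁻¹(1 − α) = 0 has the root α = 0.25, so M{ξ₁ − ξ₂ ≤ 0} = 0.25.

def root_alpha(h, eps=1e-8):
    """Bisection for Theorem 2.15: h(alpha) is the strictly increasing
    function f(Phi_1^{-1}(alpha), ..., Phi_n^{-1}(1 - alpha)); returns
    the alpha in (0, 1) with h(alpha) = 0, converging to 0 or 1 in the
    rootless cases of Remark 2.5."""
    a, b = 0.0, 1.0
    c = (a + b) / 2
    while b - a > eps:
        if h(c) <= 0:
            a = c
        else:
            b = c
        c = (a + b) / 2
    return c

phi1_inv = lambda a: 1 + 2 * a          # inverse distribution of L(1, 3)
phi2_inv = lambda a: 2 * a              # inverse distribution of L(0, 2)
h = lambda a: phi1_inv(a) - phi2_inv(1 - a)
print(root_alpha(h))                    # about 0.25 = M{xi1 - xi2 <= 0}

Since each iteration halves the interval, about log₂(1/ε) evaluations of h suffice for precision ε.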


Figure 2.12: f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α))

Exercise 2.25: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Assume the
function f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm
and strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn . Show that
M{f (ξ1 , ξ2 , · · · , ξn ) > 0} (2.85)
is the root α of the equation
f(Φ₁⁻¹(1 − α), · · · , Φₘ⁻¹(1 − α), Φₘ₊₁⁻¹(α), · · · , Φₙ⁻¹(α)) = 0.   (2.86)

Exercise 2.26: Let ξ1 , ξ2 , ξ3 be independent uncertain variables with regular


uncertainty distributions Φ1 , Φ2 , Φ3 , respectively. Show that
M{ξ1 ∨ ξ2 ≥ ξ3 + 5} (2.87)

is the root α of the equation

Φ₁⁻¹(1 − α) ∨ Φ₂⁻¹(1 − α) = Φ₃⁻¹(α) + 5.   (2.88)

2.5 Operational Law: Distribution


This section will give some operational laws for calculating the uncertainty distributions of strictly increasing functions, strictly decreasing functions, and strictly monotone functions of uncertain variables.

Strictly Increasing Function of Uncertain Variables


Theorem 2.16 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If f is a
continuous and strictly increasing function, then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.89)

has an uncertainty distribution

Ψ(x) = sup_{f(x₁,x₂,···,xₙ)=x} min_{1≤i≤n} Φᵢ(xᵢ).   (2.90)

Proof: For simplicity, we only prove the case n = 2. Since f is a continuous and strictly increasing function, it holds that

{f(ξ₁, ξ₂) ≤ x} = ⋃_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≤ x₂).

Thus the uncertainty distribution is

Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{ ⋃_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≤ x₂) }.

Note that for each given number x, the event ⋃_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≤ x₂) is just a polyrectangle. It follows from the polyrectangular theorem that

Ψ(x) = sup_{f(x₁,x₂)=x} M{(ξ₁ ≤ x₁) ∩ (ξ₂ ≤ x₂)} = sup_{f(x₁,x₂)=x} M{ξ₁ ≤ x₁} ∧ M{ξ₂ ≤ x₂} = sup_{f(x₁,x₂)=x} Φ₁(x₁) ∧ Φ₂(x₂).

The theorem is proved.
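For a concrete feel of this sup-min form, the sketch below (a rough grid scan with hypothetical helper names, a fixed window [−10, 10], and illustrative linear variables) approximates Ψ(x) for f(x₁, x₂) = x₁ + x₂ and reproduces the value that Theorem 2.9 predicts for L(1, 3) + L(2, 5) = L(3, 8).

def linear_cdf(a, b):
    """Uncertainty distribution of L(a, b)."""
    return lambda x: 0.0 if x <= a else (1.0 if x >= b else (x - a) / (b - a))

def sum_dist(phi1, phi2, x, lo=-10.0, hi=10.0, steps=20001):
    """Approximate Psi(x) = sup over x1 + x2 = x of Phi1(x1) and Phi2(x2)
    (minimum of the two), as in equation (2.90)."""
    best = 0.0
    for k in range(steps):
        x1 = lo + (hi - lo) * k / (steps - 1)
        best = max(best, min(phi1(x1), phi2(x - x1)))
    return best

print(sum_dist(linear_cdf(1, 3), linear_cdf(2, 5), 5.5))   # 0.5, the value of L(3, 8) at 5.5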

Remark 2.7: It is possible that the equation f (x1 , x2 , · · · , xn ) = x does not


have a root for some values of x. In this case, if

f (x1 , x2 , · · · , xn ) < x (2.91)

for any vector (x1 , x2 , · · · , xn ), then we set Ψ(x) = 1; and if

f (x1 , x2 , · · · , xn ) > x (2.92)

for any vector (x1 , x2 , · · · , xn ), then we set Ψ(x) = 0.

Exercise 2.27: Let ξ be an uncertain variable with uncertainty distribution


Φ, and let f be a continuous and strictly increasing function. Show that f (ξ)
has an uncertainty distribution

Ψ(x) = Φ(f⁻¹(x)), ∀x ∈ ℜ.   (2.93)

Exercise 2.28: Let ξ1 , ξ2 , · · · , ξn be iid uncertain variables with a common


uncertainty distribution Φ. Show that the sum

ξ = ξ1 + ξ2 + · · · + ξn (2.94)

has an uncertainty distribution

Ψ(x) = Φ(x/n).   (2.95)

Exercise 2.29: Let ξ1 , ξ2 , · · · , ξn be iid and positive uncertain variables with


a common uncertainty distribution Φ. Show that the product

ξ = ξ1 ξ2 · · · ξn (2.96)

has an uncertainty distribution

Ψ(x) = Φ(x^{1/n}).   (2.97)

Exercise 2.30: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the mini-
mum
ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.98)
has an uncertainty distribution

Ψ(x) = Φ1 (x) ∨ Φ2 (x) ∨ · · · ∨ Φn (x). (2.99)



Exercise 2.31: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Show that the maxi-
mum
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.100)
has an uncertainty distribution
Ψ(x) = Φ1 (x) ∧ Φ2 (x) ∧ · · · ∧ Φn (x). (2.101)

Example 2.13: The independence condition in Theorem 2.16 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then ξ₁(γ) = γ is a linear uncertain variable with uncertainty distribution

Φ₁(x) = 0 if x ≤ 0; x if 0 < x ≤ 1; 1 if x > 1,   (2.102)

and ξ₂(γ) = 1 − γ is also a linear uncertain variable with uncertainty distribution

Φ₂(x) = 0 if x ≤ 0; x if 0 < x ≤ 1; 1 if x > 1.   (2.103)

Note that ξ₁ and ξ₂ are not independent, and ξ₁ + ξ₂ ≡ 1 whose uncertainty distribution is

Ψ(x) = 0 if x < 1; 1 if x ≥ 1.   (2.104)
Thus

Ψ(x) ≠ sup_{x₁+x₂=x} Φ₁(x₁) ∧ Φ₂(x₂).   (2.105)

Therefore, the independence condition cannot be removed.


Definition 2.15 (Gao-Gao-Yang [48], Order Statistic) Let ξ1 , ξ2 , · · · , ξn be
uncertain variables, and let k be an index with 1 ≤ k ≤ n. Then
ξ = k-min[ξ1 , ξ2 , · · · , ξn ] (2.106)
is called the kth order statistic of ξ1 , ξ2 , · · · , ξn , where k-min represents the
kth smallest value.
Theorem 2.17 (Gao-Gao-Yang [48]) Let ξ1 , ξ2 , · · · , ξn be independent un-
certain variables with uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
Then the kth order statistic of ξ1 , ξ2 , · · · , ξn has an uncertainty distribution
Ψ(x) = k-max[Φ1 (x), Φ2 (x), · · · , Φn (x)] (2.107)
where k-max represents the kth largest value.

Proof: Since f(x₁, x₂, · · · , xₙ) = k-min[x₁, x₂, · · · , xₙ] is a strictly increasing function, it follows from Theorem 2.16 that the kth order statistic has an uncertainty distribution

Ψ(x) = sup_{k-min[x₁,x₂,···,xₙ]=x} Φ₁(x₁) ∧ Φ₂(x₂) ∧ · · · ∧ Φₙ(xₙ) = k-max[Φ₁(x), Φ₂(x), · · · , Φₙ(x)].

The theorem is proved.
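Theorem 2.17 is easy to evaluate numerically, since k-max is just the kth largest of the n distribution values at each point x; the sketch below (illustrative names and data) demonstrates this.

def kth_order_dist(dists, k):
    """Theorem 2.17: distribution of the kth order statistic is
    k-max[Phi_1(x), ..., Phi_n(x)], the kth largest value at x."""
    return lambda x: sorted((phi(x) for phi in dists), reverse=True)[k - 1]

linear = lambda a, b: (lambda x: max(0.0, min(1.0, (x - a) / (b - a))))
dists = [linear(0, 1), linear(0, 2), linear(0, 4)]   # hypothetical variables
psi2 = kth_order_dist(dists, 2)        # distribution of the 2nd smallest
print(psi2(1.0))                       # 0.5 = 2nd largest of (1.0, 0.5, 0.25)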

Exercise 2.32: Let ξ1 , ξ2 , · · · , ξn be independent uncertain variables with


uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Then

ξ = k-max[ξ1 , ξ2 , · · · , ξn ] (2.108)

is just the (n − k + 1)th order statistic. Show that ξ has an uncertainty


distribution
Ψ(x) = k-min[Φ1 (x), Φ2 (x), · · · , Φn (x)]. (2.109)

Theorem 2.18 (Liu [90], Extreme Value Theorem) Let ξ1 , ξ2 , · · · , ξn be in-


dependent uncertain variables. Assume that

Si = ξ1 + ξ2 + · · · + ξi (2.110)

have uncertainty distributions Ψi for i = 1, 2, · · · , n, respectively. Then the


maximum
S = S1 ∨ S2 ∨ · · · ∨ Sn (2.111)
has an uncertainty distribution

Υ(x) = Ψ1 (x) ∧ Ψ2 (x) ∧ · · · ∧ Ψn (x); (2.112)

and the minimum


S = S1 ∧ S2 ∧ · · · ∧ Sn (2.113)
has an uncertainty distribution

Υ(x) = Ψ1 (x) ∨ Ψ2 (x) ∨ · · · ∨ Ψn (x). (2.114)

Proof: Assume that the uncertainty distributions of the uncertain variables


ξ1 , ξ2 , · · · , ξn are Φ1 , Φ2 , · · · , Φn , respectively. It follows from Theorem 2.16
that
Ψᵢ(x) = sup_{x₁+x₂+···+xᵢ=x} Φ₁(x₁) ∧ Φ₂(x₂) ∧ · · · ∧ Φᵢ(xᵢ)

for i = 1, 2, · · · , n. Define

f (x1 , x2 , · · · , xn ) = x1 ∨ (x1 + x2 ) ∨ · · · ∨ (x1 + x2 + · · · + xn ).



Then f is a strictly increasing function and

S = f (ξ1 , ξ2 , · · · , ξn ).

It follows from Theorem 2.16 that S has an uncertainty distribution

Υ(x) = sup_{f(x₁,x₂,···,xₙ)=x} Φ₁(x₁) ∧ Φ₂(x₂) ∧ · · · ∧ Φₙ(xₙ) = min_{1≤i≤n} sup_{x₁+x₂+···+xᵢ=x} Φ₁(x₁) ∧ Φ₂(x₂) ∧ · · · ∧ Φᵢ(xᵢ) = min_{1≤i≤n} Ψᵢ(x).

Thus (2.112) is verified. Similarly, define

f (x1 , x2 , · · · , xn ) = x1 ∧ (x1 + x2 ) ∧ · · · ∧ (x1 + x2 + · · · + xn ).

Then f is a strictly increasing function and

S = f (ξ1 , ξ2 , · · · , ξn ).

It follows from Theorem 2.16 that S has an uncertainty distribution

Υ(x) = sup_{f(x₁,x₂,···,xₙ)=x} Φ₁(x₁) ∧ Φ₂(x₂) ∧ · · · ∧ Φₙ(xₙ) = max_{1≤i≤n} sup_{x₁+x₂+···+xᵢ=x} Φ₁(x₁) ∧ Φ₂(x₂) ∧ · · · ∧ Φᵢ(xᵢ) = max_{1≤i≤n} Ψᵢ(x).

Thus (2.114) is verified.

Strictly Decreasing Function of Uncertain Variables


Theorem 2.19 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with continuous uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
If f is a continuous and strictly decreasing function, then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.115)

has an uncertainty distribution

Ψ(x) = sup_{f(x₁,x₂,···,xₙ)=x} min_{1≤i≤n} (1 − Φᵢ(xᵢ)).   (2.116)

Proof: For simplicity, we only prove the case n = 2. Since f is a continuous and strictly decreasing function, it holds that

{f(ξ₁, ξ₂) ≤ x} = ⋃_{f(x₁,x₂)=x} (ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂).

Thus the uncertainty distribution is

Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{ ⋃_{f(x₁,x₂)=x} (ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂) }.

Note that for each given number x, the event ⋃_{f(x₁,x₂)=x} (ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂) is just a polyrectangle. It follows from the polyrectangular theorem that

Ψ(x) = sup_{f(x₁,x₂)=x} M{(ξ₁ ≥ x₁) ∩ (ξ₂ ≥ x₂)} = sup_{f(x₁,x₂)=x} M{ξ₁ ≥ x₁} ∧ M{ξ₂ ≥ x₂} = sup_{f(x₁,x₂)=x} (1 − Φ₁(x₁)) ∧ (1 − Φ₂(x₂)).

The theorem is proved.

Exercise 2.33: Let ξ be an uncertain variable with continuous uncertainty


distribution Φ, and let f be a continuous and strictly decreasing function.
Show that f (ξ) has an uncertainty distribution
Ψ(x) = 1 − Φ(f⁻¹(x)), ∀x ∈ ℜ.   (2.117)

Exercise 2.34: Let ξ be an uncertain variable with continuous uncertainty


distribution Φ, and let a and b be real numbers with a < 0. Show that aξ + b has an uncertainty distribution

Ψ(x) = 1 − Φ((x − b)/a), ∀x ∈ ℜ.   (2.118)

Exercise 2.35: Let ξ be a positive uncertain variable with continuous un-


certainty distribution Φ. Show that 1/ξ has an uncertainty distribution

Ψ(x) = 1 − Φ(1/x), ∀x > 0.   (2.119)

Exercise 2.36: Let ξ be an uncertain variable with continuous uncertainty


distribution Φ. Show that exp(−ξ) has an uncertainty distribution
Ψ(x) = 1 − Φ(− ln(x)), ∀x > 0. (2.120)

Exercise 2.37: Show that the independence condition in Theorem 2.19


cannot be removed.

Strictly Monotone Function of Uncertain Variables


Theorem 2.20 (Liu [84]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain vari-
ables with continuous uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively.
If f (ξ1 , ξ2 , · · · , ξn ) is continuous, strictly increasing with respect to ξ1 , ξ2 , · · · ,
ξm and strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.121)

has an uncertainty distribution

Ψ(x) = sup_{f(x₁,x₂,···,xₙ)=x} ( min_{1≤i≤m} Φᵢ(xᵢ) ∧ min_{m+1≤i≤n} (1 − Φᵢ(xᵢ)) ).   (2.122)

Proof: For simplicity, we only prove the case of m = 1 and n = 2. Since f(x₁, x₂) is continuous, strictly increasing with respect to x₁ and strictly decreasing with respect to x₂, it holds that

{f(ξ₁, ξ₂) ≤ x} = ⋃_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂).

Thus the uncertainty distribution is

Ψ(x) = M{f(ξ₁, ξ₂) ≤ x} = M{ ⋃_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂) }.

Note that for each given number x, the event ⋃_{f(x₁,x₂)=x} (ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂) is just a polyrectangle. It follows from the polyrectangular theorem that

Ψ(x) = sup_{f(x₁,x₂)=x} M{(ξ₁ ≤ x₁) ∩ (ξ₂ ≥ x₂)} = sup_{f(x₁,x₂)=x} M{ξ₁ ≤ x₁} ∧ M{ξ₂ ≥ x₂} = sup_{f(x₁,x₂)=x} Φ₁(x₁) ∧ (1 − Φ₂(x₂)).

The theorem is proved.

Exercise 2.38: Let ξ1 and ξ2 be independent uncertain variables with con-


tinuous uncertainty distributions Φ1 and Φ2 , respectively. Show that ξ1 − ξ2
has an uncertainty distribution

Ψ(x) = sup_{y∈ℜ} Φ₁(x + y) ∧ (1 − Φ₂(y)).   (2.123)

Exercise 2.39: Let ξ1 and ξ2 be independent and positive uncertain vari-


ables with continuous uncertainty distributions Φ1 and Φ2 , respectively. Show
that ξ1 /ξ2 has an uncertainty distribution
Ψ(x) = sup_{y>0} Φ₁(xy) ∧ (1 − Φ₂(y)).   (2.124)

Exercise 2.40: Let ξ1 and ξ2 be independent and positive uncertain vari-


ables with continuous uncertainty distributions Φ1 and Φ2 , respectively. Show
that ξ1 /(ξ1 + ξ2 ) has an uncertainty distribution
Ψ(x) = sup_{y>0} Φ₁(xy) ∧ (1 − Φ₂(y − xy)).   (2.125)

Exercise 2.41: Show that the independence condition in Theorem 2.20


cannot be removed.

2.6 Operational Law: Boolean System


A function is said to be Boolean if it is a mapping from {0, 1}n to {0, 1}. For
example,
f (x1 , x2 , x3 ) = x1 ∨ x2 ∧ x3 (2.126)
is a Boolean function. An uncertain variable is said to be Boolean if it takes values either 0 or 1. For example, the following is a Boolean uncertain variable,

ξ = 1 with uncertain measure a; 0 with uncertain measure 1 − a   (2.127)
where a is a number between 0 and 1. This section introduces an operational
law for Boolean system.
Theorem 2.21 (Liu [84]) Assume ξ₁, ξ₂, · · · , ξₙ are independent Boolean uncertain variables, i.e.,

ξᵢ = 1 with uncertain measure aᵢ; 0 with uncertain measure 1 − aᵢ   (2.128)

for i = 1, 2, · · · , n. If f is a Boolean function (not necessarily monotone), then ξ = f(ξ₁, ξ₂, · · · , ξₙ) is a Boolean uncertain variable such that

M{ξ = 1} = sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ), if sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) < 0.5;
M{ξ = 1} = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ), if sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5   (2.129)

where xᵢ take values either 0 or 1, and νᵢ are defined by

νᵢ(xᵢ) = aᵢ if xᵢ = 1; 1 − aᵢ if xᵢ = 0   (2.130)

for i = 1, 2, · · · , n, respectively.

Proof: Let B₁, B₂, · · · , Bₙ be nonempty subsets of {0, 1}. In other words, they take values of {0}, {1} or {0, 1}. Write

Λ = {ξ = 1}, Λᶜ = {ξ = 0}, Λᵢ = {ξᵢ ∈ Bᵢ}

for i = 1, 2, · · · , n. It is easy to verify that

Λ₁ × Λ₂ × · · · × Λₙ ⊂ Λ if and only if f(B₁, B₂, · · · , Bₙ) = {1},
Λ₁ × Λ₂ × · · · × Λₙ ⊂ Λᶜ if and only if f(B₁, B₂, · · · , Bₙ) = {0}.

It follows from the product axiom that

M{ξ = 1} = sup_{f(B₁,B₂,···,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ}, if sup_{f(B₁,B₂,···,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} > 0.5;
M{ξ = 1} = 1 − sup_{f(B₁,B₂,···,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ}, if sup_{f(B₁,B₂,···,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} > 0.5;
M{ξ = 1} = 0.5, otherwise.   (2.131)
Please note that

νi (1) = M{ξi = 1}, νi (0) = M{ξi = 0}

for i = 1, 2, · · · , n. The argument breaks down into four cases. Case 1: Assume

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) < 0.5.

Then we have

sup_{f(B₁,B₂,···,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 1 − sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

It follows from (2.131) that

M{ξ = 1} = sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ).

Case 2: Assume

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

Then we have

sup_{f(B₁,B₂,···,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

It follows from (2.131) that

M{ξ = 1} = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ).

Case 3: Assume

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = 0.5,
sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 0.5.

Then we have

sup_{f(B₁,B₂,···,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 0.5,
sup_{f(B₁,B₂,···,Bₙ)={0}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 0.5.

It follows from (2.131) that

M{ξ = 1} = 0.5 = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ).

Case 4: Assume

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = 0.5,
sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) < 0.5.

Then we have

sup_{f(B₁,B₂,···,Bₙ)={1}} min_{1≤i≤n} M{ξᵢ ∈ Bᵢ} = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) > 0.5.

It follows from (2.131) that

M{ξ = 1} = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ).

Hence the equation (2.129) is proved for the four cases.

Example 2.14: The independence condition in Theorem 2.21 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be {γ1 , γ2 }
with power set and M{γ1 } = M{γ2 } = 0.5. Then
ξ₁(γ) = 0 if γ = γ₁; 1 if γ = γ₂   (2.132)

is a Boolean uncertain variable with

M{ξ₁ = 1} = 0.5,   (2.133)

and

ξ₂(γ) = 1 if γ = γ₁; 0 if γ = γ₂   (2.134)
is also a Boolean uncertain variable with

M{ξ2 = 1} = 0.5. (2.135)

Note that ξ1 and ξ2 are not independent, and ξ1 ∧ ξ2 ≡ 0 from which we


obtain
M{ξ1 ∧ ξ2 = 1} = 0. (2.136)
However, by using (2.129), we get

M{ξ1 ∧ ξ2 = 1} = 0.5. (2.137)

Thus the independence condition cannot be removed.

Theorem 2.22 (Liu [84], Order Statistic) Assume that ξ₁, ξ₂, · · · , ξₙ are independent Boolean uncertain variables, i.e.,

ξᵢ = 1 with uncertain measure aᵢ; 0 with uncertain measure 1 − aᵢ   (2.138)

for i = 1, 2, · · · , n. Then the kth order statistic

ξ = k-min [ξ₁, ξ₂, · · · , ξₙ]   (2.139)

is a Boolean uncertain variable such that

M{ξ = 1} = k-min [a₁, a₂, · · · , aₙ].   (2.140)

Proof: The corresponding Boolean function for the kth order statistic is

f(x₁, x₂, · · · , xₙ) = k-min [x₁, x₂, · · · , xₙ].   (2.141)

Without loss of generality, we assume a₁ ≤ a₂ ≤ · · · ≤ aₙ. Then we have

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ ∧ min_{1≤i<k} (aᵢ ∨ (1 − aᵢ)),
sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = (1 − aₖ) ∧ min_{k<i≤n} (aᵢ ∨ (1 − aᵢ))

where νᵢ(xᵢ) are defined by (2.130) for i = 1, 2, · · · , n. When aₖ ≥ 0.5, we have

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) ≥ 0.5,
sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − aₖ.

It follows from Theorem 2.21 that

M{ξ = 1} = 1 − sup_{f(x₁,x₂,···,xₙ)=0} min_{1≤i≤n} νᵢ(xᵢ) = 1 − (1 − aₖ) = aₖ.

When aₖ < 0.5, we have

sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ < 0.5.

It follows from Theorem 2.21 that

M{ξ = 1} = sup_{f(x₁,x₂,···,xₙ)=1} min_{1≤i≤n} νᵢ(xᵢ) = aₖ.

Thus M{ξ = 1} is always aₖ, i.e., the kth smallest value of a₁, a₂, · · · , aₙ. The theorem is proved.

Exercise 2.42: Let ξ1 , ξ2 , · · · , ξn be independent Boolean uncertain vari-


ables defined by (2.138). Then the minimum

ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn (2.142)

is the 1st order statistic. Show that

M{ξ = 1} = a1 ∧ a2 ∧ · · · ∧ an . (2.143)

Exercise 2.43: Let ξ1 , ξ2 , · · · , ξn be independent Boolean uncertain vari-


ables defined by (2.138). Then the maximum

ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn (2.144)

is the nth order statistic. Show that

M{ξ = 1} = a1 ∨ a2 ∨ · · · ∨ an . (2.145)

Exercise 2.44: Let ξ1 , ξ2 , · · · , ξn be independent Boolean uncertain vari-


ables defined by (2.138). Then

ξ = k-max [ξ1 , ξ2 , · · · , ξn ] (2.146)

is the (n − k + 1)th order statistic. Show that

M{ξ = 1} = k-max [a1 , a2 , · · · , an ]. (2.147)



Boolean System Calculator


Boolean System Calculator is a function in the Matlab Uncertainty Toolbox
(http://orsc.edu.cn/liu/resources.htm) for computing the uncertain measure
like
M{f (ξ1 , ξ2 , · · · , ξn ) = 1} (2.148)
where ξ1 , ξ2 , · · · , ξn are independent Boolean uncertain variables and f is a
Boolean function. For example, let ξ1 , ξ2 , ξ3 be independent Boolean uncer-
tain variables,
ξ₁ = 1 with uncertain measure 0.8; 0 with uncertain measure 0.2,
ξ₂ = 1 with uncertain measure 0.7; 0 with uncertain measure 0.3,
ξ₃ = 1 with uncertain measure 0.6; 0 with uncertain measure 0.4.

We also assume the Boolean function is

f(x₁, x₂, x₃) = 1 if x₁ + x₂ + x₃ = 0 or 2; 0 if x₁ + x₂ + x₃ = 1 or 3.

The Boolean System Calculator yields M{f(ξ₁, ξ₂, ξ₃) = 1} = 0.4.
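In the same spirit, the operational law (2.129) can be evaluated by enumerating all vectors in {0, 1}ⁿ; the Python sketch below (an illustration, not the Matlab toolbox code) reproduces the value 0.4.

from itertools import product

def boolean_system(f, a):
    """M{f(xi_1, ..., xi_n) = 1} for independent Boolean uncertain
    variables with M{xi_i = 1} = a[i], by equation (2.129)."""
    n = len(a)
    nu = lambda i, x: a[i] if x == 1 else 1 - a[i]
    sup1 = max((min(nu(i, x[i]) for i in range(n))
                for x in product((0, 1), repeat=n) if f(*x) == 1), default=0)
    if sup1 < 0.5:
        return sup1
    sup0 = max((min(nu(i, x[i]) for i in range(n))
                for x in product((0, 1), repeat=n) if f(*x) == 0), default=0)
    return 1 - sup0

f = lambda x1, x2, x3: 1 if x1 + x2 + x3 in (0, 2) else 0
print(boolean_system(f, [0.8, 0.7, 0.6]))   # 0.4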

2.7 Expected Value


Expected value is the average value of an uncertain variable in the sense of uncertain measure, and represents the size of the uncertain variable.

Definition 2.16 (Liu [77]) Let ξ be an uncertain variable. Then the expected
value of ξ is defined by
E[ξ] = ∫_0^{+∞} M{ξ ≥ x}dx − ∫_{−∞}^0 M{ξ ≤ x}dx   (2.149)

provided that at least one of the two integrals is finite.

Theorem 2.23 (Liu [77]) Let ξ be an uncertain variable with uncertainty


distribution Φ. Then

E[ξ] = ∫_0^{+∞} (1 − Φ(x))dx − ∫_{−∞}^0 Φ(x)dx.   (2.150)

Proof: It follows from the measure inversion theorem that for almost all numbers x, we have M{ξ ≥ x} = 1 − Φ(x) and M{ξ ≤ x} = Φ(x). By using the definition of expected value operator, we obtain

E[ξ] = ∫_0^{+∞} M{ξ ≥ x}dx − ∫_{−∞}^0 M{ξ ≤ x}dx = ∫_0^{+∞} (1 − Φ(x))dx − ∫_{−∞}^0 Φ(x)dx.

See Figure 2.13. The theorem is proved.


Figure 2.13: E[ξ] = ∫_0^{+∞} (1 − Φ(x))dx − ∫_{−∞}^0 Φ(x)dx

Theorem 2.24 (Liu [84]) Let ξ be an uncertain variable with uncertainty


distribution Φ. Then

E[ξ] = ∫_{−∞}^{+∞} x dΦ(x).   (2.151)

Proof: It follows from the integration by parts and Theorem 2.23 that the expected value is

E[ξ] = ∫_0^{+∞} (1 − Φ(x))dx − ∫_{−∞}^0 Φ(x)dx = ∫_0^{+∞} x dΦ(x) + ∫_{−∞}^0 x dΦ(x) = ∫_{−∞}^{+∞} x dΦ(x).

See Figure 2.14. The theorem is proved.

Remark 2.8: If the uncertainty distribution Φ(x) has a derivative φ(x),


then we immediately have
E[ξ] = ∫_{−∞}^{+∞} xφ(x)dx.   (2.152)


Figure 2.14: E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ⁻¹(α)dα

However, it is inappropriate to regard φ(x) as an uncertainty density function


because uncertain measure is not additive, i.e., generally speaking,
M{a ≤ ξ ≤ b} ≠ ∫_a^b φ(x)dx.   (2.153)

Theorem 2.25 (Liu [84]) Let ξ be an uncertain variable with regular uncer-
tainty distribution Φ. Then
E[ξ] = ∫_0^1 Φ⁻¹(α)dα.   (2.154)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.24 that the expected value is
E[ξ] = ∫_{−∞}^{+∞} x dΦ(x) = ∫_0^1 Φ⁻¹(α)dα.

See Figure 2.14. The theorem is proved.
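Theorem 2.25 suggests a simple numeric recipe: average Φ⁻¹ over a fine grid of α values. The sketch below (illustrative parameter values, not from the book) recovers (a + b)/2 for L(2, 6) and e for N(1, 2).

import math

def expected_value(inv, n=10000):
    """Midpoint approximation of E[xi] = integral of Phi^{-1}(alpha)
    over (0, 1), as in Theorem 2.25."""
    return sum(inv((k + 0.5) / n) for k in range(n)) / n

lin_inv = lambda a: (1 - a) * 2 + a * 6   # L(2, 6), expected value (2 + 6)/2
nor_inv = lambda a: 1 + 2 * math.sqrt(3) / math.pi * math.log(a / (1 - a))  # N(1, 2)
print(expected_value(lin_inv))            # about 4.0
print(expected_value(nor_inv))            # about 1.0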

Exercise 2.45: Show that the linear uncertain variable ξ ∼ L(a, b) has an
expected value

E[ξ] = (a + b)/2.   (2.155)

Exercise 2.46: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an expected value

E[ξ] = (a + 2b + c)/4.   (2.156)

Exercise 2.47: Show that the normal uncertain variable ξ ∼ N (e, σ) has
an expected value e, i.e.,
E[ξ] = e. (2.157)
74 Chapter 2 - Uncertain Variable

Exercise 2.48: Show that the lognormal uncertain variable ξ ∼ LOGN (e, σ)
has an expected value

E[ξ] = σ√3 exp(e) csc(σ√3) if σ < π/√3, and E[ξ] = +∞ if σ ≥ π/√3.   (2.158)

This formula was first discovered by Dr. Zhongfeng Qin with the help of
Maple software, and was verified again by Dr. Kai Yao through a rigorous
mathematical derivation.
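The formula (2.158) can also be checked numerically against Theorem 2.25 by integrating the lognormal inverse distribution (2.40); the following sketch uses the hypothetical parameters e = 0.5 and σ = 1.

import math

e, sigma = 0.5, 1.0                     # hypothetical values with sigma < pi/sqrt(3)
inv = lambda a: math.exp(e + sigma * math.sqrt(3) / math.pi * math.log(a / (1 - a)))
n = 10**6
numeric = sum(inv((k + 0.5) / n) for k in range(n)) / n   # Theorem 2.25
exact = sigma * math.sqrt(3) * math.exp(e) / math.sin(sigma * math.sqrt(3))
print(numeric, exact)                   # both close to 2.89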

Exercise 2.49: Let ξ be an uncertain variable with empirical uncertainty distribution

Φ(x) = 0, if x < x₁;
Φ(x) = αᵢ + (αᵢ₊₁ − αᵢ)(x − xᵢ)/(xᵢ₊₁ − xᵢ), if xᵢ ≤ x ≤ xᵢ₊₁, 1 ≤ i < n;
Φ(x) = 1, if x > xₙ

where x₁ < x₂ < · · · < xₙ and 0 ≤ α₁ ≤ α₂ ≤ · · · ≤ αₙ ≤ 1. Show that

E[ξ] = ((α₁ + α₂)/2)x₁ + Σ_{i=2}^{n−1} ((αᵢ₊₁ − αᵢ₋₁)/2)xᵢ + (1 − (αₙ₋₁ + αₙ)/2)xₙ.   (2.159)
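Formula (2.159) is straightforward to implement; a sketch with hypothetical data points follows.

def empirical_expected(xs, alphas):
    """E[xi] for the empirical uncertainty distribution of Exercise 2.49,
    computed by formula (2.159)."""
    n = len(xs)
    e = (alphas[0] + alphas[1]) / 2 * xs[0]
    for i in range(1, n - 1):
        e += (alphas[i + 1] - alphas[i - 1]) / 2 * xs[i]
    e += (1 - (alphas[n - 2] + alphas[n - 1]) / 2) * xs[n - 1]
    return e

print(empirical_expected([1, 2, 3], [0.2, 0.5, 0.8]))   # 2.0 for this data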

Expected Value of Monotone Function of Uncertain Variables


Theorem 2.26 (Liu-Ha [104]) Assume ξ1 , ξ2 , · · · , ξn are independent uncer-
tain variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respec-
tively. If f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm
and strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.160)

has an expected value

E[ξ] = ∫_0^1 f(Φ₁⁻¹(α), · · ·, Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · ·, Φₙ⁻¹(1 − α))dα.   (2.161)

Proof: Since the function f (x1 , x2 , · · · , xn ) is strictly increasing with respect


to x1 , x2 , · · · , xm and strictly decreasing with respect to xm+1 , xm+2 , · · · , xn ,
it follows from Theorem 2.14 that the inverse uncertainty distribution of ξ is

Ψ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φₘ⁻¹(α), Φₘ₊₁⁻¹(1 − α), · · · , Φₙ⁻¹(1 − α)).

By using Theorem 2.25, we obtain (2.161). The theorem is proved.



Exercise 2.50: Let ξ be an uncertain variable with regular uncertainty


distribution Φ, and let f (x) be a strictly monotone (increasing or decreasing)
function. Show that
E[f(ξ)] = ∫_0^1 f(Φ⁻¹(α))dα.   (2.162)

Exercise 2.51: Let ξ be an uncertain variable with uncertainty distribution


Φ, and let f (x) be a strictly monotone (increasing or decreasing) function.
Show that

E[f(ξ)] = ∫_{−∞}^{+∞} f(x)dΦ(x).   (2.163)

Exercise 2.52: Let ξ and η be independent and positive uncertain variables


with regular uncertainty distributions Φ and Ψ, respectively. Show that
E[ξη] = ∫_0^1 Φ⁻¹(α)Ψ⁻¹(α)dα.   (2.164)

Exercise 2.53: Let ξ and η be independent and positive uncertain variables


with regular uncertainty distributions Φ and Ψ, respectively. Show that

E[ξ/η] = ∫_0^1 Φ⁻¹(α)/Ψ⁻¹(1 − α) dα.   (2.165)

Exercise 2.54: Assume ξ and η are independent and positive uncertain


variables with regular uncertainty distributions Φ and Ψ, respectively. Show
that

E[ξ/(ξ + η)] = ∫_0^1 Φ⁻¹(α)/(Φ⁻¹(α) + Ψ⁻¹(1 − α)) dα.   (2.166)

Linearity of Expected Value Operator


Theorem 2.27 (Liu [84]) Let ξ and η be independent uncertain variables
with finite expected values. Then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (2.167)

Proof: Without loss of generality, suppose ξ and η have regular uncertainty


distributions Φ and Ψ, respectively. Otherwise, we may give the uncertainty
distributions a small perturbation such that they become regular.
Step 1: We first prove E[aξ] = aE[ξ]. If a = 0, then the equation holds
trivially. If a > 0, then the inverse uncertainty distribution of aξ is

Υ−1 (α) = aΦ−1 (α).



It follows from Theorem 2.25 that

E[aξ] = ∫_0^1 aΦ⁻¹(α)dα = a ∫_0^1 Φ⁻¹(α)dα = aE[ξ].

If a < 0, then the inverse uncertainty distribution of aξ is

Υ−1 (α) = aΦ−1 (1 − α).

It follows from Theorem 2.25 that

E[aξ] = ∫_0^1 aΦ⁻¹(1 − α)dα = a ∫_0^1 Φ⁻¹(α)dα = aE[ξ].

Thus we always have E[aξ] = aE[ξ].


Step 2: We prove E[ξ + η] = E[ξ] + E[η]. The inverse uncertainty
distribution of the sum ξ + η is

Υ−1 (α) = Φ−1 (α) + Ψ−1 (α).

It follows from Theorem 2.25 that

E[ξ + η] = ∫_0^1 Υ⁻¹(α)dα = ∫_0^1 Φ⁻¹(α)dα + ∫_0^1 Ψ⁻¹(α)dα = E[ξ] + E[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.

Example 2.15: Generally speaking, the expected value operator is not


necessarily linear if the independence is not assumed. For example, take an
uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 } with power set and M{γ1 } =
0.6, M{γ₂} = 0.3 and M{γ₃} = 0.2. Define two uncertain variables as follows,

ξ(γ) = 1 if γ = γ₁; 0 if γ = γ₂; 2 if γ = γ₃,   η(γ) = 0 if γ = γ₁; 2 if γ = γ₂; 3 if γ = γ₃.

Note that ξ and η are not independent, and their sum is

(ξ + η)(γ) = 1 if γ = γ₁; 2 if γ = γ₂; 5 if γ = γ₃.

It is easy to verify that E[ξ] = 0.9, E[η] = 1 and E[ξ + η] = 2. Thus we have

E[ξ + η] > E[ξ] + E[η].



If the uncertain variables are defined by

ξ(γ) = 0 if γ = γ₁; 1 if γ = γ₂; 2 if γ = γ₃,   η(γ) = 0 if γ = γ₁; 3 if γ = γ₂; 1 if γ = γ₃,

then

(ξ + η)(γ) = 0 if γ = γ₁; 4 if γ = γ₂; 3 if γ = γ₃.

It is easy to verify that E[ξ] = 0.6, E[η] = 1 and E[ξ + η] = 1.5. Thus we
have
E[ξ + η] < E[ξ] + E[η].
Therefore, the independence condition cannot be removed.

Comonotonic Functions of Uncertain Variable


Two real-valued functions f and g are said to be comonotonic if for any
numbers x and y, we always have
(f (x) − f (y))(g(x) − g(y)) ≥ 0. (2.168)
It is easy to verify that (i) any function is comonotonic with any positive
constant multiple of the function; (ii) any monotone increasing functions are
comonotonic with each other; and (iii) any monotone decreasing functions
are also comonotonic with each other.
Theorem 2.28 (Yang [167]) Let f and g be comonotonic functions. Then
for any uncertain variable ξ, we have
E[f (ξ) + g(ξ)] = E[f (ξ)] + E[g(ξ)]. (2.169)
Proof: Without loss of generality, suppose f (ξ) and g(ξ) have regular un-
certainty distributions Φ and Ψ, respectively. Otherwise, we may give the
uncertainty distributions a small perturbation such that they become regu-
lar. Since f and g are comonotonic functions, at least one of the following
relations is true,
{f (ξ) ≤ Φ−1 (α)} ⊂ {g(ξ) ≤ Ψ−1 (α)},
{f (ξ) ≤ Φ−1 (α)} ⊃ {g(ξ) ≤ Ψ−1 (α)}.
On the one hand, we have
M{f (ξ) + g(ξ) ≤ Φ−1 (α) + Ψ−1 (α)}
≥ M{(f (ξ) ≤ Φ−1 (α)) ∩ (g(ξ) ≤ Ψ−1 (α))}
= M{f (ξ) ≤ Φ−1 (α)} ∧ M{g(ξ) ≤ Ψ−1 (α)}
= α ∧ α = α.

On the other hand, we have

M{f (ξ) + g(ξ) ≤ Φ−1 (α) + Ψ−1 (α)}


≤ M{(f (ξ) ≤ Φ−1 (α)) ∪ (g(ξ) ≤ Ψ−1 (α))}
= M{f (ξ) ≤ Φ−1 (α)} ∨ M{g(ξ) ≤ Ψ−1 (α)}
= α ∨ α = α.

It follows that

M{f (ξ) + g(ξ) ≤ Φ−1 (α) + Ψ−1 (α)} = α

holds for each α. That is, Φ−1 (α) + Ψ−1 (α) is the inverse uncertainty distri-
bution of f (ξ) + g(ξ). By using Theorem 2.25, we obtain
E[f(ξ) + g(ξ)] = ∫_0^1 (Φ⁻¹(α) + Ψ⁻¹(α))dα = ∫_0^1 Φ⁻¹(α)dα + ∫_0^1 Ψ⁻¹(α)dα = E[f(ξ)] + E[g(ξ)].

The theorem is verified.

Exercise 2.55: Let ξ be a positive uncertain variable. Show that ln x and


exp(x) are comonotonic functions on (0, +∞), and

E[ln ξ + exp(ξ)] = E[ln ξ] + E[exp(ξ)]. (2.170)

Exercise 2.56: Let ξ be a positive uncertain variable. Show that x, x2 ,


· · · , xn are comonotonic functions on [0, +∞), and

E[ξ + ξ 2 + · · · + ξ n ] = E[ξ] + E[ξ 2 ] + · · · + E[ξ n ]. (2.171)

Some Inequalities
Theorem 2.29 (Liu [77]) Let ξ be an uncertain variable, and let f be a
nonnegative even function. If f is decreasing on (−∞, 0] and increasing on
[0, ∞), then for any given number t > 0, we have

M{|ξ| ≥ t} ≤ E[f(ξ)]/f(t).   (2.172)

Proof: It is clear that M{|ξ| ≥ f⁻¹(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f(ξ) that

E[f(ξ)] = ∫_0^{+∞} M{f(ξ) ≥ x}dx = ∫_0^{+∞} M{|ξ| ≥ f⁻¹(x)}dx ≥ ∫_0^{f(t)} M{|ξ| ≥ f⁻¹(x)}dx ≥ ∫_0^{f(t)} M{|ξ| ≥ f⁻¹(f(t))}dx = ∫_0^{f(t)} M{|ξ| ≥ t}dx = f(t) · M{|ξ| ≥ t}

which proves the inequality.


Theorem 2.30 (Liu [77], Markov Inequality) Let ξ be an uncertain variable.
Then for any given numbers t > 0 and p > 0, we have
M{|ξ| ≥ t} ≤ E[|ξ|^p]/t^p.   (2.173)
Proof: It is a special case of Theorem 2.29 when f (x) = |x|p .

Example 2.16: For any given positive number t, we define an uncertain


variable as follows,

ξ = 0 with uncertain measure 1/2; t with uncertain measure 1/2.

Then E[ξ^p] = t^p/2 and M{ξ ≥ t} = 1/2 = E[ξ^p]/t^p.


Theorem 2.31 (Liu [77], Hölder's Inequality) Let p and q be positive numbers with 1/p + 1/q = 1, and let ξ and η be independent uncertain variables. Then

E[|ξη|] ≤ (E[|ξ|^p])^{1/p} (E[|η|^q])^{1/q}.   (2.174)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^q] > 0. It is easy to prove that the function f(x, y) = x^{1/p} y^{1/q} is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that

f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀), ∀x ≥ 0, y ≥ 0.

Letting x₀ = E[|ξ|^p], y₀ = E[|η|^q], x = |ξ|^p and y = |η|^q, we have

f(|ξ|^p, |η|^q) − f(E[|ξ|^p], E[|η|^q]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^q − E[|η|^q]).

Taking the expected values on both sides, we obtain

E[f(|ξ|^p, |η|^q)] ≤ f(E[|ξ|^p], E[|η|^q]).

Hence the inequality (2.174) holds.

Theorem 2.32 (Liu [77], Minkowski Inequality) Let p be a real number with p ≥ 1, and let ξ and η be independent uncertain variables. Then

(E[|ξ + η|^p])^{1/p} ≤ (E[|ξ|^p])^{1/p} + (E[|η|^p])^{1/p}.   (2.175)

Proof: The inequality holds trivially if at least one of ξ and η is zero a.s. Now we assume E[|ξ|^p] > 0 and E[|η|^p] > 0. It is easy to prove that the function f(x, y) = (x^{1/p} + y^{1/p})^p is a concave function on {(x, y) : x ≥ 0, y ≥ 0}. Thus for any point (x₀, y₀) with x₀ > 0 and y₀ > 0, there exist two real numbers a and b such that

f(x, y) − f(x₀, y₀) ≤ a(x − x₀) + b(y − y₀), ∀x ≥ 0, y ≥ 0.

Letting x₀ = E[|ξ|^p], y₀ = E[|η|^p], x = |ξ|^p and y = |η|^p, we have

f(|ξ|^p, |η|^p) − f(E[|ξ|^p], E[|η|^p]) ≤ a(|ξ|^p − E[|ξ|^p]) + b(|η|^p − E[|η|^p]).

Taking the expected values on both sides, we obtain

E[f(|ξ|^p, |η|^p)] ≤ f(E[|ξ|^p], E[|η|^p]).

Hence the inequality (2.175) holds.

Theorem 2.33 (Liu [77], Jensen’s Inequality) Let ξ be an uncertain vari-


able, and let f be a convex function. Then

f (E[ξ]) ≤ E[f (ξ)]. (2.176)

Especially, when f (x) = |x|p and p ≥ 1, we have |E[ξ]|p ≤ E[|ξ|p ].

Proof: Since f is a convex function, for each y, there exists a number k such
that f (x) − f (y) ≥ k · (x − y). Replacing x with ξ and y with E[ξ], we obtain

f (ξ) − f (E[ξ]) ≥ k · (ξ − E[ξ]).

Taking the expected values on both sides, we have

E[f (ξ)] − f (E[ξ]) ≥ k · (E[ξ] − E[ξ]) = 0

which proves the inequality.

Exercise 2.57: (Zhang [203]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain


variables with finite expected values, and let f be a convex function. Show
that
f (E[ξ1 ], E[ξ2 ], · · · , E[ξn ]) ≤ E[f (ξ1 , ξ2 , · · · , ξn )]. (2.177)

2.8 Variance
The variance of uncertain variable provides a degree of the spread of the
distribution around its expected value. A small value of variance indicates
that the uncertain variable is tightly concentrated around its expected value;
and a large value of variance indicates that the uncertain variable has a wide
spread around its expected value.
Definition 2.17 (Liu [77]) Let ξ be an uncertain variable with finite expected
value e. Then the variance of ξ is
V [ξ] = E[(ξ − e)2 ]. (2.178)
This definition tells us that the variance is just the expected value of
(ξ − e)2 . Since (ξ − e)2 is a nonnegative uncertain variable, we also have
V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ x}dx.   (2.179)

Theorem 2.34 (Liu [77]) If ξ is an uncertain variable with finite expected


value, a and b are real numbers, then
V [aξ + b] = a2 V [ξ]. (2.180)
Proof: Let e be the expected value of ξ. Then aξ + b has an expected value
ae + b. It follows from the definition of variance that
V[aξ + b] = E[(aξ + b − (ae + b))²] = a²E[(ξ − e)²] = a²V[ξ].

The theorem is thus verified.


Theorem 2.35 (Liu [77]) Let ξ be an uncertain variable with expected value
e. Then V [ξ] = 0 if and only if M{ξ = e} = 1. That is, the uncertain
variable ξ is essentially the constant e.
Proof: We first assume V[ξ] = 0. It follows from the equation (2.179) that

∫_0^{+∞} M{(ξ − e)² ≥ x}dx = 0

which implies M{(ξ − e)² ≥ x} = 0 for any x > 0. Hence we have

M{(ξ − e)² = 0} = 1.

That is, M{ξ = e} = 1. Conversely, assume M{ξ = e} = 1. Then we immediately have M{(ξ − e)² = 0} = 1 and M{(ξ − e)² ≥ x} = 0 for any x > 0. Thus

V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ x}dx = 0.
The theorem is proved.

Theorem 2.36 (Yao [178]) Let ξ and η be independent uncertain variables


whose variances exist. Then
p p p
V [ξ + η] ≤ V [ξ] + V [η]. (2.181)

Proof: It is a special case of Theorem 2.32 when p = 2 and the uncertain variables ξ and η are replaced with ξ − E[ξ] and η − E[η], respectively.

Theorem 2.37 (Liu [77], Chebyshev Inequality) Let ξ be an uncertain variable whose variance exists. Then for any given number t > 0, we have

M{|ξ − E[ξ]| ≥ t} ≤ V[ξ]/t².    (2.182)
Proof: It is a special case of Theorem 2.29 when the uncertain variable ξ is
replaced with ξ − E[ξ], and f (x) = x2 .

Example 2.17: For any given positive number t, we define an uncertain variable as follows,

ξ = −t with uncertain measure 1/2, and ξ = t with uncertain measure 1/2.

Then V[ξ] = t² and M{|ξ − E[ξ]| ≥ t} = 1 = V[ξ]/t².

How to Obtain Variance from Uncertainty Distribution?


Let ξ be an uncertain variable with expected value e. If we only know its
uncertainty distribution Φ, then the variance
V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≥ x} dx
     = ∫_0^{+∞} M{(ξ ≥ e + √x) ∪ (ξ ≤ e − √x)} dx
     ≤ ∫_0^{+∞} (M{ξ ≥ e + √x} + M{ξ ≤ e − √x}) dx
     = ∫_0^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.

Thus we have the following stipulation.

Stipulation 2.1 (Liu [84]) Let ξ be an uncertain variable with uncertainty distribution Φ and finite expected value e. Then

V[ξ] = ∫_0^{+∞} (1 − Φ(e + √x) + Φ(e − √x)) dx.    (2.183)

Theorem 2.38 (Liu [95]) Let ξ be an uncertain variable with uncertainty distribution Φ and finite expected value e. Then

V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x).    (2.184)

Proof: This theorem is based on Stipulation 2.1 that says the variance of ξ is

V[ξ] = ∫_0^{+∞} (1 − Φ(e + √y)) dy + ∫_0^{+∞} Φ(e − √y) dy.

Substituting e + √y with x and y with (x − e)², the change of variables and integration by parts produce

∫_0^{+∞} (1 − Φ(e + √y)) dy = ∫_e^{+∞} (1 − Φ(x)) d(x − e)² = ∫_e^{+∞} (x − e)² dΦ(x).

Similarly, substituting e − √y with x and y with (x − e)², we obtain

∫_0^{+∞} Φ(e − √y) dy = ∫_e^{−∞} Φ(x) d(x − e)² = ∫_{−∞}^{e} (x − e)² dΦ(x).

It follows that the variance is

V[ξ] = ∫_e^{+∞} (x − e)² dΦ(x) + ∫_{−∞}^{e} (x − e)² dΦ(x) = ∫_{−∞}^{+∞} (x − e)² dΦ(x).

The theorem is verified.

Theorem 2.39 (Yao [178]) Let ξ be an uncertain variable with regular un-
certainty distribution Φ and finite expected value e. Then
V[ξ] = ∫_0^1 (Φ⁻¹(α) − e)² dα.    (2.185)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.38 that the variance is
V[ξ] = ∫_{−∞}^{+∞} (x − e)² dΦ(x) = ∫_0^1 (Φ⁻¹(α) − e)² dα.

The theorem is verified.

Exercise 2.58: Show that the linear uncertain variable ξ ∼ L(a, b) has a
variance
V[ξ] = (b − a)²/12.    (2.186)

Exercise 2.59: Show that the normal uncertain variable ξ ∼ N (e, σ) has a
variance
V[ξ] = σ².    (2.187)
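Formulas such as (2.185) also lend themselves directly to numerical evaluation. The following Python sketch (the helper name and grid size are our own choices, offered only as an illustration) approximates the integral by a midpoint sum and reproduces (2.186) and (2.187):

```python
import math

def variance(inv_dist, e, n=20000):
    """Approximate V[xi] = integral_0^1 (inv_dist(alpha) - e)^2 d(alpha), i.e. (2.185)."""
    return sum((inv_dist((i + 0.5) / n) - e) ** 2 for i in range(n)) / n

# linear uncertain variable L(a, b): inverse distribution (1 - alpha)a + alpha*b
a, b = 1.0, 4.0
lin_inv = lambda t: (1 - t) * a + t * b
print(variance(lin_inv, (a + b) / 2), (b - a) ** 2 / 12)   # both about 0.75

# normal uncertain variable N(e, sigma):
# inverse distribution e + (sigma*sqrt(3)/pi) ln(alpha/(1 - alpha))
e0, sigma = 0.0, 2.0
nor_inv = lambda t: e0 + sigma * math.sqrt(3) / math.pi * math.log(t / (1 - t))
print(variance(nor_inv, e0), sigma ** 2)                   # both about 4.0
```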

Exercise 2.60: Let ξ and η be independent uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Assume there exist two real numbers a and b such that

Φ−1 (α) = aΨ−1 (α) + b (2.188)

for all α ∈ (0, 1). Show that


√(V[ξ + η]) = √(V[ξ]) + √(V[η])    (2.189)

in the sense of Stipulation 2.1.

Remark 2.9: If ξ and η are independent linear uncertain variables, then the
condition (2.188) is met. If they are independent normal uncertain variables,
then the condition (2.188) is also met.

2.9 Moment
Definition 2.18 (Liu [77]) Let ξ be an uncertain variable and let k be a
positive integer. Then E[ξ k ] is called the k-th moment of ξ.

Theorem 2.40 (Liu [95]) Let ξ be an uncertain variable with uncertainty distribution Φ, and let k be an odd number. Then the k-th moment of ξ is

E[ξ^k] = ∫_0^{+∞} (1 − Φ(x^{1/k})) dx − ∫_{−∞}^0 Φ(x^{1/k}) dx.    (2.190)

Proof: Since k is an odd number, it follows from the definition of expected value operator that

E[ξ^k] = ∫_0^{+∞} M{ξ^k ≥ x} dx − ∫_{−∞}^0 M{ξ^k ≤ x} dx
       = ∫_0^{+∞} M{ξ ≥ x^{1/k}} dx − ∫_{−∞}^0 M{ξ ≤ x^{1/k}} dx
       = ∫_0^{+∞} (1 − Φ(x^{1/k})) dx − ∫_{−∞}^0 Φ(x^{1/k}) dx.

The theorem is proved.



However, when k is an even number, the k-th moment of ξ cannot be uniquely determined by the uncertainty distribution Φ. In this case, we have

E[ξ^k] = ∫_0^{+∞} M{ξ^k ≥ x} dx
       = ∫_0^{+∞} M{(ξ ≥ x^{1/k}) ∪ (ξ ≤ −x^{1/k})} dx
       ≤ ∫_0^{+∞} (M{ξ ≥ x^{1/k}} + M{ξ ≤ −x^{1/k}}) dx
       = ∫_0^{+∞} (1 − Φ(x^{1/k}) + Φ(−x^{1/k})) dx.

Thus for the even number k, we have the following stipulation.

Stipulation 2.2 (Liu [95]) Let ξ be an uncertain variable with uncertainty distribution Φ, and let k be an even number. Then the k-th moment of ξ is

E[ξ^k] = ∫_0^{+∞} (1 − Φ(x^{1/k}) + Φ(−x^{1/k})) dx.    (2.191)

Theorem 2.41 (Liu [95]) Let ξ be an uncertain variable with uncertainty distribution Φ, and let k be a positive integer. Then the k-th moment of ξ is

E[ξ^k] = ∫_{−∞}^{+∞} x^k dΦ(x).    (2.192)

Proof: When k is an odd number, Theorem 2.40 says that the k-th moment is

E[ξ^k] = ∫_0^{+∞} (1 − Φ(y^{1/k})) dy − ∫_{−∞}^0 Φ(y^{1/k}) dy.

Substituting y^{1/k} with x and y with x^k, the change of variables and integration by parts produce

∫_0^{+∞} (1 − Φ(y^{1/k})) dy = ∫_0^{+∞} (1 − Φ(x)) dx^k = ∫_0^{+∞} x^k dΦ(x)

and

∫_{−∞}^0 Φ(y^{1/k}) dy = ∫_{−∞}^0 Φ(x) dx^k = −∫_{−∞}^0 x^k dΦ(x).

Thus we have

E[ξ^k] = ∫_0^{+∞} x^k dΦ(x) + ∫_{−∞}^0 x^k dΦ(x) = ∫_{−∞}^{+∞} x^k dΦ(x).

When k is an even number, the theorem is based on Stipulation 2.2 that says the k-th moment is

E[ξ^k] = ∫_0^{+∞} (1 − Φ(y^{1/k}) + Φ(−y^{1/k})) dy.

Substituting y^{1/k} with x and y with x^k, the change of variables and integration by parts produce

∫_0^{+∞} (1 − Φ(y^{1/k})) dy = ∫_0^{+∞} (1 − Φ(x)) dx^k = ∫_0^{+∞} x^k dΦ(x).

Similarly, substituting −y^{1/k} with x and y with x^k, we obtain

∫_0^{+∞} Φ(−y^{1/k}) dy = ∫_{−∞}^0 Φ(x) dx^k = ∫_{−∞}^0 x^k dΦ(x).

It follows that the k-th moment is

E[ξ^k] = ∫_0^{+∞} x^k dΦ(x) + ∫_{−∞}^0 x^k dΦ(x) = ∫_{−∞}^{+∞} x^k dΦ(x).

The theorem is thus verified for any positive integer k.

Theorem 2.42 (Sheng-Kar [140]) Let ξ be an uncertain variable with regular uncertainty distribution Φ, and let k be a positive integer. Then the k-th moment of ξ is

E[ξ^k] = ∫_0^1 (Φ⁻¹(α))^k dα.    (2.193)

Proof: Substituting Φ(x) with α and x with Φ−1 (α), it follows from the
change of variables of integral and Theorem 2.41 that the k-th moment is
E[ξ^k] = ∫_{−∞}^{+∞} x^k dΦ(x) = ∫_0^1 (Φ⁻¹(α))^k dα.

The theorem is verified.

Exercise 2.61: Show that the second moment of linear uncertain variable
ξ ∼ L(a, b) is
E[ξ²] = (a² + ab + b²)/3.    (2.194)

Exercise 2.62: Show that the second moment of normal uncertain variable
ξ ∼ N (e, σ) is
E[ξ²] = e² + σ².    (2.195)
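For instance, (2.194) follows in one line from Theorem 2.42: for ξ ∼ L(a, b) the inverse distribution is Φ⁻¹(α) = (1 − α)a + αb, so

E[ξ²] = ∫_0^1 ((1 − α)a + αb)² dα = ∫_0^1 (a + (b − a)α)² dα = a² + a(b − a) + (b − a)²/3 = (a² + ab + b²)/3.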

2.10 Distance
Definition 2.19 (Liu [77]) The distance between uncertain variables ξ and
η is defined as
d(ξ, η) = E[|ξ − η|]. (2.196)
That is, the distance between ξ and η is just the expected value of |ξ − η|.
Since |ξ − η| is a nonnegative uncertain variable, we always have
d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx.    (2.197)

Theorem 2.43 (Liu [77]) Let ξ, η, τ be uncertain variables, and let d(·, ·) be
the distance. Then we have
(a) (Nonnegativity) d(ξ, η) ≥ 0;
(b) (Identification) d(ξ, η) = 0 if and only if ξ = η;
(c) (Symmetry) d(ξ, η) = d(η, ξ);
(d) (Triangle Inequality) d(ξ, η) ≤ 2d(ξ, τ ) + 2d(η, τ ).
Proof: The parts (a), (b) and (c) follow immediately from the definition.
Now we prove the part (d). It follows from the subadditivity axiom that
d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx
        ≤ ∫_0^{+∞} M{|ξ − τ| + |τ − η| ≥ x} dx
        ≤ ∫_0^{+∞} M{(|ξ − τ| ≥ x/2) ∪ (|τ − η| ≥ x/2)} dx
        ≤ ∫_0^{+∞} (M{|ξ − τ| ≥ x/2} + M{|τ − η| ≥ x/2}) dx
        = 2E[|ξ − τ|] + 2E[|τ − η|] = 2d(ξ, τ) + 2d(τ, η).

Example 2.18: Let Γ = {γ1, γ2, γ3}. Define M{∅} = 0, M{Γ} = 1 and M{Λ} = 1/2 for any subset Λ (excluding ∅ and Γ). We set uncertain variables ξ, η and τ as follows,

ξ(γ) = 1 if γ = γ1, 1 if γ = γ2, 0 if γ = γ3;
η(γ) = 0 if γ = γ1, −1 if γ = γ2, −1 if γ = γ3;
τ(γ) ≡ 0.

It is easy to verify that d(ξ, τ ) = d(τ, η) = 0.5 and d(ξ, η) = 1.5. Thus
d(ξ, η) = 1.5(d(ξ, τ ) + d(τ, η)).
A conjecture is d(ξ, η) ≤ 1.5(d(ξ, τ )+d(τ, η)) for arbitrary uncertain variables
ξ, η and τ . This is an open problem.

How to Obtain Distance from Uncertainty Distributions?


Let ξ and η be independent uncertain variables. If ξ − η has an uncertainty
distribution Υ, then the distance between ξ and η is
d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≥ x} dx
        = ∫_0^{+∞} M{(ξ − η ≥ x) ∪ (ξ − η ≤ −x)} dx
        ≤ ∫_0^{+∞} (M{ξ − η ≥ x} + M{ξ − η ≤ −x}) dx
        = ∫_0^{+∞} (1 − Υ(x) + Υ(−x)) dx.
Thus we have the following stipulation.
Thus we have the following stipulation.
Stipulation 2.3 (Liu [95]) Let ξ and η be independent uncertain variables,
and let Υ be the uncertainty distribution of ξ − η. Then the distance between
ξ and η is
d(ξ, η) = ∫_0^{+∞} (1 − Υ(x) + Υ(−x)) dx.    (2.198)
Theorem 2.44 (Liu [95]) Let ξ and η be independent uncertain variables,
and let Υ be the uncertainty distribution of ξ − η. Then the distance between
ξ and η is
d(ξ, η) = ∫_{−∞}^{+∞} |x| dΥ(x).    (2.199)

Proof: This theorem is based on Stipulation 2.3. The change of variables


and integration by parts produce
d(ξ, η) = ∫_0^{+∞} (1 − Υ(x) + Υ(−x)) dx
        = ∫_0^{+∞} x dΥ(x) − ∫_0^{+∞} x dΥ(−x)
        = ∫_0^{+∞} |x| dΥ(x) + ∫_{−∞}^0 |x| dΥ(x)
        = ∫_{−∞}^{+∞} |x| dΥ(x).

The theorem is proved.

Exercise 2.63: Let ξ be an uncertain variable with uncertainty distribution


Φ, and let c be a constant. Show that the distance between ξ and c is
d(ξ, c) = ∫_{−∞}^{+∞} |x − c| dΦ(x).    (2.200)

Theorem 2.45 (Liu [95]) Let ξ and η be independent uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Then the distance between ξ and η is

d(ξ, η) = ∫_0^1 |Υ⁻¹(α)| dα    (2.201)

where Υ⁻¹(α) is the inverse uncertainty distribution of ξ − η, and

Υ⁻¹(α) = Φ⁻¹(α) − Ψ⁻¹(1 − α).    (2.202)

Proof: Substituting Υ(x) with α and x with Υ−1 (α), it follows from the
change of variables and Theorem 2.44 that the distance is
d(ξ, η) = ∫_{−∞}^{+∞} |x| dΥ(x) = ∫_0^1 |Υ⁻¹(α)| dα.

The theorem is verified.

Exercise 2.64: Let ξ be an uncertain variable with regular uncertainty


distribution Φ, and let c be a constant. Show that the distance between ξ
and c is
d(ξ, c) = ∫_0^1 |Φ⁻¹(α) − c| dα.    (2.203)
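The integral (2.201) is easy to evaluate numerically. The sketch below (a minimal Python illustration; the helper name and grid size are our own choices) approximates d(ξ, η) for two linear uncertain variables:

```python
def distance(phi_inv, psi_inv, n=20000):
    """Approximate d(xi, eta) = integral_0^1 |phi_inv(a) - psi_inv(1 - a)| da (Theorem 2.45)."""
    return sum(abs(phi_inv((i + 0.5) / n) - psi_inv(1 - (i + 0.5) / n))
               for i in range(n)) / n

# two independent linear uncertain variables L(0, 2) and L(1, 3):
# here Upsilon^{-1}(a) = 2a - (1 + 2(1 - a)) = 4a - 3, so the exact distance is 1.25
phi_inv = lambda a: 2 * a
psi_inv = lambda a: 1 + 2 * a
print(distance(phi_inv, psi_inv))   # prints about 1.25
```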

2.11 Entropy
This section defines an entropy as the degree of difficulty of predicting the
realization of an uncertain variable.

Definition 2.20 (Liu [80]) Suppose that ξ is an uncertain variable with un-
certainty distribution Φ. Then its entropy is defined by
H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx    (2.204)

where S(t) = −t ln t − (1 − t) ln(1 − t).

Example 2.19: Let ξ be an uncertain variable with uncertainty distribution


Φ(x) = 0 if x < a, and 1 if x ≥ a.    (2.205)

Essentially, ξ is a constant a. It follows from the definition of entropy that


H[ξ] = −∫_{−∞}^{a} (0 ln 0 + 1 ln 1) dx − ∫_a^{+∞} (1 ln 1 + 0 ln 0) dx = 0.

Figure 2.15: Function S(t) = −t ln t − (1 − t) ln(1 − t). It is easy to verify that S(t) is a symmetric function about t = 0.5, strictly increasing on the interval [0, 0.5], strictly decreasing on the interval [0.5, 1], and reaches its unique maximum ln 2 at t = 0.5.

This means a constant has entropy 0.

Example 2.20: Let ξ be a linear uncertain variable L(a, b). Then its entropy
is
H[ξ] = −∫_a^b ((x − a)/(b − a) ln (x − a)/(b − a) + (b − x)/(b − a) ln (b − x)/(b − a)) dx = (b − a)/2.    (2.206)

Exercise 2.65: Show that the zigzag uncertain variable ξ ∼ Z(a, b, c) has
an entropy
H[ξ] = (c − a)/2.    (2.207)

Exercise 2.66: Show that the normal uncertain variable ξ ∼ N (e, σ) has
an entropy
H[ξ] = πσ/√3.    (2.208)
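Formula (2.208) can also be verified numerically from Definition 2.20. The sketch below (illustrative helper names and truncation range chosen by us) integrates S(Φ(x)) for a normal uncertainty distribution and compares against πσ/√3:

```python
import math

def S(t):
    """S(t) = -t ln t - (1 - t) ln(1 - t), with the convention S(0) = S(1) = 0."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

def entropy(dist, lo, hi, n=60000):
    """Approximate H[xi] = integral of S(Phi(x)) dx over [lo, hi] (Definition 2.20)."""
    h = (hi - lo) / n
    return sum(S(dist(lo + (i + 0.5) * h)) for i in range(n)) * h

e, sigma = 0.0, 1.0
normal = lambda x: 1 / (1 + math.exp(math.pi * (e - x) / (math.sqrt(3) * sigma)))
print(entropy(normal, -30, 30))        # about 1.8138
print(math.pi * sigma / math.sqrt(3))  # about 1.8138
```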

Theorem 2.46 Let ξ be an uncertain variable. Then H[ξ] ≥ 0 and equality holds if ξ is essentially a constant.

Proof: The nonnegativity is clear. In addition, when an uncertain variable tends to a constant, its entropy tends to the minimum 0.

Theorem 2.47 Let ξ be an uncertain variable taking values on the interval [a, b]. Then
H[ξ] ≤ (b − a) ln 2 (2.209)
and equality holds if ξ has an uncertainty distribution Φ(x) = 0.5 on [a, b].

Proof: The theorem follows from the fact that the function S(t) reaches its
maximum ln 2 at t = 0.5.

Theorem 2.48 Let ξ be an uncertain variable, and let c be a real number. Then
H[ξ + c] = H[ξ].    (2.210)
That is, the entropy is invariant under arbitrary translations.

Proof: Write the uncertainty distribution of ξ by Φ. Then the uncertain variable ξ + c has an uncertainty distribution Φ(x − c). It follows from the definition of entropy that

H[ξ + c] = ∫_{−∞}^{+∞} S(Φ(x − c)) dx = ∫_{−∞}^{+∞} S(Φ(x)) dx = H[ξ].

The theorem is proved.

Theorem 2.49 (Dai-Chen [20]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then

H[ξ] = ∫_0^1 Φ⁻¹(α) ln(α/(1 − α)) dα.    (2.211)

Proof: It is clear that S(α) is a derivable function whose derivative has the form

S′(α) = −ln(α/(1 − α)).

Since

S(Φ(x)) = ∫_0^{Φ(x)} S′(α) dα = −∫_{Φ(x)}^1 S′(α) dα,

we have

H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx = ∫_{−∞}^0 ∫_0^{Φ(x)} S′(α) dα dx − ∫_0^{+∞} ∫_{Φ(x)}^1 S′(α) dα dx.

It follows from Fubini theorem that

H[ξ] = ∫_0^{Φ(0)} ∫_{Φ⁻¹(α)}^0 S′(α) dx dα − ∫_{Φ(0)}^1 ∫_0^{Φ⁻¹(α)} S′(α) dx dα
     = −∫_0^{Φ(0)} Φ⁻¹(α)S′(α) dα − ∫_{Φ(0)}^1 Φ⁻¹(α)S′(α) dα
     = −∫_0^1 Φ⁻¹(α)S′(α) dα = ∫_0^1 Φ⁻¹(α) ln(α/(1 − α)) dα.
The theorem is verified.

Theorem 2.50 (Dai-Chen [20]) Let ξ1, ξ2, · · · , ξn be independent uncertain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respectively.
If f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and
strictly decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (2.212)

has an entropy
H[ξ] = ∫_0^1 f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) ln(α/(1 − α)) dα.

Proof: Since f(x1, x2, · · · , xn) is strictly increasing with respect to x1, x2, · · · , xm and strictly decreasing with respect to xm+1, xm+2, · · · , xn, it follows from Theorem 2.14 that the inverse uncertainty distribution of ξ is

Ψ⁻¹(α) = f(Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

By using Theorem 2.49, we get the entropy formula.

Exercise 2.67: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

H[ξη] = ∫_0^1 Φ⁻¹(α)Ψ⁻¹(α) ln(α/(1 − α)) dα.

Exercise 2.68: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

H[ξ/η] = ∫_0^1 (Φ⁻¹(α)/Ψ⁻¹(1 − α)) ln(α/(1 − α)) dα.

Exercise 2.69: Let ξ and η be independent and positive uncertain variables with regular uncertainty distributions Φ and Ψ, respectively. Show that

H[ξ/(ξ + η)] = ∫_0^1 (Φ⁻¹(α)/(Φ⁻¹(α) + Ψ⁻¹(1 − α))) ln(α/(1 − α)) dα.

Theorem 2.51 (Dai-Chen [20]) Let ξ and η be independent uncertain variables. Then for any real numbers a and b, we have

H[aξ + bη] = |a|H[ξ] + |b|H[η]. (2.213)

Proof: Without loss of generality, suppose ξ and η have regular uncertainty distributions Φ and Ψ, respectively. Otherwise, we may give the uncertainty distributions a small perturbation such that they become regular.
distributions a small perturbation such that they become regular.

Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the inverse uncertainty distribution of aξ is
Υ⁻¹(α) = aΦ⁻¹(α).
It follows from Theorem 2.49 that
H[aξ] = ∫_0^1 aΦ⁻¹(α) ln(α/(1 − α)) dα = a ∫_0^1 Φ⁻¹(α) ln(α/(1 − α)) dα = |a|H[ξ].
If a = 0, then we immediately have H[aξ] = 0 = |a|H[ξ]. If a < 0, then the inverse uncertainty distribution of aξ is
Υ⁻¹(α) = aΦ⁻¹(1 − α).
It follows from Theorem 2.49 that
H[aξ] = ∫_0^1 aΦ⁻¹(1 − α) ln(α/(1 − α)) dα = (−a) ∫_0^1 Φ⁻¹(α) ln(α/(1 − α)) dα = |a|H[ξ].
Thus we always have H[aξ] = |a|H[ξ].
Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the inverse uncertainty distribution of ξ + η is
Υ⁻¹(α) = Φ⁻¹(α) + Ψ⁻¹(α).
It follows from Theorem 2.49 that
H[ξ + η] = ∫_0^1 (Φ⁻¹(α) + Ψ⁻¹(α)) ln(α/(1 − α)) dα = H[ξ] + H[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].
The theorem is proved.

Example 2.21: The independence condition in Theorem 2.51 cannot be removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then ξ(γ) = γ is a linear uncertain
variable L(0, 1) with entropy
H[ξ] = 0.5, (2.214)
and η(γ) = 1 − γ is also a linear uncertain variable L(0, 1) with entropy
H[η] = 0.5. (2.215)
Note that ξ and η are not independent, and ξ + η ≡ 1 whose entropy is
H[ξ + η] = 0. (2.216)
Thus
H[ξ + η] ≠ H[ξ] + H[η].    (2.217)
Therefore, the independence condition cannot be removed.

Maximum Entropy Principle


Given some constraints, for example, expected value and variance, there are
usually multiple compatible uncertainty distributions. Which uncertainty
distribution shall we take? The maximum entropy principle attempts to
select the uncertainty distribution that has maximum entropy and satisfies
the prescribed constraints.

Theorem 2.52 (Chen-Dai [8]) Let ξ be an uncertain variable whose uncertainty distribution is arbitrary but the expected value e and variance σ². Then

H[ξ] ≤ πσ/√3    (2.218)
and the equality holds if ξ is a normal uncertain variable N (e, σ).

Proof: Let Φ(x) be the uncertainty distribution of ξ and write Ψ(x) = Φ(2e − x) for x ≥ e. It follows from the stipulation (2.1) and the change of variable of integral that the variance is

V[ξ] = 2∫_e^{+∞} (x − e)(1 − Φ(x)) dx + 2∫_e^{+∞} (x − e)Ψ(x) dx = σ².

Thus there exists a real number κ such that

2∫_e^{+∞} (x − e)(1 − Φ(x)) dx = κσ²,
2∫_e^{+∞} (x − e)Ψ(x) dx = (1 − κ)σ².
The maximum entropy distribution Φ should maximize the entropy

H[ξ] = ∫_{−∞}^{+∞} S(Φ(x)) dx = ∫_e^{+∞} S(Φ(x)) dx + ∫_e^{+∞} S(Ψ(x)) dx

subject to the above two constraints. The Lagrangian is

L = ∫_e^{+∞} S(Φ(x)) dx + ∫_e^{+∞} S(Ψ(x)) dx
    − α (2∫_e^{+∞} (x − e)(1 − Φ(x)) dx − κσ²)
    − β (2∫_e^{+∞} (x − e)Ψ(x) dx − (1 − κ)σ²).

The maximum entropy distribution meets Euler-Lagrange equations

ln Φ(x) − ln(1 − Φ(x)) = 2α(x − e),
ln Ψ(x) − ln(1 − Ψ(x)) = 2β(e − x).

Thus Φ and Ψ have the forms

Φ(x) = (1 + exp(2α(e − x)))⁻¹,
Ψ(x) = (1 + exp(2β(x − e)))⁻¹.

Substituting them into the variance constraints, we get

Φ(x) = (1 + exp(π(e − x)/(√(6κ)σ)))⁻¹,
Ψ(x) = (1 + exp(π(x − e)/(√(6(1 − κ))σ)))⁻¹.

Then the entropy is

H[ξ] = πσ√κ/√6 + πσ√(1 − κ)/√6

which achieves the maximum when κ = 1/2. Thus the maximum entropy distribution is just the normal uncertainty distribution N(e, σ).

2.12 Conditional Uncertainty Distribution


Definition 2.21 (Liu [77]) The conditional uncertainty distribution Φ of an
uncertain variable ξ given A is defined by
Φ(x|A) = M {ξ ≤ x|A} (2.219)
provided that M{A} > 0.
Theorem 2.53 (Liu [84]) Let ξ be an uncertain variable with uncertainty
distribution Φ(x), and let t be a real number with Φ(t) < 1. Then the condi-
tional uncertainty distribution of ξ given ξ > t is
Φ(x|(t, +∞)) =
    0,                            if Φ(x) ≤ Φ(t)
    (Φ(x)/(1 − Φ(t))) ∧ 0.5,      if Φ(t) < Φ(x) ≤ (1 + Φ(t))/2
    (Φ(x) − Φ(t))/(1 − Φ(t)),     if (1 + Φ(t))/2 ≤ Φ(x).
Proof: It follows from Φ(x|(t, +∞)) = M {ξ ≤ x|ξ > t} and the definition of
conditional uncertainty that
Φ(x|(t, +∞)) =
    M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t},      if M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} < 0.5
    1 − M{(ξ > x) ∩ (ξ > t)}/M{ξ > t},  if M{(ξ > x) ∩ (ξ > t)}/M{ξ > t} < 0.5
    0.5,                                 otherwise.

When Φ(x) ≤ Φ(t), we have x ≤ t, and

M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} = M{∅}/(1 − Φ(t)) = 0 < 0.5.

Thus
Φ(x|(t, +∞)) = M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} = 0.

When Φ(t) < Φ(x) ≤ (1 + Φ(t))/2, we have x > t, and

M{(ξ > x) ∩ (ξ > t)}/M{ξ > t} = (1 − Φ(x))/(1 − Φ(t)) ≥ (1 − (1 + Φ(t))/2)/(1 − Φ(t)) = 0.5

and

M{(ξ ≤ x) ∩ (ξ > t)}/M{ξ > t} ≤ Φ(x)/(1 − Φ(t)).

It follows from the maximum uncertainty principle that

Φ(x|(t, +∞)) = (Φ(x)/(1 − Φ(t))) ∧ 0.5.

When (1 + Φ(t))/2 ≤ Φ(x), we have x ≥ t, and

M{(ξ > x) ∩ (ξ > t)}/M{ξ > t} = (1 − Φ(x))/(1 − Φ(t)) ≤ (1 − (1 + Φ(t))/2)/(1 − Φ(t)) ≤ 0.5.

Thus
Φ(x|(t, +∞)) = 1 − M{(ξ > x) ∩ (ξ > t)}/M{ξ > t} = 1 − (1 − Φ(x))/(1 − Φ(t)) = (Φ(x) − Φ(t))/(1 − Φ(t)).

The theorem is proved.

Exercise 2.70: Let ξ be a linear uncertain variable L(a, b), and let t be a real
number with a < t < b. Show that the conditional uncertainty distribution
of ξ given ξ > t is


Φ(x|(t, +∞)) =
    0,                         if x ≤ t
    ((x − a)/(b − t)) ∧ 0.5,   if t < x ≤ (b + t)/2
    ((x − t)/(b − t)) ∧ 1,     if (b + t)/2 ≤ x.
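The piecewise formula of Theorem 2.53 translates directly into code. The following Python sketch (the function names are ours; the formula is exactly the one proved above) evaluates Φ(x|(t, +∞)) and reproduces the linear case of this exercise:

```python
def conditional_given_gt(phi, t):
    """Phi(x | (t, +inf)) built from the three cases of Theorem 2.53."""
    pt = phi(t)            # requires Phi(t) < 1
    def cond(x):
        px = phi(x)
        if px <= pt:
            return 0.0
        if px <= (1 + pt) / 2:
            return min(px / (1 - pt), 0.5)
        return (px - pt) / (1 - pt)
    return cond

# linear uncertain variable L(0, 4) conditioned on xi > 1, as in this exercise
phi = lambda x: max(0.0, min(1.0, x / 4))
cond = conditional_given_gt(phi, 1.0)
print(cond(0.5), cond(2.0), cond(3.5))   # 0.0, 0.5, about 0.833
```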

Figure 2.16: Conditional Uncertainty Distribution Φ(x|(t, +∞))

Theorem 2.54 (Liu [84]) Let ξ be an uncertain variable with uncertainty distribution Φ(x), and let t be a real number with Φ(t) > 0. Then the conditional uncertainty distribution of ξ given ξ ≤ t is

Φ(x|(−∞, t]) =
    Φ(x)/Φ(t),                          if Φ(x) ≤ Φ(t)/2
    ((Φ(x) + Φ(t) − 1)/Φ(t)) ∨ 0.5,     if Φ(t)/2 ≤ Φ(x) < Φ(t)
    1,                                   if Φ(t) ≤ Φ(x).
Proof: It follows from Φ(x|(−∞, t]) = M{ξ ≤ x|ξ ≤ t} and the definition of conditional uncertainty that

Φ(x|(−∞, t]) =
    M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t},      if M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t} < 0.5
    1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t},  if M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} < 0.5
    0.5,                                 otherwise.

When Φ(x) ≤ Φ(t)/2, we have x < t, and

M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = Φ(x)/Φ(t) ≤ (Φ(t)/2)/Φ(t) = 0.5.

Thus
Φ(x|(−∞, t]) = M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = Φ(x)/Φ(t).

When Φ(t)/2 ≤ Φ(x) < Φ(t), we have x < t, and

M{(ξ ≤ x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = Φ(x)/Φ(t) ≥ (Φ(t)/2)/Φ(t) = 0.5

and

M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} ≤ (1 − Φ(x))/Φ(t),

i.e.,

1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} ≥ (Φ(x) + Φ(t) − 1)/Φ(t).

It follows from the maximum uncertainty principle that

Φ(x|(−∞, t]) = ((Φ(x) + Φ(t) − 1)/Φ(t)) ∨ 0.5.

When Φ(t) ≤ Φ(x), we have x ≥ t, and

M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = M{∅}/Φ(t) = 0 < 0.5.

Thus
Φ(x|(−∞, t]) = 1 − M{(ξ > x) ∩ (ξ ≤ t)}/M{ξ ≤ t} = 1 − 0 = 1.
The theorem is proved.

Exercise 2.71: Let ξ be a linear uncertain variable L(a, b), and let t be a real
number with a < t < b. Show that the conditional uncertainty distribution
of ξ given ξ ≤ t is
Φ(x|(−∞, t]) =
    ((x − a)/(t − a)) ∨ 0,           if x ≤ (a + t)/2
    (1 − (b − x)/(t − a)) ∨ 0.5,     if (a + t)/2 ≤ x < t
    1,                                if x ≥ t.

2.13 Uncertain Sequence


Uncertain sequence is a sequence of uncertain variables indexed by integers.
This section introduces four convergence concepts of uncertain sequence: con-
vergence almost surely (a.s.), convergence in measure, convergence in mean,
and convergence in distribution.

Definition 2.22 (Liu [77]) The uncertain sequence {ξi } is said to be con-
vergent a.s. to ξ if there exists an event Λ with M{Λ} = 1 such that

lim_{i→∞} |ξi(γ) − ξ(γ)| = 0    (2.220)

for every γ ∈ Λ. In that case we write ξi → ξ, a.s.



Figure 2.17: Conditional Uncertainty Distribution Φ(x|(−∞, t])

Table 2.1: Relationship among Convergence Concepts

Convergence in Mean ⇒ Convergence in Measure ⇒ Convergence in Distribution

Convergence Almost Surely

Definition 2.23 (Liu [77]) The uncertain sequence {ξi } is said to be con-
vergent in measure to ξ if
lim_{i→∞} M{|ξi − ξ| ≥ ε} = 0    (2.221)

for every ε > 0.


Definition 2.24 (Liu [77]) The uncertain sequence {ξi } is said to be con-
vergent in mean to ξ if
lim_{i→∞} E[|ξi − ξ|] = 0.    (2.222)

Definition 2.25 (Liu [77]) Let Φ, Φ1 , Φ2 , · · · be the uncertainty distributions


of uncertain variables ξ, ξ1 , ξ2 , · · · , respectively. We say the uncertain se-
quence {ξi } converges in distribution to ξ if
lim_{i→∞} Φi(x) = Φ(x)    (2.223)

for all x at which Φ(x) is continuous.

Convergence in Mean vs. Convergence in Measure


Theorem 2.55 (Liu [77]) If the uncertain sequence {ξi } converges in mean
to ξ, then {ξi } converges in measure to ξ.

Proof: Since {ξi } converges in mean to ξ, we have E[|ξi − ξ|] → 0 as i → ∞.


For any given number ε > 0, it follows from Markov inequality that

M{|ξi − ξ| ≥ ε} ≤ E[|ξi − ξ|]/ε → 0
as i → ∞. Thus {ξi } converges in measure to ξ. The theorem is proved.

Example 2.22: Convergence in measure does not imply convergence in mean. Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with power set and

M{Λ} = Σ_{γj ∈ Λ} 1/2^j.

Define uncertain variables as

ξi(γj) = 2^i if j = i, and 0 otherwise

for i = 1, 2, · · · and ξ ≡ 0. For any small number ε > 0, we have

M{|ξi − ξ| ≥ ε} = M{γi} = 1/2^i → 0

as i → ∞. That is, the sequence {ξi} converges in measure to ξ. However, for each i, we have

E[|ξi − ξ|] = 1.

That is, the sequence {ξi} does not converge in mean to ξ.

Convergence in Measure vs. Convergence in Distribution


Theorem 2.56 (Liu [77]) If the uncertain sequence {ξi } converges in mea-
sure to ξ, then {ξi } converges in distribution to ξ.

Proof: Let x be a continuity point of the uncertainty distribution Φ. On


the one hand, for any y > x, we have

{ξi ≤ x} = {ξi ≤ x, ξ ≤ y} ∪ {ξi ≤ x, ξ > y} ⊂ {ξ ≤ y} ∪ {|ξi − ξ| ≥ y − x}.

It follows from the subadditivity axiom that

Φi (x) ≤ Φ(y) + M{|ξi − ξ| ≥ y − x}.

Since {ξi } converges in measure to ξ, we have M{|ξi − ξ| ≥ y − x} → 0 as


i → ∞. Thus we obtain lim supi→∞ Φi (x) ≤ Φ(y) for any y > x. Letting
y → x, we get
lim sup_{i→∞} Φi(x) ≤ Φ(x).    (2.224)

On the other hand, for any z < x, we have


{ξ ≤ z} = {ξi ≤ x, ξ ≤ z} ∪ {ξi > x, ξ ≤ z} ⊂ {ξi ≤ x} ∪ {|ξi − ξ| ≥ x − z}
which implies that
Φ(z) ≤ Φi (x) + M{|ξi − ξ| ≥ x − z}.
Since M{|ξi − ξ| ≥ x − z} → 0, we obtain Φ(z) ≤ lim inf i→∞ Φi (x) for any
z < x. Letting z → x, we get
Φ(x) ≤ lim inf_{i→∞} Φi(x).    (2.225)

It follows from (2.224) and (2.225) that Φi (x) → Φ(x) as i → ∞. The


theorem is proved.

Example 2.23: Convergence in distribution does not imply convergence in measure. Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with power set and M{γ1} = M{γ2} = 1/2. Define uncertain variables as

ξ(γ) = −1 if γ = γ1, and 1 if γ = γ2,

and ξi = −ξ for i = 1, 2, · · · Then ξi and ξ have the same uncertainty distribution. Thus {ξi} converges in distribution to ξ. However, for some small number ε > 0, we have

M{|ξi − ξ| ≥ ε} = M{Γ} = 1.

That is, the sequence {ξi} does not converge in measure to ξ.

Convergence Almost Surely vs. Convergence in Measure

Example 2.24: Convergence a.s. does not imply convergence in measure. Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with power set and

M{Λ} = 0 if Λ = ∅; 1 if Λ = Γ; 0.5 otherwise.

Define uncertain variables as

ξi(γj) = i if j = i, and 0 otherwise

for i = 1, 2, · · · and ξ ≡ 0. Then the sequence {ξi} converges a.s. to ξ. However, for some small number ε > 0, we have

M{|ξi − ξ| ≥ ε} = 0.5

for each i. That is, the sequence {ξi } does not converge in measure to ξ.

Example 2.25: Convergence in measure does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. For any positive integer i, there is an integer j such that i = 2^j + k, where k is an integer between 0 and 2^j − 1. Define uncertain variables as

ξi(γ) = 1 if k/2^j ≤ γ ≤ (k + 1)/2^j, and 0 otherwise

for i = 1, 2, · · · and ξ ≡ 0. Then for any small number ε > 0, we have

M{|ξi − ξ| ≥ ε} = 1/2^j → 0

as i → ∞. That is, the sequence {ξi} converges in measure to ξ. However, for any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2^j, (k + 1)/2^j] containing γ. Thus ξi(γ) does not converge to 0. In other words, the sequence {ξi} does not converge a.s. to ξ.

Convergence Almost Surely vs. Convergence in Mean

Example 2.26: Convergence a.s. does not imply convergence in mean. Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with power set and

M{Λ} = Σ_{γj ∈ Λ} 1/2^j.

Define uncertain variables as

ξi(γj) = 2^i if j = i, and 0 otherwise

for i = 1, 2, · · · and ξ ≡ 0. Then ξi converges a.s. to ξ. However, the sequence {ξi} does not converge in mean to ξ because E[|ξi − ξ|] ≡ 1 for each i.

Example 2.27: Convergence in mean does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. For any positive integer i, there is an integer j such that i = 2^j + k, where k is an integer between 0 and 2^j − 1. Define uncertain variables as

ξi(γ) = 1 if k/2^j ≤ γ ≤ (k + 1)/2^j, and 0 otherwise

for i = 1, 2, · · · and ξ ≡ 0. Then

E[|ξi − ξ|] = 1/2^j → 0

as i → ∞. That is, the sequence {ξi} converges in mean to ξ. However, for any γ ∈ [0, 1], there is an infinite number of intervals of the form [k/2^j, (k + 1)/2^j] containing γ. Thus ξi(γ) does not converge to 0. In other words, the sequence {ξi} does not converge a.s. to ξ.

Convergence Almost Surely vs. Convergence in Distribution

Example 2.28: Convergence in distribution does not imply convergence a.s. Take an uncertainty space (Γ, L, M) to be {γ1, γ2} with power set and M{γ1} = M{γ2} = 1/2. Define uncertain variables as

ξ(γ) = −1 if γ = γ1, and 1 if γ = γ2

and ξi = −ξ for i = 1, 2, · · · Then ξi and ξ have the same uncertainty distribution. Thus {ξi} converges in distribution to ξ. However, the sequence {ξi} does not converge a.s. to ξ.

Example 2.29: Convergence a.s. does not imply convergence in distribution. Take an uncertainty space (Γ, L, M) to be {γ1, γ2, · · · } with power set and

M{Λ} = 0 if Λ = ∅; 1 if Λ = Γ; 0.5 otherwise.

Define uncertain variables as

ξi(γj) = i if j = i, and 0 otherwise

for i = 1, 2, · · · and ξ ≡ 0. Then the sequence {ξi} converges a.s. to ξ. However, the uncertainty distributions of ξi are

Φi(x) = 0 if x < 0; 0.5 if 0 ≤ x < i; 1 if x ≥ i

for i = 1, 2, · · · , respectively, and the uncertainty distribution of ξ is

Φ(x) = 0 if x < 0, and 1 if x ≥ 0.

It is clear that Φi (x) does not converge to Φ(x) at x > 0. That is, the
sequence {ξi } does not converge in distribution to ξ.

2.14 Uncertain Vector


As an extension of uncertain variable, this section introduces a concept of
uncertain vector whose components are uncertain variables.

Definition 2.26 (Liu [77]) A k-dimensional uncertain vector is a function ξ from an uncertainty space (Γ, L, M) to the set of k-dimensional real vectors such that {ξ ∈ B} is an event for any Borel set B of k-dimensional real vectors.

Theorem 2.57 (Liu [77]) The vector (ξ1, ξ2, · · · , ξk) is an uncertain vector if and only if ξ1, ξ2, · · · , ξk are uncertain variables.

Proof: Write ξ = (ξ1, ξ2, · · · , ξk). Suppose that ξ is an uncertain vector on the uncertainty space (Γ, L, M). For any Borel set B of real numbers, the set B × ℜ^{k−1} is a Borel set of k-dimensional real vectors. Thus the set

{ξ1 ∈ B} = {ξ1 ∈ B, ξ2 ∈ ℜ, · · · , ξk ∈ ℜ} = {ξ ∈ B × ℜ^{k−1}}

is an event. Hence ξ1 is an uncertain variable. A similar process may prove that ξ2, ξ3, · · · , ξk are uncertain variables.
Conversely, suppose that all ξ1, ξ2, · · · , ξk are uncertain variables on the uncertainty space (Γ, L, M). We define

B = {B ⊂ ℜ^k | {ξ ∈ B} is an event}.

The vector ξ = (ξ1, ξ2, · · · , ξk) is proved to be an uncertain vector if we can prove that B contains all Borel sets of k-dimensional real vectors. First, the class B contains all open intervals of ℜ^k because

{ξ ∈ ∏_{i=1}^k (ai, bi)} = ∩_{i=1}^k {ξi ∈ (ai, bi)}

is an event. Next, the class B is a σ-algebra over ℜ^k because (i) we have ℜ^k ∈ B since {ξ ∈ ℜ^k} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and

{ξ ∈ B^c} = {ξ ∈ B}^c

is an event. This means that B^c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · · , then {ξ ∈ Bi} are events and

{ξ ∈ ∪_{i=1}^∞ Bi} = ∪_{i=1}^∞ {ξ ∈ Bi}

is an event. This means that ∪i Bi ∈ B. Since the smallest σ-algebra containing all open intervals of ℜ^k is just the Borel algebra over ℜ^k, the class B contains all Borel sets of k-dimensional real vectors. The theorem is proved.
contains all Borel sets of k-dimensional real vectors. The theorem is proved.

Definition 2.27 (Liu [77]) The joint uncertainty distribution of an uncertain vector (ξ1, ξ2, · · · , ξk) is defined by

Φ(x1 , x2 , · · · , xk ) = M {ξ1 ≤ x1 , ξ2 ≤ x2 , · · · , ξk ≤ xk } (2.226)

for any real numbers x1 , x2 , · · · , xk .

Theorem 2.58 (Liu [77]) Let ξ1, ξ2, · · · , ξk be independent uncertain variables with uncertainty distributions Φ1, Φ2, · · · , Φk, respectively. Then the
uncertain vector (ξ1 , ξ2 , · · · , ξk ) has a joint uncertainty distribution

Φ(x1 , x2 , · · · , xk ) = Φ1 (x1 ) ∧ Φ2 (x2 ) ∧ · · · ∧ Φk (xk ) (2.227)

for any real numbers x1 , x2 , · · · , xk .

Proof: Since ξ1, ξ2, · · · , ξk are independent uncertain variables, we have

Φ(x1, x2, · · · , xk) = M{∩_{i=1}^k (ξi ≤ xi)} = ∧_{i=1}^k M{ξi ≤ xi} = ∧_{i=1}^k Φi(xi)

for any real numbers x1 , x2 , · · · , xk . The theorem is proved.

Remark 2.10: However, the equation (2.227) does not imply that the un-
certain variables are independent. For example, let ξ be an uncertain variable
with uncertainty distribution Φ. Then the joint uncertainty distribution Ψ
of uncertain vector (ξ, ξ) is

Ψ(x1 , x2 ) = M{(ξ ≤ x1 ) ∩ (ξ ≤ x2 )} = Φ(x1 ) ∧ Φ(x2 )

for any real numbers x1 and x2 . But, generally speaking, an uncertain vari-
able is not independent of itself.

Definition 2.28 (Liu [92]) The k-dimensional uncertain vectors ξ1, ξ2, · · · , ξn are said to be independent if for any Borel sets B1, B2, · · · , Bn of k-dimensional real vectors, we have

M{∩_{i=1}^n (ξi ∈ Bi)} = ∧_{i=1}^n M{ξi ∈ Bi}.    (2.228)

Exercise 2.72: Let (ξ1, ξ2, ξ3) and (η1, η2, η3) be independent uncertain vectors. Show that ξ1 and η2 are independent uncertain variables.

Exercise 2.73: Let (ξ1, ξ2, ξ3) and (η1, η2, η3) be independent uncertain vectors. Show that (ξ1, ξ2) and (η2, η3) are independent uncertain vectors.

Theorem 2.59 (Liu [92]) The k-dimensional uncertain vectors ξ1, ξ2, · · · , ξn are independent if and only if

M{∪_{i=1}^n (ξi ∈ Bi)} = ∨_{i=1}^n M{ξi ∈ Bi}    (2.229)

for any Borel sets B1 , B2 , · · · , Bn of k-dimensional real vectors.

Proof: It follows from the duality of uncertain measure that ξ1, ξ2, · · · , ξn are independent if and only if

M{∪_{i=1}^n (ξi ∈ Bi)} = 1 − M{∩_{i=1}^n (ξi ∈ Bi^c)} = 1 − ∧_{i=1}^n M{ξi ∈ Bi^c} = ∨_{i=1}^n M{ξi ∈ Bi}.

The theorem is thus proved.

Theorem 2.60 Let ξ1, ξ2, · · · , ξn be independent uncertain vectors, and let f1, f2, · · · , fn be vector-valued measurable functions. Then f1(ξ1), f2(ξ2), · · · , fn(ξn) are also independent uncertain vectors.

Proof: For any Borel sets B1, B2, · · · , Bn of k-dimensional real vectors, it follows from the definition of independence that

M{∩_{i=1}^n (fi(ξi) ∈ Bi)} = M{∩_{i=1}^n (ξi ∈ fi⁻¹(Bi))} = ∧_{i=1}^n M{ξi ∈ fi⁻¹(Bi)} = ∧_{i=1}^n M{fi(ξi) ∈ Bi}.

Thus f1(ξ1), f2(ξ2), · · · , fn(ξn) are independent uncertain vectors.

Normal Uncertain Vector


Definition 2.29 (Liu [92]) Let τ1 , τ2 , · · · , τm be independent normal uncer-
tain variables with expected value 0 and variance 1. Then

τ = (τ1 , τ2 , · · · , τm ) (2.230)

is called a standard normal uncertain vector.

It is easy to verify that a standard normal uncertain vector (τ1, τ2, · · · , τm) has a joint uncertainty distribution

Φ(x1, x2, · · · , xm) = (1 + exp(−π(x1 ∧ x2 ∧ · · · ∧ xm)/√3))⁻¹    (2.231)

for any real numbers x1 , x2 , · · · , xm . It is also easy to show that

lim_{xi→−∞} Φ(x1, x2, · · · , xm) = 0, for each i,    (2.232)

lim_{(x1,x2,··· ,xm)→+∞} Φ(x1, x2, · · · , xm) = 1.    (2.233)

Furthermore, the limit

lim_{(x1,··· ,xi−1,xi+1,··· ,xm)→+∞} Φ(x1, x2, · · · , xm)    (2.234)

is a standard normal distribution with respect to xi .

Definition 2.30 (Liu [92]) Let (τ1, τ2, · · · , τm) be a standard normal uncertain vector, and let ei, σij, i = 1, 2, · · · , k, j = 1, 2, · · · , m be real numbers. Define

ξi = ei + Σ_{j=1}^m σij τj    (2.235)

for i = 1, 2, · · · , k. Then (ξ1 , ξ2 , · · · , ξk ) is called a normal uncertain vector.

That is, an uncertain vector ξ has a multivariate normal distribution if it


can be represented in the form

ξ = e + στ (2.236)

for some real vector e and some real matrix σ, where τ is a standard normal
uncertain vector. Note that ξ, e and τ are understood as column vectors.
Please also note that for every index i, the component ξi is a normal uncertain
variable with expected value ei and standard deviation

Σ_{j=1}^m |σij|.    (2.237)

Theorem 2.61 (Liu [92]) Assume ξ is a normal uncertain vector, c is a real vector, and D is a real matrix. Then

η = c + Dξ (2.238)

is another normal uncertain vector.

Proof: Since ξ is a normal uncertain vector, there exists a standard normal uncertain vector τ, a real vector e and a real matrix σ such that ξ = e + στ.
It follows that

η = c + Dξ = c + D(e + στ ) = (c + De) + (Dσ)τ .

Hence η is a normal uncertain vector.



2.15 Uncertain Matrix


This section introduces a concept of uncertain matrix that is a matrix all of
whose elements are uncertain variables.

Definition 2.31 (Liu [98]) A p × q uncertain matrix is a function ξ from an


uncertainty space (Γ, L, M) to the set of p×q real matrices such that {ξ ∈ B}
is an event for any Borel set B of p × q real matrices.

Theorem 2.62 (Liu [98]) The p × q matrix ξ is an uncertain matrix if and only if

     ⎡ ξ11  ξ12  · · ·  ξ1q ⎤
ξ =  ⎢ ξ21  ξ22  · · ·  ξ2q ⎥    (2.239)
     ⎢  ⋮    ⋮    ⋱     ⋮  ⎥
     ⎣ ξp1  ξp2  · · ·  ξpq ⎦

where ξij, i = 1, 2, · · · , p, j = 1, 2, · · · , q are uncertain variables.

Proof: Suppose that ξ is defined on the uncertainty space (Γ, L, M). For any Borel set B of real numbers, the set

      ⎡ B  ℜ  · · ·  ℜ ⎤
B* =  ⎢ ℜ  ℜ  · · ·  ℜ ⎥
      ⎢ ⋮  ⋮   ⋱    ⋮ ⎥
      ⎣ ℜ  ℜ  · · ·  ℜ ⎦

is a Borel set of p × q real matrices. Thus the set {ξ11 ∈ B} = {ξ ∈ B*} is an event. Hence ξ11 is an uncertain variable. A similar process may prove that other ξij’s are uncertain variables.
Conversely, suppose that all ξij’s are uncertain variables on the uncertainty space (Γ, L, M). We define

B = {B ⊂ ℜ^{p×q} | {ξ ∈ B} is an event}.

The matrix ξ = (ξij)p×q is proved to be an uncertain matrix if we can prove that B contains all Borel sets of p × q real matrices. First, the class B contains all open intervals of ℜ^{p×q} because

{ξ ∈ ∏_{i=1}^p ∏_{j=1}^q (aij, bij)} = ∩_{i=1}^p ∩_{j=1}^q {ξij ∈ (aij, bij)}

is an event. Next, the class B is a σ-algebra over ℜ^{p×q} because (i) we have ℜ^{p×q} ∈ B since {ξ ∈ ℜ^{p×q}} = Γ; (ii) if B ∈ B, then {ξ ∈ B} is an event, and

{ξ ∈ B^c} = {ξ ∈ B}^c

is an event. This means that B^c ∈ B; (iii) if Bi ∈ B for i = 1, 2, · · · , then {ξ ∈ Bi} are events and

{ξ ∈ ∪_{i=1}^∞ Bi} = ∪_{i=1}^∞ {ξ ∈ Bi}

is an event. This means that ∪i Bi ∈ B. Since the smallest σ-algebra containing all open intervals of ℜ^{p×q} is just the Borel algebra over ℜ^{p×q}, the class B contains all Borel sets of p × q real matrices. The theorem is proved.
B contains all Borel sets of p × q real matrices. The theorem is proved.
Definition 2.32 (Liu [98]) The p × q uncertain matrices ξ1, ξ2, · · · , ξn are said to be independent if for any Borel sets B1, B2, · · · , Bn of p × q real matrices, we have

M{∩_{i=1}^n (ξi ∈ Bi)} = ∧_{i=1}^n M{ξi ∈ Bi}.    (2.240)

Exercise 2.74: Let (ξij )3×3 and (ηij )3×3 be independent uncertain matrices.
Show that (ξ11 , ξ12 ) and (η31 , η32 , η33 ) are independent uncertain vectors.

Exercise 2.75: Let (ξij)3×3 and (ηij)3×3 be independent uncertain matrices. Show that

⎡ ξ11  ξ12  ξ13 ⎤       ⎡ η11  η12 ⎤
⎣ ξ21  ξ22  ξ23 ⎦  and  ⎢ η21  η22 ⎥
                        ⎣ η31  η32 ⎦

are independent uncertain matrices.
Theorem 2.63 (Liu [98]) The p × q uncertain matrices ξ1, ξ2, · · · , ξn are independent if and only if

M{∪_{i=1}^n (ξi ∈ Bi)} = ∨_{i=1}^n M{ξi ∈ Bi}    (2.241)

for any Borel sets B1, B2, · · · , Bn of p × q real matrices.


Proof: It follows from the duality of uncertain measure that ξ1, ξ2, · · · , ξn are independent if and only if

M{∪_{i=1}^n (ξi ∈ Bi)} = 1 − M{∩_{i=1}^n (ξi ∈ Bi^c)} = 1 − ∧_{i=1}^n M{ξi ∈ Bi^c} = ∨_{i=1}^n M{ξi ∈ Bi}.

The theorem is thus proved.

Theorem 2.64 (Liu [98]) Let ξ1, ξ2, · · · , ξn be independent uncertain matrices, and let f1, f2, · · · , fn be matrix-valued measurable functions. Then f1(ξ1), f2(ξ2), · · · , fn(ξn) are also independent uncertain matrices.

Proof: For any Borel sets B1, B2, · · · , Bn of real matrices, it follows from the definition of independence that

M{∩_{i=1}^n (fi(ξi) ∈ Bi)} = M{∩_{i=1}^n (ξi ∈ fi⁻¹(Bi))} = ∧_{i=1}^n M{ξi ∈ fi⁻¹(Bi)} = ∧_{i=1}^n M{fi(ξi) ∈ Bi}.

Thus f1(ξ1), f2(ξ2), · · · , fn(ξn) are independent uncertain matrices.

2.16 Bibliographic Notes


As a fundamental concept in uncertainty theory, the uncertain variable was
presented by Liu [77] in 2007. In order to describe uncertain variable, Liu
[77] also introduced the uncertainty distribution. Later, Peng-Iwamura [122]
proved a sufficient and necessary condition for uncertainty distribution. In
addition, Liu [84] proposed the inverse uncertainty distribution, and Liu [89]
verified a sufficient and necessary condition for it. Furthermore, Liu [77]
proposed the conditional uncertainty distribution, and derived some formulas
for calculating it.
Following the independence concept of uncertain variables proposed by
Liu [80], the operational law was given by Liu [84] for calculating the uncer-
tainty distribution and inverse uncertainty distribution of strictly monotone
function of independent uncertain variables.
In order to rank uncertain variables, Liu [77] proposed the expected value
operator. In addition, the linearity of expected value operator was verified
by Liu [84]. As an important contribution, Liu-Ha [104] derived a useful
formula for calculating the expected values of strictly monotone functions
of independent uncertain variables. Based on the expected value operator,
Liu [77] presented the variance, moments and distance between uncertain
variables.
The entropy was proposed by Liu [80] as the degree of difficulty of pre-
dicting the realization of an uncertain variable. Chen-Dai [8] discussed the
maximum entropy principle in order to select the uncertainty distribution
that has maximum entropy and satisfies the prescribed constraints. Espe-
cially, normal uncertainty distribution is proved to have maximum entropy
when the expected value and variance are fixed in advance.

Uncertain sequence was presented by Liu [77] with convergence almost


surely, convergence in measure, convergence in mean, and convergence in
distribution. Furthermore, Gao [41], You [190], Zhang [203], and Chen-Li-
Ralescu [15] developed some other concepts of convergence and investigated
their mathematical properties.
Uncertain vector was defined by Liu [77] as a measurable function from
an uncertainty space to the set of real vectors. In addition, Liu [92] discussed
the independence of uncertain vectors and proposed the concept of normal
uncertain vector.
Uncertain matrix was suggested by Liu [98] as a measurable function from
an uncertainty space to the set of real matrices.
Chapter 3

Uncertain Programming

Uncertain programming was founded by Liu [79] in 2009. This chapter will
provide the theory of uncertain programming, and present some uncertain
programming models for machine scheduling problem, vehicle routing prob-
lem, and project scheduling problem.

3.1 Uncertain Programming


Uncertain programming is a type of mathematical programming involving
uncertain variables. Assume that x is a decision vector, and ξ is an uncer-
tain vector. Since an uncertain objective function f (x, ξ) cannot be directly
minimized, we may minimize its expected value, i.e.,

min_x E[f(x, ξ)].    (3.1)

In addition, since the uncertain constraints gj(x, ξ) ≤ 0, j = 1, 2, · · · , p do not define a crisp feasible set, it is naturally desired that the uncertain constraints
hold with confidence levels α1 , α2 , · · · , αp . Then we have a set of chance
constraints,
M{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p. (3.2)

In order to obtain a decision with minimum expected objective value subject to a set of chance constraints, Liu [79] proposed the following uncertain programming model,

min_x E[f(x, ξ)]
subject to:    (3.3)
M{gj(x, ξ) ≤ 0} ≥ αj, j = 1, 2, · · · , p.



Definition 3.1 (Liu [79]) A vector x is called a feasible solution to the un-
certain programming model (3.3) if

M{gj (x, ξ) ≤ 0} ≥ αj (3.4)

for j = 1, 2, · · · , p.

Definition 3.2 (Liu [79]) A feasible solution x* is called an optimal solution to the uncertain programming model (3.3) if

E[f (x∗ , ξ)] ≤ E[f (x, ξ)] (3.5)

for any feasible solution x.

Theorem 3.1 Assume the objective function f(x, ξ1, ξ2, · · · , ξn) is strictly increasing with respect to ξ1, ξ2, · · · , ξm and strictly decreasing with respect to ξm+1, ξm+2, · · · , ξn. If ξ1, ξ2, · · · , ξn are independent uncertain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, then the expected objective function E[f(x, ξ1, ξ2, · · · , ξn)] is equal to

∫_0^1 f(x, Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) dα.    (3.6)

Proof: It follows from Theorem 2.26 immediately.

Exercise 3.1: Assume f(x, ξ) = h1(x)ξ1 + h2(x)ξ2 + · · · + hn(x)ξn + h0(x) where h1(x), h2(x), · · · , hn(x), h0(x) are real-valued functions and ξ1, ξ2, · · · , ξn are independent uncertain variables. Show that

E[f (x, ξ)] = h1 (x)E[ξ1 ] + h2 (x)E[ξ2 ] + · · · + hn (x)E[ξn ] + h0 (x). (3.7)

Theorem 3.2 Assume the constraint function g(x, ξ1, ξ2, · · · , ξn) is strictly increasing with respect to ξ1, ξ2, · · · , ξk and strictly decreasing with respect to ξk+1, ξk+2, · · · , ξn. If ξ1, ξ2, · · · , ξn are independent uncertain variables with uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, then the chance constraint

M{g(x, ξ1, ξ2, · · · , ξn) ≤ 0} ≥ α    (3.8)

holds if and only if

g(x, Φ1⁻¹(α), · · · , Φk⁻¹(α), Φk+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) ≤ 0.    (3.9)

Proof: It follows from the operational law of uncertain variables that the inverse uncertainty distribution of g(x, ξ1, ξ2, · · · , ξn) is

Ψ⁻¹(α) = g(x, Φ1⁻¹(α), · · · , Φk⁻¹(α), Φk+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)).

Thus (3.8) holds if and only if Ψ⁻¹(α) ≤ 0. The theorem is thus verified.

Exercise 3.2: Assume x1, x2, · · · , xn are nonnegative decision variables, and ξ1, ξ2, · · · , ξn, ξ are independent linear uncertain variables L(a1, b1), L(a2, b2), · · · , L(an, bn), L(a, b), respectively. Show that for any confidence level α ∈ (0, 1), the chance constraint

M{Σ_{i=1}^n ξi xi ≤ ξ} ≥ α    (3.10)

holds if and only if

Σ_{i=1}^n ((1 − α)ai + αbi)xi ≤ αa + (1 − α)b.    (3.11)
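The equivalence (3.10)-(3.11) is straightforward to implement. A minimal Python sketch (the helper name is ours, and the parameters below are an arbitrary illustration) checks the deterministic condition (3.11):

```python
def chance_constraint_holds(x, lin, rhs, alpha):
    """Check M{sum_i xi_i x_i <= xi} >= alpha via the deterministic form (3.11),
    where lin = [(a_i, b_i)] for xi_i ~ L(a_i, b_i) and rhs = (a, b) for xi ~ L(a, b)."""
    a, b = rhs
    lhs = sum(((1 - alpha) * ai + alpha * bi) * xi for (ai, bi), xi in zip(lin, x))
    return lhs <= alpha * a + (1 - alpha) * b

# xi_1 ~ L(0, 1), xi_2 ~ L(1, 2), xi ~ L(3, 6), decision x = (1, 1):
print(chance_constraint_holds([1.0, 1.0], [(0, 1), (1, 2)], (3, 6), 0.9))  # True: 2.8 <= 3.3
```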

Exercise 3.3: Assume x1, x2, · · · , xn are nonnegative decision variables, and ξ1, ξ2, · · · , ξn, ξ are independent normal uncertain variables N(e1, σ1), N(e2, σ2), · · · , N(en, σn), N(e, σ), respectively. Show that for any confidence level α ∈ (0, 1), the chance constraint

M{Σ_{i=1}^n ξi xi ≤ ξ} ≥ α    (3.12)

holds if and only if

Σ_{i=1}^n (ei + (σi√3/π) ln(α/(1 − α))) xi ≤ e − (σ√3/π) ln(α/(1 − α)).    (3.13)

Exercise 3.4: Assume ξ1, ξ2, · · · , ξn are independent uncertain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, and h1(x), h2(x), · · · , hn(x), h0(x) are real-valued functions. Show that

M{Σ_{i=1}^n hi(x)ξi ≤ h0(x)} ≥ α    (3.14)

holds if and only if

Σ_{i=1}^n hi⁺(x)Φi⁻¹(α) − Σ_{i=1}^n hi⁻(x)Φi⁻¹(1 − α) ≤ h0(x)    (3.15)

where

hi⁺(x) = hi(x) if hi(x) > 0, and 0 if hi(x) ≤ 0,    (3.16)
hi⁻(x) = −hi(x) if hi(x) < 0, and 0 if hi(x) ≥ 0    (3.17)

for i = 1, 2, · · · , n.

Theorem 3.3 Assume f(x, ξ1, ξ2, · · · , ξn) is strictly increasing with respect to ξ1, ξ2, · · · , ξm and strictly decreasing with respect to ξm+1, ξm+2, · · · , ξn, and gj(x, ξ1, ξ2, · · · , ξn) are strictly increasing with respect to ξ1, ξ2, · · · , ξk and strictly decreasing with respect to ξk+1, ξk+2, · · · , ξn for j = 1, 2, · · · , p. If ξ1, ξ2, · · · , ξn are independent uncertain variables with regular uncertainty distributions Φ1, Φ2, · · · , Φn, respectively, then the uncertain programming

min_x E[f(x, ξ1, ξ2, · · · , ξn)]
subject to:    (3.18)
M{gj(x, ξ1, ξ2, · · · , ξn) ≤ 0} ≥ αj, j = 1, 2, · · · , p

is equivalent to the crisp mathematical programming

min_x ∫_0^1 f(x, Φ1⁻¹(α), · · · , Φm⁻¹(α), Φm+1⁻¹(1 − α), · · · , Φn⁻¹(1 − α)) dα
subject to:
gj(x, Φ1⁻¹(αj), · · · , Φk⁻¹(αj), Φk+1⁻¹(1 − αj), · · · , Φn⁻¹(1 − αj)) ≤ 0, j = 1, 2, · · · , p.
Proof: It follows from Theorems 3.1 and 3.2 immediately.

3.2 Numerical Method


When the objective functions and constraint functions are monotone with
respect to the uncertain parameters, the uncertain programming model may
be converted to a crisp mathematical programming.
It is fortunate for us that almost all objective and constraint functions
in practical problems are indeed monotone with respect to the uncertain
parameters (not decision variables).
From the mathematical viewpoint, there is no difference between crisp
mathematical programming and classical mathematical programming except
for an integral. Thus we may solve it by simplex method, branch-and-bound
method, cutting plane method, implicit enumeration method, interior point
method, gradient method, genetic algorithm, particle swarm optimization,
neural networks, tabu search, and so on.

Example 3.1: Assume that x1, x2, x3 are nonnegative decision variables, ξ1, ξ2, ξ3 are independent linear uncertain variables L(1, 2), L(2, 3), L(3, 4), and η1, η2, η3 are independent zigzag uncertain variables Z(1, 2, 3), Z(2, 3, 4), Z(3, 4, 5), respectively. Consider the uncertain programming,

max_{x1,x2,x3} E[√(x1 + ξ1) + √(x2 + ξ2) + √(x3 + ξ3)]
subject to:
M{(x1 + η1)² + (x2 + η2)² + (x3 + η3)² ≤ 100} ≥ 0.9
x1, x2, x3 ≥ 0.


Note that √(x1 + ξ1) + √(x2 + ξ2) + √(x3 + ξ3) is a strictly increasing function with respect to ξ1, ξ2, ξ3, and (x1 + η1)² + (x2 + η2)² + (x3 + η3)² is a strictly increasing function with respect to η1, η2, η3. It is easy to verify that the uncertain programming model can be converted to the crisp model,

max_{x1,x2,x3} ∫_0^1 (√(x1 + Φ1⁻¹(α)) + √(x2 + Φ2⁻¹(α)) + √(x3 + Φ3⁻¹(α))) dα
subject to:
(x1 + Ψ1⁻¹(0.9))² + (x2 + Ψ2⁻¹(0.9))² + (x3 + Ψ3⁻¹(0.9))² ≤ 100
x1, x2, x3 ≥ 0

where Φ1⁻¹, Φ2⁻¹, Φ3⁻¹, Ψ1⁻¹, Ψ2⁻¹, Ψ3⁻¹ are inverse uncertainty distributions of
uncertain variables ξ1 , ξ2 , ξ3 , η1 , η2 , η3 , respectively. The Matlab Uncertainty
Toolbox (http://orsc.edu.cn/liu/resources.htm) may solve this model and ob-
tain an optimal solution

(x∗1 , x∗2 , x∗3 ) = (2.9735, 1.9735, 0.9735)

whose objective value is 6.3419.
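For readers without the Matlab Uncertainty Toolbox, the crisp model can also be solved with a general-purpose optimizer. The Python sketch below is a rough illustration only; the discretization of the expected-value integral and the choice of scipy's solver are our own assumptions, not part of the text:

```python
import numpy as np
from scipy.optimize import minimize

def lin_inv(a, b):
    # inverse uncertainty distribution of a linear uncertain variable L(a, b)
    return lambda t: (1 - t) * a + t * b

def zig_inv(a, b, c):
    # inverse uncertainty distribution of a zigzag uncertain variable Z(a, b, c)
    return lambda t: (1 - 2*t) * a + 2*t * b if t < 0.5 else (2 - 2*t) * b + (2*t - 1) * c

phi = [lin_inv(1, 2), lin_inv(2, 3), lin_inv(3, 4)]
psi9 = [zig_inv(1, 2, 3)(0.9), zig_inv(2, 3, 4)(0.9), zig_inv(3, 4, 5)(0.9)]

alphas = (np.arange(1000) + 0.5) / 1000   # grid for the expected-value integral

def neg_objective(x):
    return -sum(np.mean(np.sqrt(x[i] + np.array([phi[i](t) for t in alphas])))
                for i in range(3))

cons = {'type': 'ineq',
        'fun': lambda x: 100 - sum((x[i] + psi9[i]) ** 2 for i in range(3))}
res = minimize(neg_objective, x0=[1.0, 1.0, 1.0],
               bounds=[(0, None)] * 3, constraints=[cons])
print(res.x, -res.fun)   # about (2.97, 1.97, 0.97) with objective about 6.34
```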

Example 3.2: Assume that x1 and x2 are decision variables, ξ1 and ξ2 are iid
linear uncertain variables L(0, π/2). Consider the uncertain programming,

min_{x1,x2} E[x1 sin(x1 − ξ1) − x2 cos(x2 + ξ2)]
subject to:
0 ≤ x1 ≤ π/2, 0 ≤ x2 ≤ π/2.
It is clear that x1 sin(x1 − ξ1) − x2 cos(x2 + ξ2) is strictly decreasing with respect to ξ1 and strictly increasing with respect to ξ2. Thus the uncertain programming is equivalent to the crisp model,

min_{x1,x2} ∫_0^1 (x1 sin(x1 − Φ1⁻¹(1 − α)) − x2 cos(x2 + Φ2⁻¹(α))) dα
subject to:
0 ≤ x1 ≤ π/2, 0 ≤ x2 ≤ π/2

where Φ1⁻¹, Φ2⁻¹ are inverse uncertainty distributions of ξ1, ξ2, respectively.
The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may
solve this model and obtain an optimal solution

(x∗1 , x∗2 ) = (0.4026, 0.4026)

whose objective value is −0.2708.



3.3 Machine Scheduling Problem


Machine scheduling problem is concerned with finding an efficient schedule
during an uninterrupted period of time for a set of machines to process a set
of jobs. A lot of research work has been done on this type of problem. The
study of machine scheduling problem with uncertain processing times was
started by Liu [84] in 2010.

Figure 3.1: A Machine Schedule with 3 Machines and 7 Jobs

In a machine scheduling problem, we assume that (a) each job can be


processed on any machine without interruption; (b) each machine can process
only one job at a time; and (c) the processing times are uncertain variables
with known uncertainty distributions. We also use the following indices and
parameters:
i = 1, 2, · · · , n: jobs;
k = 1, 2, · · · , m: machines;
ξik : uncertain processing time of job i on machine k;
Φik : uncertainty distribution of ξik .

How to Represent a Schedule?


Liu [75] suggested that a schedule should be represented by two decision
vectors x and y, where
x = (x1 , x2 , · · · , xn ): integer decision vector representing n jobs with
1 ≤ xi ≤ n and xi ≠ xj for all i ≠ j, i, j = 1, 2, · · · , n. That is, the sequence
{x1 , x2 , · · · , xn } is a rearrangement of {1, 2, · · · , n};
y = (y1 , y2 , · · · , ym−1 ): integer decision vector with y0 ≡ 0 ≤ y1 ≤ y2 ≤
· · · ≤ ym−1 ≤ n ≡ ym .
We note that the schedule is fully determined by the decision vectors x
and y in the following way. For each k (1 ≤ k ≤ m), if yk = yk−1 , then the
machine k is not used; if yk > yk−1 , then the machine k is used and processes
jobs xyk−1 +1 , xyk−1 +2 , · · · , xyk in turn. Thus the schedule of all machines is

as follows,

Machine 1: x_{y_0+1} → x_{y_0+2} → · · · → x_{y_1};
Machine 2: x_{y_1+1} → x_{y_1+2} → · · · → x_{y_2};
· · ·
Machine m: x_{y_{m-1}+1} → x_{y_{m-1}+2} → · · · → x_{y_m}.     (3.19)

Figure 3.2: Formulation of Schedule in which Machine 1 processes Jobs x1, x2, Machine 2 processes Jobs x3, x4 and Machine 3 processes Jobs x5, x6, x7 (the breakpoints y0, y1, y2, y3 split the job sequence among machines M-1, M-2, M-3)

Completion Times

Let Ci(x, y, ξ) be the completion time of job i for i = 1, 2, · · · , n. For each
k with 1 ≤ k ≤ m, if the machine k is used (i.e., yk > yk−1), then we have

C_{x_{y_{k-1}+1}}(x, y, ξ) = ξ_{x_{y_{k-1}+1},k}     (3.20)

and

C_{x_{y_{k-1}+j}}(x, y, ξ) = C_{x_{y_{k-1}+j-1}}(x, y, ξ) + ξ_{x_{y_{k-1}+j},k}     (3.21)

for 2 ≤ j ≤ yk − yk−1.
If the machine k is used, then the completion time C_{x_{y_{k-1}+1}}(x, y, ξ) of
job x_{y_{k-1}+1} is an uncertain variable whose inverse uncertainty distribution is

Ψ⁻¹_{x_{y_{k-1}+1}}(x, y, α) = Φ⁻¹_{x_{y_{k-1}+1},k}(α).     (3.22)

Generally, suppose the completion time C_{x_{y_{k-1}+j-1}}(x, y, ξ) has an inverse
uncertainty distribution Ψ⁻¹_{x_{y_{k-1}+j-1}}(x, y, α). Then the completion time
C_{x_{y_{k-1}+j}}(x, y, ξ) has an inverse uncertainty distribution

Ψ⁻¹_{x_{y_{k-1}+j}}(x, y, α) = Ψ⁻¹_{x_{y_{k-1}+j-1}}(x, y, α) + Φ⁻¹_{x_{y_{k-1}+j},k}(α).     (3.23)

This recursive process may produce all inverse uncertainty distributions of


completion times of jobs.

Makespan
Note that, for each k (1 ≤ k ≤ m), the value C_{x_{y_k}}(x, y, ξ) is just the time
that the machine k finishes all jobs assigned to it. Thus the makespan of the
schedule (x, y) is determined by

f(x, y, ξ) = max_{1≤k≤m} C_{x_{y_k}}(x, y, ξ)     (3.24)

whose inverse uncertainty distribution is

Υ⁻¹(x, y, α) = max_{1≤k≤m} Ψ⁻¹_{x_{y_k}}(x, y, α).     (3.25)
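For computation, the recursion (3.22)–(3.25) amounts to summing inverse distributions along each machine and taking a pointwise maximum over machines. A minimal Python sketch follows (the function name and the Phi_inv interface are illustrative assumptions, not part of the toolbox):

```python
import numpy as np

def makespan_inverse(x, y, Phi_inv, alphas):
    """Inverse uncertainty distribution of the makespan, evaluated at `alphas`.

    x       -- job permutation (1-based job indices)
    y       -- breakpoints y_1 <= ... <= y_{m-1}
    Phi_inv -- Phi_inv(i, k)(alpha): inverse distribution of xi_{ik}
    """
    bounds = [0] + list(y) + [len(x)]
    best = np.zeros_like(alphas)
    for k in range(len(bounds) - 1):
        # Eq. (3.23): completion times on machine k+1 add up job by job
        total = np.zeros_like(alphas)
        for job in x[bounds[k]:bounds[k + 1]]:
            total = total + Phi_inv(job, k + 1)(alphas)
        # Eq. (3.25): the makespan is the pointwise maximum over machines
        best = np.maximum(best, total)
    return best
```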

Machine Scheduling Model


In order to minimize the expected makespan E[f (x, y, ξ)], we have the fol-
lowing machine scheduling model,

$$\begin{cases} \min\limits_{x,y} E[f(x,y,\xi)] \\ \text{subject to:} \\ \quad 1\le x_i\le n,\ i=1,2,\cdots,n \\ \quad x_i\ne x_j,\ i\ne j,\ i,j=1,2,\cdots,n \\ \quad 0\le y_1\le y_2\le\cdots\le y_{m-1}\le n \\ \quad x_i,\,y_j \text{ integers},\ i=1,2,\cdots,n,\ j=1,2,\cdots,m-1. \end{cases}\tag{3.26}$$

Since Υ−1 (x, y, α) is the inverse uncertainty distribution of f (x, y, ξ), the
machine scheduling model is simplified as follows,
$$\begin{cases} \min\limits_{x,y} \displaystyle\int_0^1 \Upsilon^{-1}(x,y,\alpha)\,\mathrm{d}\alpha \\ \text{subject to:} \\ \quad 1\le x_i\le n,\ i=1,2,\cdots,n \\ \quad x_i\ne x_j,\ i\ne j,\ i,j=1,2,\cdots,n \\ \quad 0\le y_1\le y_2\le\cdots\le y_{m-1}\le n \\ \quad x_i,\,y_j \text{ integers},\ i=1,2,\cdots,n,\ j=1,2,\cdots,m-1. \end{cases}\tag{3.27}$$

Numerical Experiment
Assume that there are 3 machines and 7 jobs with the following linear un-
certain processing times

ξik ∼ L(i, i + k), i = 1, 2, · · · , 7, k = 1, 2, 3

where i is the index of jobs and k is the index of machines. The Matlab
Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields that the

optimal solution is

x∗ = (1, 4, 5, 3, 7, 2, 6), y ∗ = (3, 5). (3.28)

In other words, the optimal machine schedule is


Machine 1: 1 → 4 → 5
Machine 2: 3 → 7
Machine 3: 2 → 6
whose expected makespan is 12.
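Reusing the makespan_inverse sketch above, this expected makespan can be checked directly, since ξik ∼ L(i, i + k) has inverse distribution i + αk:

```python
alphas = (np.arange(1000) + 0.5) / 1000
Phi_inv = lambda i, k: (lambda a: i + a * k)  # inverse of L(i, i + k)
psi = makespan_inverse([1, 4, 5, 3, 7, 2, 6], [3, 5], Phi_inv, alphas)
print(psi.mean())  # about 12, the expected makespan reported above
```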

3.4 Vehicle Routing Problem


Vehicle routing problem (VRP) is concerned with finding efficient routes,
beginning and ending at a central depot, for a fleet of vehicles to serve a
number of customers.
Figure 3.3: A Vehicle Routing Plan with Single Depot and 7 Customers (routes from the depot 0 through customers 1–7 and back)

Due to its wide applicability and economic importance, vehicle routing


problem has been extensively studied. Liu [84] first introduced uncertainty
theory into the research area of vehicle routing problem in 2010. In this
section, vehicle routing problem will be modelled by uncertain programming
in which the travel times are assumed to be uncertain variables with known
uncertainty distributions.
We assume that (a) a vehicle will be assigned for only one route on which
there may be more than one customer; (b) a customer will be visited by one
and only one vehicle; (c) each route begins and ends at the depot; and (d) each
customer specifies its time window within which the delivery is permitted or
preferred to start.
Let us first introduce the following indices and model parameters:
i = 0: depot;
i = 1, 2, · · · , n: customers;

k = 1, 2, · · · , m: vehicles;
Dij : travel distance from customer i to customer j, i, j = 0, 1, 2, · · · , n;
Tij : uncertain travel time from customer i to customer j, i, j = 0, 1, 2, · · · , n;
Φij : uncertainty distribution of Tij , i, j = 0, 1, 2, · · · , n;
[ai , bi ]: time window of customer i, i = 1, 2, · · · , n.

Operational Plan

Liu [75] suggested that an operational plan should be represented by three


decision vectors x, y and t, where
x = (x1 , x2 , · · · , xn ): integer decision vector representing n customers
with 1 ≤ xi ≤ n and xi ≠ xj for all i ≠ j, i, j = 1, 2, · · · , n. That is, the
sequence {x1 , x2 , · · · , xn } is a rearrangement of {1, 2, · · · , n};
y = (y1 , y2 , · · · , ym−1 ): integer decision vector with y0 ≡ 0 ≤ y1 ≤ y2 ≤
· · · ≤ ym−1 ≤ n ≡ ym ;
t = (t1 , t2 , · · · , tm ): each tk represents the starting time of vehicle k at
the depot, k = 1, 2, · · · , m.
We note that the operational plan is fully determined by the decision
vectors x, y and t in the following way. For each k (1 ≤ k ≤ m), if yk = yk−1 ,
then vehicle k is not used; if yk > yk−1 , then vehicle k is used and starts from
the depot at time tk, and the tour of vehicle k is 0 → x_{y_{k-1}+1} → x_{y_{k-1}+2} →
· · · → x_{y_k} → 0. Thus the tours of all vehicles are as follows:

Vehicle 1: 0 → x_{y_0+1} → x_{y_0+2} → · · · → x_{y_1} → 0;
Vehicle 2: 0 → x_{y_1+1} → x_{y_1+2} → · · · → x_{y_2} → 0;
· · ·
Vehicle m: 0 → x_{y_{m-1}+1} → x_{y_{m-1}+2} → · · · → x_{y_m} → 0.

Figure 3.4: Formulation of Operational Plan in which Vehicle 1 visits Customers x1, x2, Vehicle 2 visits Customers x3, x4 and Vehicle 3 visits Customers x5, x6, x7 (the breakpoints y0, y1, y2, y3 split the customer sequence among vehicles V-1, V-2, V-3)

It is clear that this type of representation is intuitive, and the total number
of decision variables is n + 2m − 1. We also note that the above decision
variables x, y and t ensure that: (a) each vehicle will be used at most one
time; (b) all tours begin and end at the depot; (c) each customer will be
visited by one and only one vehicle; and (d) there is no subtour.

Arrival Times
Let fi(x, y, t) be the arrival time of a vehicle at customer i for
i = 1, 2, · · · , n. We remind readers that fi(x, y, t) are determined by the
decision variables x, y and t, i = 1, 2, · · · , n. Since unloading can start either
immediately, or later, when a vehicle arrives at a customer, the calculation of
fi (x, y, t) is heavily dependent on the operational strategy. Here we assume
that the customer does not permit a delivery earlier than the time window.
That is, the vehicle will wait to unload until the beginning of the time window
if it arrives before the time window. If a vehicle arrives at a customer after
the beginning of the time window, unloading will start immediately. For each
k with 1 ≤ k ≤ m, if vehicle k is used (i.e., yk > yk−1 ), then we have

f_{x_{y_{k-1}+1}}(x, y, t) = t_k + T_{0,x_{y_{k-1}+1}}

and

f_{x_{y_{k-1}+j}}(x, y, t) = (f_{x_{y_{k-1}+j-1}}(x, y, t) ∨ a_{x_{y_{k-1}+j-1}}) + T_{x_{y_{k-1}+j-1},x_{y_{k-1}+j}}

for 2 ≤ j ≤ yk − yk−1. If the vehicle k is used, i.e., yk > yk−1, then the arrival
time f_{x_{y_{k-1}+1}}(x, y, t) at the customer x_{y_{k-1}+1} is an uncertain variable whose
inverse uncertainty distribution is

Ψ⁻¹_{x_{y_{k-1}+1}}(x, y, t, α) = t_k + Φ⁻¹_{0,x_{y_{k-1}+1}}(α).

Generally, suppose the arrival time f_{x_{y_{k-1}+j-1}}(x, y, t) has an inverse
uncertainty distribution Ψ⁻¹_{x_{y_{k-1}+j-1}}(x, y, t, α). Then f_{x_{y_{k-1}+j}}(x, y, t) has
an inverse uncertainty distribution

Ψ⁻¹_{x_{y_{k-1}+j}}(x, y, t, α) = (Ψ⁻¹_{x_{y_{k-1}+j-1}}(x, y, t, α) ∨ a_{x_{y_{k-1}+j-1}}) + Φ⁻¹_{x_{y_{k-1}+j-1},x_{y_{k-1}+j}}(α)

for 2 ≤ j ≤ yk − yk−1. This recursive process may produce all inverse
uncertainty distributions of arrival times at customers.
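The recursion can be coded in the same style as the machine scheduling case. The sketch below (an illustration with an assumed interface, not the toolbox API) returns one array of Ψ⁻¹ values per customer:

```python
import numpy as np

def arrival_inverse(x, y, t, Phi_inv, a, alphas):
    """Inverse uncertainty distributions of the arrival times f_i(x, y, t).

    x, y, t -- the operational plan
    Phi_inv -- Phi_inv(i, j)(alpha): inverse distribution of T_ij
    a       -- a[i]: start of customer i's time window
    """
    bounds = [0] + list(y) + [len(x)]
    Psi = {}
    for k in range(len(bounds) - 1):
        prev = None
        for i in x[bounds[k]:bounds[k + 1]]:
            if prev is None:
                # first customer: vehicle k+1 leaves the depot (node 0) at t_k
                cur = t[k] + Phi_inv(0, i)(alphas)
            else:
                # wait until the window opens, then travel to the next customer
                cur = np.maximum(cur, a[prev]) + Phi_inv(prev, i)(alphas)
            Psi[i] = cur
            prev = i
    return Psi
```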

Travel Distance
Let g(x, y) be the total travel distance of all vehicles. Then we have

$$g(x,y)=\sum_{k=1}^{m}g_k(x,y)\tag{3.29}$$

where

$$g_k(x,y)=\begin{cases} D_{0,x_{y_{k-1}+1}}+\displaystyle\sum_{j=y_{k-1}+1}^{y_k-1}D_{x_j,x_{j+1}}+D_{x_{y_k},0}, & \text{if } y_k>y_{k-1} \\ 0, & \text{if } y_k=y_{k-1} \end{cases}$$

for k = 1, 2, · · · , m.
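The total distance g(x, y) is deterministic and easy to evaluate. A Python sketch follows, checked against the numerical experiment later in this section (where Dij = |i − j| and the optimal plan has total distance 32):

```python
def travel_distance(x, y, D):
    """Total travel distance g(x, y) of Eq. (3.29); D[i][j] is the distance matrix."""
    bounds = [0] + list(y) + [len(x)]
    total = 0
    for k in range(len(bounds) - 1):
        tour = x[bounds[k]:bounds[k + 1]]
        if tour:  # vehicle k+1 is used
            total += D[0][tour[0]] + D[tour[-1]][0]  # depot legs
            total += sum(D[i][j] for i, j in zip(tour, tour[1:]))
    return total

D = [[abs(i - j) for j in range(8)] for i in range(8)]
print(travel_distance([1, 3, 2, 5, 7, 4, 6], [2, 5], D))  # 32
```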

Vehicle Routing Model


If we hope that each customer i (1 ≤ i ≤ n) is visited within its time window
[ai , bi ] with confidence level αi (i.e., the vehicle arrives at customer i before
time bi ), then we have the following chance constraint,

M{fi (x, y, t) ≤ bi } ≥ αi . (3.30)

If we want to minimize the total travel distance of all vehicles subject to the
time window constraint, then we have the following vehicle routing model,

$$\begin{cases} \min\limits_{x,y,t} g(x,y) \\ \text{subject to:} \\ \quad \mathcal{M}\{f_i(x,y,t)\le b_i\}\ge \alpha_i,\ i=1,2,\cdots,n \\ \quad 1\le x_i\le n,\ i=1,2,\cdots,n \\ \quad x_i\ne x_j,\ i\ne j,\ i,j=1,2,\cdots,n \\ \quad 0\le y_1\le y_2\le\cdots\le y_{m-1}\le n \\ \quad x_i,\,y_j \text{ integers},\ i=1,2,\cdots,n,\ j=1,2,\cdots,m-1 \end{cases}\tag{3.31}$$

which is equivalent to

$$\begin{cases} \min\limits_{x,y,t} g(x,y) \\ \text{subject to:} \\ \quad \Psi_i^{-1}(x,y,t,\alpha_i)\le b_i,\ i=1,2,\cdots,n \\ \quad 1\le x_i\le n,\ i=1,2,\cdots,n \\ \quad x_i\ne x_j,\ i\ne j,\ i,j=1,2,\cdots,n \\ \quad 0\le y_1\le y_2\le\cdots\le y_{m-1}\le n \\ \quad x_i,\,y_j \text{ integers},\ i=1,2,\cdots,n,\ j=1,2,\cdots,m-1 \end{cases}\tag{3.32}$$

where Ψᵢ⁻¹(x, y, t, α) are the inverse uncertainty distributions of fi(x, y, t)
for i = 1, 2, · · · , n, respectively.

Numerical Experiment
Assume that there are 3 vehicles and 7 customers with time windows shown in
Table 3.1, and each customer is visited within time windows with confidence
level 0.90.
We also assume that the distances are Dij = |i − j| for i, j = 0, 1, 2, · · · , 7,
and the travel times are normal uncertain variables

Tij ∼ N (2|i − j|, 1), i, j = 0, 1, 2, · · · , 7.

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may



Table 3.1: Time Windows of Customers

Node  Window           Node  Window
1     [7:00, 9:00]     5     [15:00, 17:00]
2     [7:00, 9:00]     6     [19:00, 21:00]
3     [15:00, 17:00]   7     [19:00, 21:00]
4     [15:00, 17:00]

yield that the optimal solution is

x∗ = (1, 3, 2, 5, 7, 4, 6),
y ∗ = (2, 5), (3.33)
t∗ = (6 : 18, 4 : 18, 8 : 18).

In other words, the optimal operational plan is


Vehicle 1: depot → 1 → 3 → depot (the latest starting time is 6:18)
Vehicle 2: depot → 2 → 5 → 7 → depot (the latest starting time is 4:18)
Vehicle 3: depot → 4 → 6 → depot (the latest starting time is 8:18)
whose total travel distance is 32.

3.5 Project Scheduling Problem


Project scheduling problem is to determine the schedule of allocating re-
sources so as to balance the total cost and the completion time. The study
of project scheduling problem with uncertain factors was started by Liu [84]
in 2010. This section presents an uncertain programming model for project
scheduling problem in which the duration times are assumed to be uncertain
variables with known uncertainty distributions.
Project scheduling is usually represented by a directed acyclic network
where nodes correspond to milestones, and arcs to activities which are basi-
cally characterized by the times and costs consumed.
Let (V, A) be a directed acyclic graph, where V = {1, 2, · · · , n, n + 1} is
the set of nodes, A is the set of arcs, (i, j) ∈ A is the arc of the graph (V, A)
from nodes i to j. It is well-known that we can rearrange the indexes of the
nodes in V such that i < j for all (i, j) ∈ A.
Before we begin to study project scheduling problem with uncertain ac-
tivity duration times, we first make some assumptions: (a) all of the costs
needed are obtained via loans with some given interest rate; and (b) each
activity can be processed only if the loan needed is allocated and all the
foregoing activities are finished.
In order to model the project scheduling problem, we introduce the fol-
lowing indices and parameters:

Figure 3.5: A Project with 8 Milestones and 11 Activities (a directed acyclic network on nodes 1–8, with source node 1 and sink node 8)

ξij : uncertain duration time of activity (i, j) in A;


Φij : uncertainty distribution of ξij ;
cij : cost of activity (i, j) in A;
r: interest rate;
xi : integer decision variable representing the allocating time of all loans
needed for all activities (i, j) in A.

Starting Times
For simplicity, we write ξ = {ξij : (i, j) ∈ A} and x = (x1, x2, · · · , xn). Let
Ti(x, ξ) denote the starting time of all activities (i, j) in A. According to the
assumptions, the starting time of the total project (i.e., the starting time of
all activities (1, j) in A) should be

T1(x, ξ) = x1     (3.34)

whose inverse uncertainty distribution may be written as

Ψ₁⁻¹(x, α) = x1.     (3.35)

From the starting time T1(x, ξ), we deduce that the starting time of activity
(2, 5) is

T2(x, ξ) = x2 ∨ (x1 + ξ12)     (3.36)

whose inverse uncertainty distribution may be written as

Ψ₂⁻¹(x, α) = x2 ∨ (x1 + Φ₁₂⁻¹(α)).     (3.37)

Generally, suppose that the starting time Tk(x, ξ) of all activities (k, i) in A
has an inverse uncertainty distribution Ψₖ⁻¹(x, α). Then the starting time
Ti(x, ξ) of all activities (i, j) in A should be

Ti(x, ξ) = xi ∨ max_{(k,i)∈A} (Tk(x, ξ) + ξki)     (3.38)

whose inverse uncertainty distribution is

Ψᵢ⁻¹(x, α) = xi ∨ max_{(k,i)∈A} (Ψₖ⁻¹(x, α) + Φₖᵢ⁻¹(α)).     (3.39)

This recursive process may produce all inverse uncertainty distributions of


starting times of activities.
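Because the nodes are indexed so that i < j for every arc, the recursion (3.39) can be evaluated in a single pass over the nodes. A Python sketch with an assumed interface:

```python
import numpy as np

def start_inverse(x, arcs, Phi_inv, n, alphas):
    """Inverse uncertainty distributions of the starting times T_i(x, xi).

    x       -- x[i]: loan-allocation time of node i (dict keyed by 1..n)
    arcs    -- iterable of arcs (i, j) with i < j on nodes 1..n+1
    Phi_inv -- Phi_inv[(i, j)](alpha): inverse distribution of xi_ij
    """
    Psi = {1: np.full_like(alphas, float(x[1]))}  # Eq. (3.35)
    for i in range(2, n + 1):
        Psi[i] = np.full_like(alphas, float(x[i]))
        for (k, j) in arcs:
            if j == i:  # Eq. (3.39): x_i vee the latest finishing predecessor
                Psi[i] = np.maximum(Psi[i], Psi[k] + Phi_inv[(k, i)](alphas))
    return Psi
```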

Completion Time
The completion time T(x, ξ) of the total project (i.e., the finish time of all
activities (k, n + 1) in A) is

T(x, ξ) = max_{(k,n+1)∈A} (Tk(x, ξ) + ξ_{k,n+1})     (3.40)

whose inverse uncertainty distribution is

Ψ⁻¹(x, α) = max_{(k,n+1)∈A} (Ψₖ⁻¹(x, α) + Φ_{k,n+1}⁻¹(α)).     (3.41)

Total Cost
Based on the completion time T(x, ξ), the total cost of the project can be
written as

$$C(x,\xi)=\sum_{(i,j)\in A}c_{ij}(1+r)^{\lceil T(x,\xi)-x_i\rceil}\tag{3.42}$$

where ⌈a⌉ represents the minimal integer greater than or equal to a. Note that
C(x, ξ) is a discrete uncertain variable whose inverse uncertainty distribution is

$$\Upsilon^{-1}(x,\alpha)=\sum_{(i,j)\in A}c_{ij}(1+r)^{\lceil\Psi^{-1}(x,\alpha)-x_i\rceil}\tag{3.43}$$

for 0 < α < 1.
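Continuing the start_inverse sketch above, the completion time (3.41) and the cost distribution (3.43) follow directly:

```python
import math
import numpy as np

def cost_inverse(x, arcs, Phi_inv, n, c, r, alphas):
    """Inverse uncertainty distribution of the total cost C(x, xi), Eq. (3.43)."""
    Psi = start_inverse(x, arcs, Phi_inv, n, alphas)
    # Eq. (3.41): completion time of the whole project (arcs into node n+1)
    completion = np.maximum.reduce(
        [Psi[k] + Phi_inv[(k, n + 1)](alphas) for (k, j) in arcs if j == n + 1])
    # Eq. (3.43): compound each activity's cost over the ceiling of its delay
    return np.array([sum(c[(i, j)] * (1 + r) ** math.ceil(psi - x[i])
                         for (i, j) in arcs)
                     for psi in completion])
```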

Project Scheduling Model


In order to minimize the expected cost of the project under the completion
time constraint, we may construct the following project scheduling model,

$$\begin{cases} \min\limits_{x} E[C(x,\xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{T(x,\xi)\le T_0\}\ge\alpha_0 \\ \quad x\ge 0,\ \text{integer vector} \end{cases}\tag{3.44}$$

where T0 is a due date of the project, α0 is a predetermined confidence level,


T (x, ξ) is the completion time defined by (3.40), and C(x, ξ) is the total cost

defined by (3.42). This model is equivalent to

$$\begin{cases} \min\limits_{x} \displaystyle\int_0^1 \Upsilon^{-1}(x,\alpha)\,\mathrm{d}\alpha \\ \text{subject to:} \\ \quad \Psi^{-1}(x,\alpha_0)\le T_0 \\ \quad x\ge 0,\ \text{integer vector} \end{cases}\tag{3.45}$$

where Ψ⁻¹(x, α) is the inverse uncertainty distribution of T(x, ξ) determined
by (3.41) and Υ⁻¹(x, α) is the inverse uncertainty distribution of C(x, ξ)
determined by (3.43).

Numerical Experiment

Consider a project scheduling problem shown by Figure 3.5 in which there are
8 milestones and 11 activities. Assume that all duration times of activities
are linear uncertain variables,

ξij ∼ L(3i, 3j), ∀(i, j) ∈ A

and the costs of activities are

cij = i + j, ∀(i, j) ∈ A.

In addition, we also suppose that the interest rate is r = 0.02, the due date is
T0 = 60, and the confidence level is α0 = 0.85. The Matlab Uncertainty Tool-
box (http://orsc.edu.cn/liu/resources.htm) yields that the optimal solution
is
x∗ = (7, 24, 17, 16, 35, 33, 30). (3.46)

In other words, the optimal allocating times of all loans needed for all activ-
ities are shown in Table 3.2 whose expected total cost is 190.6, and

M{T (x∗ , ξ) ≤ 60} = 0.88.

Table 3.2: Optimal Allocating Times of Loans

Date 7 16 17 24 30 33 35
Node 1 4 3 2 7 6 5
Loan 12 11 27 7 15 14 13

3.6 Uncertain Multiobjective Programming


It has been increasingly recognized that many real decision-making problems
involve multiple, noncommensurable, and conflicting objectives which should
be considered simultaneously. In order to optimize multiple objectives, mul-
tiobjective programming has been well developed and applied widely. For
modelling multiobjective decision-making problems with uncertain parame-
ters, Liu-Chen [96] presented the following uncertain multiobjective programming,

$$\begin{cases} \min\limits_{x}\left(E[f_1(x,\xi)],E[f_2(x,\xi)],\cdots,E[f_m(x,\xi)]\right) \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\ j=1,2,\cdots,p \end{cases}\tag{3.47}$$

where fi (x, ξ) are objective functions for i = 1, 2, · · · , m, gj (x, ξ) are con-


straint functions, and αj are confidence levels for j = 1, 2, · · · , p.
Since the objectives are usually in conflict, there is no optimal solution
that simultaneously minimizes all the objective functions. In this case, we
have to introduce the concept of Pareto solution, which means that it is
impossible to improve any one objective without sacrificing on one or more
of the other objectives.
Definition 3.3 A feasible solution x∗ is said to be Pareto to the uncertain
multiobjective programming (3.47) if there is no feasible solution x such that
E[fi (x, ξ)] ≤ E[fi (x∗ , ξ)], i = 1, 2, · · · , m (3.48)
and E[fj (x, ξ)] < E[fj (x∗ , ξ)] for at least one index j.
If the decision maker has a real-valued preference function aggregating
the m objective functions, then we may minimize the aggregating preference
function subject to the same set of chance constraints. This model is referred
to as a compromise model whose solution is called a compromise solution.
It has been proved that the compromise solution is Pareto to the original
multiobjective model.
The first well-known compromise model is set up by weighting the objec-
tive functions, i.e.,
$$\begin{cases} \min\limits_{x}\sum_{i=1}^{m}\lambda_i E[f_i(x,\xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\ j=1,2,\cdots,p \end{cases}\tag{3.49}$$
where the weights λ1 , λ2 , · · · , λm are nonnegative numbers with λ1 + λ2 +
· · · + λm = 1, for example, λi ≡ 1/m for i = 1, 2, · · · , m.
The second way is related to minimizing the distance function from a
solution
(E[f1 (x, ξ)], E[f2 (x, ξ)], · · · , E[fm (x, ξ)]) (3.50)

to an ideal vector (f₁*, f₂*, · · · , f_m*), where fᵢ* are the optimal values of the
ith objective functions without considering other objectives, i = 1, 2, · · · , m,
respectively. That is,

$$\begin{cases} \min\limits_{x}\sum_{i=1}^{m}\lambda_i\left(E[f_i(x,\xi)]-f_i^*\right)^2 \\ \text{subject to:} \\ \quad \mathcal{M}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\ j=1,2,\cdots,p \end{cases}\tag{3.51}$$

where the weights λ1 , λ2 , · · · , λm are nonnegative numbers with λ1 + λ2 +


· · · + λm = 1, for example, λi ≡ 1/m for i = 1, 2, · · · , m.
The third way is to find a compromise solution via an interactive approach
consisting of a sequence of decision phases and computation phases.
Various interactive approaches have been developed.

3.7 Uncertain Goal Programming


The concept of goal programming was presented by Charnes-Cooper [4] in
1961 and subsequently studied by many researchers. Goal programming can
be regarded as a special compromise model for multiobjective optimization
and has been applied in a wide variety of real-world problems. In multiob-
jective decision-making problems, we assume that the decision-maker is able
to assign a target level for each goal and the key idea is to minimize the de-
viations (positive, negative, or both) from the target levels. In the real-world
situation, the goals are achievable only at the expense of other goals and
these goals are usually incompatible. In order to balance multiple conflicting
objectives, a decision-maker may establish a hierarchy of importance among
these incompatible goals so as to satisfy as many goals as possible in the
order specified. For multiobjective decision-making problems with uncertain
parameters, Liu-Chen [96] proposed an uncertain goal programming,

$$\begin{cases} \min\limits_{x}\sum_{j=1}^{l}P_j\sum_{i=1}^{m}(u_{ij}d_i^+ + v_{ij}d_i^-) \\ \text{subject to:} \\ \quad E[f_i(x,\xi)]+d_i^- - d_i^+ = b_i,\ i=1,2,\cdots,m \\ \quad \mathcal{M}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\ j=1,2,\cdots,p \\ \quad d_i^+,\,d_i^-\ge 0,\ i=1,2,\cdots,m \end{cases}\tag{3.52}$$

where Pj are the preemptive priority factors, uij and vij are the weighting
factors, dᵢ⁺ are the positive deviations, dᵢ⁻ are the negative deviations, fi are
the functions in goal constraints, gj are the functions in real constraints, bi
are the target values, αj are the confidence levels, l is the number of priorities,
m is the number of goal constraints, and p is the number of real constraints.

Note that the positive and negative deviations are calculated by

$$d_i^+=\begin{cases} E[f_i(x,\xi)]-b_i, & \text{if } E[f_i(x,\xi)]>b_i \\ 0, & \text{otherwise} \end{cases}\tag{3.53}$$

and

$$d_i^-=\begin{cases} b_i-E[f_i(x,\xi)], & \text{if } E[f_i(x,\xi)]<b_i \\ 0, & \text{otherwise} \end{cases}\tag{3.54}$$

for each i. Sometimes, the objective function in the goal programming model
is written as follows,

$$\operatorname{lexmin}\left\{\sum_{i=1}^{m}(u_{i1}d_i^+ + v_{i1}d_i^-),\ \sum_{i=1}^{m}(u_{i2}d_i^+ + v_{i2}d_i^-),\ \cdots,\ \sum_{i=1}^{m}(u_{il}d_i^+ + v_{il}d_i^-)\right\}$$

where lexmin represents lexicographically minimizing the objective vector.

3.8 Uncertain Multilevel Programming


Multilevel programming offers a means of studying decentralized decision
systems in which we assume that the leader and followers may have their
own decision variables and objective functions, and the leader can only influ-
ence the reactions of followers through his own decision variables, while the
followers have full authority to decide how to optimize their own objective
functions in view of the decisions of the leader and other followers.
Assume that in a decentralized two-level decision system there is one
leader and m followers. Let x and y i be the control vectors of the leader
and the ith followers, i = 1, 2, · · · , m, respectively. We also assume that the
objective functions of the leader and ith followers are F (x, y 1 , · · · , y m , ξ) and
fi (x, y 1 , · · · , y m , ξ), i = 1, 2, · · · , m, respectively, where ξ is an uncertain
vector.
Let the feasible set of control vector x of the leader be defined by the
chance constraint
M{G(x, ξ) ≤ 0} ≥ α (3.55)
where G is a constraint function, and α is a predetermined confidence level.
Then for each decision x chosen by the leader, the feasibility of control vec-
tors y i of the ith followers should be dependent on not only x but also
y 1 , · · · , y i−1 , y i+1 , · · · , y m , and generally represented by the chance con-
straints,
M{gi (x, y 1 , y 2 , · · · , y m , ξ) ≤ 0} ≥ αi (3.56)
where gi are constraint functions, and αi are predetermined confidence levels,
i = 1, 2, · · · , m, respectively.
Assume that the leader first chooses his control vector x, and the fol-
lowers determine their control array (y 1 , y 2 , · · · , y m ) after that. In order

to minimize the expected objective of the leader, Liu-Yao [97] proposed the
following uncertain multilevel programming,

$$\begin{cases} \min\limits_{x} E[F(x,y_1^*,y_2^*,\cdots,y_m^*,\xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{G(x,\xi)\le 0\}\ge\alpha \\ \quad (y_1^*,y_2^*,\cdots,y_m^*) \text{ solves problems } (i=1,2,\cdots,m) \\ \qquad \begin{cases} \min\limits_{y_i} E[f_i(x,y_1,y_2,\cdots,y_m,\xi)] \\ \text{subject to:} \\ \quad \mathcal{M}\{g_i(x,y_1,y_2,\cdots,y_m,\xi)\le 0\}\ge\alpha_i. \end{cases} \end{cases}\tag{3.57}$$
 

Definition 3.4 Let x be a feasible control vector of the leader. A Nash
equilibrium of followers is the feasible array (y₁*, y₂*, · · · , y_m*) with respect to
x if

E[fi(x, y₁*, · · · , y_{i−1}*, y_i, y_{i+1}*, · · · , y_m*, ξ)] ≥ E[fi(x, y₁*, · · · , y_{i−1}*, y_i*, y_{i+1}*, · · · , y_m*, ξ)]     (3.58)

for any feasible array (y₁*, · · · , y_{i−1}*, y_i, y_{i+1}*, · · · , y_m*) and i = 1, 2, · · · , m.

Definition 3.5 Suppose that x* is a feasible control vector of the leader and
(y₁*, y₂*, · · · , y_m*) is a Nash equilibrium of followers with respect to x*. We call
the array (x*, y₁*, y₂*, · · · , y_m*) a Stackelberg-Nash equilibrium to the uncertain
multilevel programming (3.57) if

E[F(x, y₁, y₂, · · · , y_m, ξ)] ≥ E[F(x*, y₁*, y₂*, · · · , y_m*, ξ)]     (3.59)

for any feasible control vector x and the Nash equilibrium (y₁, y₂, · · · , y_m)
with respect to x.

3.9 Bibliographic Notes


Uncertain programming was founded by Liu [79] in 2009 and was applied to
machine scheduling problem, vehicle routing problem and project scheduling
problem by Liu [84] in 2010.
As extensions of uncertain programming theory, Liu-Chen [96] developed
an uncertain multiobjective programming and an uncertain goal program-
ming. In addition, Liu-Yao [97] suggested an uncertain multilevel program-
ming for modeling decentralized decision systems with uncertain factors.
Since then, uncertain programming has produced fruitful results in
both theory and practice. For exploring more books and papers, the inter-
ested reader may visit the website at http://orsc.edu.cn/online.
Chapter 4

Uncertain Risk Analysis

The term risk has been used in different ways in literature. Here the risk
is defined as the “accidental loss” plus “uncertain measure of such loss”.
Uncertain risk analysis is a tool to quantify risk via uncertainty theory. One
main feature of this topic is to model events that almost never occur. This
chapter will introduce a definition of risk index and provide some useful
formulas for calculating risk index. This chapter will also discuss structural
risk analysis and investment risk analysis in uncertain environments.

4.1 Loss Function


A system usually contains some factors ξ1 , ξ2 , · · · , ξn that may be under-
stood as lifetime, strength, demand, production rate, cost, profit, and re-
source. Generally speaking, some specified loss is dependent on those factors.
Although loss is a problem-dependent concept, usually such a loss may be
represented by a loss function.

Definition 4.1 Consider a system with factors ξ1 , ξ2 , · · · , ξn . A function f


is called a loss function if some specified loss occurs if and only if

f (ξ1 , ξ2 , · · · , ξn ) > 0. (4.1)

Example 4.1: Consider a series system in which there are n elements whose
lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works whenever
all elements work. Thus the system lifetime is

ξ = ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (4.2)

If the loss is understood as the case that the system fails before the time T ,
then we have a loss function

f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn . (4.3)

Figure 4.1: A Series System (Input → 1 → 2 → 3 → Output)

Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.

Example 4.2: Consider a parallel system in which there are n elements


whose lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works
whenever at least one element works. Thus the system lifetime is
ξ = ξ1 ∨ ξ2 ∨ · · · ∨ ξn . (4.4)
If the loss is understood as the case that the system fails before the time T ,
then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∨ ξ2 ∨ · · · ∨ ξn . (4.5)
Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.
Figure 4.2: A Parallel System (Input branches to elements 1, 2, 3 in parallel and rejoins at Output)

Example 4.3: Consider a k-out-of-n system in which there are n elements


whose lifetimes are uncertain variables ξ1 , ξ2 , · · · , ξn . Such a system works
whenever at least k of n elements work. Thus the system lifetime is
ξ = k-max [ξ1 , ξ2 , · · · , ξn ]. (4.6)
If the loss is understood as the case that the system fails before the time T ,
then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − k-max [ξ1 , ξ2 , · · · , ξn ]. (4.7)
Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0. Note that a series
system is an n-out-of-n system, and a parallel system is a 1-out-of-n system.

Example 4.4: Consider a standby system in which there are n redundant


elements whose lifetimes are ξ1 , ξ2 , · · · , ξn . For this system, only one element
is active, and one of the redundant elements begins to work only when the
active element fails. Thus the system lifetime is
ξ = ξ1 + ξ2 + · · · + ξn . (4.8)

If the loss is understood as the case that the system fails before the time T ,
then the loss function is
f (ξ1 , ξ2 , · · · , ξn ) = T − (ξ1 + ξ2 + · · · + ξn ). (4.9)
Hence the system fails if and only if f (ξ1 , ξ2 , · · · , ξn ) > 0.
Figure 4.3: A Standby System (elements 1, 2, 3 in standby redundancy between Input and Output)

4.2 Risk Index


In practice, the factors ξ1 , ξ2 , · · · , ξn of a system are usually uncertain vari-
ables rather than known constants. Thus the risk index is defined as the
uncertain measure that some specified loss occurs.
Definition 4.2 (Liu [83]) Assume that a system contains uncertain factors
ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the risk index is the uncertain
measure that the system is loss-positive, i.e.,
Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (4.10)
Theorem 4.1 Assume that a system contains uncertain factors ξ1 , ξ2 , · · · , ξn ,
and has a loss function f . If f (ξ1 , ξ2 , · · · , ξn ) has an uncertainty distribution
Φ, then the risk index is
Risk = 1 − Φ(0). (4.11)
Proof: It follows from the definition of risk index and the duality axiom that
Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}
= 1 − M{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0}
= 1 − Φ(0).
The theorem is proved.
Theorem 4.2 (Liu [83], Risk Index Theorem) Assume a system contains
independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty distri-
butions Φ1 , Φ2 , · · · , Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn ) is
strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with
respect to ξm+1 , ξm+2 , · · · , ξn , then the risk index is just the root α of the
equation
f(Φ₁⁻¹(1 − α), · · · , Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), · · · , Φ_n⁻¹(α)) = 0.     (4.12)

Proof: It follows from Theorem 2.14 that f(ξ1, ξ2, · · · , ξn) has an inverse
uncertainty distribution

Φ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φ_m⁻¹(α), Φ_{m+1}⁻¹(1 − α), · · · , Φ_n⁻¹(1 − α)).

Since Risk = 1 − Φ(0), it is the solution α of the equation Φ⁻¹(1 − α) = 0.
The theorem is thus proved.

Remark 4.1: Since f(Φ₁⁻¹(1 − α), · · · , Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), · · · , Φ_n⁻¹(α)) is
a strictly decreasing function with respect to α, its root α may be estimated
by the bisection method.

Remark 4.2: Keep in mind that sometimes the equation (4.12) may not
have a root. In this case, if

f(Φ₁⁻¹(1 − α), · · · , Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), · · · , Φ_n⁻¹(α)) < 0     (4.13)

for all α, then we set the root α = 0; and if

f(Φ₁⁻¹(1 − α), · · · , Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), · · · , Φ_n⁻¹(α)) > 0     (4.14)

for all α, then we set the root α = 1.
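A minimal bisection sketch for the root of (4.12), with the boundary conventions of Remark 4.2 (the interface is an assumption for illustration):

```python
def risk_index(f, Phi_inv, m, tol=1e-8):
    """Root alpha of Eq. (4.12); f is increasing in its first m arguments."""
    n = len(Phi_inv)

    def g(alpha):  # left-hand side of Eq. (4.12), strictly decreasing in alpha
        args = [Phi_inv[i](1 - alpha) if i < m else Phi_inv[i](alpha)
                for i in range(n)]
        return f(*args)

    lo, hi = tol, 1 - tol
    if g(lo) < 0:  # Eq. (4.13): the loss is negative for all alpha
        return 0.0
    if g(hi) > 0:  # Eq. (4.14): the loss is positive for all alpha
        return 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2
```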

4.3 Series System

Consider a series system in which there are n elements whose lifetimes are
independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty dis-
tributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the case
that the system fails before the time T , then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∧ ξ2 ∧ · · · ∧ ξn (4.15)

and the risk index is

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (4.16)

Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk


index theorem says that the risk index is just the root α of the equation

Φ₁⁻¹(α) ∧ Φ₂⁻¹(α) ∧ · · · ∧ Φ_n⁻¹(α) = T.     (4.17)

It is easy to verify that

Risk = Φ1 (T ) ∨ Φ2 (T ) ∨ · · · ∨ Φn (T ). (4.18)

4.4 Parallel System


Consider a parallel system in which there are n elements whose lifetimes
are independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the
case that the system fails before the time T , then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − ξ1 ∨ ξ2 ∨ · · · ∨ ξn (4.19)

and the risk index is

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (4.20)

Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk


index theorem says that the risk index is just the root α of the equation

Φ₁⁻¹(α) ∨ Φ₂⁻¹(α) ∨ · · · ∨ Φ_n⁻¹(α) = T.     (4.21)

It is easy to verify that

Risk = Φ1 (T ) ∧ Φ2 (T ) ∧ · · · ∧ Φn (T ). (4.22)

4.5 k-out-of-n System


Consider a k-out-of-n system in which there are n elements whose lifetimes
are independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the
case that the system fails before the time T , then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − k-max [ξ1 , ξ2 , · · · , ξn ] (4.23)

and the risk index is

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (4.24)

Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk


index theorem says that the risk index is just the root α of the equation

k-max [Φ₁⁻¹(α), Φ₂⁻¹(α), · · · , Φ_n⁻¹(α)] = T.     (4.25)

It is easy to verify that

Risk = k-min [Φ1 (T ), Φ2 (T ), · · · , Φn (T )]. (4.26)

Note that a series system is essentially an n-out-of-n system. In this case,


the risk index formula (4.26) becomes (4.18). In addition, a parallel system
is essentially a 1-out-of-n system. In this case, the risk index formula (4.26)
becomes (4.22).

4.6 Standby System


Consider a standby system in which there are n elements whose lifetimes
are independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss is understood as the
case that the system fails before the time T , then the loss function is

f (ξ1 , ξ2 , · · · , ξn ) = T − (ξ1 + ξ2 + · · · + ξn ) (4.27)

and the risk index is

Risk = M{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (4.28)

Since f is a strictly decreasing function with respect to ξ1 , ξ2 , · · · , ξn , the risk


index theorem says that the risk index is just the root α of the equation

Φ₁⁻¹(α) + Φ₂⁻¹(α) + · · · + Φ_n⁻¹(α) = T.     (4.29)

4.7 Structural Risk Analysis


Uncertain structural risk analysis was first investigated by Liu [95]. Consider
a structural system in which the strengths and loads are assumed to be
uncertain variables. We will suppose that a structural system fails whenever,
for at least one rod, the load variable exceeds its strength variable. If the
structural risk index is defined as the uncertain measure that the structural
system fails, then

$$\text{Risk}=\mathcal{M}\left\{\bigcup_{i=1}^{n}(\xi_i<\eta_i)\right\}\tag{4.30}$$

where ξ1 , ξ2 , · · · , ξn are strength variables, and η1 , η2 , · · · , ηn are load vari-


ables of the n rods.

Example 4.5: (The Simplest Case) Assume there is only a single strength
variable ξ and a single load variable η with regular uncertainty distributions
Φ and Ψ, respectively. In this case, the structural risk index is

Risk = M{ξ < η}.

It follows from the risk index theorem that the risk index is just the root α
of the equation
Φ−1 (α) = Ψ−1 (1 − α). (4.31)
Especially, if the strength variable ξ has a normal uncertainty distribution
N (es , σs ) and the load variable η has a normal uncertainty distribution
N (el , σl ), then the structural risk index is
$$\text{Risk}=\left(1+\exp\left(\frac{\pi(e_s-e_l)}{\sqrt{3}(\sigma_s+\sigma_l)}\right)\right)^{-1}.\tag{4.32}$$
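As a numerical check of (4.32), the root of (4.31) can be computed directly and compared with the closed form; the parameter values below are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

def normal_inv(e, s):
    # Inverse of the normal uncertainty distribution N(e, s)
    return lambda a: e + s * np.sqrt(3) / np.pi * np.log(a / (1 - a))

es, ss, el, sl = 10.0, 2.0, 8.0, 1.0  # hypothetical strength/load parameters
phi_inv, psi_inv = normal_inv(es, ss), normal_inv(el, sl)

root = brentq(lambda a: phi_inv(a) - psi_inv(1 - a), 1e-9, 1 - 1e-9)
closed = 1 / (1 + np.exp(np.pi * (es - el) / (np.sqrt(3) * (ss + sl))))
print(root, closed)  # the two values agree (about 0.23 here)
```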

Example 4.6: (Constant Loads) Assume the uncertain strength variables


ξ1 , ξ2 , · · · , ξn are independent and have continuous uncertainty distributions
Φ1 , Φ2 , · · · , Φn , respectively. In many cases, the load variables η1 , η2 , · · · , ηn
degenerate to crisp values c1 , c2 , · · · , cn (for example, weight limits allowed
by the legislation), respectively. In this case, it follows from (4.30) and inde-
pendence that the structural risk index is
$$\text{Risk}=\mathcal{M}\left\{\bigcup_{i=1}^{n}(\xi_i<c_i)\right\}=\bigvee_{i=1}^{n}\mathcal{M}\{\xi_i<c_i\}.$$

That is,
Risk = Φ1 (c1 ) ∨ Φ2 (c2 ) ∨ · · · ∨ Φn (cn ). (4.33)

Example 4.7: (Independent Load Variables) Assume the uncertain strength


variables ξ1 , ξ2 , · · · , ξn are independent and have regular uncertainty distri-
butions Φ1 , Φ2 , · · · , Φn , respectively. Also assume the uncertain load vari-
ables η1 , η2 , · · · , ηn are independent and have regular uncertainty distribu-
tions Ψ1 , Ψ2 , · · · , Ψn , respectively. In this case, it follows from (4.30) and
independence that the structural risk index is
$$\text{Risk}=\mathcal{M}\left\{\bigcup_{i=1}^{n}(\xi_i<\eta_i)\right\}=\bigvee_{i=1}^{n}\mathcal{M}\{\xi_i<\eta_i\}.$$

That is,

Risk = α1 ∨ α2 ∨ · · · ∨ αn     (4.34)

where αi are the roots of the equations

Φᵢ⁻¹(α) = Ψᵢ⁻¹(1 − α)     (4.35)

for i = 1, 2, · · · , n, respectively.
However, generally speaking, the load variables η1, η2, · · · , ηn are neither
constants nor independent. For example, the load variables η1, η2, · · · , ηn
may be functions of independent uncertain variables τ1, τ2, · · · , τm. In this
case, the formula (4.34) is no longer valid. Thus we have to deal with those
structural systems case by case.

Example 4.8: (Series System) Consider a structural system shown in Fig-


ure 4.4 that consists of n rods in series and an object. Assume that the
strength variables of the n rods are uncertain variables ξ1 , ξ2 , · · · , ξn with
regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. We also as-
sume that the gravity of the object is an uncertain variable η with regular
uncertainty distribution Ψ. For each i (1 ≤ i ≤ n), the load variable of the
rod i is just the gravity η of the object. Thus the structural system fails

whenever the load variable η exceeds at least one of the strength variables
ξ1 , ξ2 , · · · , ξn . Hence the structural risk index is
$$\text{Risk}=\mathcal{M}\left\{\bigcup_{i=1}^{n}(\xi_i<\eta)\right\}=\mathcal{M}\{\xi_1\wedge\xi_2\wedge\cdots\wedge\xi_n<\eta\}.$$

Define the loss function as

f (ξ1 , ξ2 , · · · , ξn , η) = η − ξ1 ∧ ξ2 ∧ · · · ∧ ξn .

Then
Risk = M{f (ξ1 , ξ2 , · · · , ξn , η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , · · · , ξn , it follows from the risk index theo-
rem that the risk index is just the root α of the equation

Ψ⁻¹(1 − α) − Φ₁⁻¹(α) ∧ Φ₂⁻¹(α) ∧ · · · ∧ Φ_n⁻¹(α) = 0.     (4.36)

Or equivalently, let αi be the roots of the equations

Ψ⁻¹(1 − α) = Φᵢ⁻¹(α)     (4.37)

for i = 1, 2, · · · , n, respectively. Then the structural risk index is

Risk = α1 ∨ α2 ∨ · · · ∨ αn . (4.38)

Figure 4.4: A Structural System with n Rods and an Object (n parallel rods hang from a ceiling and jointly carry the object)

Example 4.9: Consider a structural system shown in Figure 4.5 that consists
of 2 rods and an object. Assume that the strength variables of the left and

right rods are uncertain variables ξ1 and ξ2 with uncertainty distributions
Φ1 and Φ2, respectively. We also assume that the gravity of the object is an
uncertain variable η with regular uncertainty distribution Ψ. In this case,
the load variables of the left and right rods are respectively equal to

$$\frac{\eta\sin\theta_2}{\sin(\theta_1+\theta_2)},\qquad \frac{\eta\sin\theta_1}{\sin(\theta_1+\theta_2)}.$$

Thus the structural system fails whenever, for any one rod, the load variable
exceeds its strength variable. Hence the structural risk index is

$$\begin{aligned} \text{Risk}&=\mathcal{M}\left\{\left(\xi_1<\frac{\eta\sin\theta_2}{\sin(\theta_1+\theta_2)}\right)\cup\left(\xi_2<\frac{\eta\sin\theta_1}{\sin(\theta_1+\theta_2)}\right)\right\} \\ &=\mathcal{M}\left\{\left(\frac{\xi_1}{\sin\theta_2}<\frac{\eta}{\sin(\theta_1+\theta_2)}\right)\cup\left(\frac{\xi_2}{\sin\theta_1}<\frac{\eta}{\sin(\theta_1+\theta_2)}\right)\right\} \\ &=\mathcal{M}\left\{\frac{\xi_1}{\sin\theta_2}\wedge\frac{\xi_2}{\sin\theta_1}<\frac{\eta}{\sin(\theta_1+\theta_2)}\right\}. \end{aligned}$$

Define the loss function as

$$f(\xi_1,\xi_2,\eta)=\frac{\eta}{\sin(\theta_1+\theta_2)}-\frac{\xi_1}{\sin\theta_2}\wedge\frac{\xi_2}{\sin\theta_1}.$$

Then
Risk = M{f(ξ1, ξ2, η) > 0}.
Since the loss function f is strictly increasing with respect to η and strictly
decreasing with respect to ξ1 , ξ2 , it follows from the risk index theorem that
the risk index is just the root α of the equation

$$\frac{\Psi^{-1}(1-\alpha)}{\sin(\theta_1+\theta_2)}-\frac{\Phi_1^{-1}(\alpha)}{\sin\theta_2}\wedge\frac{\Phi_2^{-1}(\alpha)}{\sin\theta_1}=0.\tag{4.39}$$

Or equivalently, let α1 be the root of the equation

$$\frac{\Psi^{-1}(1-\alpha)}{\sin(\theta_1+\theta_2)}=\frac{\Phi_1^{-1}(\alpha)}{\sin\theta_2}\tag{4.40}$$

and let α2 be the root of the equation

$$\frac{\Psi^{-1}(1-\alpha)}{\sin(\theta_1+\theta_2)}=\frac{\Phi_2^{-1}(\alpha)}{\sin\theta_1}.\tag{4.41}$$

Then the structural risk index is

Risk = α1 ∨ α2.     (4.42)

Figure 4.5: A Structural System with 2 Rods and an Object (two rods hang from a ceiling at angles θ1 and θ2 and jointly carry the object)

4.8 Investment Risk Analysis


Uncertain investment risk analysis was first studied by Liu [95]. Assume that
an investor has n projects whose returns are uncertain variables ξ1 , ξ2 , · · · , ξn .
If the loss is understood as the case that total return ξ1 + ξ2 + · · · + ξn is
below a predetermined value c (e.g., the interest rate), then the investment
risk index is
Risk = M{ξ1 + ξ2 + · · · + ξn < c}. (4.43)
If ξ1 , ξ2 , · · · , ξn are independent uncertain variables with regular uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively, then the investment risk index is
just the root α of the equation

Φ₁⁻¹(α) + Φ₂⁻¹(α) + · · · + Φ_n⁻¹(α) = c.     (4.44)

4.9 Value-at-Risk
As a substitute of risk index (4.10), a concept of value-at-risk is given by the
following definition.

Definition 4.3 (Peng [121]) Assume that a system contains uncertain fac-
tors ξ1 , ξ2 , · · ·, ξn and has a loss function f . Then the value-at-risk is defined
as
VaR(α) = sup{x | M{f (ξ1 , ξ2 , · · · , ξn ) ≥ x} ≥ α}. (4.45)

Note that VaR(α) represents the maximum possible loss when α percent of
the right tail distribution is ignored. In other words, the loss f (ξ1 , ξ2 , · · · , ξn )
will exceed VaR(α) with uncertain measure α. See Figure 4.6. If the uncer-
tainty distribution Φ(x) of f (ξ1 , ξ2 , · · · , ξn ) is continuous, then

VaR(α) = sup {x | Φ(x) ≤ 1 − α} . (4.46)



If its inverse uncertainty distribution Φ−1 (α) exists, then

VaR(α) = Φ−1 (1 − α). (4.47)

It is also easy to show that VaR(α) is a monotone decreasing function with


respect to α.

Figure 4.6: Value-at-Risk (the uncertainty distribution Φ(x) of the loss; VaR(α) is the largest x at which Φ(x) ≤ 1 − α)

Theorem 4.3 (Peng [121], Value-at-Risk Theorem) Assume a system con-


tains independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular uncertainty
distributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss function f (ξ1 , ξ2 , · · · , ξn )
is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with
respect to ξm+1 , ξm+2 , · · · , ξn , then

VaR(α) = f(Φ₁⁻¹(1 − α), · · · , Φ_m⁻¹(1 − α), Φ_{m+1}⁻¹(α), · · · , Φ_n⁻¹(α)).     (4.48)

Proof: It follows from the operational law of uncertain variables that the
loss f(ξ1, ξ2, · · · , ξn) has an inverse uncertainty distribution

Φ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φ_m⁻¹(α), Φ_{m+1}⁻¹(1 − α), · · · , Φ_n⁻¹(1 − α)).

The theorem follows from (4.47) immediately.
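Formula (4.48) turns value-at-risk into a single function evaluation. A sketch, following the same conventions as the risk_index sketch earlier:

```python
def value_at_risk(f, Phi_inv, m, alpha):
    """VaR(alpha) from Eq. (4.48); f is increasing in its first m arguments."""
    n = len(Phi_inv)
    args = [Phi_inv[i](1 - alpha) if i < m else Phi_inv[i](alpha)
            for i in range(n)]
    return f(*args)
```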

4.10 Expected Loss


Liu-Ralescu [112] proposed a concept of expected loss that is the expected
value of the loss f (ξ1 , ξ2 , · · · , ξn ) given f (ξ1 , ξ2 , · · · , ξn ) > 0. A formal defi-
nition is given below.

Definition 4.4 (Liu-Ralescu [112]) Assume that a system contains uncertain
factors ξ1, ξ2, · · ·, ξn and has a loss function f. Then the expected loss is
defined as

$$L=\int_0^{+\infty}\mathcal{M}\{f(\xi_1,\xi_2,\cdots,\xi_n)\ge x\}\,\mathrm{d}x.\tag{4.49}$$

If Φ(x) is the uncertainty distribution of the loss f(ξ1, ξ2, · · · , ξn), then
we immediately have

$$L=\int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x.\tag{4.50}$$

If its inverse uncertainty distribution Φ⁻¹(α) exists, then the expected loss is

$$L=\int_0^1\left(\Phi^{-1}(\alpha)\right)^+\mathrm{d}\alpha.\tag{4.51}$$

Theorem 4.4 (Liu-Ralescu [112], Expected Loss Theorem) Assume that a


system contains independent uncertain variables ξ1 , ξ2 , · · · , ξn with regular
uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. If the loss function
f (ξ1 , ξ2 , · · · , ξn ) is strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly
decreasing with respect to ξm+1 , ξm+2 , · · · , ξn , then the expected loss is
$$L=\int_0^1 f^+\!\left(\Phi_1^{-1}(\alpha),\cdots,\Phi_m^{-1}(\alpha),\Phi_{m+1}^{-1}(1-\alpha),\cdots,\Phi_n^{-1}(1-\alpha)\right)\mathrm{d}\alpha.\tag{4.52}$$

Proof: It follows from the operational law of uncertain variables that the
loss f(ξ1, ξ2, · · · , ξn) has an inverse uncertainty distribution

Φ⁻¹(α) = f(Φ₁⁻¹(α), · · · , Φ_m⁻¹(α), Φ_{m+1}⁻¹(1 − α), · · · , Φ_n⁻¹(1 − α)).

The theorem follows from (4.51) immediately.
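Formula (4.52) can be approximated by averaging the positive part of the inverse loss distribution over an α-grid; a sketch:

```python
import numpy as np

def expected_loss(f, Phi_inv, m, grid=10000):
    """Expected loss from Eq. (4.52), approximated on a midpoint alpha-grid."""
    n = len(Phi_inv)
    alphas = (np.arange(grid) + 0.5) / grid
    total = 0.0
    for a in alphas:
        args = [Phi_inv[i](a) if i < m else Phi_inv[i](1 - a) for i in range(n)]
        total += max(f(*args), 0.0)  # the positive part f^+
    return total / grid
```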

4.11 Hazard Distribution


Suppose that ξ is the lifetime of some element. Here we assume it is an
uncertain variable with a prior uncertainty distribution Φ. At some time t,
it is observed that the element is working. What is the residual lifetime of
the element? The following definition answers this question.

Definition 4.5 (Liu [83]) Let ξ be a nonnegative uncertain variable repre-
senting the lifetime of some element. If ξ has a prior uncertainty distribution Φ,
then the hazard distribution at time t is

$$\Phi(x|t)=\begin{cases} 0, & \text{if } \Phi(x)\le\Phi(t) \\ \dfrac{\Phi(x)}{1-\Phi(t)}\wedge 0.5, & \text{if } \Phi(t)<\Phi(x)\le(1+\Phi(t))/2 \\ \dfrac{\Phi(x)-\Phi(t)}{1-\Phi(t)}, & \text{if } (1+\Phi(t))/2\le\Phi(x) \end{cases}\tag{4.53}$$

that is just the conditional uncertainty distribution of ξ given ξ > t.



The hazard distribution is essentially the posterior uncertainty distribu-


tion just after time t given that it is working at time t.
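Formula (4.53) is straightforward to implement. The sketch below builds the hazard distribution from a prior distribution given as a Python function, and checks one point against Exercise 4.1:

```python
def hazard(Phi, t):
    """Hazard distribution Phi(x | t) of Eq. (4.53) from a prior distribution Phi."""
    Pt = Phi(t)

    def Phi_cond(x):
        Px = Phi(x)
        if Px <= Pt:
            return 0.0
        if Px <= (1 + Pt) / 2:
            return min(Px / (1 - Pt), 0.5)
        return (Px - Pt) / (1 - Pt)

    return Phi_cond

# Prior L(0, 10), observed working at t = 4; compare with Exercise 4.1 below
Phi = lambda x: min(max(x / 10, 0.0), 1.0)
print(hazard(Phi, 4)(5))  # (5 - 0)/(10 - 4) wedge 0.5 = 0.5
```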

Exercise 4.1: Let ξ be a linear uncertain variable L(a, b), and t a real
number with a < t < b. Show that the hazard distribution at time t is

$$\Phi(x|t)=\begin{cases} 0, & \text{if } x\le t \\ \dfrac{x-a}{b-t}\wedge 0.5, & \text{if } t<x\le(b+t)/2 \\ \dfrac{x-t}{b-t}\wedge 1, & \text{if } (b+t)/2\le x. \end{cases}$$
Theorem 4.5 (Liu [83], Conditional Risk Index Theorem) Assume that a
system contains uncertain factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f .
Suppose ξ1 , ξ2 , · · · , ξn are independent uncertain variables with regular un-
certainty distributions Φ1 , Φ2 , · · · , Φn , respectively, and f (ξ1 , ξ2 , · · · , ξn ) is
strictly increasing with respect to ξ1 , ξ2 , · · · , ξm and strictly decreasing with
respect to ξm+1 , ξm+2 , · · · , ξn . If it is observed that all elements are working
at some time t, then the risk index is just the root α of the equation

f(Φ₁⁻¹(1 − α|t), · · · , Φ_m⁻¹(1 − α|t), Φ_{m+1}⁻¹(α|t), · · · , Φ_n⁻¹(α|t)) = 0     (4.54)

where Φi(x|t) are hazard distributions determined by

$$\Phi_i(x|t)=\begin{cases} 0, & \text{if } \Phi_i(x)\le\Phi_i(t) \\ \dfrac{\Phi_i(x)}{1-\Phi_i(t)}\wedge 0.5, & \text{if } \Phi_i(t)<\Phi_i(x)\le(1+\Phi_i(t))/2 \\ \dfrac{\Phi_i(x)-\Phi_i(t)}{1-\Phi_i(t)}, & \text{if } (1+\Phi_i(t))/2\le\Phi_i(x) \end{cases}\tag{4.55}$$

for i = 1, 2, · · · , n.

Proof: It follows from Definition 4.5 that each hazard distribution of ele-
ment is determined by (4.55). Thus the conditional risk index is obtained by
Theorem 4.2 immediately.

Exercise 4.2: State and prove conditional value-at-risk theorem and condi-
tional expected loss theorem.

4.12 Bibliographic Notes


Uncertain risk analysis was proposed by Liu [83] in 2010 in which the risk
index was defined as the uncertain measure that some specified loss occurs,
and a risk index theorem was proved. This tool was also successfully applied
by Liu [95] to structural risk analysis and investment risk analysis.

As a substitute of risk index, Peng [121] suggested the concept of value-


at-risk that is the maximum possible loss when the right tail distribution is
ignored. In addition, Liu-Ralescu [112] investigated the concept of expected
loss that takes into account not only the uncertain measure of the loss but
also its severity.
Chapter 5

Uncertain Reliability Analysis

Uncertain reliability analysis is a tool to deal with system reliability via


uncertainty theory. This chapter will introduce a definition of reliability
index and provide some useful formulas for calculating the reliability index.

5.1 Structure Function


Many real systems may be simplified to a Boolean system in which each
element (including the system itself) has two states: working and failure.
We denote the state of element i by the Boolean variable

$$x_i=\begin{cases} 1, & \text{if element } i \text{ works} \\ 0, & \text{if element } i \text{ fails,} \end{cases}\tag{5.1}$$

i = 1, 2, · · · , n, respectively. We also denote the state of the system by the
Boolean variable

$$X=\begin{cases} 1, & \text{if the system works} \\ 0, & \text{if the system fails.} \end{cases}\tag{5.2}$$
Usually, the state of the system is completely determined by the states of its
elements via the so-called structure function.

Definition 5.1 Assume that X is a Boolean system containing elements


x1 , x2 , · · · , xn . A Boolean function f is called a structure function of X
if
X = 1 if and only if f (x1 , x2 , · · · , xn ) = 1. (5.3)

It is obvious that X = 0 if and only if f (x1 , x2 , · · · , xn ) = 0 whenever f is


indeed the structure function of the system.

Example 5.1: For a series system, the structure function is a mapping from
{0, 1}n to {0, 1}, i.e.,

f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (5.4)

Figure 5.1: A Series System (Input → 1 → 2 → 3 → Output)

Example 5.2: For a parallel system, the structure function is a mapping


from {0, 1}n to {0, 1}, i.e.,

f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (5.5)

[Figure 5.2: A Parallel System]

Example 5.3: For a k-out-of-n system that works whenever at least k of the
n elements work, the structure function is a mapping from {0, 1}n to {0, 1},
i.e.,
f (x1 , x2 , · · · , xn ) = k-max [x1 , x2 , · · · , xn ]. (5.6)
In particular, when k = 1, it is a parallel system; when k = n, it is a series
system.
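Since these structure functions recur throughout the chapter, a small code
sketch may help. The following Python functions are our own illustration;
the names series, parallel and k_out_of_n are not the book's notation:

    # Structure functions of Section 5.1; a minimal sketch, not from the book.
    def series(x):
        # f(x1,...,xn) = x1 ∧ x2 ∧ ... ∧ xn
        return min(x)

    def parallel(x):
        # f(x1,...,xn) = x1 ∨ x2 ∨ ... ∨ xn
        return max(x)

    def k_out_of_n(x, k):
        # f(x1,...,xn) = k-max[x1,...,xn], the k-th largest of the states
        return sorted(x, reverse=True)[k - 1]

    # k = 1 recovers the parallel system and k = n the series system:
    assert k_out_of_n([1, 0, 1], 1) == parallel([1, 0, 1]) == 1
    assert k_out_of_n([1, 0, 1], 3) == series([1, 0, 1]) == 0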

5.2 Reliability Index


Each element in a Boolean system is usually represented by a Boolean
uncertain variable, i.e.,

ξ =
    1 with uncertain measure a                                           (5.7)
    0 with uncertain measure 1 − a.

In this case, we will say ξ is an uncertain element with reliability a. The
reliability index is defined as the uncertain measure that the system is working.

Definition 5.2 (Liu [83]) Assume a Boolean system has uncertain elements
ξ1 , ξ2 , · · · , ξn and a structure function f . Then the reliability index is the
uncertain measure that the system is working, i.e.,
Reliability = M{f (ξ1 , ξ2 , · · · , ξn ) = 1}. (5.8)
Theorem 5.1 (Liu [83], Reliability Index Theorem) Assume that a system
contains uncertain elements ξ1 , ξ2 , · · ·, ξn , and has a structure function f . If
ξ1 , ξ2 , · · · , ξn are independent uncertain elements with reliabilities a1 , a2 , · · · ,
an , respectively, then the reliability index is

Reliability =
    sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{f(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5     (5.9)

where xi take values either 0 or 1, and νi are defined by


νi(xi) =
    ai,      if xi = 1                                                   (5.10)
    1 − ai,  if xi = 0

for i = 1, 2, · · · , n, respectively.
Proof: Since ξ1 , ξ2 , · · · , ξn are independent Boolean uncertain variables and
f is a Boolean function, the equation (5.9) follows from Definition 5.2 and
Theorem 2.21 immediately.

5.3 Series System


Consider a series system having independent uncertain elements ξ1 , ξ2 , · · · , ξn
with reliabilities a1 , a2 , · · · , an , respectively. Note that the structure function
is
f (x1 , x2 , · · · , xn ) = x1 ∧ x2 ∧ · · · ∧ xn . (5.11)
It follows from the reliability index theorem that the reliability index is
Reliability = M{ξ1 ∧ ξ2 ∧ · · · ∧ ξn = 1} = a1 ∧ a2 ∧ · · · ∧ an . (5.12)

5.4 Parallel System


Consider a parallel system having independent uncertain elements ξ1 , ξ2 , · · · ,
ξn with reliabilities a1 , a2 , · · · , an , respectively. Note that the structure func-
tion is
f (x1 , x2 , · · · , xn ) = x1 ∨ x2 ∨ · · · ∨ xn . (5.13)

It follows from the reliability index theorem that the reliability index is

Reliability = M{ξ1 ∨ ξ2 ∨ · · · ∨ ξn = 1} = a1 ∨ a2 ∨ · · · ∨ an . (5.14)

5.5 k-out-of-n System


Consider a k-out-of-n system having independent uncertain elements ξ1 , ξ2 ,
· · · , ξn with reliabilities a1 , a2 , · · · , an , respectively. Note that the structure
function has a Boolean form,

f (x1 , x2 , · · · , xn ) = k-max [x1 , x2 , · · · , xn ]. (5.15)

It follows from the reliability index theorem that the reliability index is the
kth largest value of a1 , a2 , · · · , an , i.e.,

Reliability = k-max[a1 , a2 , · · · , an ]. (5.16)

Note that a series system is essentially an n-out-of-n system. In this case,


the reliability index formula (5.16) becomes (5.12). In addition, a parallel
system is essentially a 1-out-of-n system. In this case, the reliability index
formula (5.16) becomes (5.14).

5.6 General System


It is almost impossible to find an analytic formula of the reliability index for
general systems. In such cases, we have to employ a numerical method.
[Figure 5.3: A Bridge System]

Consider the bridge system shown in Figure 5.3, which consists of 5 indepen-
dent uncertain elements whose states are denoted by ξ1 , ξ2 , ξ3 , ξ4 , ξ5 . Assume
each path works if and only if all elements on it are working, and the
system works if and only if there is a path of working elements. Then the
structure function of the bridge system is

f (x1 , x2 , x3 , x4 , x5 ) = (x1 ∧ x4 ) ∨ (x2 ∧ x5 ) ∨ (x1 ∧ x3 ∧ x5 ) ∨ (x2 ∧ x3 ∧ x4 ).



The Boolean System Calculator, a function in the Matlab Uncertainty Tool-


box (http://orsc.edu.cn/liu/resources.htm), may yield the reliability index.
Assume the 5 independent uncertain elements have reliabilities

0.91, 0.92, 0.93, 0.94, 0.95

in uncertain measure. A run of Boolean System Calculator shows that the


reliability index is

Reliability = M{f (ξ1 , ξ2 , · · · , ξ5 ) = 1} = 0.92

in uncertain measure.
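We cannot reproduce the Boolean System Calculator itself here, but the value
0.92 can be checked by evaluating Theorem 5.1 directly over all 2^5 element-
state vectors. The following Python sketch is our own; reliability_index
and bridge are hypothetical names:

    from itertools import product

    def reliability_index(f, a):
        # Theorem 5.1 evaluated by enumerating all 0-1 state vectors.
        n = len(a)
        def min_nu(x):
            # min over i of ν_i(x_i), with ν_i defined by (5.10)
            return min(a[i] if x[i] == 1 else 1 - a[i] for i in range(n))
        sup1 = max((min_nu(x) for x in product((0, 1), repeat=n)
                    if f(x) == 1), default=0)
        sup0 = max((min_nu(x) for x in product((0, 1), repeat=n)
                    if f(x) == 0), default=0)
        return sup1 if sup1 < 0.5 else 1 - sup0

    def bridge(x):
        x1, x2, x3, x4, x5 = x
        return (x1 & x4) | (x2 & x5) | (x1 & x3 & x5) | (x2 & x3 & x4)

    print(reliability_index(bridge, [0.91, 0.92, 0.93, 0.94, 0.95]))  # 0.92

The enumeration is exponential in the number of elements, so this sketch is
only suitable for small systems such as the bridge above.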

5.7 Bibliographic Notes


Uncertain reliability analysis was proposed by Liu [83] in 2010 in which the
reliability index was defined as the uncertain measure that the system is
working, and a reliability index theorem was proved. After that, Zeng-Wen-
Kang [195] and Gao-Yao [34] introduced some different reliability metrics for
uncertain reliability systems.
Chapter 6

Uncertain Propositional
Logic

Propositional logic, which originated in the work of Aristotle (384-322 BC), is a
branch of logic that studies the properties of complex propositions composed
of simpler propositions and logical connectives. Note that the propositions
considered in propositional logic are not arbitrary statements but ones
that are either true or false and not both.
Uncertain propositional logic is a generalization of propositional logic in
which every proposition is abstracted into a Boolean uncertain variable and
the truth value is defined as the uncertain measure that the proposition is
true. This chapter will deal with uncertain propositional logic, including
uncertain proposition, truth value definition, and truth value theorem. This
chapter will also introduce uncertain predicate logic.

6.1 Uncertain Proposition


Definition 6.1 (Li-Liu [72]) An uncertain proposition is a statement whose
truth value is quantified by an uncertain measure.
That is, if we use X to express an uncertain proposition and use α to express
its truth value in uncertain measure, then the uncertain proposition X is
essentially a Boolean uncertain variable
X =
    1 with uncertain measure α                                           (6.1)
    0 with uncertain measure 1 − α
where X = 1 means X is true and X = 0 means X is false.

Example 6.1: “Tom is tall with truth value 0.7” is an uncertain proposition,
where “Tom is tall” is a statement, and its truth value is 0.7 in uncertain
measure.

Example 6.2: “John is young with truth value 0.8” is an uncertain propo-
sition, where “John is young” is a statement, and its truth value is 0.8 in
uncertain measure.

Example 6.3: “Beijing is a big city with truth value 0.9” is an uncertain
proposition, where “Beijing is a big city” is a statement, and its truth value
is 0.9 in uncertain measure.

Connective Symbols
In addition to the proposition symbols X and Y , we also need the negation
symbol ¬, conjunction symbol ∧, disjunction symbol ∨, conditional symbol
→, and biconditional symbol ↔. Note that

¬X means “not X”; (6.2)

X ∧ Y means “X and Y ”; (6.3)


X ∨ Y means “X or Y ”; (6.4)
X → Y = (¬X) ∨ Y means “if X then Y ”, (6.5)
X ↔ Y = (X → Y ) ∧ (Y → X) means “X if and only if Y ”. (6.6)

Boolean Function of Uncertain Propositions


Assume X1 , X2 , · · · , Xn are uncertain propositions. Then their Boolean func-
tion
Z = f (X1 , X2 , · · · , Xn ) (6.7)
is a Boolean uncertain variable. Thus Z is also an uncertain proposition
provided that it makes sense. Usually, such a Boolean function is a finite
sequence of uncertain propositions and connective symbols. For example,

Z = ¬X1 , Z = X1 ∧ (¬X2 ), Z = X1 → X2 (6.8)

are all uncertain propositions.

Independence of Uncertain Propositions


Uncertain propositions are called independent if they are independent uncer-
tain variables. Assume X1 , X2 , · · · , Xn are independent uncertain proposi-
tions. Then
f1 (X1 ), f2 (X2 ), · · · , fn (Xn ) (6.9)
are also independent uncertain propositions for any Boolean functions f1 , f2 ,
· · · , fn . For example, if X1 , X2 , · · · , X5 are independent uncertain proposi-
tions, then ¬X1 , X2 ∨ X3 , X4 → X5 are also independent.

6.2 Truth Value


Truth value is a key concept in uncertain propositional logic, and is defined
as the uncertain measure that the uncertain proposition is true.

Definition 6.2 (Li-Liu [72]) Let X be an uncertain proposition. Then the


truth value of X is defined as the uncertain measure that X is true, i.e.,

T (X) = M{X = 1}. (6.10)

Example 6.4: Let X be an uncertain proposition with truth value α. Then

T (¬X) = M{X = 0} = 1 − α. (6.11)

Example 6.5: Let X and Y be two independent uncertain propositions with


truth values α and β, respectively. Then

T (X ∧ Y ) = M{X ∧ Y = 1} = M{(X = 1) ∩ (Y = 1)} = α ∧ β, (6.12)

T (X ∨ Y ) = M{X ∨ Y = 1} = M{(X = 1) ∪ (Y = 1)} = α ∨ β, (6.13)


T (X → Y ) = T (¬X ∨ Y ) = (1 − α) ∨ β. (6.14)

Theorem 6.1 (Law of Excluded Middle) Let X be an uncertain proposition.


Then X ∨ ¬X is a tautology, i.e.,

T (X ∨ ¬X) = 1. (6.15)

Proof: It follows from the definition of truth value and the property of
uncertain measure that

T (X ∨ ¬X) = M{X ∨ ¬X = 1} = M{(X = 1) ∪ (X = 0)} = M{Γ} = 1.

The theorem is proved.

Theorem 6.2 (Law of Contradiction) Let X be an uncertain proposition.


Then X ∧ ¬X is a contradiction, i.e.,

T (X ∧ ¬X) = 0. (6.16)

Proof: It follows from the definition of truth value and the property of
uncertain measure that

T (X ∧ ¬X) = M{X ∧ ¬X = 1} = M{(X = 1) ∩ (X = 0)} = M{∅} = 0.

The theorem is proved.



Theorem 6.3 (Law of Truth Conservation) Let X be an uncertain proposi-


tion. Then we have
T (X) + T (¬X) = 1. (6.17)
Proof: It follows from the duality axiom of uncertain measure that
T (¬X) = M{¬X = 1} = M{X = 0} = 1 − M{X = 1} = 1 − T (X).
The theorem is proved.
Theorem 6.4 Let X be an uncertain proposition. Then X → X is a tau-
tology, i.e.,
T (X → X) = 1. (6.18)
Proof: It follows from the definition of conditional symbol and the law of
excluded middle that
T (X → X) = T (¬X ∨ X) = 1.
The theorem is proved.
Theorem 6.5 Let X be an uncertain proposition. Then we have
T (X → ¬X) = 1 − T (X). (6.19)
Proof: It follows from the definition of conditional symbol and the law of
truth conservation that
T (X → ¬X) = T (¬X ∨ ¬X) = T (¬X) = 1 − T (X).
The theorem is proved.
Theorem 6.6 (De Morgan’s Law) For any uncertain propositions X and Y ,
we have
T (¬(X ∧ Y )) = T ((¬X) ∨ (¬Y )), (6.20)
T (¬(X ∨ Y )) = T ((¬X) ∧ (¬Y )). (6.21)
Proof: It follows from the basic properties of uncertain measure that
T (¬(X ∧ Y )) = M{X ∧ Y = 0} = M{(X = 0) ∪ (Y = 0)}
= M{(¬X) ∨ (¬Y ) = 1} = T ((¬X) ∨ (¬Y ))
which proves the first equality. A similar way may verify the second equality.
Theorem 6.7 (Law of Contraposition) For any uncertain propositions X
and Y , we have
T (X → Y ) = T (¬Y → ¬X). (6.22)
Proof: It follows from the definition of conditional symbol and basic prop-
erties of uncertain measure that
T (X → Y ) = M{(¬X) ∨ Y = 1} = M{(X = 0) ∪ (Y = 1)}
= M{Y ∨ (¬X) = 1} = T (¬Y → ¬X).
The theorem is proved.

6.3 Chen-Ralescu Theorem


An important contribution to uncertain propositional logic is the Chen-
Ralescu theorem that provides a numerical method for calculating the truth
values of uncertain propositions.

Theorem 6.8 (Chen-Ralescu Theorem [7]) Assume that X1 , X2 , · · · , Xn are


independent uncertain propositions with truth values α1 , α2 , · · ·, αn , respec-
tively. Then for a Boolean function f , the uncertain proposition

Z = f (X1 , X2 , · · · , Xn ) (6.23)

has a truth value

T(Z) =
    sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{f(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5     (6.24)

where xi take values either 0 or 1, and νi are defined by


νi(xi) =
    αi,      if xi = 1                                                   (6.25)
    1 − αi,  if xi = 0

for i = 1, 2, · · · , n, respectively.

Proof: Since Z = 1 if and only if f (X1 , X2 , · · · , Xn ) = 1, we immediately


have
T (Z) = M{f (X1 , X2 , · · · , Xn ) = 1}.
Thus the equation (6.24) follows from Theorem 2.21 immediately.
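For a small n, formula (6.24) can be evaluated by brute force over all 0-1
vectors. The Python sketch below is our own illustration (truth_value is a
hypothetical name); it treats a Boolean function f as a map from a 0-1 tuple
to {0, 1}:

    from itertools import product

    def truth_value(f, alpha):
        # Chen-Ralescu formula (6.24) by enumeration of {0,1}^n.
        n = len(alpha)
        def min_nu(x):
            # min over i of ν_i(x_i), with ν_i defined by (6.25)
            return min(a if xi == 1 else 1 - a for a, xi in zip(alpha, x))
        sup1 = max((min_nu(x) for x in product((0, 1), repeat=n)
                    if f(x) == 1), default=0)
        sup0 = max((min_nu(x) for x in product((0, 1), repeat=n)
                    if f(x) == 0), default=0)
        return sup1 if sup1 < 0.5 else 1 - sup0

    # Exercise 6.1 below: Z = X1 ∧ X2 ∧ X3 has truth value α1 ∧ α2 ∧ α3
    print(truth_value(lambda x: min(x), [0.6, 0.8, 0.7]))  # 0.6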

Example 6.6: Let X1 and X2 be independent uncertain propositions with


truth values α1 and α2 , respectively. Then

Z = X1 ↔ X2 (6.26)

is an uncertain proposition. It is clear that Z = f (X1 , X2 ) if we define

f (1, 1) = 1, f (1, 0) = 0, f (0, 1) = 0, f (0, 0) = 1.

At first, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = max{α1 ∧ α2 , (1 − α1 ) ∧ (1 − α2 )},

sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = max{(1 − α1 ) ∧ α2 , α1 ∧ (1 − α2 )}.

When α1 ≥ 0.5 and α2 ≥ 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = α1 ∧ α2 ≥ 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = 1 − sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = 1 − (1 − α1 ) ∨ (1 − α2 ) = α1 ∧ α2 .

When α1 ≥ 0.5 and α2 < 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (1 − α1 ) ∨ α2 ≤ 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (1 − α1 ) ∨ α2 .

When α1 < 0.5 and α2 ≥ 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = α1 ∨ (1 − α2 ) ≤ 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = α1 ∨ (1 − α2 ).

When α1 < 0.5 and α2 < 0.5, we have

sup_{f(x1,x2)=1} min_{1≤i≤2} νi(xi) = (1 − α1 ) ∧ (1 − α2 ) > 0.5.

It follows from Chen-Ralescu theorem that

T(Z) = 1 − sup_{f(x1,x2)=0} min_{1≤i≤2} νi(xi) = 1 − α1 ∨ α2 = (1 − α1 ) ∧ (1 − α2 ).

Thus we have

T(Z) =
    α1 ∧ α2 ,               if α1 ≥ 0.5 and α2 ≥ 0.5
    (1 − α1 ) ∨ α2 ,        if α1 ≥ 0.5 and α2 < 0.5
    α1 ∨ (1 − α2 ),         if α1 < 0.5 and α2 ≥ 0.5                     (6.27)
    (1 − α1 ) ∧ (1 − α2 ),  if α1 < 0.5 and α2 < 0.5.
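As a sanity check, the closed form (6.27) can be compared with a direct
evaluation of (6.24) over the four states of (x1 , x2 ). The following Python
sketch is our own:

    def truth_biconditional(a1, a2):
        # Direct evaluation of (6.24) for Z = X1 ↔ X2.
        nu = lambda x, a: a if x == 1 else 1 - a
        vals1 = [min(nu(x1, a1), nu(x2, a2))
                 for x1 in (0, 1) for x2 in (0, 1) if x1 == x2]
        vals0 = [min(nu(x1, a1), nu(x2, a2))
                 for x1 in (0, 1) for x2 in (0, 1) if x1 != x2]
        return max(vals1) if max(vals1) < 0.5 else 1 - max(vals0)

    def formula_6_27(a1, a2):
        # The closed form (6.27) derived above.
        if a1 >= 0.5 and a2 >= 0.5: return min(a1, a2)
        if a1 >= 0.5:               return max(1 - a1, a2)
        if a2 >= 0.5:               return max(a1, 1 - a2)
        return min(1 - a1, 1 - a2)

    for a1 in (0.2, 0.4, 0.5, 0.7, 0.9):
        for a2 in (0.1, 0.5, 0.6, 0.8):
            assert abs(truth_biconditional(a1, a2) - formula_6_27(a1, a2)) < 1e-12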

Example 6.7: The independence condition in Theorem 6.8 cannot be re-


moved. For example, take an uncertainty space (Γ, L, M) to be {γ1 , γ2 } with
power set and M{γ1 } = M{γ2 } = 0.5. Then
X1 (γ) =
    0,  if γ = γ1                                                        (6.28)
    1,  if γ = γ2

is an uncertain proposition with truth value

T (X1 ) = 0.5,                                                           (6.29)

and

X2 (γ) =
    1,  if γ = γ1                                                        (6.30)
    0,  if γ = γ2
is also an uncertain proposition with truth value

T (X2 ) = 0.5. (6.31)

Note that X1 and X2 are not independent, and X1 ∨ X2 ≡ 1 from which we


obtain
T (X1 ∨ X2 ) = 1. (6.32)
However, by using (6.24), we get

T (X1 ∨ X2 ) = 0.5. (6.33)

Thus the independence condition cannot be removed.

Exercise 6.1: Let X1 , X2 , · · · , Xn be independent uncertain propositions


with truth values α1 , α2 , · · · , αn , respectively. Then

Z = X1 ∧ X2 ∧ · · · ∧ Xn (6.34)

is an uncertain proposition. Show that the truth value of Z is

T (Z) = α1 ∧ α2 ∧ · · · ∧ αn . (6.35)

Exercise 6.2: Let X1 , X2 , · · · , Xn be independent uncertain propositions


with truth values α1 , α2 , · · · , αn , respectively. Then

Z = X1 ∨ X2 ∨ · · · ∨ Xn (6.36)

is an uncertain proposition. Show that the truth value of Z is

T (Z) = α1 ∨ α2 ∨ · · · ∨ αn . (6.37)

Exercise 6.3: Let X1 and X2 be independent uncertain propositions with


truth values α1 and α2 , respectively. (i) What is the truth value of (X1 ∧
X2 ) → X2 ? (ii) What is the truth value of (X1 ∨ X2 ) → X2 ? (iii) What
is the truth value of X1 → (X1 ∧ X2 )? (iv) What is the truth value of
X1 → (X1 ∨ X2 )?

Exercise 6.4: Let X1 , X2 , X3 be independent uncertain propositions with


truth values α1 , α2 , α3 , respectively. What is the truth value of

X1 ∧ (X1 ∨ X2 ) ∧ (X1 ∨ X2 ∨ X3 )? (6.38)



6.4 Boolean System Calculator


The Boolean System Calculator is a software tool that computes the truth
value of an uncertain proposition. It may be downloaded from the website at
http://orsc.edu.cn/liu/resources.htm. For example, assume X1 , X2 , X3 , X4 ,
X5 are independent uncertain propositions with truth values 0.1, 0.3, 0.5, 0.7,
0.9, respectively. Consider an uncertain proposition,
Z = (X1 ∧ X2 ) ∨ (X2 ∧ X3 ) ∨ (X3 ∧ X4 ) ∨ (X4 ∧ X5 ). (6.39)
It is clear that the corresponding Boolean function of Z has the form

f (x1 , x2 , x3 , x4 , x5 ) =
    1,  if x1 + x2 = 2
    1,  if x2 + x3 = 2
    1,  if x3 + x4 = 2
    1,  if x4 + x5 = 2
    0,  otherwise.

A run of Boolean System Calculator shows that the truth value of Z is 0.7
in uncertain measure.
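The value 0.7 can be reproduced with the truth_value sketch from Section 6.3
(our own code, standing in for the Boolean System Calculator):

    # Assumes truth_value from the sketch in Section 6.3 is in scope.
    alpha = [0.1, 0.3, 0.5, 0.7, 0.9]
    Z = lambda x: 1 if any(x[i] and x[i + 1] for i in range(4)) else 0
    print(truth_value(Z, alpha))  # 0.7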

6.5 Uncertain Predicate Logic


Consider the following propositions: “Beijing is a big city”, and “Tianjin is a
big city”. Uncertain propositional logic treats them as unrelated propositions.
However, uncertain predicate logic represents them by a predicate proposition
X(a). If a represents Beijing, then
X(a) = “Beijing is a big city”. (6.40)
If a represents Tianjin, then
X(a) = “Tianjin is a big city”. (6.41)
Definition 6.3 (Zhang-Li [201]) An uncertain predicate proposition is a
sequence of uncertain propositions indexed by one or more parameters.
In order to deal with uncertain predicate propositions, we need a universal
quantifier ∀ and an existential quantifier ∃. If X(a) is an uncertain predicate
proposition defined by (6.40) and (6.41), then
(∀a)X(a) = “Both Beijing and Tianjin are big cities”, (6.42)
(∃a)X(a) = “At least one of Beijing and Tianjin is a big city”. (6.43)
Theorem 6.9 (Zhang-Li [201], Law of Excluded Middle) Let X(a) be an
uncertain predicate proposition. Then
T ((∀a)X(a) ∨ (∃a)¬X(a)) = 1. (6.44)

Proof: Since ¬(∀a)X(a) = (∃a)¬X(a), it follows from the definition of truth


value and the property of uncertain measure that

T ((∀a)X(a) ∨ (∃a)¬X(a)) = M{((∀a)X(a) = 1) ∪ ((∀a)X(a) = 0)} = 1.

The theorem is proved.

Theorem 6.10 (Zhang-Li [201], Law of Contradiction) Let X(a) be an un-


certain predicate proposition. Then

T ((∀a)X(a) ∧ (∃a)¬X(a)) = 0. (6.45)

Proof: Since ¬(∀a)X(a) = (∃a)¬X(a), it follows from the definition of truth


value and the property of uncertain measure that

T ((∀a)X(a) ∧ (∃a)¬X(a)) = M{((∀a)X(a) = 1) ∩ ((∀a)X(a) = 0)} = 0.

The theorem is proved.

Theorem 6.11 (Zhang-Li [201], Law of Truth Conservation) Let X(a) be


an uncertain predicate proposition. Then

T ((∀a)X(a)) + T ((∃a)¬X(a)) = 1. (6.46)

Proof: Since ¬(∀a)X(a) = (∃a)¬X(a), it follows from the definition of truth


value and the property of uncertain measure that

T ((∃a)¬X(a)) = 1 − M{(∀a)X(a) = 1} = 1 − T ((∀a)X(a)).

The theorem is proved.

Theorem 6.12 (Zhang-Li [201]) Let X(a) be an uncertain predicate propo-


sition. Then for any given b, we have

T ((∀a)X(a) → X(b)) = 1. (6.47)

Proof: The argument breaks into two cases. Case I: If X(b) = 0, then
(∀a)X(a) = 0 and ¬(∀a)X(a) = 1. Thus

(∀a)X(a) → X(b) = ¬(∀a)X(a) ∨ X(b) = 1.

Case II: If X(b) = 1, then we immediately have

(∀a)X(a) → X(b) = ¬(∀a)X(a) ∨ X(b) = 1.

Thus we always have (6.47). The theorem is proved.

Theorem 6.13 (Zhang-Li [201]) Let X(a) be an uncertain predicate propo-


sition. Then for any given b, we have

T (X(b) → (∃a)X(a)) = 1. (6.48)



Proof: The argument breaks into two cases. Case I: If X(b) = 0, then
¬X(b) = 1 and

X(b) → (∃a)X(a) = ¬X(b) ∨ (∃a)X(a) = 1.

Case II: If X(b) = 1, then (∃a)X(a) = 1 and

X(b) → (∃a)X(a) = ¬X(b) ∨ (∃a)X(a) = 1.

Thus we always have (6.48). The theorem is proved.

Theorem 6.14 (Zhang-Li [201]) Let X(a) be an uncertain predicate propo-


sition. Then
T ((∀a)X(a) → (∃a)X(a)) = 1. (6.49)

Proof: The argument breaks into two cases. Case I: If (∀a)X(a) = 0, then
¬(∀a)X(a) = 1 and

(∀a)X(a) → (∃a)X(a) = ¬(∀a)X(a) ∨ (∃a)X(a) = 1.

Case II: If (∀a)X(a) = 1, then (∃a)X(a) = 1 and

(∀a)X(a) → (∃a)X(a) = ¬(∀a)X(a) ∨ (∃a)X(a) = 1.

Thus we always have (6.49). The theorem is proved.

Theorem 6.15 (Zhang-Li [201]) Let X(a) be an uncertain predicate proposi-


tion such that {X(a)|a ∈ A} is a class of independent uncertain propositions.
Then
T ((∀a)X(a)) = inf_{a∈A} T (X(a)),                                       (6.50)

T ((∃a)X(a)) = sup_{a∈A} T (X(a)).                                       (6.51)

Proof: For each uncertain predicate proposition X(a), by the meaning of


universal quantifier, we obtain
T ((∀a)X(a)) = M{(∀a)X(a) = 1} = M{ ∩_{a∈A} (X(a) = 1) }.

Since {X(a)|a ∈ A} is a class of independent uncertain propositions, we get

T ((∀a)X(a)) = inf_{a∈A} M{X(a) = 1} = inf_{a∈A} T (X(a)).

The first equation is verified. Similarly, by the meaning of existential
quantifier, we obtain

T ((∃a)X(a)) = M{(∃a)X(a) = 1} = M{ ∪_{a∈A} (X(a) = 1) }.

Since {X(a)|a ∈ A} is a class of independent uncertain propositions, we get

T ((∃a)X(a)) = sup_{a∈A} M{X(a) = 1} = sup_{a∈A} T (X(a)).

The second equation is proved.

Theorem 6.16 (Zhang-Li [201]) Let X(a, b) be an uncertain predicate propo-


sition such that {X(a, b)|a ∈ A, b ∈ B} is a class of independent uncertain
propositions. Then

T ((∀a)(∃b)X(a, b)) = inf_{a∈A} sup_{b∈B} T (X(a, b)),                   (6.52)

T ((∃a)(∀b)X(a, b)) = sup_{a∈A} inf_{b∈B} T (X(a, b)).                   (6.53)

Proof: Since {X(a, b)|a ∈ A, b ∈ B} is a class of independent uncertain


propositions, both {(∃b)X(a, b)|a ∈ A} and {(∀b)X(a, b)|a ∈ A} are two
classes of independent uncertain propositions. It follows from Theorem 6.15
that

T ((∀a)(∃b)X(a, b)) = inf_{a∈A} T ((∃b)X(a, b)) = inf_{a∈A} sup_{b∈B} T (X(a, b)),

T ((∃a)(∀b)X(a, b)) = sup_{a∈A} T ((∀b)X(a, b)) = sup_{a∈A} inf_{b∈B} T (X(a, b)).

The theorem is proved.

6.6 Bibliographic Notes


Uncertain propositional logic was designed by Li-Liu [72] in which every
proposition is abstracted into a Boolean uncertain variable and the truth
value is defined as the uncertain measure that the proposition is true. An
important contribution is Chen-Ralescu theorem [7] that provides a numerical
method for calculating the truth value of uncertain propositions.
Another topic is the uncertain predicate logic developed by Zhang-Li [201]
in which an uncertain predicate proposition is defined as a sequence of un-
certain propositions indexed by one or more parameters.
Chapter 7

Uncertain Entailment

Uncertain entailment is a methodology for calculating the truth value of an


uncertain formula via the maximum uncertainty principle when the truth
values of other uncertain formulas are given. In some sense, uncertain propo-
sitional logic and uncertain entailment are mutually inverse: the former at-
tempts to compose a complex proposition from simpler ones, while the latter
attempts to decompose a complex proposition into simpler ones.
This chapter will present an uncertain entailment model. In addition,
uncertain modus ponens, uncertain modus tollens and uncertain hypothetical
syllogism are deduced from the uncertain entailment model.

7.1 Uncertain Entailment Model


Assume X1 , X2 , · · · , Xn are independent uncertain propositions with un-
known truth values α1 , α2 , · · · , αn , respectively. Also assume that

Yj = fj (X1 , X2 , · · · , Xn ) (7.1)

are uncertain propositions with known truth values cj , j = 1, 2, · · · , m, re-


spectively. Now let
Z = f (X1 , X2 , · · · , Xn ) (7.2)
be an additional uncertain proposition. What is the truth value of Z? This
is just the uncertain entailment problem. In order to solve it, let us consider
what values α1 , α2 , · · · , αn may take. The first constraint is

0 ≤ αi ≤ 1, i = 1, 2, · · · , n. (7.3)

The second type of constraints is represented by

T (Yj ) = cj (7.4)

where T (Yj ) are determined by α1 , α2 , · · · , αn via

T(Yj) =
    sup_{fj(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{fj(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{fj(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{fj(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5     (7.5)

for j = 1, 2, · · · , m and

νi(xi) =
    αi,      if xi = 1                                                   (7.6)
    1 − αi,  if xi = 0

for i = 1, 2, · · · , n. Please note that the additional uncertain proposition


Z = f (X1 , X2 , · · · , Xn ) has a truth value

T(Z) =
    sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi),      if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) < 0.5
    1 − sup_{f(x1,x2,···,xn)=0} min_{1≤i≤n} νi(xi),  if sup_{f(x1,x2,···,xn)=1} min_{1≤i≤n} νi(xi) ≥ 0.5.     (7.7)

Since the truth values α1 , α2 , · · · , αn are not uniquely determined, the truth
value T (Z) is not unique either. In this case, we have to use the maximum
uncertainty principle to determine the truth value T (Z). That is, T (Z)
should be assigned the value as close to 0.5 as possible. In other words,
we should minimize the value |T (Z) − 0.5| by choosing appropriate values of
α1 , α2 , · · · , αn . The uncertain entailment model is thus written by Liu [81]
as follows,

min |T (Z) − 0.5|
subject to:
    0 ≤ αi ≤ 1, i = 1, 2, · · · , n                                      (7.8)
    T (Yj ) = cj , j = 1, 2, · · · , m
T (Yj ) = cj , j = 1, 2, · · · , m

where T (Z), T (Yj ), j = 1, 2, · · · , m are functions of unknown truth values


α1 , α2 , · · · , αn .
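For two or three propositions, model (7.8) can be solved naively by scanning
a grid of candidate truth values (α1 , α2 , · · · , αn ). The following Python
sketch is our own helper, not an algorithm given in the book; entail and
truth are hypothetical names, and equality constraints are enforced up to a
tolerance:

    from itertools import product

    def truth(f, alpha):
        # Formula (7.5)/(7.7) by enumeration of {0,1}^n.
        n = len(alpha)
        def min_nu(x):
            return min(a if xi == 1 else 1 - a for a, xi in zip(alpha, x))
        sup1 = max((min_nu(x) for x in product((0, 1), repeat=n)
                    if f(x) == 1), default=0)
        sup0 = max((min_nu(x) for x in product((0, 1), repeat=n)
                    if f(x) == 0), default=0)
        return sup1 if sup1 < 0.5 else 1 - sup0

    def entail(constraints, cs, z, n, steps=51, tol=1e-6):
        # Grid search for model (7.8): among feasible (α1,...,αn),
        # return the T(Z) closest to 0.5; None means ill-assigned.
        grid = [i / (steps - 1) for i in range(steps)]
        best = None
        for alpha in product(grid, repeat=n):
            if all(abs(truth(fj, alpha) - cj) < tol
                   for fj, cj in zip(constraints, cs)):
                t = truth(z, alpha)
                if best is None or abs(t - 0.5) < abs(best - 0.5):
                    best = t
        return best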

Example 7.1: Let A and B be independent uncertain propositions. It is


known that
T (A ∨ B) = a, T (A ∧ B) = b. (7.9)

What is the truth value of A → B? Denote the truth values of A and B by


α1 and α2 , respectively, and write

Y1 = A ∨ B, Y2 = A ∧ B, Z = A → B.

It is clear that
T (Y1 ) = α1 ∨ α2 = a,
T (Y2 ) = α1 ∧ α2 = b,
T (Z) = (1 − α1 ) ∨ α2 .
In this case, the uncertain entailment model (7.8) becomes

min |(1 − α1 ) ∨ α2 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1                                                           (7.10)
    α1 ∨ α2 = a
    α1 ∧ α2 = b.

When a ≥ b, there are only two feasible solutions (α1 , α2 ) = (a, b) and
(α1 , α2 ) = (b, a). If a + b < 1, the optimal solution produces

T (Z) = (1 − α1∗ ) ∨ α2∗ = 1 − a;

if a + b = 1, the optimal solution produces

T (Z) = (1 − α1∗ ) ∨ α2∗ = a or b;

if a + b > 1, the optimal solution produces

T (Z) = (1 − α1∗ ) ∨ α2∗ = b.

When a < b, there is no feasible solution and the truth values are ill-assigned.
In summary, from T (A ∨ B) = a and T (A ∧ B) = b we entail

T(A → B) =
    1 − a,    if a ≥ b and a + b < 1
    a or b,   if a ≥ b and a + b = 1
    b,        if a ≥ b and a + b > 1                                     (7.11)
    illness,  if a < b.
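With the entail sketch of Section 7.1 in scope, the third case of (7.11) can
be checked numerically, e.g. with a = 0.8 and b = 0.6:

    A_or_B  = lambda x: 1 if x[0] or x[1] else 0
    A_and_B = lambda x: 1 if x[0] and x[1] else 0
    A_to_B  = lambda x: 1 if (not x[0]) or x[1] else 0

    # a = 0.8 ≥ b = 0.6 and a + b > 1, so (7.11) predicts T(A → B) = b
    print(entail([A_or_B, A_and_B], [0.8, 0.6], A_to_B, n=2))  # 0.6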

Exercise 7.1: Let A, B, C be independent uncertain propositions. It is


known that

T (A → C) = a, T (B → C) = b, T (A ∨ B) = c. (7.12)

What is the truth value of C?

Exercise 7.2: Let A, B, C, D be independent uncertain propositions. It is


known that

T (A → C) = a, T (B → D) = b, T (A ∨ B) = c. (7.13)

What is the truth value of C ∨ D?

Exercise 7.3: Let A, B, C be independent uncertain propositions. It is


known that
T (A ∨ B) = a, T (¬A ∨ C) = b. (7.14)
What is the truth value of B ∨ C?

7.2 Uncertain Modus Ponens


Uncertain modus ponens was presented by Liu [81]. Let A and B be inde-
pendent uncertain propositions. Assume A and A → B have truth values a
and b, respectively. What is the truth value of B? Denote the truth values
of A and B by α1 and α2 , respectively, and write

Y1 = A, Y2 = A → B, Z = B.

It is clear that
T (Y1 ) = α1 = a,
T (Y2 ) = (1 − α1 ) ∨ α2 = b,
T (Z) = α2 .
In this case, the uncertain entailment model (7.8) becomes

min |α2 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1                                                           (7.15)
    α1 = a
    (1 − α1 ) ∨ α2 = b.

When a + b > 1, there is a unique feasible solution and then the optimal
solution is
α1∗ = a, α2∗ = b.
Thus T (B) = α2∗ = b. When a + b = 1, the feasible set is {a} × [0, b] and the
optimal solution is
α1∗ = a, α2∗ = 0.5 ∧ b.

Thus T (B) = α2∗ = 0.5 ∧ b. When a + b < 1, there is no feasible solution and
the truth values are ill-assigned. In summary, from

T (A) = a, T (A → B) = b (7.16)

we entail

T(B) =
    b,        if a + b > 1
    0.5 ∧ b,  if a + b = 1                                               (7.17)
    illness,  if a + b < 1.

This result coincides with the classical modus ponens that if both A and
A → B are true, then B is true.
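The first case of (7.17) can likewise be verified with the entail sketch of
Section 7.1 (assumed to be in scope):

    A_     = lambda x: x[0]                             # Y1 = A
    A_to_B = lambda x: 1 if (not x[0]) or x[1] else 0   # Y2 = A → B
    B_     = lambda x: x[1]                             # Z = B

    # a = 0.8, b = 0.9, so a + b > 1 and (7.17) predicts T(B) = b = 0.9
    print(entail([A_, A_to_B], [0.8, 0.9], B_, n=2))  # 0.9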

7.3 Uncertain Modus Tollens


Uncertain modus tollens was presented by Liu [81]. Let A and B be inde-
pendent uncertain propositions. Assume A → B and B have truth values a
and b, respectively. What is the truth value of A? Denote the truth values
of A and B by α1 and α2 , respectively, and write

Y1 = A → B, Y2 = B, Z = A.

It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = α2 = b,
T (Z) = α1 .
In this case, the uncertain entailment model (7.8) becomes

min |α1 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1                                                           (7.18)
    (1 − α1 ) ∨ α2 = a
    α2 = b.

When a > b, there is a unique feasible solution and then the optimal solution
is
α1∗ = 1 − a, α2∗ = b.
Thus T (A) = α1∗ = 1 − a. When a = b, the feasible set is [1 − a, 1] × {b} and
the optimal solution is

α1∗ = (1 − a) ∨ 0.5, α2∗ = b.



Thus T (A) = α1∗ = (1 − a) ∨ 0.5. When a < b, there is no feasible solution


and the truth values are ill-assigned. In summary, from

T (A → B) = a, T (B) = b (7.19)

we entail

T(A) =
    1 − a,          if a > b
    (1 − a) ∨ 0.5,  if a = b                                             (7.20)
    illness,        if a < b.

This result coincides with the classical modus tollens that if A → B is true
and B is false, then A is false.

7.4 Uncertain Hypothetical Syllogism


Uncertain hypothetical syllogism was presented by Liu [81]. Let A, B, C be
independent uncertain propositions. Assume A → B and B → C have truth
values a and b, respectively. What is the truth value of A → C? Denote the
truth values of A, B, C by α1 , α2 , α3 , respectively, and write

Y1 = A → B, Y2 = B → C, Z = A → C.

It is clear that
T (Y1 ) = (1 − α1 ) ∨ α2 = a,
T (Y2 ) = (1 − α2 ) ∨ α3 = b,
T (Z) = (1 − α1 ) ∨ α3 .
In this case, the uncertain entailment model (7.8) becomes

min |(1 − α1 ) ∨ α3 − 0.5|
subject to:
    0 ≤ α1 ≤ 1
    0 ≤ α2 ≤ 1
    0 ≤ α3 ≤ 1                                                           (7.21)
    (1 − α1 ) ∨ α2 = a
    (1 − α2 ) ∨ α3 = b.

Denote the optimal solution by (α1∗ , α2∗ , α3∗ ). When a ∧ b ≥ 0.5, we have

T (A → C) = (1 − α1∗ ) ∨ α3∗ = a ∧ b.

When a + b ≥ 1 and a ∧ b < 0.5, we have

T (A → C) = (1 − α1∗ ) ∨ α3∗ = 0.5.



When a + b < 1, there is no feasible solution and the truth values are ill-
assigned. In summary, from

T (A → B) = a, T (B → C) = b (7.22)

we entail

T(A → C) =
    a ∧ b,    if a ≥ 0.5 and b ≥ 0.5
    0.5,      if a + b ≥ 1 and a ∧ b < 0.5                               (7.23)
    illness,  if a + b < 1.

This result coincides with the classical hypothetical syllogism that if both
A → B and B → C are true, then A → C is true.

7.5 Bibliographic Notes


Uncertain entailment was proposed by Liu [81] for determining the truth
value of an uncertain proposition via the maximum uncertainty principle
when the truth values of other uncertain propositions are given.
From the uncertain entailment model, Liu [81] deduced uncertain modus
ponens, uncertain modus tollens, and uncertain hypothetical syllogism. After
that, Yang-Gao-Ni [165] investigated the uncertain resolution principle.
Chapter 8

Uncertain Set

Uncertain set was first proposed by Liu [82] in 2010 for modelling unsharp
concepts. This chapter will introduce the concepts of uncertain set, member-
ship function, independence, expected value, variance, distance, and entropy.
This chapter will also introduce the operational law for uncertain sets via
membership functions or inverse membership functions. Finally, conditional
uncertain set and conditional membership function are documented.

8.1 Uncertain Set


Roughly speaking, an uncertain set is a set-valued function on an uncertainty
space, and attempts to model “unsharp concepts” that are essentially sets
but their boundaries are not sharply described (because of the ambiguity of
human language). Some typical examples include “young”, “tall”, “warm”,
and “most”. A formal definition is given as follows.
Definition 8.1 (Liu [82]) An uncertain set is a function ξ from an uncer-
tainty space (Γ, L, M) to a collection of sets of real numbers such that both
{B ⊂ ξ} and {ξ ⊂ B} are events for any Borel set B of real numbers.

Remark 8.1: Note that the events {B ⊂ ξ} and {ξ ⊂ B} are subsets of the
universal set Γ, i.e.,
{B ⊂ ξ} = {γ ∈ Γ | B ⊂ ξ(γ)}, (8.1)
{ξ ⊂ B} = {γ ∈ Γ | ξ(γ) ⊂ B}. (8.2)

Remark 8.2: It is clear that uncertain set (Liu [82]) is very different from
random set (Robbins [130] and Matheron [113]) and fuzzy set (Zadeh [192]).
The essential difference among them is that different measures are used, i.e.,
random set uses probability measure, fuzzy set uses possibility measure and
uncertain set uses uncertain measure.

Remark 8.3: What is the difference between uncertain variable and un-
certain set? Both of them belong to the same broad category of uncertain
concepts. However, they are differentiated by their mathematical definitions:
the former refers to one value, while the latter to a collection of values. Es-
sentially, the difference between uncertain variable and uncertain set focuses
on the property of exclusivity. If the concept has exclusivity, then it is an
uncertain variable. Otherwise, it is an uncertain set. Consider the statement
“John is a young man”. If we are interested in John’s real age, then “young”
is an uncertain variable because it is an exclusive concept (John’s age can-
not be more than one value). For example, if John is 20 years old, then it
is impossible that John is 25 years old. In other words, “John is 20 years
old” does exclude the possibility that “John is 25 years old”. By contrast,
if we are interested in what ages can be regarded “young”, then “young” is
an uncertain set because the concept now has no exclusivity. For example,
both 20-year-old and 25-year-old men can be considered “young”. In other
words, “a 20-year-old man is young” does not exclude the possibility that “a
25-year-old man is young”.

Example 8.1: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 } with


power set and M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2. Then

ξ(γ) =
    [1, 3],  if γ = γ1
    [2, 4],  if γ = γ2                                                   (8.3)
    [3, 5],  if γ = γ3

is an uncertain set. See Figure 8.1. Furthermore, we have

M{2 ∈ ξ} = M{γ | 2 ∈ ξ(γ)} = M{γ1 , γ2 } = 0.8, (8.4)

M{[3, 4] ⊂ ξ} = M{γ | [3, 4] ⊂ ξ(γ)} = M{γ2 , γ3 } = 0.4, (8.5)


M{ξ ⊂ [1, 5]} = M{γ | ξ(γ) ⊂ [1, 5]} = M{γ1 , γ2 , γ3 } = 1. (8.6)

[Figure 8.1: An Uncertain Set]
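Example 8.1 is small enough to reproduce in code. The sketch below (ours,
with hypothetical helper names) hard-codes the uncertain measure of every
event of the finite space; the value M{γ1 , γ3 } = 0.7 is not used in the example
but follows from the duality axiom:

    # A sketch of Example 8.1 with the uncertainty space hard-coded.
    M = {
        frozenset(): 0.0,
        frozenset({'g1'}): 0.6,
        frozenset({'g2'}): 0.3,
        frozenset({'g3'}): 0.2,
        frozenset({'g1', 'g2'}): 0.8,  # = 1 - M{g3}
        frozenset({'g1', 'g3'}): 0.7,  # = 1 - M{g2}, by duality
        frozenset({'g2', 'g3'}): 0.4,  # = 1 - M{g1}
        frozenset({'g1', 'g2', 'g3'}): 1.0,
    }
    xi = {'g1': (1, 3), 'g2': (2, 4), 'g3': (3, 5)}  # ξ(γ) as closed intervals

    def m_x_in_xi(x):
        # M{x ∈ ξ} = M{γ | x ∈ ξ(γ)}
        return M[frozenset(g for g, (lo, hi) in xi.items() if lo <= x <= hi)]

    def m_B_in_xi(b_lo, b_hi):
        # M{[b_lo, b_hi] ⊂ ξ} = M{γ | [b_lo, b_hi] ⊂ ξ(γ)}
        return M[frozenset(g for g, (lo, hi) in xi.items()
                           if lo <= b_lo and b_hi <= hi)]

    print(m_x_in_xi(2))     # 0.8, as in (8.4)
    print(m_B_in_xi(3, 4))  # 0.4, as in (8.5)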



Example 8.2: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Then

ξ(γ) = [0, 3γ], ∀γ ∈ Γ (8.7)

is an uncertain set. Furthermore, we have

M{2 ∈ ξ} = M{γ | 2 ∈ ξ(γ)} = M{[2/3, 1]} = 1/3, (8.8)

M{[0, 1] ⊂ ξ} = M{γ | [0, 1] ⊂ ξ(γ)} = M{[1/3, 1]} = 2/3, (8.9)


M{ξ ⊂ [0, 3)} = M{γ | ξ(γ) ⊂ [0, 3)} = M{[0, 1)} = 1. (8.10)

Example 8.3: A crisp set A of real numbers is a special uncertain set on


an uncertainty space (Γ, L, M) defined by

ξ(γ) ≡ A, ∀γ ∈ Γ. (8.11)

Furthermore, for any Borel set B of real numbers, we have

M{B ⊂ ξ} = M{γ | B ⊂ ξ(γ)} = M{Γ} = 1, if B ⊂ A,                        (8.12)

M{B ⊂ ξ} = M{γ | B ⊂ ξ(γ)} = M{∅} = 0, if B ⊄ A,                        (8.13)

M{ξ ⊂ B} = M{γ | ξ(γ) ⊂ B} = M{Γ} = 1, if A ⊂ B,                        (8.14)

M{ξ ⊂ B} = M{γ | ξ(γ) ⊂ B} = M{∅} = 0, if A ⊄ B.                        (8.15)

Example 8.4: Let ξ be an uncertain set and let x be a real number. Then

{x ∈ ξ}ᶜ = {γ | x ∈ ξ(γ)}ᶜ = {γ | x ∉ ξ(γ)} = {x ∉ ξ}.

Thus {x ∈ ξ} and {x ∉ ξ} are opposite events. Furthermore, by the duality
axiom, we obtain

M{x ∈ ξ} + M{x ∉ ξ} = 1.                                                 (8.16)

Exercise 8.1: Let ξ be an uncertain set and let B be a Borel set of real
numbers. Show that {B ⊂ ξ} and {B ⊄ ξ} are opposite events, and

M{B ⊂ ξ} + M{B ⊄ ξ} = 1.                                                 (8.17)

Exercise 8.2: Let ξ be an uncertain set and let B be a Borel set of real
numbers. Show that {ξ ⊂ B} and {ξ ⊄ B} are opposite events, and

M{ξ ⊂ B} + M{ξ ⊄ B} = 1.                                                 (8.18)

Exercise 8.3: Let ξ and η be two uncertain sets. Show that {ξ ⊂ η} and
{ξ ⊄ η} are opposite events, and

M{ξ ⊂ η} + M{ξ ⊄ η} = 1.                                                 (8.19)



Exercise 8.4: Let ∅ be the empty set, and let ξ be an uncertain set. Show
that
M{∅ ⊂ ξ} = 1. (8.20)

Exercise 8.5: Let ξ be an uncertain set, and let ℜ be the set of real numbers.
Show that
M{ξ ⊂ ℜ} = 1.                                                            (8.21)

Exercise 8.6: Let ξ be an uncertain set. Show that ξ is always included in


itself, i.e.,
M{ξ ⊂ ξ} = 1. (8.22)

Theorem 8.1 (Liu [99], Fundamental Relationship) Let ξ be an uncertain


set, and let B be a crisp set of real numbers. Then
{B ⊂ ξ} = ∩_{x∈B} {x ∈ ξ},                                               (8.23)

{ξ ⊂ B} = ∩_{x∈Bᶜ} {x ∉ ξ}.                                              (8.24)

Proof: For any γ ∈ {B ⊂ ξ}, we have B ⊂ ξ(γ). Thus x ∈ ξ(γ) whenever
x ∈ B. This means γ ∈ {x ∈ ξ} and then {B ⊂ ξ} ⊂ {x ∈ ξ} for any x ∈ B.
Hence

{B ⊂ ξ} ⊂ ∩_{x∈B} {x ∈ ξ}.                                               (8.25)

On the other hand, for any γ ∈ ∩_{x∈B} {x ∈ ξ}, we have x ∈ ξ(γ) whenever
x ∈ B. Thus B ⊂ ξ(γ), i.e., γ ∈ {B ⊂ ξ}. This means

{B ⊂ ξ} ⊃ ∩_{x∈B} {x ∈ ξ}.                                               (8.26)

It follows from (8.25) and (8.26) that (8.23) holds. The first equation is
proved. Next we verify the second equation. For any γ ∈ {ξ ⊂ B}, we have
ξ(γ) ⊂ B. Thus x ∉ ξ(γ) whenever x ∈ Bᶜ. This means γ ∈ {x ∉ ξ} and
then {ξ ⊂ B} ⊂ {x ∉ ξ} for any x ∈ Bᶜ. Hence

{ξ ⊂ B} ⊂ ∩_{x∈Bᶜ} {x ∉ ξ}.                                              (8.27)

On the other hand, for any γ ∈ ∩_{x∈Bᶜ} {x ∉ ξ}, we have x ∉ ξ(γ) whenever
x ∈ Bᶜ. Thus ξ(γ) ⊂ B, i.e., γ ∈ {ξ ⊂ B}. This means

{ξ ⊂ B} ⊃ ∩_{x∈Bᶜ} {x ∉ ξ}.                                              (8.28)

It follows from (8.27) and (8.28) that (8.24) holds. The theorem is proved.

Definition 8.2 An uncertain set ξ on the uncertainty space (Γ, L, M) is said


to be (i) nonempty if
ξ(γ) ≠ ∅ (8.29)
for almost all γ ∈ Γ, (ii) empty if

ξ(γ) = ∅ (8.30)

for almost all γ ∈ Γ, and (iii) half-empty if otherwise.

Example 8.5: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Then

ξ(γ) = [0, γ], ∀γ ∈ Γ (8.31)

is a nonempty uncertain set,

ξ(γ) = ∅, ∀γ ∈ Γ (8.32)

is an empty uncertain set, and


ξ(γ) =
    ∅,       if γ > 0.8                                                  (8.33)
    [0, γ],  if γ ≤ 0.8

is a half-empty uncertain set.

Union, Intersection and Complement


Definition 8.3 Let ξ and η be two uncertain sets on the uncertainty space
(Γ, L, M). Then (i) the union ξ ∪ η of the uncertain sets ξ and η is

(ξ ∪ η)(γ) = ξ(γ) ∪ η(γ), ∀γ ∈ Γ; (8.34)

(ii) the intersection ξ ∩ η of the uncertain sets ξ and η is

(ξ ∩ η)(γ) = ξ(γ) ∩ η(γ), ∀γ ∈ Γ; (8.35)

(iii) the complement ξ c of the uncertain set ξ is

ξ c (γ) = ξ(γ)c , ∀γ ∈ Γ. (8.36)



Example 8.6: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 } with


power set and M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2. Let ξ and η be two
uncertain sets,

ξ(γ) =
    [1, 2],  if γ = γ1
    [1, 3],  if γ = γ2
    [1, 4],  if γ = γ3,

η(γ) =
    (2, 3),  if γ = γ1
    (2, 4),  if γ = γ2
    (2, 5),  if γ = γ3.

Then their union is

(ξ ∪ η)(γ) =
    [1, 3),  if γ = γ1
    [1, 4),  if γ = γ2
    [1, 5),  if γ = γ3,

their intersection is

(ξ ∩ η)(γ) =
    ∅,       if γ = γ1
    (2, 3],  if γ = γ2
    (2, 4],  if γ = γ3,

and their complement sets are

ξᶜ(γ) =
    (−∞, 1) ∪ (2, +∞),  if γ = γ1
    (−∞, 1) ∪ (3, +∞),  if γ = γ2
    (−∞, 1) ∪ (4, +∞),  if γ = γ3,

ηᶜ(γ) =
    (−∞, 2] ∪ [3, +∞),  if γ = γ1
    (−∞, 2] ∪ [4, +∞),  if γ = γ2
    (−∞, 2] ∪ [5, +∞),  if γ = γ3.

Theorem 8.2 (Idempotent Law) Let ξ be an uncertain set. Then we have

ξ ∪ ξ = ξ, ξ ∩ ξ = ξ. (8.37)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is
(ξ ∪ ξ)(γ) = ξ(γ) ∪ ξ(γ) = ξ(γ).
Thus we have ξ ∪ ξ = ξ. In addition, the intersection is

(ξ ∩ ξ)(γ) = ξ(γ) ∩ ξ(γ) = ξ(γ).

Thus we have ξ ∩ ξ = ξ.

Theorem 8.3 (Double-Negation Law) Let ξ be an uncertain set. Then we


have
(ξ c )c = ξ. (8.38)
Proof: For each γ ∈ Γ, it follows from the definition of complement that
(ξ c )c (γ) = (ξ c (γ))c = (ξ(γ)c )c = ξ(γ).
Thus we have (ξ c )c = ξ.
Theorem 8.4 (Law of Excluded Middle and Law of Contradiction) Let ξ be
an uncertain set and let ξ c be its complement. Then
ξ ∪ ξᶜ ≡ ℜ,  ξ ∩ ξᶜ ≡ ∅.                                                 (8.39)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
the union is

(ξ ∪ ξᶜ)(γ) = ξ(γ) ∪ ξᶜ(γ) = ξ(γ) ∪ ξ(γ)ᶜ ≡ ℜ.

Thus we have ξ ∪ ξᶜ ≡ ℜ. In addition, the intersection is

(ξ ∩ ξᶜ)(γ) = ξ(γ) ∩ ξᶜ(γ) = ξ(γ) ∩ ξ(γ)ᶜ ≡ ∅.

Thus we have ξ ∩ ξᶜ ≡ ∅.
Theorem 8.5 (Commutative Law) Let ξ and η be uncertain sets. Then we
have
ξ ∪ η = η ∪ ξ, ξ ∩ η = η ∩ ξ. (8.40)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
(ξ ∪ η)(γ) = ξ(γ) ∪ η(γ) = η(γ) ∪ ξ(γ) = (η ∪ ξ)(γ).
Thus we have ξ ∪ η = η ∪ ξ. In addition, it follows that
(ξ ∩ η)(γ) = ξ(γ) ∩ η(γ) = η(γ) ∩ ξ(γ) = (η ∩ ξ)(γ).
Thus we have ξ ∩ η = η ∩ ξ.
Theorem 8.6 (Associative Law) Let ξ, η, τ be uncertain sets. Then we have
(ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ ), (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ ). (8.41)
Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that
((ξ ∪ η) ∪ τ )(γ) = (ξ(γ) ∪ η(γ)) ∪ τ (γ)
= ξ(γ) ∪ (η(γ) ∪ τ (γ)) = (ξ ∪ (η ∪ τ ))(γ).
Thus we have (ξ ∪ η) ∪ τ = ξ ∪ (η ∪ τ ). In addition, it follows that
((ξ ∩ η) ∩ τ )(γ) = (ξ(γ) ∩ η(γ)) ∩ τ (γ)
= ξ(γ) ∩ (η(γ) ∩ τ (γ)) = (ξ ∩ (η ∩ τ ))(γ).
Thus we have (ξ ∩ η) ∩ τ = ξ ∩ (η ∩ τ ).

Theorem 8.7 (Distributive Law) Let ξ, η, τ be uncertain sets. Then we have

ξ ∪ (η ∩ τ ) = (ξ ∪ η) ∩ (ξ ∪ τ ), ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ). (8.42)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ (η ∩ τ ))(γ) = ξ(γ) ∪ (η(γ) ∩ τ (γ))


= (ξ(γ) ∪ η(γ)) ∩ (ξ(γ) ∪ τ (γ))
= ((ξ ∪ η) ∩ (ξ ∪ τ ))(γ).

Thus we have ξ ∪ (η ∩ τ ) = (ξ ∪ η) ∩ (ξ ∪ τ ). In addition, it follows that

(ξ ∩ (η ∪ τ ))(γ) = ξ(γ) ∩ (η(γ) ∪ τ (γ))


= (ξ(γ) ∩ η(γ)) ∪ (ξ(γ) ∩ τ (γ))
= ((ξ ∩ η) ∪ (ξ ∩ τ ))(γ).

Thus we have ξ ∩ (η ∪ τ ) = (ξ ∩ η) ∪ (ξ ∩ τ ).

Theorem 8.8 (Absorption Law) Let ξ and η be uncertain sets. Then we


have
ξ ∪ (ξ ∩ η) = ξ, ξ ∩ (ξ ∪ η) = ξ. (8.43)

Proof: For each γ ∈ Γ, it follows from the definition of uncertain set that

(ξ ∪ (ξ ∩ η))(γ) = ξ(γ) ∪ (ξ(γ) ∩ η(γ)) = ξ(γ).

Thus we have ξ ∪ (ξ ∩ η) = ξ. In addition, since

(ξ ∩ (ξ ∪ η))(γ) = ξ(γ) ∩ (ξ(γ) ∪ η(γ)) = ξ(γ),

we get ξ ∩ (ξ ∪ η) = ξ.

Theorem 8.9 (De Morgan’s Law) Let ξ and η be uncertain sets. Then we
have
(ξ ∪ η)c = ξ c ∩ η c , (ξ ∩ η)c = ξ c ∪ η c . (8.44)

Proof: For each γ ∈ Γ, it follows from the definition of complement that

(ξ ∪ η)c (γ) = (ξ(γ) ∪ η(γ))c = ξ(γ)c ∩ η(γ)c = (ξ c ∩ η c )(γ).

Thus we have (ξ ∪ η)c = ξ c ∩ η c . In addition, since

(ξ ∩ η)c (γ) = (ξ(γ) ∩ η(γ))c = ξ(γ)c ∪ η(γ)c = (ξ c ∪ η c )(γ),

we get (ξ ∩ η)c = ξ c ∪ η c .

Exercise 8.7: Let ξ be an uncertain set and let x be a real number. Show
that
{x ∈ ξᶜ} = {x ∉ ξ}                                                       (8.45)
and
M{x ∈ ξᶜ} = M{x ∉ ξ}.                                                    (8.46)

Exercise 8.8: Let ξ be an uncertain set and let x be a real number. Show
that {x ∈ ξ} and {x ∈ ξ c } are opposite events, and

M{x ∈ ξ} + M{x ∈ ξ c } = 1. (8.47)

Exercise 8.9: Let ξ be an uncertain set and let B be a Borel set of real
numbers. Show that {B ⊂ ξ} and {B ⊂ ξ c } are not necessarily opposite
events.

Exercise 8.10: Let ξ and η be two uncertain sets. Show that {ξ ⊂ η} and
{η c ⊂ ξ c } are identical events, i.e.,

{ξ ⊂ η} = {η c ⊂ ξ c }. (8.48)

Exercise 8.11: Let ξ and η be two uncertain sets. Show that {ξ ⊂ η} and
{ξ ⊂ η c } are not necessarily opposite events.

Function of Uncertain Sets


Definition 8.4 Let ξ1 , ξ2 , · · · , ξn be uncertain sets on the uncertainty space
(Γ, L, M), and let f be a measurable function. Then ξ = f (ξ1 , ξ2 , · · · , ξn ) is
an uncertain set defined by

ξ(γ) = f (ξ1 (γ), ξ2 (γ), · · · , ξn (γ)), ∀γ ∈ Γ. (8.49)

Example 8.7: Let ξ be an uncertain set on the uncertainty space (Γ, L, M)


and let A be a crisp set of real numbers. Then ξ + A is also an uncertain set
determined by
(ξ + A)(γ) = ξ(γ) + A, ∀γ ∈ Γ. (8.50)

Example 8.8: Note that the empty set ∅ annihilates every other set. For
example, A + ∅ = ∅ and A × ∅ = ∅. Take an uncertainty space (Γ, L, M) to
be {γ1 , γ2 , γ3 } with power set and M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2.
Define two uncertain sets,

ξ(γ) =
    ∅,       if γ = γ1
    [1, 3],  if γ = γ2
    [1, 4],  if γ = γ3,

η(γ) =
    (2, 3),  if γ = γ1
    (2, 4),  if γ = γ2
    (2, 5),  if γ = γ3.

Then their sum is

(ξ + η)(γ) =
    ∅,       if γ = γ1
    (3, 7),  if γ = γ2
    (3, 9),  if γ = γ3,

and their product is

(ξ × η)(γ) =
    ∅,        if γ = γ1
    (2, 12),  if γ = γ2
    (2, 20),  if γ = γ3.

Exercise 8.12: Let ξ be an uncertain set. (i) Show that ξ + ξ ≢ 2ξ. (ii) Do
you think the same of crisp set?

Exercise 8.13: Let ξ be an uncertain set. What are the potential values of
the difference ξ − ξ?

8.2 Membership Function


It is well-known that a crisp set can be described by its indicator function.
As a generalization of indicator function, membership function will be used
to describe an uncertain set.

Definition 8.5 (Liu [88]) An uncertain set ξ is said to have a membership


function µ if for any Borel set B of real numbers, we have

M{B ⊂ ξ} = inf_{x∈B} µ(x),                                               (8.51)

M{ξ ⊂ B} = 1 − sup_{x∈Bᶜ} µ(x).                                          (8.52)

The above equations will be called measure inversion formulas.

Theorem 8.10 Let ξ be an uncertain set whose membership function µ ex-


ists. Then
µ(x) = M{x ∈ ξ} (8.53)
for any number x.

Proof: For any number x, it follows from the first measure inversion formula
that
M{x ∈ ξ} = M{{x} ⊂ ξ} = inf_{y∈{x}} µ(y) = µ(x).

The theorem is proved.
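When B is an interval and µ is given as a function, the two measure inversion
formulas can be approximated numerically. The following Python sketch is
our own, with inf and sup approximated on a grid; the function names are
hypothetical:

    def m_B_subset_xi(mu, lo, hi, steps=10001):
        # M{B ⊂ ξ} = inf of µ(x) over x in B = [lo, hi], on a grid
        xs = (lo + (hi - lo) * i / (steps - 1) for i in range(steps))
        return min(mu(x) for x in xs)

    def m_xi_subset_B(mu, lo, hi, support=(-100.0, 100.0), steps=100001):
        # M{ξ ⊂ B} = 1 − sup of µ(x) over x outside B, scanned on a grid;
        # the support must extend well beyond B.
        a, b = support
        xs = (a + (b - a) * i / (steps - 1) for i in range(steps))
        return 1 - max(mu(x) for x in xs if x < lo or x > hi)

    mu = lambda x: max(0.0, 1 - abs(x))  # µ(x) = 1 − |x|, cf. Exercise 8.17
    print(m_B_subset_xi(mu, -0.5, 0.5))  # 0.5
    print(m_xi_subset_B(mu, -2.0, 2.0))  # 1.0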



[Figure 8.2: M{B ⊂ ξ} = inf_{x∈B} µ(x) and M{ξ ⊂ B} = 1 − sup_{x∈Bᶜ} µ(x)]

Remark 8.4: The value of µ(x) is just the membership degree that x belongs
to the uncertain set ξ. If µ(x) = 1, then x completely belongs to ξ; if µ(x) = 0,
then x does not belong to ξ at all. Thus the larger the value of µ(x) is, the
more true x belongs to ξ.

Theorem 8.11 Let ξ be an uncertain set with membership function µ. Then

M{x ∉ ξ} = 1 − µ(x)                                                      (8.54)

for any number x.

Proof: Since {x ∉ ξ} and {x ∈ ξ} are opposite events, it follows from the
duality axiom of uncertain measure that

M{x ∉ ξ} = 1 − M{x ∈ ξ} = 1 − µ(x).

The theorem is proved.

Remark 8.5: Theorem 8.11 states that if an element x belongs to an uncer-


tain set with membership degree α, then x does not belong to the uncertain
set with membership degree 1 − α.

Theorem 8.12 Let ξ be an uncertain set with membership function µ. Then

M{x ∈ ξ c } = 1 − µ(x) (8.55)

for any number x.

Proof: Since {x ∈ ξ c } and {x ∈ ξ} are opposite events, it follows from the


duality axiom of uncertain measure that

M{x ∈ ξ c } = 1 − M{x ∈ ξ} = 1 − µ(x).

The theorem is proved.



Remark 8.6: Theorem 8.12 states that if an element x belongs to an un-


certain set with membership degree α, then x belongs to its complement set
with membership degree 1 − α.

Remark 8.7: For any membership function µ, it is clear that 0 ≤ µ(x) ≤ 1.


We will always take

inf_{x∈∅} µ(x) = 1,  sup_{x∈∅} µ(x) = 0.                                 (8.56)

Thus we have

M{∅ ⊂ ξ} = 1 = inf_{x∈∅} µ(x).

That is, the first measure inversion formula always holds for B = ∅. Further-
more, we have

M{ξ ⊂ ℜ} = 1 = 1 − sup_{x∈∅} µ(x).

That is, the second measure inversion formula always holds for B = ℜ.

Example 8.9: The set ℜ of real numbers is a special uncertain set ξ(γ) ≡ ℜ.
Such an uncertain set has a membership function

µ(x) ≡ 1                                                                 (8.57)

that is just the indicator function of ℜ. In order to prove it, we must verify
that ℜ and µ simultaneously satisfy the two measure inversion formulas (8.51)
and (8.52). Let B be a Borel set of real numbers. If B = ∅, then the first
measure inversion formula always holds. If B ≠ ∅, then

M{B ⊂ ξ} = M{Γ} = 1 = inf_{x∈B} µ(x).

The first measure inversion formula is verified. Next we prove the second
measure inversion formula. If B = ℜ, then the second measure inversion
formula always holds. If B ≠ ℜ, then

M{ξ ⊂ B} = M{∅} = 0 = 1 − sup_{x∈Bᶜ} µ(x).

The second measure inversion formula is verified. Therefore, the uncertain
set ξ(γ) ≡ ℜ has a membership function µ(x) ≡ 1.

Exercise 8.14: The empty set ∅ is a special uncertain set ξ(γ) ≡ ∅. Show
that such an uncertain set has a membership function

µ(x) ≡ 0 (8.58)

that is just the indicator function of ∅.



Exercise 8.15: A crisp set A of real numbers is a special uncertain set


ξ(γ) ≡ A. Show that such an uncertain set has a membership function
µ(x) =
    1,  if x ∈ A                                                         (8.59)
    0,  if x ∉ A

that is just the indicator function of A.

Exercise 8.16: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 } with


power set and M{γ1 } = 0.4, M{γ2 } = 0.6. Show that the uncertain set
ξ(γ) =
    ∅,  if γ = γ1
    A,  if γ = γ2

has a membership function

µ(x) =
    0.6,  if x ∈ A                                                       (8.60)
    0,    if x ∉ A

where A is a crisp set of real numbers.

Exercise 8.17: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. (i) Show that the uncertain set
ξ(γ) = [−γ, γ] , ∀γ ∈ [0, 1] (8.61)
has a membership function
µ(x) =
    1 − |x|,  if −1 ≤ x ≤ 1                                              (8.62)
    0,        otherwise.

(ii) What is the membership function of ξ(γ) = [γ − 1, 1 − γ]? (iii) What do


those two uncertain sets make you think about? (iv) Design a third uncertain
set whose membership function is also (8.62).

Exercise 8.18: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 } with


power set and M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2. Define an uncertain
set

ξ(γ) =
    [2, 3],  if γ = γ1
    [0, 5],  if γ = γ2
    [1, 4],  if γ = γ3.

(i) What is the membership function of ξ? (ii) Please justify your answer.
(Hint: If ξ does have a membership function, then µ(x) = M{x ∈ ξ}.)

Exercise 8.19: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Define an uncertain set
ξ(γ) = [γ², +∞).                                                         (8.63)

(i) What is the membership function of ξ? (ii) What is the membership


function of the complement set ξ c ? (iii) What do those two uncertain sets
make you think about?

Exercise 8.20: It is not true that every uncertain set has a membership
function. Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 } with power set
and M{γ1 } = 0.4, M{γ2 } = 0.6. Show that the uncertain set
ξ(γ) =
    [1, 3],  if γ = γ1                                                   (8.64)
    [2, 4],  if γ = γ2

has no membership function. (Hint: If ξ does have a membership function,
then by using µ(x) = M{x ∈ ξ}, we get

µ(x) =
    0.4,  if 1 ≤ x < 2
    1,    if 2 ≤ x ≤ 3                                                   (8.65)
    0.6,  if 3 < x ≤ 4
    0,    otherwise.
Verify that ξ and µ cannot simultaneously satisfy the two measure inversion
formulas (8.51) and (8.52).)

Exercise 8.21: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Show that the uncertain set

ξ(γ) = [γ, γ + 1] , ∀γ ∈ Γ (8.66)

has no membership function.

Definition 8.6 An uncertain set ξ is called triangular if it has a membership


function

µ(x) =
    (x − a)/(b − a),  if a ≤ x ≤ b                                       (8.67)
    (x − c)/(b − c),  if b ≤ x ≤ c
denoted by (a, b, c) where a, b, c are real numbers with a < b < c.

Definition 8.7 An uncertain set ξ is called trapezoidal if it has a member-


ship function

µ(x) =
    (x − a)/(b − a),  if a ≤ x ≤ b
    1,                if b ≤ x ≤ c                                       (8.68)
    (x − d)/(c − d),  if c ≤ x ≤ d
denoted by (a, b, c, d) where a, b, c, d are real numbers with a < b < c < d.
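Both families are easy to transcribe into code. The following Python sketch
(ours) implements (8.67) and (8.68); the young example previews the
membership function (8.69) of the next subsection:

    def triangular(a, b, c):
        # Membership function (8.67), denoted (a, b, c).
        def mu(x):
            if a <= x <= b:
                return (x - a) / (b - a)
            if b < x <= c:
                return (x - c) / (b - c)
            return 0.0
        return mu

    def trapezoidal(a, b, c, d):
        # Membership function (8.68), denoted (a, b, c, d).
        def mu(x):
            if a <= x <= b:
                return (x - a) / (b - a)
            if b < x <= c:
                return 1.0
            if c < x <= d:
                return (x - d) / (c - d)
            return 0.0
        return mu

    young = trapezoidal(15, 20, 35, 45)     # matches (8.69) on [15, 45]
    print(young(18), young(30), young(40))  # 0.6 1.0 0.5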

[Figure 8.3: Triangular and Trapezoidal Membership Functions]

What is “young”?

Sometimes we say “those students are young”. What ages can be considered
“young”? In this case, “young” may be regarded as an uncertain set whose
membership function is

µ(x) =
    0,            if x ≤ 15
    (x − 15)/5,   if 15 ≤ x ≤ 20
    1,            if 20 ≤ x ≤ 35                                         (8.69)
    (45 − x)/10,  if 35 ≤ x ≤ 45
    0,            if x ≥ 45.

Note that we do not say “young” if the age is below 15.

[Figure 8.4: Membership Function of “young”]

What is “tall”?

Sometimes we say “those sportsmen are tall”. What heights (centimeters)


can be considered “tall”? In this case, “tall” may be regarded as an uncertain

set whose membership function is

µ(x) =
    0,             if x ≤ 180
    (x − 180)/5,   if 180 ≤ x ≤ 185
    1,             if 185 ≤ x ≤ 195                                      (8.70)
    (200 − x)/5,   if 195 ≤ x ≤ 200
    0,             if x ≥ 200.

Note that we do not say “tall” if the height is over 200cm.

µ(x)
..
.........
... .........................................................................................
.... ..... ......
.. .... .. ...
... ... .. .. ....
... ... ..
... ... .. .. ....
... ... .. .. ...
.. .. ...
... ... ...
... ..
. .. .. ...
... ..
. .. .. ...
... .... ..
..
..
..
...
...
... ... .. .. ...
... ... ...
... .. .. .. ...
. .. .. ...
... ... ...
... ... .. .. ...
... ... .. .. ...
... ... .. .. ...
. . .. .. .
..........................................................................................................................................................................................................................................................
. .
.
x
..
180cm 185cm ... 195cm 200cm

Figure 8.5: Membership Function of “tall”

What is “warm”?

Sometimes we say “those days are warm”. What temperatures can be con-
sidered “warm”? In this case, “warm” may be regarded as an uncertain set
whose membership function is


 0, if x ≤ 15

(x − 15)/3, if 15 ≤ x ≤ 18




µ(x) = 1, if 18 ≤ x ≤ 24 (8.71)

(28 − x)/4, if 24 ≤ x ≤ 28





0, if 28 ≤ x.

What is “most”?

Sometimes we say “most students are boys”. What percentages can be con-
sidered “most”? In this case, “most” may be regarded as an uncertain set
Section 8.2 - Membership Function 189

µ(x)
....
........
..
... ........................................................................
... ..... ......
... ... . .. ...
... ... ... .. ....
... .. ..
. .. ....
... ... .. .. ...
... .. ..
. .. ...
... .. .. .. ...
. ...
... ... .. . ...
... .. .. .
. ...
. .
... ..
. .. .
.
...
... ... .. . ...
...
... .. .. .
.
. . ...
... ..
. .. .
. ...
... ... .. . ...
... .. .. .
. ...
. . ...
... ..
. .. .
. ...
.
. .. .. .
...................................................................................................................................................................................................................................
.
x
..
... ◦ ◦ ◦ ◦
15 C 18 C 24 C 28 C

Figure 8.6: Membership Function of “warm”

whose membership function is




 0, if 0 ≤ x ≤ 0.7

20(x − 0.7), if 0.7 ≤ x ≤ 0.75




µ(x) = 1, if 0.75 ≤ x ≤ 0.85 (8.72)

20(0.9 − x), if 0.85 ≤ x ≤ 0.9





0, if 0.9 ≤ x ≤ 1.

µ(x)
.
....
.......
..
... .....................................................................
... ..... ....
... ... . .. ....
... ... ... .. ...
... ... .. .. ....
... .. ..
. .. ...
... ... .. .. ....
... ..
. .. .. ...
...
... ..
. .. .. ...
... ... .. .. ...
... ..
. .. .. ...
... ..
. .. .. ...
... ... .. .. ...
... .. .. .. ...
. ...
... ..
. .. .. ...
... ... .. .. ...
... ..
. .. .. ...
. .
. .. .. .
.......................................................................................................................................................................................................................
. .
x
....
70% 75% .. 85% 90%

Figure 8.7: Membership Function of “most”

What uncertain sets have membership functions?


It is known that some uncertain sets do not have membership functions. This
subsection shows that totally ordered uncertain sets defined on a continuous
uncertainty space always have membership functions.

Definition 8.8 (Liu [99]) An uncertain set ξ defined on the uncertainty


space (Γ, L, M) is called totally ordered if {ξ(γ) | γ ∈ Γ} is a totally ordered
set, i.e., for any given γ1 and γ2 ∈ Γ, either ξ(γ1 ) ⊂ ξ(γ2 ) or ξ(γ2 ) ⊂ ξ(γ1 )
holds.
190 Chapter 8 - Uncertain Set

Example 8.10: Let (Γ, L, M) be an uncertainty space, and let A be a crisp


set of real numbers. The uncertain set ξ(γ) ≡ A is of total order.

Example 8.11: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 } with


power set and M{γ1 } = 0.6, M{γ2 } = 0.3, M{γ3 } = 0.2. The uncertain set

 [2, 3], if γ = γ1

ξ(γ) = [0, 5], if γ = γ2 (8.73)

[1, 4], if γ = γ3

is of total order.

Example 8.12: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. The uncertain set

ξ(γ) = [−γ, γ] , ∀γ ∈ Γ (8.74)

is of total order.

Example 8.13: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. The uncertain set

ξ(γ) = [γ, γ + 1] , ∀γ ∈ Γ (8.75)

is not of total order.

Exercise 8.22: Let ξ be a totally ordered uncertain set. Show that its
complement ξ c is also of total order.

Exercise 8.23: Let ξ be a totally ordered uncertain set, and let f be a


real-valued function. Show that f (ξ) is also of total order.

Exercise 8.24: Let ξ and η be totally ordered uncertain sets. Show that
their union ξ ∪ η is not necessarily of total order.

Exercise 8.25: Let ξ and η be totally ordered uncertain sets. Show that
their intersection ξ ∩ η is not necessarily of total order.

Theorem 8.13 (Liu [99]) Let ξ be a totally ordered uncertain set, and let
B be a crisp set of real numbers. Then (i) the collection {x ∈ ξ} indexed by
x ∈ B is of total order, and (ii) the collection {x 6∈ ξ} indexed by x ∈ B is
also of total order.

Proof: If {x ∈ ξ} indexed by x ∈ B is not of total order, then there exist


two numbers x1 and x2 in B such that neither {x1 ∈ ξ} ⊂ {x2 ∈ ξ} nor
{x2 ∈ ξ} ⊂ {x1 ∈ ξ} holds. This means there exist γ1 and γ2 in Γ such that

γ1 ∈ {x1 ∈ ξ}, γ1 6∈ {x2 ∈ ξ},


Section 8.2 - Membership Function 191

γ2 ∈ {x2 ∈ ξ}, γ2 6∈ {x1 ∈ ξ}.


That is,
x1 ∈ ξ(γ1 ), x1 6∈ ξ(γ2 ),
x2 ∈ ξ(γ2 ), x2 6∈ ξ(γ1 ).
Thus neither ξ(γ1 ) ⊂ ξ(γ2 ) nor ξ(γ2 ) ⊂ ξ(γ1 ) holds. This result is in con-
tradiction with that ξ is a totally ordered uncertain set. Therefore, {x ∈ ξ}
indexed by x ∈ B is of total order. The first part is proved. It follows from

{x 6∈ ξ} = {x ∈ ξ}c

that {x 6∈ ξ} indexed by x ∈ B is also of total order. The second part is


verified.

Theorem 8.14 (Liu [99], Existence Theorem) Let ξ be a totally ordered un-
certain set on a continuous uncertainty space. Then its membership function
always exists, and
µ(x) = M{x ∈ ξ}. (8.76)

Proof: In order to prove that µ is the membership function of ξ, we must


verify the two measure inversion formulas. Let B be any Borel set of real
numbers. Theorem 8.1 states that
\
{B ⊂ ξ} = {x ∈ ξ}.
x∈B

Since the uncertain measure is assumed to be continuous, and {x ∈ ξ} indexed


by x ∈ B is of total order, we obtain
( )
\
M{B ⊂ ξ} = M (x ∈ ξ) = inf M{x ∈ ξ} = inf µ(x).
x∈B x∈B
x∈B

The first measure inversion formula is verified. Next, Theorem 8.1 states that
\
{ξ ⊂ B} = {x 6∈ ξ}.
x∈B c

Since the uncertain measure is assumed to be continuous, and {x 6∈ ξ} indexed


by x ∈ B c is of total order, we obtain
( )
\
M{ξ ⊂ B} = M (x 6∈ ξ) = inf c M{x 6∈ ξ} = 1 − sup µ(x).
x∈B x∈B c
x∈B c

The second measure inversion formula is verified. Therefore, µ is the mem-


bership function of ξ.
192 Chapter 8 - Uncertain Set

Remark 8.8: Theorem 8.14 tells us that the membership function of a


totally ordered uncertain set on a continuous uncertainty space exists and is
determined by µ(x) = M{x ∈ ξ}. In other words, the two measure inversion
formulas are no longer required to be verified whenever the uncertain set is
of total order and defined on a continuous uncertainty space.

Example 8.14: The continuity condition in Theorem 8.14 cannot be re-


moved. For example, take an uncertainty space (Γ, L, M) to be (0, 1) with
power set and 
 0, if Λ = ∅

M{Λ} = 1, if Λ = Γ (8.77)

0.5, otherwise.

Then
ξ(γ) = (−γ, γ), ∀γ ∈ (0, 1) (8.78)
is a totally ordered uncertain set on a discontinuous uncertainty space. If it
indeed has a membership function, then

 1, if x = 0

µ(x) = 0.5, if − 1 < x < 0 or 0 < x < 1 (8.79)

0, otherwise.

However,

M{(−1, 1) ⊂ ξ} = M{∅} = 0 6= 0.5 = inf µ(x). (8.80)


x∈(−1,1)

That is, the first measure inversion formula is not valid and then ξ has
no membership function. Therefore, the continuity condition cannot be re-
moved.

Example 8.15: Some non-totally ordered uncertain sets may have mem-
bership functions. For example, take an uncertainty space (Γ, L, M) to be
{γ1 , γ2 , γ3 , γ4 } with power set and

 0, if Λ = ∅

M{Λ} = 1, if Λ = Γ (8.81)

0.5, otherwise.

Then 

 {1}, if γ = γ1
 {1, 2},

if γ = γ2
ξ(γ) = (8.82)

 {1, 3}, if γ = γ3

{1, 2, 3},

if γ = γ4
Section 8.2 - Membership Function 193

is a non-totally ordered uncertain set. However, it has a membership function



 1, if x = 1

µ(x) = 0.5, if x = 2 or 3 (8.83)

0, otherwise

because ξ and µ can simultaneously satisfy the two measure inversion formu-
las (8.51) and (8.52).

Remark 8.9: In practice, the unsharp concepts like “young”, “tall”, “warm”,
and “most” can be regarded as totally ordered uncertain sets on a continuous
uncertainty space.

Sufficient and Necessary Condition


Theorem 8.15 (Liu [85]) A real-valued function µ is a membership function
of uncertain set if and only if
0 ≤ µ(x) ≤ 1. (8.84)
Proof: If µ is a membership function of some uncertain set ξ, then µ(x) =
M{x ∈ ξ} and 0 ≤ µ(x) ≤ 1. Conversely, suppose µ is a function such that
0 ≤ µ(x) ≤ 1. Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel
algebra and Lebesgue measure. Then
ξ(γ) = {x ∈ < | µ(x) ≥ γ} (8.85)
is a totally ordered uncertain set defined on the continuous uncertainty space
(Γ, L, M). See Figure 8.8. By using Theorem 8.14, it is easy to verify that ξ
has the membership function µ.
...
..........
... .........................
..... .
.... ...............................................
.. ..... ..
... ..................................................................
.. .....
... .
...........................................................................
.
... .
. . ..
γ ............ ...
... .
............................................................................................
.... . ...
... ..........................................................................................................
... .... .. . ...
... ..........................................................................................................................
.
.
.. ..
... ................................................................................................................................................
......... .. .. ....
...........................................................................................................................................................
........... . .. .......
........
.......... ... .
. .
.
. . . .
................................................................................................................................................................................................................... x
.. ... ...
...
. ξ(γ)
............................
..
.
............................ .

Figure 8.8: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra
and Lebesgue measure. Then ξ(γ) = {x ∈ < | µ(x) ≥ γ} has the membership
function µ. Keep in mind that ξ is not the unique uncertain set whose
membership function is µ.

Example 8.16: Let c be a number between 0 and 1. It follows from the


sufficient and necessary condition that
µ(x) ≡ c (8.86)
194 Chapter 8 - Uncertain Set

is a membership function. Take an uncertainty space (Γ, L, M) to be [0, 1]


with Borel algebra and Lebesgue measure. Define
(
<, if 0 ≤ γ ≤ c
ξ(γ) = (8.87)
∅, if c < γ ≤ 1.

It is easy to verify that ξ is a totally ordered uncertain set on a continuous


uncertainty space, and has the membership function µ.

Example 8.17: Let us design an uncertain set whose membership function


is
µ(x) = exp(−x2 ) (8.88)
for any real number x. Take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Define
p p
ξ(γ) = (− − ln γ, − ln γ), ∀γ ∈ [0, 1]. (8.89)

It is easy to verify that ξ is a totally ordered uncertain set on a continuous


uncertainty space, and has the membership function µ.

Exercise 8.26: Design an uncertain set whose membership function is just


1
µ(x) = exp(−x2 ) (8.90)
2
for any real number x.

Exercise 8.27: Design an uncertain set whose membership function is just


1 1
µ(x) = exp(−x2 ) + (8.91)
2 2
for any real number x.

Theorem 8.16 Let ξ be an uncertain set whose membership function µ ex-


ists. Then ξ is (i) nonempty if and only if

sup µ(x) = 1, (8.92)


x∈<

(ii) empty if and only if


µ(x) ≡ 0, (8.93)
and (iii) half-empty if and only if otherwise.

Proof: Since the membership function µ exists, it follows from the second
measure inversion formula that

M{ξ = ∅} = M{ξ ⊂ ∅} = 1 − sup µ(x) = 1 − sup µ(x).


x∈∅c x∈<
Section 8.2 - Membership Function 195

Thus ξ is (i) nonempty if and only if M{ξ = ∅} = 0, i.e., (8.92) holds, (ii)
empty if and only if M{ξ = ∅} = 1, i.e., (8.93) holds, and (iii) half-empty if
and only if otherwise.

Exercise 8.28: Some people prefer the uncertain set whose height (i.e.,
the supremum of the membership function) achieves 1. When the height is
below 1, they divide all its membership values by the height and obtain a
“normalized” membership function. Why is this idea wrong and harmful?

Inverse Membership Function


Definition 8.9 (Liu [88]) Let ξ be an uncertain set with membership func-
tion µ. Then the set-valued function
µ−1 (α) = x ∈ < µ(x) ≥ α , ∀α ∈ [0, 1]

(8.94)
is called the inverse membership function of ξ. For each given α, the set
µ−1 (α) is also called the α-cut of µ.

µ(x)
....
.........
..........................
.... ..... .....
... .... .....
... .
....... .....
. .....
... ... .....
... . ... .....
... .
. . .....
.....
α ............ ...
... .
.
..
...
.
................................. ...
... ........
... ... ..
. .. ......
... .... .. . .....
. ... .....
... ....... ... .....
... ...... .. .. .....
.....
.......... .. .. ......
.
.. . . . .......
....... ...
. .
. .. .......
....
. ..
....... .. ... ..
.................................................................................................................................................................................................................. x
... .... . .
.
0 ... ......................... −1
µ (α) . ...
.................. ...
..
.

Figure 8.9: Inverse Membership Function µ−1 (α)

Remark 8.10: Let ξ be an uncertain set with inverse membership function


µ−1 (α). Then the membership function of ξ is determined by
µ(x) = sup α ∈ [0, 1] x ∈ µ−1 (α) .

(8.95)

Example 8.18: Note that an inverse membership function may take value
of the empty set ∅. Let ξ be an uncertain set with membership function
(
0.8, if 1 ≤ x ≤ 2
µ(x) = (8.96)
0, otherwise.

Then its inverse membership function is


(
−1 ∅, if α > 0.8
µ (α) = (8.97)
[1, 2], otherwise.
196 Chapter 8 - Uncertain Set

Example 8.19: The triangular uncertain set ξ = (a, b, c) has an inverse


membership function
µ−1 (α) = [(1 − α)a + αb, αb + (1 − α)c]. (8.98)

Example 8.20: The trapezoidal uncertain set ξ = (a, b, c, d) has an inverse


membership function
µ−1 (α) = [(1 − α)a + αb, αc + (1 − α)d]. (8.99)
Theorem 8.17 (Liu [88], Sufficient and Necessary Condition) A function
µ−1 (α) is an inverse membership function if and only if it is a monotone
decreasing set-valued function with respect to α ∈ [0, 1]. That is,
µ−1 (α) ⊂ µ−1 (β), if α > β. (8.100)
Proof: Suppose µ−1 (α) is an inverse membership function of some uncertain
set. For any x ∈ µ−1 (α), we have µ(x) ≥ α. Since α > β, we have µ(x) > β
and then x ∈ µ−1 (β). Hence µ−1 (α) ⊂ µ−1 (β). Conversely, suppose µ−1 (α)
is a monotone decreasing set-valued function. Then
µ(x) = sup α ∈ [0, 1] x ∈ µ−1 (α)


is a membership function of some uncertain set. It is easy to verify that


µ−1 (α) is the inverse membership function of the uncertain set. The theorem
is proved.

Uncertain set does not necessarily take values of its α-cut!


Please keep in mind that uncertain set does not necessarily take values of its
α-cuts. In fact, an α-cut is included in the uncertain set with uncertain mea-
sure α. Conversely, the uncertain set is included in its α-cut with uncertain
measure 1 − α. More precisely, we have the following theorem.
Theorem 8.18 (Liu [88]) Let ξ be an uncertain set with inverse membership
function µ−1 (α). Then for each α ∈ [0, 1], we have
M{µ−1 (α) ⊂ ξ} ≥ α, (8.101)
M{ξ ⊂ µ−1 (α)} ≥ 1 − α. (8.102)
Proof: For each x ∈ µ−1 (α), we have µ(x) ≥ α. It follows from the first
measure inversion formula that
M{µ−1 (α) ⊂ ξ} = inf µ(x) ≥ α.
x∈µ−1 (α)

For each x 6∈ µ−1 (α), we have µ(x) < α. It follows from the second measure
inversion formula that
M{ξ ⊂ µ−1 (α)} = 1 − sup µ(x) ≥ 1 − α.
x6∈µ−1 (α)
Section 8.3 - Independence 197

Regular Membership Function

Definition 8.10 (Liu [88]) A membership function µ of an uncertain set is


said to be regular if there exists a point x0 such that µ(x0 ) = 1 and µ(x) is
unimodal about the mode x0 . That is, µ(x) is increasing on (−∞, x0 ] and
decreasing on [x0 , +∞).

If µ is a regular membership function, then µ−1 (α) is an interval for each


α. In this case, the function

µ−1
l (α) = inf µ
−1
(α) (8.103)

is called the left inverse membership function, and the function

µ−1
r (α) = sup µ
−1
(α) (8.104)

is called the right inverse membership function. It is clear that the left inverse
membership function µ−1 l (α) is increasing, and the right inverse membership
function µ−1r (α) is decreasing with respect to α.
Conversely, suppose an uncertain set ξ has a left inverse membership
function µ−1 −1
l (α) and right inverse membership function µr (α). Then the
membership function µ is determined by

if x ≤ µ−1


 0, l (0)

if µ−1 −1 −1

α, l (0) ≤ x ≤ µl (1) and µl (α) = x





µ(x) = 1, if µ−1 −1
l (1) ≤ x ≤ µr (1)
(8.105)


β, if µ−1 −1 −1
r (1) ≤ x ≤ µr (0) and µr (β) = x





if x ≥ µ−1

0, r (0).

Note that the values of α and β may not be unique. In this case, we will take
the maximum values.

8.3 Independence

Note that an uncertain set is a measurable function from an uncertainty


space to a collection of sets of real numbers. The independence of two func-
tions means that knowing the value of one does not change our estimation
of the value of another. Two uncertain sets meet this condition if they are
defined on different uncertainty spaces. For example, let ξ1 (γ1 ) and ξ2 (γ2 )
be uncertain sets on the uncertainty spaces (Γ1 , L1 , M1 ) and (Γ2 , L2 , M2 ),
respectively. It is clear that they are also uncertain sets on the product un-
certainty space (Γ1 , L1 , M1 ) × (Γ2 , L2 , M2 ). Then for any Borel sets B1 and
198 Chapter 8 - Uncertain Set

B2 of real numbers, we have

M{(ξ1 ⊂ B1 ) ∩ (ξ2 ⊂ B2 )}
= M {(γ1 , γ2 ) | ξ1 (γ1 ) ⊂ B1 , ξ2 (γ2 ) ⊂ B2 }
= M {(γ1 | ξ1 (γ1 ) ⊂ B1 ) × (γ2 | ξ2 (γ2 ) ⊂ B2 )}
= M1 {γ1 | ξ1 (γ1 ) ⊂ B1 } ∧ M2 {γ2 | ξ2 (γ2 ) ⊂ B2 }
= M {ξ1 ⊂ B1 } ∧ M {ξ2 ⊂ B2 } .

That is

M{(ξ1 ⊂ B1 ) ∩ (ξ2 ⊂ B2 )} = M{ξ1 ⊂ B1 } ∧ M{ξ2 ⊂ B2 }. (8.106)

Similarly, we may verify the following seven equations:

M{(ξ1c ⊂ B1 ) ∩ (ξ2 ⊂ B2 )} = M{ξ1c ⊂ B1 } ∧ M{ξ2 ⊂ B2 }, (8.107)

M{(ξ1 ⊂ B1 ) ∩ (ξ2c ⊂ B2 )} = M{ξ1 ⊂ B1 } ∧ M{ξ2c ⊂ B2 }, (8.108)


M{(ξ1c ⊂ B1 ) ∩ (ξ2c ⊂ B2 )} = M{ξ1c ⊂ B1 } ∧ M{ξ2c ⊂ B2 }, (8.109)
M{(ξ1 ⊂ B1 ) ∪ (ξ2 ⊂ B2 )} = M{ξ1 ⊂ B1 } ∨ M{ξ2 ⊂ B2 }, (8.110)
M{(ξ1c ⊂ B1 ) ∪ (ξ2 ⊂ B2 )} = M{ξ1c ⊂ B1 } ∨ M{ξ2 ⊂ B2 }, (8.111)
M{(ξ1 ⊂ B1 ) ∪ (ξ2c ⊂ B2 )} = M{ξ1 ⊂ B1 } ∨ M{ξ2c ⊂ B2 }, (8.112)
M{(ξ1c ⊂ B1 ) ∪ (ξ2c ⊂ B2 )} = M{ξ1c ⊂ B1 } ∨ M{ξ2c ⊂ B2 }. (8.113)
Thus we say two uncertain sets are independent if the above eight equations
hold. Generally, we may define independence in the following form.

Definition 8.11 (Liu [91]) The uncertain sets ξ1 , ξ2 , · · · , ξn are said to be


independent if for any Borel sets B1 , B2 , · · · , Bn of real numbers, we have
( n ) n
\ ^
M (ξi∗ ⊂ Bi ) = M {ξi∗ ⊂ Bi } (8.114)
i=1 i=1

and ( )
n
[ n
_
M (ξi∗ ⊂ Bi ) = M {ξi∗ ⊂ Bi } (8.115)
i=1 i=1

where ξi∗ are arbitrarily chosen from {ξi , ξic }, i = 1, 2, · · · , n, respectively.

Remark 8.11: Note that (8.114) and (8.115) represent 2n+1 equations. For
example, when n = 2, they represent the 8 equations from (8.106) to (8.113).

Exercise 8.29: Show that a crisp set of real numbers (a special uncertain
set) is always independent of any uncertain set.
Section 8.3 - Independence 199

Exercise 8.30: Let ξ be an uncertain set. Are ξ and ξ c independent? Please


justify your answer.

Exercise 8.31: Construct n independent uncertain sets. (Hint: Define


them on the product uncertainty space (Γ1 , L1 , M1 ) × (Γ2 , L2 , M2 ) × · · · ×
(Γn , Ln , Mn ).)

Theorem 8.19 (Liu [91]) Let ξ1 , ξ2 , · · · , ξn be uncertain sets, and let ξi∗ be
arbitrarily chosen uncertain sets from {ξi , ξic }, i = 1, 2, · · · , n, respectively.
Then ξ1 , ξ2 , · · · , ξn are independent if and only if ξ1∗ , ξ2∗ , · · · , ξn∗ are indepen-
dent.

Proof: Let ξi∗∗ be arbitrarily chosen uncertain sets from {ξi∗ , ξi∗c }, i =
1, 2, · · · , n, respectively. Then ξ1∗ , ξ2∗ , · · · , ξn∗ and ξ1∗∗ , ξ2∗∗ , · · · , ξn∗∗ represent
the same 2n combinations. This fact implies that (8.114) and (8.115) are
equivalent to ( n )
\ ^n
M ∗∗
(ξi ⊂ Bi ) = M {ξi∗∗ ⊂ Bi } , (8.116)
i=1 i=1
( n
) n
[ _
M (ξi∗∗ ⊂ Bi ) = M {ξi∗∗ ⊂ Bi } . (8.117)
i=1 i=1

Hence ξ1 , ξ2 , · · · , ξn are independent if and only if ξ1∗ , ξ2∗ , · · · , ξn∗ are indepen-
dent.

Exercise 8.32: Show that the following four statements are equivalent: (i)
ξ1 and ξ2 are independent; (ii) ξ1c and ξ2 are independent; (iii) ξ1 and ξ2c are
independent; and (iv) ξ1c and ξ2c are independent.

Theorem 8.20 (Liu [91]) The uncertain sets ξ1 , ξ2 , · · · , ξn are independent


if and only if for any Borel sets B1 , B2 , · · · , Bn of real numbers, we have
( n ) n
\ ^
M (Bi ⊂ ξi∗ ) = M {Bi ⊂ ξi∗ } (8.118)
i=1 i=1

and ( )
n
[ n
_
M (Bi ⊂ ξi∗ ) = M {Bi ⊂ ξi∗ } (8.119)
i=1 i=1

where ξi∗ are arbitrarily chosen from {ξi , ξic }, i = 1, 2, · · · , n, respectively.

Proof: Since {Bi ⊂ ξi∗ } = {ξi∗c ⊂ Bic } for i = 1, 2, · · · , n, we immediately


have ( n ) ( n )
\ \
M (Bi ⊂ ξi ) = M
∗ ∗c c
(ξi ⊂ Bi ) , (8.120)
i=1 i=1
200 Chapter 8 - Uncertain Set

n
^ n
^
M {Bi ⊂ ξi∗ } = M{ξi∗c ⊂ Bic }, (8.121)
i=1 i=1
( n
) ( n
)
[ [
M (Bi ⊂ ξi∗ ) =M (ξi∗c ⊂ Bic ) , (8.122)
i=1 i=1
n
_ n
_
M {Bi ⊂ ξi∗ } = M{ξi∗c ⊂ Bic }. (8.123)
i=1 i=1

It follows from (8.120), (8.121), (8.122) and (8.123) that (8.118) and (8.119)
are valid if and only if
( n ) n
\ ^
M (ξi∗c ⊂ Bic ) = M{ξi∗c ⊂ Bic }, (8.124)
i=1 i=1
( n
) n
[ _
M (ξi∗c ⊂ Bic ) = M{ξi∗c ⊂ Bic }. (8.125)
i=1 i=1

The above two equations are also equivalent to the independence of the un-
certain sets ξ1 , ξ2 , · · · , ξn . The theorem is thus proved.

8.4 Set Operational Law


This section will discuss the union, intersection and complement of uncertain
sets via membership functions.

Union of Uncertain Sets


Theorem 8.21 (Liu [88]) Let ξ and η be independent uncertain sets with
membership functions µ and ν, respectively. Then their union ξ ∪ η has a
membership function
λ(x) = µ(x) ∨ ν(x). (8.126)

Proof: In order to prove µ ∨ ν is the membership function of ξ ∪ η, we must


verify the two measure inversion formulas. Let B be any Borel set of real
numbers, and write
β = inf µ(x) ∨ ν(x).
x∈B

Then B ⊂ µ−1 (β) ∪ ν −1 (β). By the independence of ξ and η, we have

M{B ⊂ (ξ ∪ η)} ≥ M{(µ−1 (β) ∪ ν −1 (β)) ⊂ (ξ ∪ η)}


≥ M{(µ−1 (β) ⊂ ξ) ∩ (ν −1 (β) ⊂ η)}
= M{µ−1 (β) ⊂ ξ} ∧ M{ν −1 (β) ⊂ η}
≥ β ∧ β = β.
Section 8.4 - Set Operational Law 201

Thus
M{B ⊂ (ξ ∪ η)} ≥ inf µ(x) ∨ ν(x). (8.127)
x∈B

On the other hand, for any x ∈ B, we have

M{B ⊂ (ξ ∪ η)} ≤ M{x ∈ (ξ ∪ η)} = M{(x ∈ ξ) ∪ (x ∈ η)}


= M{x ∈ ξ} ∨ M{x ∈ η} = µ(x) ∨ ν(x).

Thus
M{B ⊂ (ξ ∪ η)} ≤ inf µ(x) ∨ ν(x). (8.128)
x∈B

It follows from (8.127) and (8.128) that

M{B ⊂ (ξ ∪ η)} = inf µ(x) ∨ ν(x). (8.129)


x∈B

The first measure inversion formula is verified. Next we prove the second
measure inversion formula. By the independence of ξ and η, we have

M{(ξ ∪ η) ⊂ B} = M{(ξ ⊂ B) ∩ (η ⊂ B)} = M{ξ ⊂ B} ∧ M{η ⊂ B}


   
= 1 − sup µ(x) ∧ 1 − sup ν(x)
x∈B c x∈B c

= 1 − sup µ(x) ∨ ν(x).


x∈B c

That is,
M{(ξ ∪ η) ⊂ B} = 1 − sup µ(x) ∨ ν(x). (8.130)
x∈B c

The second measure inversion formula is verified. Therefore, the union ξ ∪ η


is proved to have the membership function µ ∨ ν by the measure inversion
formulas (8.129) and (8.130).

λ(x)
.....
.......
µ(x) ν(x)
. ..........
.... .................. ..... ........
.... ... ....
... ... ... .. ...
... ..
. .... .
. ...
... ... ... ... ...
...
... ... ... ... ...
... ... ... ... ...
... ... ...
... ..
. ...
... ..
. ... . .
.. ...
... .. . . ...
. ... .. . ...
... ..
. . .
..... ...
... ..
. ...
... .
.. ...... ...
... ...
. .. .. ...
... ..
.... . . . ..
..
....
.....
... ..
....
. . .. . .....
.. . ... ......
... ............ .. . . . ..... ......
................................................................................................................................................................................................................................................................
.
. .
. . .
x
...
....

Figure 8.10: Membership Function of Union of Uncertain Sets

Example 8.21: The independence condition in Theorem 8.21 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be {γ1 , γ2 }
202 Chapter 8 - Uncertain Set

with power set and M{γ1 } = M{γ2 } = 0.5. Then


(
[0, 1], if γ = γ1
ξ(γ) =
[0, 2], if γ = γ2

is an uncertain set with membership function



 1, if 0 ≤ x ≤ 1

µ(x) = 0.5, if 1 < x ≤ 2

0, otherwise,

and (
[0, 2], if γ = γ1
η(γ) =
[0, 1], if γ = γ2
is also an uncertain set with membership function

 1, if 0 ≤ x ≤ 1

ν(x) = 0.5, if 1 < x ≤ 2

0, otherwise.

Note that ξ and η are not independent, and ξ ∪ η ≡ [0, 2] whose membership
function is (
1, if 0 ≤ x ≤ 2
λ(x) =
0, otherwise.
Thus
λ(x) 6= µ(x) ∨ ν(x). (8.131)
Therefore, the independence condition cannot be removed.

Exercise 8.33: Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets with mem-


bership functions µ1 , µ2 , · · · , µn , respectively. What is the membership func-
tion of ξ1 ∪ ξ2 ∪ · · · ∪ ξn ?

Exercise 8.34: Some people suggest λ(x) = µ(x) + ν(x) − µ(x) · ν(x) and
λ(x) = min{1, µ(x) + ν(x)} for the membership function of the union of
uncertain sets. Why is this idea wrong and harmful?

Exercise 8.35: Why is λ(x) = µ(x) ∨ ν(x) the only option for the member-
ship function of the union of uncertain sets?

Intersection of Uncertain Sets


Theorem 8.22 (Liu [88]) Let ξ and η be independent uncertain sets with
membership functions µ and ν, respectively. Then their intersection ξ ∩ η has
a membership function
λ(x) = µ(x) ∧ ν(x). (8.132)
Section 8.4 - Set Operational Law 203

Proof: In order to prove µ ∧ ν is the membership function of ξ ∩ η, we must


verify the two measure inversion formulas. Let B be any Borel set of real
numbers. By the independence of ξ and η, we have

M{B ⊂ (ξ ∩ η)} = M{(B ⊂ ξ) ∩ (B ⊂ η)} = M{B ⊂ ξ} ∧ M{B ⊂ η}

= inf µ(x) ∧ inf ν(x) = inf µ(x) ∧ ν(x).


x∈B x∈B x∈B

That is,
M{B ⊂ (ξ ∩ η)} = inf µ(x) ∧ ν(x). (8.133)
x∈B

The first measure inversion formula is verified. In order to prove the second
measure inversion formula, we write

β = sup µ(x) ∧ ν(x).


x∈B c

Then for any given number ε > 0, we have µ−1 (β + ε) ∩ ν −1 (β + ε) ⊂ B. By


the independence of ξ and η, we obtain

M{(ξ ∩ η) ⊂ B} ≥ M{(ξ ∩ η) ⊂ (µ−1 (β + ε) ∩ ν −1 (β + ε))}


≥ M{(ξ ⊂ µ−1 (β + ε)) ∩ (η ⊂ ν −1 (β + ε))}
= M{ξ ⊂ µ−1 (β + ε)} ∧ M{η ⊂ ν −1 (β + ε)}
≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε.

Letting ε → 0, we get

M{(ξ ∩ η) ⊂ B} ≥ 1 − sup µ(x) ∧ ν(x). (8.134)


x∈B c

On the other hand, for any x ∈ B c , we have

M{(ξ ∩ η) ⊂ B} ≤ M{x 6∈ (ξ ∩ η)} = M{(x 6∈ ξ) ∪ (x 6∈ η)}


= M{x 6∈ ξ} ∨ M{x 6∈ η} = (1 − µ(x)) ∨ (1 − ν(x))
= 1 − µ(x) ∧ ν(x).

Thus
M{(ξ ∩ η) ⊂ B} ≤ 1 − sup µ(x) ∧ ν(x). (8.135)
x∈B c

It follows from (8.134) and (8.135) that

M{(ξ ∩ η) ⊂ B} = 1 − sup µ(x) ∧ (x). (8.136)


x∈B c

The second measure inversion formula is verified. Therefore, the intersection


ξ∩η is proved to have the membership function µ∧ν by the measure inversion
formulas (8.133) and (8.136).
204 Chapter 8 - Uncertain Set

λ(x)
....
........
µ(x) ν(x)
.. ....... .......
... .. .. .. ..
... .. .. .. ..
... ... .. . .. ..
... .
. .. .. ..
.. .. . ..
... .. .. ... ..
... .
. .. .. ..
... .. ... .. ..
... . .. .. . ..
... ..
. .. . . ..
... .. ... ..
... .. .. .
. ..
. .. ... ..
... ..
.
... .... ..
... . ... ..... ..
... ... ..
.... ..
..... ..
... ...
......
. ... ...
... .....
.
...... .... ......
....
...
.
...............................................................................................................................................................................................................................................
....................... x
....
..

Figure 8.11: Membership Function of Intersection of Uncertain Sets

Example 8.22: The independence condition in Theorem 8.22 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be {γ1 , γ2 }
with power set and M{γ1 } = M{γ2 } = 0.5. Then
(
[0, 1], if γ = γ1
ξ(γ) =
[0, 2], if γ = γ2

is an uncertain set with membership function



 1, if 0 ≤ x ≤ 1

µ(x) = 0.5, if 1 < x ≤ 2

0, otherwise,

and (
[0, 2], if γ = γ1
η(γ) =
[0, 1], if γ = γ2

is also an uncertain set with membership function



 1, if 0 ≤ x ≤ 1

ν(x) = 0.5, if 1 < x ≤ 2

0, otherwise.

Note that ξ and η are not independent, and ξ ∩ η ≡ [0, 1] whose membership
function is (
1, if 0 ≤ x ≤ 1
λ(x) =
0, otherwise.

Thus
λ(x) 6= µ(x) ∧ ν(x). (8.137)
Therefore, the independence condition cannot be removed.
Section 8.4 - Set Operational Law 205

Exercise 8.36: Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets with mem-


bership functions µ1 , µ2 , · · · , µn , respectively. What is the membership func-
tion of ξ1 ∩ ξ2 ∩ · · · ∩ ξn ?

Exercise 8.37: Some people suggest λ(x) = max{0, µ(x) + ν(x) − 1} and
λ(x) = µ(x)·ν(x) for the membership function of the intersection of uncertain
sets. Why is this idea wrong and harmful?

Exercise 8.38: Why is λ(x) = µ(x) ∧ ν(x) the only option for the member-
ship function of the intersection of uncertain sets?

Complement of Uncertain Set


Theorem 8.23 (Liu [88]) Let ξ be an uncertain set with membership func-
tion µ. Then its complement ξ c has a membership function

λ(x) = 1 − µ(x). (8.138)

Proof: In order to prove 1 − µ is the membership function of ξ c , we must


verify the two measure inversion formulas. Let B be a Borel set of real
numbers. It follows from the definition of membership function that

M{B ⊂ ξ c } = M{ξ ⊂ B c } = 1 − sup µ(x) = inf (1 − µ(x)),


x∈(B c )c x∈B

M{ξ c ⊂ B} = M{B c ⊂ ξ} = inf c µ(x) = 1 − sup (1 − µ(x)).


x∈B x∈B c
c
Thus ξ has the membership function 1 − µ.

λ(x)
..
.........
µ(x)
... .............. ........... ..............
... ........ ... .. ..........
...
....... .. .. .......
...... .. .. .......
.
... ..... . . .. .
...
.
... .....
..... .. .. .....
... ..... .. .. .....
... .... .. .. .....
.... ... .. ........
... .... .. .. ... .
... .... .
... ......
... .. ... ... ..
... .. ..... ... ...
... . .. ... ....
. ..
. .... . ..
... .. .... .... ..
... .. .... .... ..
... . . .. .....
..
.....
. ...
.. ..... .... ...
... . . ..
...... .
.. ....
. ........
. ........ ............. .... .
....................................................................................................................................................................................................................................... ................................. x
....
..

Figure 8.12: Membership Function of Complement of Uncertain Set

Exercise 8.39: Let ξ and η be independent uncertain sets with membership


functions µ and ν, respectively. Then the set difference of ξ and η, denoted
by ξ \ η, is the set of all elements that are members of ξ but not members of
η. That is,
ξ \ η = ξ ∩ ηc . (8.139)
206 Chapter 8 - Uncertain Set

Show that ξ \ η has a membership function

λ(x) = µ(x) ∧ (1 − ν(x)). (8.140)

Exercise 8.40: Let ξ be an uncertain set with membership function µ(x).


Theorem 8.23 tells us that ξ c has a membership function 1 − µ(x). (i) It is
known that ξ ∪ ξ c ≡ < whose membership function is λ(x) ≡ 1, and

λ(x) 6= µ(x) ∨ (1 − µ(x)). (8.141)

Why is Theorem 8.21 not applicable to the union of ξ and ξ c ? (ii) It is known
that ξ ∩ ξ c ≡ ∅ whose membership function is λ(x) ≡ 0, and

λ(x) 6= µ(x) ∧ (1 − µ(x)). (8.142)

Why is Theorem 8.22 not applicable to the intersection of ξ and ξ c ?

8.5 Arithmetic Operational Law


This section will present an arithmetic operational law of independent uncer-
tain sets, including addition, subtraction, multiplication and division.

Arithmetic Operational Law via Inverse Membership Functions


Theorem 8.24 (Liu [88]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets
with inverse membership functions µ−1 −1 −1
1 , µ2 , · · · , µn , respectively, and let f
be a measurable function. Then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (8.143)

has an inverse membership function,

λ−1 (α) = f (µ−1 −1 −1


1 (α), µ2 (α), · · · , µn (α)). (8.144)

Proof: For simplicity, we only prove the case n = 2. Let B be any Borel set
of real numbers, and write
β = inf λ(x).
x∈B

Then B ⊂ λ−1 (β). Since λ−1 (β) = f (µ1−1 (β), µ−1


2 (β)), by the independence
of ξ1 and ξ2 , we have

M{B ⊂ ξ} ≥ M{λ−1 (β) ⊂ ξ} = M{f (µ−1 −1


1 (β), µ2 (β)) ⊂ ξ}

≥ M{(µ−1 −1
1 (β) ⊂ ξ1 ) ∩ (µ2 (β) ⊂ ξ2 )}

= M{µ−1 −1
1 (β) ⊂ ξ1 } ∧ M{µ2 (β) ⊂ ξ2 }

≥ β ∧ β = β.
Section 8.5 - Arithmetic Operational Law 207

Thus
M{B ⊂ ξ} ≥ inf λ(x). (8.145)
x∈B
On the other hand, for any given number ε > 0, we have B 6⊂ λ−1 (β + ε).
Since λ−1 (β + ε) = f (µ−1 −1
1 (β + ε), µ2 (β + ε)), we obtain

M{B 6⊂ ξ} ≥ M{ξ ⊂ λ−1 (β + ε)} = M{ξ ⊂ f (µ−1 −1


1 (β + ε), µ2 (β + ε))}

≥ M{(ξ1 ⊂ µ−1 −1
1 (β + ε)) ∩ (ξ2 ⊂ µ2 (β + ε))}

= M{ξ1 ⊂ µ−1 −1
1 (β + ε)} ∧ M{ξ2 ⊂ µ2 (β + ε)}

≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε
and then
M{B ⊂ ξ} = 1 − M{B 6⊂ ξ} ≤ β + ε.
Letting ε → 0, we get
M{B ⊂ ξ} ≤ β = inf λ(x). (8.146)
x∈B

It follows from (8.145) and (8.146) that


M{B ⊂ ξ} = inf λ(x). (8.147)
x∈B

The first measure inversion formula is verified. In order to prove the second
measure inversion formula, we write
β = sup λ(x).
x∈B c

Then for any given number ε > 0, we have λ−1 (β + ε) ⊂ B. Please note that
λ−1 (β + ε) = f (µ−1 −1
1 (β + ε), µ2 (β + ε)). By the independence of ξ1 and ξ2 ,
we obtain
M{ξ ⊂ B} ≥ M{ξ ⊂ λ−1 (β + ε)} = M{ξ ⊂ f (µ−1 −1
1 (β + ε), µ2 (β + ε))}

≥ M{(ξ1 ⊂ µ−1 −1
1 (β + ε)) ∩ (ξ2 ⊂ µ2 (β + ε))}

= M{ξ1 ⊂ µ−1 −1
1 (β + ε)} ∧ M{ξ2 ⊂ µ2 (β + ε)}

≥ (1 − β − ε) ∧ (1 − β − ε) = 1 − β − ε.
Letting ε → 0, we get
M{ξ ⊂ B} ≥ 1 − sup λ(x). (8.148)
x∈B c

On the other hand, for any given number ε > 0, we have λ−1 (β − ε) 6⊂ B.
Since λ−1 (β − ε) = f (µ−1 −1
1 (β − ε), µ2 (β − ε)), we obtain

M{ξ 6⊂ B} ≥ M{λ−1 (β − ε) ⊂ ξ} = M{f (µ−1 −1


1 (β − ε), µ2 (β − ε)) ⊂ ξ}

≥ M{(µ−1 −1
1 (β − ε) ⊂ ξ1 ) ∩ (µ2 (β − ε) ⊂ ξ2 )}

= M{µ−1 −1
1 (β − ε) ⊂ ξ1 } ∧ M{µ2 (β − ε) ⊂ ξ2 }

≥ (β − ε) ∧ (β − ε) = β − ε
208 Chapter 8 - Uncertain Set

and then
M{ξ ⊂ B} = 1 − M{ξ 6⊂ B} ≤ 1 − β + ε.
Letting ε → 0, we get

M{ξ ⊂ B} ≤ 1 − β = 1 − sup λ(x). (8.149)


x∈B c

It follows from (8.148) and (8.149) that

M{ξ ⊂ B} = 1 − sup λ(x). (8.150)


x∈B c

The second measure inversion formula is verified. Therefore, ξ is proved to


have the membership function λ by the measure inversion formulas (8.147)
and (8.150).

Example 8.23: Let ξ = (a1 , a2 , a3 ) and η = (b1 , b2 , b3 ) be two independent


triangular uncertain sets. At first, ξ has an inverse membership function,

µ−1 (α) = [(1 − α)a1 + αa2 , αa2 + (1 − α)a3 ], (8.151)

and η has an inverse membership function,

ν −1 (α) = [(1 − α)b1 + αb2 , αb2 + (1 − α)b3 ]. (8.152)

It follows from the operational law that the sum ξ + η has an inverse mem-
bership function,

λ−1 (α) = [(1 − α)(a1 + b1 ) + α(a2 + b2 ), α(a2 + b2 ) + (1 − α)(a3 + b3 )]. (8.153)

In other words, the sum ξ + η is also a triangular uncertain set, and

ξ + η = (a1 + b1 , a2 + b2 , a3 + b3 ). (8.154)

Example 8.24: Let ξ = (a1 , a2 , a3 ) and η = (b1 , b2 , b3 ) be two indepen-


dent triangular uncertain sets. It follows from the operational law that the
difference ξ − η has an inverse membership function,

λ−1 (α) = [(1 − α)(a1 − b3 ) + α(a2 − b2 ), α(a2 − b2 ) + (1 − α)(a3 − b1 )]. (8.155)

In other words, the difference ξ − η is also a triangular uncertain set, and

ξ − η = (a1 − b3 , a2 − b2 , a3 − b1 ). (8.156)

Example 8.25: Let ξ = (a1 , a2 , a3 ) be a triangular uncertain set, and k


a real number. When k ≥ 0, the product k · ξ has an inverse membership
function,

λ−1 (α) = [(1 − α)(ka1 ) + α(ka2 ), α(ka2 ) + (1 − α)(ka3 )]. (8.157)


Section 8.5 - Arithmetic Operational Law 209

That is, the product k · ξ is a triangular uncertain set (ka1 , ka2 , ka3 ). When
k < 0, the product k · ξ has an inverse membership function,

λ−1 (α) = [(1 − α)(ka3 ) + α(ka2 ), α(ka2 ) + (1 − α)(ka1 )]. (8.158)

That is, the product k · ξ is a triangular uncertain set (ka3 , ka2 , ka1 ). In
summary, we have
(
(ka1 , ka2 , ka3 ), if k ≥ 0
k·ξ = (8.159)
(ka3 , ka2 , ka1 ), if k < 0.

Exercise 8.41: Show that the product of triangular uncertain sets is no


longer a triangular one even they are independent and positive.

Exercise 8.42: Let ξ = (a1 , a2 , a3 , a4 ) and η = (b1 , b2 , b3 , b4 ) be two inde-


pendent trapezoidal uncertain sets, and k a real number. Show that

ξ + η = (a1 + b1 , a2 + b2 , a3 + b3 , a4 + b4 ), (8.160)

ξ − η = (a1 − b4 , a2 − b3 , a3 − b2 , a4 − b1 ), (8.161)
(
(ka1 , ka2 , ka3 , ka4 ), if k ≥ 0
k·ξ = (8.162)
(ka4 , ka3 , ka2 , ka1 ), if k < 0.

Example 8.26: The independence condition in Theorem 8.24 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then

ξ1 (γ) = [−γ, γ] (8.163)

is a triangular uncertain set (−1, 0, 1) with inverse membership function

µ−1
1 (α) = [α − 1, 1 − α], (8.164)

and
ξ2 (γ) = [γ − 1, 1 − γ] (8.165)
is also a triangular uncertain set (−1, 0, 1) with inverse membership function

µ−1
2 (α) = [α − 1, 1 − α]. (8.166)

Note that ξ1 and ξ2 are not independent, and ξ1 + ξ2 ≡ [−1, 1] whose inverse
membership function is
λ−1 (α) = [−1, 1]. (8.167)
Thus
λ−1 (α) 6= µ−1 −1
1 (α) + µ2 (α). (8.168)
Therefore, the independence condition cannot be removed.
210 Chapter 8 - Uncertain Set

Monotone Function of Regular Uncertain Sets


In practice, it is usually required to deal with monotone functions of regular
uncertain sets. In this case, we have the following shortcut.

Theorem 8.25 (Liu [88]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets


with regular membership functions µ1 , µ2 , · · · , µn , respectively. If the func-
tion f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · , xm and
strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then

ξ = f (ξ1 , ξ2 , · · · , ξn ) (8.169)

has a regular membership function, and

λ−1 −1 −1 −1 −1
l (α) = f (µ1l (α), · · · , µml (α), µm+1,r (α), · · · , µnr (α)), (8.170)

−1 −1 −1
λ−1 −1
r (α) = f (µ1r (α), · · · , µmr (α), µm+1,l (α), · · · , µnl (α)), (8.171)

where λ−1 −1 −1 −1
l , µ1l , µ2l , · · · , µnl are left inverse membership functions, and λr ,
−1
−1 −1 −1
µ1r , µ2r , · · · , µnr are right inverse membership functions of ξ, ξ1 , ξ2 , · · · , ξn ,
respectively.

Proof: Note that µ−1 −1 −1


1 (α), µ2 (α), · · · , µn (α) are intervals for each α. Since
f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · , xm and
strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , the value

λ−1 (α) = f (µ−1 −1 −1 −1


1 (α), · · · , µm (α), µm+1 (α), · · · , µn (α))

is also an interval. Thus ξ has a regular membership function, and its left and
right inverse membership functions are determined by (8.170) and (8.171),
respectively.

Exercise 8.43: Let ξ and η be independent uncertain sets with left inverse
membership functions µ−1
l and νl−1 and right inverse membership functions
−1 −1
µr and νr , respectively. Show that the sum ξ + η has left and right inverse
membership functions,

λ−1 −1 −1
l (α) = µl (α) + νl (α), (8.172)

λ−1 −1 −1
r (α) = µr (α) + νr (α). (8.173)

Exercise 8.44: Let ξ and η be independent uncertain sets with left inverse
membership functions µ−1
l and νl−1 and right inverse membership functions
−1 −1
µr and νr , respectively. Show that the difference ξ − η has left and right
inverse membership functions,

λ−1 −1 −1
l (α) = µl (α) − νr (α), (8.174)
Section 8.5 - Arithmetic Operational Law 211

−1
λ−1 −1
r (α) = µr (α) − νl (α). (8.175)

Exercise 8.45: Let ξ and η be independent and positive uncertain sets with
left inverse membership functions µ−1
l and νl−1 and right inverse membership
−1 −1
functions µr and νr , respectively. Show that

ξ
(8.176)
ξ+η

has left and right inverse membership functions,

µ−1
l (α)
λ−1
l (α) = , (8.177)
µl (α) + νr−1 (α)
−1

µ−1
r (α)
λ−1
r (α) = . (8.178)
µr (α) + νl−1 (α)
−1

Arithmetic Operational Law via Membership Functions


Theorem 8.26 Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets with mem-
bership functions µ1 (x), µ2 (x), · · · , µn (x), respectively, and let f be a mea-
surable function. Then
ξ = f (ξ1 , ξ2 , · · · , ξn ) (8.179)
has a membership function,

λ(x) = sup min µi (xi ). (8.180)


f (x1 ,x2 ,··· ,xn )=x 1≤i≤n

Proof: Let λ be the membership function of ξ. For any given real number
x, write λ(x) = β. By using Theorem 8.24, we get

λ−1 (β) = f (µ−1 −1 −1


1 (β), µ2 (β), · · · , µn (β)).

Since x ∈ λ−1 (β), there exist real numbers xi ∈ µ−1 i (β), i = 1, 2, · · · , n such
that f (x1 , x2 , · · · , xn ) = x. Noting that µi (xi ) ≥ β for i = 1, 2, · · · , n, we
have
λ(x) = β ≤ min µi (xi )
1≤i≤n

and then
λ(x) ≤ sup min µi (xi ). (8.181)
f (x1 ,x2 ,··· ,xn )=x 1≤i≤n

On the other hand, assume x1 , x2 , · · · , xn are any given real numbers with
f (x1 , x2 , · · · , xn ) = x. Write

min µi (xi ) = β.
1≤i≤n
212 Chapter 8 - Uncertain Set

By using Theorem 8.24, we get

λ−1 (β) = f (µ−1 −1 −1


1 (β), µ2 (β), · · · , µn (β)).

Noting that xi ∈ µ−1


i (β) for i = 1, 2, · · · , n, we have

x = f (x1 , x2 , · · · , xn ) ∈ f (µ−1 −1 −1
1 (β), µ2 (β), · · · , µn (β)) = λ
−1
(β).

Hence
λ(x) ≥ β = min µi (xi )
1≤i≤n

and then
λ(x) ≥ sup min µi (xi ). (8.182)
f (x1 ,x2 ,··· ,xn )=x 1≤i≤n

It follows from (8.181) and (8.182) that (8.180) holds.

Remark 8.12: It is possible that the equation f (x1 , x2 , · · · , xn ) = x does


not have a root for some values of x. In this case, we set λ(x) = 0.

Example 8.27: The independence condition in Theorem 8.26 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then

ξ1 (γ) = [−γ, γ] (8.183)

is a triangular uncertain set (−1, 0, 1) with membership function


(
1 − |x|, if − 1 ≤ x ≤ 1
µ1 (x) = (8.184)
0, otherwise,

and
ξ2 (γ) = [γ − 1, 1 − γ] (8.185)
is also a triangular uncertain set (−1, 0, 1) with membership function
(
1 − |x|, if − 1 ≤ x ≤ 1
µ2 (x) = (8.186)
0, otherwise.

Note that ξ1 and ξ2 are not independent, and ξ1 + ξ2 ≡ [−1, 1] whose mem-
bership function is
(
1, if − 1 ≤ x ≤ 1
λ(x) = (8.187)
0, otherwise.

Thus
λ(x) 6= sup µ1 (x1 ) ∧ µ2 (x2 ). (8.188)
x1 +x2 =x
Section 8.6 - Inclusion Relation 213

Therefore, the independence condition cannot be removed.

Exercise 8.46: Let ξ and η be independent uncertain sets with membership


functions µ(x) and ν(x), respectively. Show that ξ + η has a membership
function,
λ(x) = sup µ(x − y) ∧ ν(y). (8.189)
y∈<

Exercise 8.47: Let ξ and η be independent uncertain sets with membership


functions µ(x) and ν(x), respectively. Show that ξ − η has a membership
function,
λ(x) = sup µ(x + y) ∧ ν(y). (8.190)
y∈<

8.6 Inclusion Relation


Let ξ be an uncertain set with membership function µ, and let B be a Borel
set of real numbers. By using the definition of membership function, Liu
[88] presented two measure inversion formulas for calculating the uncertain
measure of inclusion relation,

M{B ⊂ ξ} = inf µ(x), (8.191)


x∈B

M{ξ ⊂ B} = 1 − sup µ(x). (8.192)


x∈B c

Especially, for any point x, Liu [88] also gave a formula for calculating the
uncertain measure of containment relation,

M{x ∈ ξ} = µ(x). (8.193)

A general formula was derived by Yao [180] for calculating the uncertain
measure of inclusion relation between uncertain sets.

Theorem 8.27 (Yao [180]) Let ξ and η be independent uncertain sets with
membership functions µ and ν, respectively. Then

M{ξ ⊂ η} = inf (1 − µ(x)) ∨ ν(x). (8.194)


x∈<

Proof: Note that ξ ∩ η c has a membership function λ(x) = µ(x) ∧ (1 − ν(x)).


It follows from {ξ ⊂ η} ≡ {ξ ∩ η c = ∅} and the second measure inversion
formula that
M{ξ ⊂ η} = M{ξ ∩ η c = ∅}

= M{ξ ∩ η c ⊂ ∅}

= 1 − sup µ(x) ∧ (1 − ν(x))


x∈∅c

= inf (1 − µ(x)) ∨ ν(x).


x∈<
214 Chapter 8 - Uncertain Set

The theorem is proved.

Example 8.28: Consider two special uncertain sets ξ = [1, 2] and η = [0, 3]
that are essentially crisp intervals whose membership functions are
(
1, if 1 ≤ x ≤ 2
µ(x) =
0, otherwise,
(
1, if 0 ≤ x ≤ 3
ν(x) =
0, otherwise,

respectively. Mention that ξ ⊂ η is a completely true relation since [1, 2] is


indeed included in [0, 3]. By using (8.194), we also obtain

M{ξ ⊂ η} = inf (1 − µ(x)) ∨ ν(x) = 1.


x∈<

Example 8.29: Consider two special uncertain sets ξ = [0, 2] and η = [1, 3]
that are essentially crisp intervals whose membership functions are
(
1, if 0 ≤ x ≤ 2
µ(x) =
0, otherwise,
(
1, if 1 ≤ x ≤ 3
ν(x) =
0, otherwise,

respectively. Mention that ξ ⊂ η is a completely false relation since [0, 2] is


not a subset of [1, 3]. By using (8.194), we also obtain

M{ξ ⊂ η} = inf (1 − µ(x)) ∨ ν(x) = 0.


x∈<

Example 8.30: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 , γ4 }


with power set and


 0, if Λ=∅

 1, if Λ=Γ
M{Λ} = (8.195)

 0.8, if γ1 ∈ Λ 6= Γ

γ1 6∈ Λ 6= ∅.

0.2, if

Define two uncertain sets,


(
[0, 3], if γ = γ1 or γ2
ξ(γ) = (8.196)
[1, 2], if γ = γ3 or γ4 ,
Section 8.6 - Inclusion Relation 215

(
[0, 3], if γ = γ1 or γ3
η(γ) = (8.197)
[1, 2], if γ = γ2 or γ4 .
We may verify that ξ and η are independent, and share a common member-
ship function,

 1,
 if 1 ≤ x ≤ 2
µ(x) = 0.8, if 0 ≤ x < 1 or 2 < x ≤ 3 (8.198)

0, otherwise.

Note that
M{ξ ⊂ η} = M{γ1 , γ3 , γ4 } = 0.8. (8.199)
By using (8.194), we also obtain

M{ξ ⊂ η} = inf (1 − µ(x)) ∨ µ(x) = 0.8. (8.200)


x∈<

Exercise 8.48: Let ξ and η be independent uncertain sets with membership


functions µ and ν, respectively. Show that if µ ≤ ν, then

M{ξ ⊂ η} ≥ 0.5. (8.201)

Exercise 8.49: Let ξ and η be independent uncertain sets with membership


functions µ and ν, respectively, and let c be a number between 0.5 and 1. (i)
Construct ξ and η such that

µ≡ν and M{ξ ⊂ η} = c. (8.202)

(ii) Is it possible to re-do (i) when c is below 0.5? (iii) Is it stupid to think
that ξ ⊂ η if and only if µ(x) ≤ ν(x) for all x? (iv) Is it stupid to think that
ξ = η if and only if µ(x) = ν(x) for all x? (Hint: Use (8.195), (8.196) and
(8.197) as a reference.)

Example 8.31: The independence condition in Theorem 8.27 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then

ξ(γ) = [−γ, γ] (8.203)

is a triangular uncertain set (−1, 0, 1) with membership function


(
1 − |x|, if − 1 ≤ x ≤ 1
µ(x) = (8.204)
0, otherwise,

and
η(γ) = [−γ, γ] (8.205)
216 Chapter 8 - Uncertain Set

is also a triangular uncertain set (−1, 0, 1) with membership function


(
1 − |x|, if − 1 ≤ x ≤ 1
ν(x) = (8.206)
0, otherwise.

Note that ξ and η are not independent (in fact, they are the same one), and
M{ξ ⊂ η} = 1. However, by using (8.194), we obtain
M{ξ ⊂ η} = inf (1 − µ(x)) ∨ ν(x) = 0.5 6= 1. (8.207)
x∈<

Thus the independence condition cannot be removed.

8.7 Expected Value


This section will introduce a concept of expected value for nonempty uncer-
tain set (Empty set and half-empty uncertain set have no expected value).
Definition 8.12 (Liu [82]) Let ξ be a nonempty uncertain set. Then the
expected value of ξ is defined by
Z +∞ Z 0
E[ξ] = M{ξ  x}dx − M{ξ  x}dx (8.208)
0 −∞

provided that at least one of the two integrals is finite.


Please note that ξ  x represents “ξ is imaginarily included in [x, +∞)”,
and ξ  x represents “ξ is imaginarily included in (−∞, x]”. What are the
appropriate values of M{ξ  x} and M{ξ  x}? Unfortunately, this problem
is not as simple as you think.
..................................................................
..................... ..............
.............
........... ...........
.
.............. .... ....... ....... ....... ....... ... .........
........
...... ....... ..
. .... .... .......
... ... ......
.
....... ....... ....... ......
..
............ ......................................................... ......
.
.....
.....
........................... ..........
. ......... .
...... ...
. .............. ..
..... ...
...
.......... ... ..
... ...
...... ...
... ...
....
....
.......
ξ≥x .
.
..
... ξx .
.
.
. ξ 6< x ..
.
...
............. ... . . ..
... .. .
.
.... ....... .... .. ...
........ .......... ....... ...
........ ............... .......... ...... .....
..... ..... .... ................ .....
...... .. . .......................... ....... .......
...... ...... .. .... .
...
....... ....... ..
..... ....... ....... ....... ....... ....... .......
........ ........
......... ........
........... ..........
...............
............................. ... .....
. ....... ...............
.....................................

Figure 8.13: {ξ ≥ x} ⊂ {ξ  x} ⊂ {ξ 6< x}

It is clear that the imaginary event {ξ  x} is one between {ξ ≥ x}


and {ξ 6< x}. See Figure 8.13. Intuitively, for the value of M{ξ  x}, it is
too conservative if we take M{ξ ≥ x}, and it is too adventurous if we take
M{ξ 6< x} = 1 − M{ξ < x}. Thus we assign M{ξ  x} the middle value
between M{ξ ≥ x} and 1 − M{ξ < x}. That is,
1
M{ξ  x} = (M{ξ ≥ x} + 1 − M{ξ < x}) . (8.209)
2
Section 8.7 - Expected Value 217

Similarly, we also define

1
M{ξ  x} = (M{ξ ≤ x} + 1 − M{ξ > x}) . (8.210)
2

Example 8.32: Let [a, b] be a crisp interval and assume a > 0 for simplicity.
Then
ξ(γ) ≡ [a, b], ∀γ ∈ Γ

is a special uncertain set. It follows from the definition of M{ξ  x} and


M{ξ  x} that

 1, if x ≤ a

M{ξ  x} = 0.5, if a < x ≤ b

0, if x > b,

M{ξ  x} ≡ 0, ∀x ≤ 0.

Thus
Z a Z b
a+b
E[ξ] = 1dx + 0.5dx = .
0 a 2

Example 8.33: In order to further illustrate the expected value operator,


let us consider an uncertain set,

 [1, 2] with uncertain measure 0.6

ξ= [2, 3] with uncertain measure 0.3

[3, 4] with uncertain measure 0.2.

It follows from the definition of M{ξ  x} and M{ξ  x} that




 1, if x≤1

0.7, if 1<x≤2




M{ξ  x} = 0.3, if 2<x≤3

0.1, if 3<x≤4





0, if x > 4,

M{ξ  x} ≡ 0, ∀x ≤ 0.

Thus
Z 1 Z 2 Z 3 Z 4
E[ξ] = 1dx + 0.7dx + 0.3dx + 0.1dx = 2.1.
0 1 2 3
218 Chapter 8 - Uncertain Set

How to Obtain Expected Value from Membership Function?


Let ξ be an uncertain set with membership function µ. In order to calculate
its expected value via (8.208), we must determine the values of M{ξ  x}
and M{ξ  x} from the membership function µ.

Theorem 8.28 (Liu [84]) Let ξ be a nonempty uncertain set with member-
ship function µ. Then for any real number x, we have
 
1
M{ξ  x} = sup µ(y) + 1 − sup µ(y) , (8.211)
2 y≥x y<x

 
1
M{ξ  x} = sup µ(y) + 1 − sup µ(y) . (8.212)
2 y≤x y>x

Proof: Since the uncertain set ξ has a membership function µ, the second
measure inversion formula tells us that

M{ξ ≥ x} = 1 − sup µ(y),


y<x

M{ξ < x} = 1 − sup µ(y).


y≥x

Thus (8.211) follows from (8.209) immediately. We may also prove (8.212)
similarly.

Theorem 8.29 (Liu [84]) Let ξ be a nonempty uncertain set with member-
ship function µ. Then
Z +∞ Z x0
1 1
E[ξ] = x0 + sup µ(y)dx − sup µ(y)dx (8.213)
2 x0 y≥x 2 −∞ y≤x

where x0 is a point such that µ(x0 ) = 1.

Proof: Since µ achieves 1 at x0 , it follows from Theorem 8.28 that for almost
all x, we have

 1 − y<x
 sup µ(x)/2, if x ≤ x0
M{ξ  x} = (8.214)
 sup µ(x)/2,
 if x > x0
y≥x

and 

 sup µ(x)/2, if x < x0
y≤x
M{ξ  x} = (8.215)
 1 − sup µ(x)/2, if x ≥ x0 .

y>x
Section 8.7 - Expected Value 219

If x0 ≥ 0, then
Z +∞ Z 0
E[ξ] = M{ξ  x}dx − M{ξ  x}dx
0 −∞
Z  x0  Z +∞ Z 0
µ(x) µ(x) µ(x)
= 1 − sup dx + sup dx − sup dx
0 y≤x 2 x0 y≥x 2 −∞ y≤x 2

1 +∞ 1 x0
Z Z
= x0 + sup µ(y)dx − sup µ(y)dx.
2 x0 y≥x 2 −∞ y≤x

If x0 < 0, then
Z +∞ Z 0
E[ξ] = M{ξ  x}dx − M{ξ  x}dx
0 −∞
Z +∞ Z x0 Z 0 
µ(x) µ(x) µ(x)
= sup dx − sup dx − 1 − sup dx
0 y≥x 2 −∞ y≤x 2 x0 y≥x 2

1 +∞ 1 x0
Z Z
= x0 + sup µ(y)dx − sup µ(y)dx.
2 x0 y≥x 2 −∞ y≤x

The theorem is thus proved.

Theorem 8.30 (Liu [84]) Let ξ be an uncertain set with regular membership
function µ. Then

1 +∞ 1 x0
Z Z
E[ξ] = x0 + µ(x)dx − µ(x)dx (8.216)
2 x0 2 −∞

where x0 is a point such that µ(x0 ) = 1.

Proof: Since µ is increasing on (−∞, x0 ] and decreasing on [x0 , +∞), for


almost all x ≥ x0 , we have

sup µ(y) = µ(x); (8.217)


y≥x

and for almost all x ≤ x0 , we have

sup µ(y) = µ(x). (8.218)


y≤x

Thus the theorem follows from (8.213) immediately.

Exercise 8.50: Show that the triangular uncertain set ξ = (a, b, c) has an
expected value
a + 2b + c
E[ξ] = . (8.219)
4
220 Chapter 8 - Uncertain Set

Exercise 8.51: Show that the trapezoidal uncertain set ξ = (a, b, c, d) has
an expected value
a+b+c+d
E[ξ] = . (8.220)
4
Theorem 8.31 (Liu [88]) Let ξ be a nonempty uncertain set with member-
ship function µ. Then
1 1
Z
inf µ−1 (α) + sup µ−1 (α) dα

E[ξ] = (8.221)
2 0
where inf µ−1 (α) and sup µ−1 (α) are the infimum and supremum of the α-cut,
respectively.
Proof: Since ξ is a nonempty uncertain set and has a finite expected value,
we may assume that there exists a point x0 such that µ(x0 ) = 1 (perhaps
after a small perturbation). It is clear that the two integrals
Z +∞ Z 1
sup µ(y)dx and (sup µ−1 (α) − x0 )dα
x0 y≥x 0

make an identical acreage. Thus


Z +∞ Z 1 Z 1
sup µ(y)dx = (sup µ−1 (α) − x0 )dα = sup µ−1 (α)dα − x0 .
x0 y≥x 0 0

Similarly, we may prove


Z x0 Z 1 Z 1
−1
sup µ(y)dx = (x0 − inf µ (α))dα = x0 − inf µ−1 (α)dα.
−∞ y≤x 0 0

It follows from (8.213) that

1 +∞ 1 x0
Z Z
E[ξ] = x0 + sup µ(y)dx − sup µ(y)dx
2 x0 y≥x 2 −∞ y≤x
Z 1   Z 1 
1 1
= x0 + sup µ−1 (α)dα − x0 − x0 − inf µ−1 (α)dα
2 0 2 0
Z 1
1
= (inf µ−1 (α) + sup µ−1 (α))dα.
2 0
The theorem is thus verified.
Theorem 8.32 (Liu [88]) Let ξ1 , ξ2 , · · · , ξn be independent uncertain sets
with regular membership functions µ1 , µ2 , · · · , µn , respectively. If the func-
tion f (x1 , x2 , · · · , xn ) is strictly increasing with respect to x1 , x2 , · · · , xm and
strictly decreasing with respect to xm+1 , xm+2 , · · · , xn , then
ξ = f (ξ1 , ξ2 , · · · , ξn ) (8.222)
Section 8.7 - Expected Value 221

has an expected value


Z 1
1
µ−1 −1

E[ξ] = l (α) + µr (α) dα (8.223)
2 0

where µ−1 −1
l (α) and µr (α) are determined by

µ−1 −1 −1 −1 −1
l (α) = f (µ1l (α), · · · , µml (α), µm+1,r (α), · · · , µnr (α)), (8.224)

−1 −1 −1
µ−1 −1
r (α) = f (µ1r (α), · · · , µmr (α), µm+1,l (α), · · · , µnl (α)). (8.225)

Proof: It follows from Theorems 8.25 and 8.31 immediately.

Exercise 8.52: Let ξ and η be independent and nonnegative uncertain sets


with regular membership functions µ and ν, respectively. Show that
Z 1
1
µ−1 −1 −1 −1

E[ξη] = l (α)νl (α) + µr (α)νr (α) dα. (8.226)
2 0

Exercise 8.53: Let ξ and η be independent and positive uncertain sets with
regular membership functions µ and ν, respectively. Show that

1 1 µ−1 µ−1
  Z  
ξ l (α) r (α)
E = + −1 dα. (8.227)
η 2 0 νr−1 (α) νl (α)

Exercise 8.54: Let ξ and η be independent and positive uncertain sets with
regular membership functions µ and ν, respectively. Show that

1 1 µ−1 µ−1
  Z  
ξ l (α) r (α)
E = + dα. (8.228)
ξ+η 2 0 µ−1 −1
l (α) + νr (α) µ−1 −1
r (α) + νl (α)

Linearity of Expected Value Operator


Theorem 8.33 (Liu [88]) Let ξ and η be independent uncertain sets with
finite expected values. Then for any real numbers a and b, we have

E[aξ + bη] = aE[ξ] + bE[η]. (8.229)

Proof: Denote the membership functions of ξ and η by µ and ν, respectively.


Then
1 1
Z
inf µ−1 (α) + sup µ−1 (α) dα,

E[ξ] =
2 0

1 1
Z
inf ν −1 (α) + sup ν −1 (α) dα.

E[η] =
2 0
222 Chapter 8 - Uncertain Set

Step 1: We first prove E[aξ] = aE[ξ]. The product aξ has an inverse


membership function,
λ−1 (α) = aµ−1 (α).
It follows from Theorem 8.31 that
1 1
Z
inf λ−1 (α) + sup λ−1 (α) dα

E[aξ] =
2 0
a 1
Z
inf µ−1 (α) + sup µ−1 (α) dα = aE[ξ].

=
2 0

Step 2: We then prove E[ξ + η] = E[ξ] + E[η]. The sum ξ + η has an


inverse membership function,
λ−1 (α) = µ−1 (α) + ν −1 (α).
It follows from Theorem 8.31 that
1 1
Z
inf λ−1 (α) + sup λ−1 (α) dα

E[ξ + η] =
2 0
1 1
Z
inf µ−1 (α) + sup µ−1 (α) dα

=
2 0
1 1
Z
inf ν −1 (α) + sup ν −1 (α) dα

+
2 0
= E[ξ] + E[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1
and 2 that
E[aξ + bη] = E[aξ] + E[bη] = aE[ξ] + bE[η].
The theorem is proved.

Example 8.34: Generally speaking, the expected value operator is not


necessarily linear if the independence is not assumed. For example, take an
uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 } with power set and M{γ1 } =
0.6, M{γ2} = 0.3, M{γ3} = 0.2. Define two uncertain sets as follows:

ξ(γ) =
  [1, 4], if γ = γ1
  [1, 3], if γ = γ2
  [1, 2], if γ = γ3,

η(γ) =
  [1, 5], if γ = γ1
  [1, 2], if γ = γ2
  [1, 4], if γ = γ3.

Note that ξ and η are not independent, and their sum is

(ξ + η)(γ) =
  [2, 9], if γ = γ1
  [2, 5], if γ = γ2
  [2, 6], if γ = γ3.


It is easy to verify that E[ξ] = 2.2, E[η] = 2.5 and E[ξ + η] = 4.75. Thus we
have
E[ξ + η] > E[ξ] + E[η].
If the uncertain sets are defined by

ξ(γ) =
  [1, 4], if γ = γ1
  [1, 3], if γ = γ2
  [1, 2], if γ = γ3,

η(γ) =
  [1, 4], if γ = γ1
  [1, 6], if γ = γ2
  [1, 2], if γ = γ3,

then

(ξ + η)(γ) =
  [2, 8], if γ = γ1
  [2, 9], if γ = γ2
  [2, 4], if γ = γ3.

It is easy to verify that E[ξ] = 2.2, E[η] = 2.6 and E[ξ + η] = 4.75. Thus we
have
E[ξ + η] < E[ξ] + E[η].
Therefore, the independence condition cannot be removed.
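The two counterexamples above are easy to check numerically. The sketch below is an illustration only: it assumes the half-sum value M{ξ ≽ x} = (M{inf ξ ≥ x} + M{sup ξ ≥ x})/2 for nonnegative interval-valued uncertain sets, the same middle-value idea as (8.232) in the next section, and it extends M from the three atoms to all events by duality, e.g. M{γ1, γ2} = 1 − M{γ3} = 0.8.

    import numpy as np

    # Uncertain measure on {g1, g2, g3}, extended to all events by duality.
    M = {frozenset(): 0.0,
         frozenset({"g1"}): 0.6, frozenset({"g2"}): 0.3, frozenset({"g3"}): 0.2,
         frozenset({"g1", "g2"}): 0.8, frozenset({"g1", "g3"}): 0.7,
         frozenset({"g2", "g3"}): 0.4, frozenset({"g1", "g2", "g3"}): 1.0}

    def expected_value(intervals, hi=10.0, n=20001):
        """E of a nonnegative interval-valued uncertain set {gamma: (lo, hi)},
        integrating the half-sum measure over [0, hi]."""
        xs = np.linspace(0.0, hi, n)
        total = 0.0
        for x in xs:
            sure  = frozenset(g for g, (a, b) in intervals.items() if a >= x)
            maybe = frozenset(g for g, (a, b) in intervals.items() if b >= x)
            total += 0.5 * (M[sure] + M[maybe])
        return total * (hi / n)

    xi  = {"g1": (1, 4), "g2": (1, 3), "g3": (1, 2)}
    eta = {"g1": (1, 5), "g2": (1, 2), "g3": (1, 4)}
    s   = {"g1": (2, 9), "g2": (2, 5), "g3": (2, 6)}   # xi + eta
    print(expected_value(xi), expected_value(eta), expected_value(s))
    # ~2.2, ~2.5, ~4.75, so E[xi + eta] > E[xi] + E[eta]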

8.8 Variance
The variance of an uncertain set provides a measure of the spread of its membership function around the expected value.

Definition 8.13 (Liu [85]) Let ξ be an uncertain set with finite expected
value e. Then the variance of ξ is defined by

V[ξ] = E[(ξ − e)²].    (8.230)

This definition says that the variance is just the expected value of (ξ − e)². Since (ξ − e)² is a nonnegative uncertain set, we also have

V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≽ x} dx.    (8.231)

Please note that (ξ − e)² ≽ x represents “(ξ − e)² is imaginarily included in [x, +∞)”. What is the appropriate value of M{(ξ − e)² ≽ x}? Intuitively, it is too conservative if we take the value M{(ξ − e)² ≥ x}, and it is too adventurous if we take the value 1 − M{(ξ − e)² < x}. Thus we assign M{(ξ − e)² ≽ x} the middle value between them. That is,

M{(ξ − e)² ≽ x} = (1/2) (M{(ξ − e)² ≥ x} + 1 − M{(ξ − e)² < x}).    (8.232)
Theorem 8.34 If ξ is an uncertain set with finite expected value, a and b
are real numbers, then
V[aξ + b] = a²V[ξ].    (8.233)

Proof: If ξ has an expected value e, then aξ + b has an expected value ae + b. It follows from the definition of variance that

V[aξ + b] = E[(aξ + b − ae − b)²] = a²E[(ξ − e)²] = a²V[ξ].

Theorem 8.35 Let ξ be an uncertain set with expected value e. Then V [ξ] =
0 if and only if ξ = {e} almost surely.
Proof: We first assume V[ξ] = 0. It follows from equation (8.231) that

∫_0^{+∞} M{(ξ − e)² ≽ x} dx = 0,

which implies M{(ξ − e)² ≽ x} = 0 for any x > 0. Hence M{ξ = {e}} = 1. Conversely, assume M{ξ = {e}} = 1. Then we have M{(ξ − e)² ≽ x} = 0 for any x > 0. Thus

V[ξ] = ∫_0^{+∞} M{(ξ − e)² ≽ x} dx = 0.

The theorem is proved.

How to Obtain Variance from Membership Function?


Let ξ be an uncertain set with membership function µ. In order to calculate its variance by (8.231), we must determine the value of M{(ξ − e)² ≽ x} from the membership function µ.
Theorem 8.36 (Liu [95]) Let ξ be a nonempty uncertain set with membership function µ. Then for any real numbers e and x, we have

M{(ξ − e)² ≽ x} = (1/2) ( sup_{(y−e)²≥x} µ(y) + 1 − sup_{(y−e)²<x} µ(y) ).    (8.234)

Proof: Since ξ is an uncertain set with membership function µ, it follows from the measure inversion formula that for any real numbers e and x, we have

M{(ξ − e)² ≥ x} = 1 − sup_{(y−e)²<x} µ(y),

M{(ξ − e)² < x} = 1 − sup_{(y−e)²≥x} µ(y).

The equation (8.234) is thus proved by (8.232).


Theorem 8.37 (Liu [95]) Let ξ be an uncertain set with membership function µ and finite expected value e. Then

V[ξ] = (1/2) ∫_0^{+∞} ( sup_{(y−e)²≥x} µ(y) + 1 − sup_{(y−e)²<x} µ(y) ) dx.    (8.235)

Proof: This theorem follows from (8.231) and (8.234) immediately.
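As a quick numerical illustration of (8.235), the sketch below evaluates the variance of the triangular uncertain set (0, 1, 2), whose expected value is e = 1 by (8.221); the grid sizes are arbitrary, and the code is a sketch rather than a general-purpose implementation.

    import numpy as np

    def mu_tri(a, b, c):
        """Membership function of the triangular uncertain set (a, b, c)."""
        def mu(y):
            if a <= y <= b: return (y - a) / (b - a)
            if b < y <= c:  return (c - y) / (c - b)
            return 0.0
        return mu

    def variance(mu, e, lo, hi, n=4001):
        ys = np.linspace(lo, hi, n)                 # grid on the support
        mus = np.array([mu(y) for y in ys])
        d2 = (ys - e) ** 2
        xs = np.linspace(0.0, d2.max(), n)          # grid for (8.235)
        vals = [0.5 * (mus[d2 >= x].max(initial=0.0)
                       + 1.0 - mus[d2 < x].max(initial=0.0)) for x in xs]
        return np.trapz(vals, xs)

    print(variance(mu_tri(0, 1, 2), e=1.0, lo=0, hi=2))   # ~1/6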



8.9 Distance
Definition 8.14 (Liu [85]) The distance between nonempty uncertain sets ξ
and η is defined as
d(ξ, η) = E[|ξ − η|]. (8.236)

That is, the distance between ξ and η is just the expected value of |ξ − η|. Since |ξ − η| is a nonnegative uncertain set, we have

d(ξ, η) = ∫_0^{+∞} M{|ξ − η| ≽ x} dx.    (8.237)

Please note that |ξ − η| ≽ x represents “|ξ − η| is imaginarily included in [x, +∞)”. What is the appropriate value of M{|ξ − η| ≽ x}? Intuitively, it is too conservative if we take the value M{|ξ − η| ≥ x}, and it is too adventurous if we take the value 1 − M{|ξ − η| < x}. Thus we assign M{|ξ − η| ≽ x} the middle value between them. That is,

M{|ξ − η| ≽ x} = (1/2) (M{|ξ − η| ≥ x} + 1 − M{|ξ − η| < x}).    (8.238)
2
Theorem 8.38 (Liu [95]) Let ξ and η be nonempty uncertain sets. Then for any real number x, we have

M{|ξ − η| ≽ x} = (1/2) ( sup_{|y|≥x} λ(y) + 1 − sup_{|y|<x} λ(y) )    (8.239)

where λ is the membership function of ξ − η.

Proof: Since ξ − η is an uncertain set with membership function λ, it follows from the measure inversion formula that for any real number x, we have

M{|ξ − η| ≥ x} = 1 − sup_{|y|<x} λ(y),

M{|ξ − η| < x} = 1 − sup_{|y|≥x} λ(y).

The equation (8.239) is thus proved by (8.238).

Theorem 8.39 (Liu [95]) Let ξ and η be nonempty uncertain sets. Then the distance between ξ and η is

d(ξ, η) = (1/2) ∫_0^{+∞} ( sup_{|y|≥x} λ(y) + 1 − sup_{|y|<x} λ(y) ) dx    (8.240)

where λ is the membership function of ξ − η.

Proof: The theorem follows from (8.237) and (8.239) immediately.



8.10 Entropy
This section defines an entropy as the degree of difficulty of predicting the
realization of an uncertain set.

Definition 8.15 (Liu [85]) Suppose that ξ is an uncertain set with member-
ship function µ. Then its entropy is defined by
H[ξ] = ∫_{−∞}^{+∞} S(µ(x)) dx    (8.241)

where S(t) = −t ln t − (1 − t) ln(1 − t).

Remark 8.13: Note that the entropy (8.241) has the same form as de Luca and Termini's entropy for fuzzy sets [24].

Remark 8.14: If ξ is a discrete uncertain set taking values in {x1, x2, · · · }, then the entropy becomes

H[ξ] = Σ_{i=1}^{∞} S(µ(x_i)).    (8.242)

Example 8.35: A crisp set A of real numbers is a special uncertain set


ξ(γ) ≡ A. Its membership function is

µ(x) =
  1, if x ∈ A
  0, if x ∉ A

and entropy is

H[ξ] = ∫_{−∞}^{+∞} S(µ(x)) dx = ∫_{−∞}^{+∞} 0 dx = 0.

This means a crisp set has entropy 0.

Exercise 8.55: Let ξ = (a, b, c) be a triangular uncertain set. Show that its entropy is

H[ξ] = (c − a)/2.    (8.243)

Exercise 8.56: Let ξ = (a, b, c, d) be a trapezoidal uncertain set. Show that its entropy is

H[ξ] = (b − a + d − c)/2.    (8.244)
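This is easy to confirm numerically; the sketch below integrates S(µ(x)) over the support of the trapezoidal uncertain set (1, 2, 4, 7), for which (8.244) gives (b − a + d − c)/2 = 2. The clipping constant is only there to avoid log(0).

    import numpy as np

    def S(t):
        t = np.clip(t, 1e-12, 1 - 1e-12)   # S(0) = S(1) = 0; avoid log(0)
        return -t * np.log(t) - (1 - t) * np.log(1 - t)

    a, b, c, d = 1.0, 2.0, 4.0, 7.0
    x = np.linspace(a, d, 200001)
    mu = np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)
    print(np.trapz(S(mu), x))          # ~2.0
    print((b - a + d - c) / 2)         # exact value from (8.244)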
Theorem 8.40 Let ξ be an uncertain set. Then H[ξ] ≥ 0 and equality holds
if ξ is essentially a crisp set.

Proof: The nonnegativity is clear. In addition, when an uncertain set tends


to a crisp set, its entropy tends to the minimum value 0.
Theorem 8.41 Let ξ be an uncertain set on the interval [a, b]. Then
H[ξ] ≤ (b − a) ln 2 (8.245)
and equality holds if ξ has a membership function µ(x) = 0.5 on [a, b].
Proof: The theorem follows from the fact that the function S(t) reaches its
maximum value ln 2 at t = 0.5.
Theorem 8.42 Let ξ be an uncertain set, and let ξ^c be its complement. Then

H[ξ^c] = H[ξ].    (8.246)
Proof: Denote the membership function of ξ by µ. Then its complement ξ^c has a membership function 1 − µ(x). It follows from the definition of entropy that

H[ξ^c] = ∫_{−∞}^{+∞} S(1 − µ(x)) dx = ∫_{−∞}^{+∞} S(µ(x)) dx = H[ξ].
The theorem is proved.
Theorem 8.43 (Yao-Ke [175]) Let ξ be an uncertain set with regular membership function µ. Then

H[ξ] = ∫_0^1 (µ_l^{-1}(α) − µ_r^{-1}(α)) ln(α/(1 − α)) dα.    (8.247)

Proof: It is clear that S(α) = −α ln α − (1 − α) ln(1 − α) is a differentiable function whose derivative is

S′(α) = −ln(α/(1 − α)).

Let x_0 be a point such that µ(x_0) = 1. Then we have

H[ξ] = ∫_{−∞}^{+∞} S(µ(x)) dx = ∫_{−∞}^{x_0} S(µ(x)) dx + ∫_{x_0}^{+∞} S(µ(x)) dx
     = ∫_{−∞}^{x_0} ∫_0^{µ(x)} S′(α) dα dx + ∫_{x_0}^{+∞} ∫_0^{µ(x)} S′(α) dα dx.

It follows from the Fubini theorem that

H[ξ] = ∫_0^1 ∫_{µ_l^{-1}(α)}^{x_0} S′(α) dx dα + ∫_0^1 ∫_{x_0}^{µ_r^{-1}(α)} S′(α) dx dα
     = ∫_0^1 (x_0 − µ_l^{-1}(α)) S′(α) dα + ∫_0^1 (µ_r^{-1}(α) − x_0) S′(α) dα
     = ∫_0^1 (µ_r^{-1}(α) − µ_l^{-1}(α)) S′(α) dα
     = ∫_0^1 (µ_l^{-1}(α) − µ_r^{-1}(α)) ln(α/(1 − α)) dα.

The theorem is verified.

Positive Linearity of Entropy


Theorem 8.44 (Yao-Ke [175]) Let ξ and η be independent uncertain sets.
Then for any real numbers a and b, we have

H[aξ + bη] = |a|H[ξ] + |b|H[η]. (8.248)

Proof: Without loss of generality, assume the uncertain sets ξ and η have regular membership functions µ and ν, respectively.

Step 1: We prove H[aξ] = |a|H[ξ]. If a > 0, then the left and right inverse membership functions of aξ are

λ_l^{-1}(α) = aµ_l^{-1}(α),   λ_r^{-1}(α) = aµ_r^{-1}(α).

It follows from Theorem 8.43 that

H[aξ] = ∫_0^1 (aµ_l^{-1}(α) − aµ_r^{-1}(α)) ln(α/(1 − α)) dα = aH[ξ] = |a|H[ξ].

If a = 0, then we immediately have H[aξ] = 0 = |a|H[ξ]. If a < 0, then we have

λ_l^{-1}(α) = aµ_r^{-1}(α),   λ_r^{-1}(α) = aµ_l^{-1}(α)

and

H[aξ] = ∫_0^1 (aµ_r^{-1}(α) − aµ_l^{-1}(α)) ln(α/(1 − α)) dα = (−a)H[ξ] = |a|H[ξ].

Thus we always have H[aξ] = |a|H[ξ].

Step 2: We prove H[ξ + η] = H[ξ] + H[η]. Note that the left and right inverse membership functions of ξ + η are

λ_l^{-1}(α) = µ_l^{-1}(α) + ν_l^{-1}(α),   λ_r^{-1}(α) = µ_r^{-1}(α) + ν_r^{-1}(α).

It follows from Theorem 8.43 that

H[ξ + η] = ∫_0^1 (λ_l^{-1}(α) − λ_r^{-1}(α)) ln(α/(1 − α)) dα
         = ∫_0^1 (µ_l^{-1}(α) + ν_l^{-1}(α) − µ_r^{-1}(α) − ν_r^{-1}(α)) ln(α/(1 − α)) dα
         = H[ξ] + H[η].

Step 3: Finally, for any real numbers a and b, it follows from Steps 1 and 2 that

H[aξ + bη] = H[aξ] + H[bη] = |a|H[ξ] + |b|H[η].



The theorem is proved.
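Theorem 8.44 can be illustrated numerically through (8.247). The sketch below assumes triangular uncertain sets, whose inverse membership functions are linear in α, and positive coefficients a and b, so that the left and right inverse membership functions of aξ + bη are aµ_l^{-1} + bν_l^{-1} and aµ_r^{-1} + bν_r^{-1}.

    import numpy as np

    alpha = np.linspace(1e-6, 1 - 1e-6, 200001)
    w = np.log(alpha / (1 - alpha))

    def H(l, r):
        """Entropy by (8.247) from left/right inverse membership values."""
        return np.trapz((l - r) * w, alpha)

    l1, r1 = 1 + alpha, 3 - alpha         # xi  = (1, 2, 3), H[xi]  = 1
    l2, r2 = 2 * alpha, 6 - 4 * alpha     # eta = (0, 2, 6), H[eta] = 3
    a, b = 2.0, 3.0
    print(H(a * l1 + b * l2, a * r1 + b * r2))   # ~11
    print(a * H(l1, r1) + b * H(l2, r2))         # ~11 as well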

Exercise 8.57: Let ξ be an uncertain set, and let A be a crisp set. Show
that
H[ξ + A] = H[ξ]. (8.249)
That is, the entropy is invariant under arbitrary translations.

Example 8.36: The independence condition in Theorem 8.44 cannot be


removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
Borel algebra and Lebesgue measure. Then

ξ(γ) = [−γ, γ] (8.250)

is a triangular uncertain set (−1, 0, 1) with entropy

H[ξ] = 1, (8.251)

and
η(γ) = [γ − 1, 1 − γ] (8.252)
is also a triangular uncertain set (−1, 0, 1) with entropy

H[η] = 1. (8.253)

Note that ξ and η are not independent, and ξ + η ≡ [−1, 1] whose entropy is

H[ξ + η] = 0. (8.254)

Thus
H[ξ + η] 6= H[ξ] + H[η]. (8.255)
Therefore, the independence condition cannot be removed.

8.11 Conditional Membership Function


What is the conditional membership function of an uncertain set ξ after it
has been learned that some event A has occurred? This section will answer
this question. At first, it follows from the definition of conditional uncertain
measure that
M{B ⊂ ξ | A} =
  M{(B ⊂ ξ) ∩ A}/M{A},       if M{(B ⊂ ξ) ∩ A}/M{A} < 0.5
  1 − M{(B ⊄ ξ) ∩ A}/M{A},   if M{(B ⊄ ξ) ∩ A}/M{A} < 0.5
  0.5,                       otherwise,

M{ξ ⊂ B | A} =
  M{(ξ ⊂ B) ∩ A}/M{A},       if M{(ξ ⊂ B) ∩ A}/M{A} < 0.5
  1 − M{(ξ ⊄ B) ∩ A}/M{A},   if M{(ξ ⊄ B) ∩ A}/M{A} < 0.5
  0.5,                       otherwise.

Definition 8.16 (Liu [95]) Let ξ be an uncertain set, and let A be an event
with M{A} > 0. Then the conditional uncertain set ξ given A is said to have
a membership function µ(x|A) if for any Borel set B of real numbers, we
have
M{B ⊂ ξ|A} = inf_{x∈B} µ(x|A),    (8.256)

M{ξ ⊂ B|A} = 1 − sup_{x∈B^c} µ(x|A).    (8.257)

Theorem 8.45 (Yao [186]) Let ξ be a totally ordered uncertain set on a continuous uncertainty space, and let A be an event with M{A} > 0. Then the conditional membership function of ξ given A exists, and

µ(x|A) = M{x ∈ ξ|A}.    (8.258)
Proof: Since the original uncertain measure M is continuous, the conditional
uncertain measure M{·|A} is also continuous. Thus the conditional uncertain
set ξ given A is a totally ordered uncertain set on a continuous uncertainty
space. It follows from Theorem 8.14 that the conditional membership func-
tion exists, and µ(x|A) = M{x ∈ ξ|A}. The proof is complete.

Example 8.37: The total order condition in Theorem 8.45 cannot be re-
moved. For example, take an uncertainty space (Γ, L, M) to be {γ1 , γ2 , γ3 , γ4 }
with power set and

M{Λ} =
  0,   if Λ = ∅
  1,   if Λ = Γ
  0.5, otherwise.    (8.259)

Then

ξ(γ) =
  [1, 4], if γ = γ1
  [1, 3], if γ = γ2
  [2, 4], if γ = γ3
  [2, 3], if γ = γ4    (8.260)

is a non-totally ordered uncertain set on a continuous uncertainty space, but has a membership function

µ(x) =
  1,   if 2 ≤ x ≤ 3
  0.5, if 1 ≤ x < 2 or 3 < x ≤ 4
  0,   otherwise.    (8.261)


However, the conditional uncertain measure given A = {γ1, γ2, γ3} is

M{Λ|A} =
  0,   if Λ ∩ A = ∅
  1,   if Λ ⊃ A
  0.5, otherwise.    (8.262)

If the conditional uncertain set ξ given A has a membership function, then

µ(x|A) =
  1,   if 2 ≤ x ≤ 3
  0.5, if 1 ≤ x < 2 or 3 < x ≤ 4
  0,   otherwise.    (8.263)

Taking B = [1.5, 3.5], we obtain

M{ξ ⊂ B|A} = M{γ4 |A} = 0 ≠ 0.5 = 1 − sup_{x∈B^c} µ(x|A).    (8.264)

That is, the second measure inversion formula is not valid and then the con-
ditional membership function does not exist. Thus the total order condition
cannot be removed.

Example 8.38: The continuity condition in Theorem 8.45 cannot be re-


moved. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with
power set and

M{Λ} =
  0,   if Λ = ∅
  1,   if Λ = Γ
  0.5, otherwise.    (8.265)

Then

ξ(γ) = (−γ, γ), ∀γ ∈ [0, 1]    (8.266)

is a totally ordered uncertain set on a discontinuous uncertainty space, but has a membership function

µ(x) =
  0.5, if −1 < x < 1
  0,   otherwise.    (8.267)

However, the conditional uncertain measure given A = (0, 1) is

M{Λ|A} =
  0,   if Λ ∩ A = ∅
  1,   if Λ ⊃ A
  0.5, otherwise.    (8.268)

If the conditional uncertain set ξ given A has a membership function, then

µ(x|A) =
  1,   if x = 0
  0.5, if −1 < x < 0 or 0 < x < 1
  0,   otherwise.    (8.269)


Taking B = (−1, 1), we obtain

M{B ⊂ ξ|A} = M{1|A} = 0 ≠ 0.5 = inf_{x∈B} µ(x|A).    (8.270)

That is, the first measure inversion formula is not valid and then the con-
ditional membership function does not exist. Thus the continuity condition
cannot be removed.
Theorem 8.46 (Yao [186]) Let ξ and η be independent uncertain sets with membership functions µ and ν, respectively. Then for any real number a, the conditional uncertain set η given a ∈ ξ has a membership function

ν(y|a ∈ ξ) =
  ν(y)/µ(a),               if ν(y) < µ(a)/2
  (ν(y) + µ(a) − 1)/µ(a),  if ν(y) > 1 − µ(a)/2
  0.5,                     otherwise.    (8.271)
Proof: In order to prove that ν(y|a ∈ ξ) is the membership function of the conditional uncertain set η given a ∈ ξ, we must verify the two measure inversion formulas

M{B ⊂ η|a ∈ ξ} = inf_{y∈B} ν(y|a ∈ ξ),    (8.272)

M{η ⊂ B|a ∈ ξ} = 1 − sup_{y∈B^c} ν(y|a ∈ ξ).    (8.273)

First, for any Borel set B of real numbers, by using the definition of conditional uncertainty and the independence of ξ and η, we have

M{B ⊂ η|a ∈ ξ} =
  M{B ⊂ η}/M{a ∈ ξ},       if M{B ⊂ η}/M{a ∈ ξ} < 0.5
  1 − M{B ⊄ η}/M{a ∈ ξ},   if M{B ⊄ η}/M{a ∈ ξ} < 0.5
  0.5,                     otherwise.

Since

M{B ⊂ η} = inf_{y∈B} ν(y),   M{B ⊄ η} = 1 − inf_{y∈B} ν(y),   M{a ∈ ξ} = µ(a),

we get

M{B ⊂ η|a ∈ ξ} =
  (inf_{y∈B} ν(y))/µ(a),              if inf_{y∈B} ν(y) < µ(a)/2
  (inf_{y∈B} ν(y) + µ(a) − 1)/µ(a),   if inf_{y∈B} ν(y) > 1 − µ(a)/2
  0.5,                                otherwise.


That is,

M{B ⊂ η|a ∈ ξ} = inf_{y∈B} ν(y|a ∈ ξ).

The first measure inversion formula is verified. Next, by using the definition of conditional uncertainty and the independence of ξ and η, we have

M{η ⊂ B|a ∈ ξ} =
  M{η ⊂ B}/M{a ∈ ξ},       if M{η ⊂ B}/M{a ∈ ξ} < 0.5
  1 − M{η ⊄ B}/M{a ∈ ξ},   if M{η ⊄ B}/M{a ∈ ξ} < 0.5
  0.5,                     otherwise.

Since

M{η ⊂ B} = 1 − sup_{y∈B^c} ν(y),   M{η ⊄ B} = sup_{y∈B^c} ν(y),   M{a ∈ ξ} = µ(a),

we get

M{η ⊂ B|a ∈ ξ} =
  (1 − sup_{y∈B^c} ν(y))/µ(a),     if sup_{y∈B^c} ν(y) > 1 − µ(a)/2
  (µ(a) − sup_{y∈B^c} ν(y))/µ(a),  if sup_{y∈B^c} ν(y) < µ(a)/2
  0.5,                             otherwise.

That is,

M{η ⊂ B|a ∈ ξ} = 1 − sup_{y∈B^c} ν(y|a ∈ ξ).

The second measure inversion formula is verified. Hence ν(y|a ∈ ξ) is the membership function of the conditional uncertain set η given a ∈ ξ.
Figure 8.14: Membership Functions ν(y) and ν(y|a ∈ ξ)
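Formula (8.271) transcribes directly into code. The sketch below assumes µ(a) > 0; as Figure 8.14 suggests, small values of ν(y) are scaled up, large values are renormalized, and the middle band collapses to 0.5. The function name is illustrative.

    def conditional_membership(nu_y, mu_a):
        """nu(y | a in xi) computed from nu(y) and mu(a) by (8.271)."""
        if nu_y < mu_a / 2:
            return nu_y / mu_a
        if nu_y > 1 - mu_a / 2:
            return (nu_y + mu_a - 1) / mu_a
        return 0.5

    for v in (0.0, 0.2, 0.5, 0.7, 1.0):
        print(v, conditional_membership(v, 0.8))
    # 0.0 -> 0.0, 0.2 -> 0.25, 0.5 -> 0.5, 0.7 -> 0.625, 1.0 -> 1.0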

Exercise 8.58: Let ξ1, ξ2, · · · , ξm, η be independent uncertain sets with membership functions µ1, µ2, · · · , µm, ν, respectively. For any real numbers a1, a2, · · · , am, show that the conditional uncertain set η given a1 ∈ ξ1, a2 ∈ ξ2, · · · , am ∈ ξm has a membership function

ν∗(y) =
  ν(y)/µ0,              if ν(y) < µ0/2
  (ν(y) + µ0 − 1)/µ0,   if ν(y) > 1 − µ0/2
  0.5,                  otherwise,

where µ0 = min_{1≤i≤m} µi(ai).

8.12 Bibliographic Notes


In order to model unsharp concepts like “young”, “tall” and “most”, uncer-
tain set was proposed by Liu [82] in 2010. After that, membership function
was presented by Liu [88] in 2012 to describe uncertain sets. However, not
all uncertain sets have membership functions. Liu [99] proved that totally
ordered uncertain sets on a continuous uncertainty space always have mem-
bership functions. In addition, Liu [91] defined the independence of uncertain
sets, and provided the operational law through membership functions. Yao
[180] derived a formula for calculating the uncertain measure of inclusion
relation between uncertain sets.
The expected value of an uncertain set was defined by Liu [82]. Next, Liu [84] gave a formula for calculating the expected value by membership function, and Liu [88] provided a formula by inverse membership function. Based on
the expected value operator, Liu [85] presented the variance and distance
between uncertain sets, and Yang-Gao [159] investigated the moments of
uncertain set.
The entropy was presented by Liu [85] as the degree of difficulty of pre-
dicting the realization of an uncertain set. Some formulas were also provided
by Yao-Ke [175] for calculating the value of entropy.
Conditional uncertain set was first investigated by Liu [82] and condi-
tional membership function was formally defined by Liu [95]. Furthermore,
Yao [186] presented some criteria for judging the existence of conditional
membership function.
Chapter 9

Uncertain Logic

Uncertain logic is a methodology for calculating the truth values of uncertain


propositions via uncertain set theory. This chapter will introduce individual
feature data, uncertain quantifier, uncertain subject, uncertain predicate,
uncertain proposition, and truth value. Uncertain logic may provide a flexible
means for extracting linguistic summary from a collection of raw data.

9.1 Individual Feature Data


At first, we should have a universe A of individuals we are talking about.
Without loss of generality, we may assume that A consists of n individuals
and is represented by
A = {a1 , a2 , · · · , an }. (9.1)
In order to deal with the universe A, we should have feature data of all
individuals a1 , a2 , · · · , an . When we talk about “those days are warm”, we
should know the individual feature data of all days, for example,

A = {22, 23, 25, 28, 30, 32, 36} (9.2)

whose elements are temperatures in centigrades. When we talk about “those


students are young”, we should know the individual feature data of all stu-
dents, for example,

A = {21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40} (9.3)

whose elements are ages in years. When we talk about “those sportsmen
are tall”, we should know the individual feature data of all sportsmen, for
example,

A = {175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196}    (9.4)
whose elements are heights in centimeters.

Sometimes the individual feature data are represented by vectors rather than a scalar number. When we talk about “those young students are tall”, we should know the individual feature data of all students, for example,

A = { (24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188),
      (28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188),
      (38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170) }    (9.5)

whose elements are ages and heights in years and centimeters, respectively.

9.2 Uncertain Quantifier


If we want to represent all individuals in the universe A, we use the universal
quantifier (∀) and
∀ = “for all”. (9.6)
If we want to represent some (at least one) individuals, we use the existential
quantifier (∃) and
∃ = “there exists at least one”. (9.7)
In addition to the two quantifiers, there are numerous imprecise quantifiers in
human language, for example, almost all, almost none, many, several, some,
most, a few, about half. This section will model them by the tool of uncertain
quantifier.

Definition 9.1 (Liu [85]) Uncertain quantifier is an uncertain set represent-


ing the number of individuals.

Example 9.1: The universal quantifier (∀) on the universe A is a special


uncertain quantifier,
∀ ≡ {n} (9.8)
whose membership function is
λ(x) =
  1, if x = n
  0, otherwise.    (9.9)

Example 9.2: The existential quantifier (∃) on the universe A is a special


uncertain quantifier,
∃ ≡ {1, 2, · · · , n} (9.10)
whose membership function is
λ(x) =
  0, if x = 0
  1, otherwise.    (9.11)

Example 9.3: The quantifier “there does not exist one” on the universe A
is a special uncertain quantifier

Q ≡ {0} (9.12)

whose membership function is


λ(x) =
  1, if x = 0
  0, otherwise.    (9.13)

Example 9.4: The quantifier “there exist exactly m” on the universe A is a


special uncertain quantifier
Q ≡ {m} (9.14)
whose membership function is
λ(x) =
  1, if x = m
  0, otherwise.    (9.15)

Example 9.5: The quantifier “there exist at least m” on the universe A is


a special uncertain quantifier

Q ≡ {m, m + 1, · · · , n} (9.16)

whose membership function is


λ(x) =
  1, if m ≤ x ≤ n
  0, if 0 ≤ x < m.    (9.17)

Example 9.6: The quantifier “there exist at most m” on the universe A is


a special uncertain quantifier

Q ≡ {0, 1, 2, · · · , m} (9.18)

whose membership function is


λ(x) =
  1, if 0 ≤ x ≤ m
  0, if m < x ≤ n.    (9.19)

Example 9.7: The uncertain quantifier Q of “almost all ” on the universe A


may have a membership function


λ(x) =
  0,             if 0 ≤ x ≤ n − 5
  (x − n + 5)/3, if n − 5 ≤ x ≤ n − 2
  1,             if n − 2 ≤ x ≤ n.    (9.20)


Figure 9.1: Membership Function of Quantifier “almost all”

Example 9.8: The uncertain quantifier Q of “almost none” on the universe


A may have a membership function


λ(x) =
  1,         if 0 ≤ x ≤ 2
  (5 − x)/3, if 2 ≤ x ≤ 5
  0,         if 5 ≤ x ≤ n.    (9.21)

Figure 9.2: Membership Function of Quantifier “almost none”

Example 9.9: The uncertain quantifier Q of “about 10 ” on the universe A


may have a membership function


λ(x) =
  0,          if 0 ≤ x ≤ 7
  (x − 7)/2,  if 7 ≤ x ≤ 9
  1,          if 9 ≤ x ≤ 11
  (13 − x)/2, if 11 ≤ x ≤ 13
  0,          if 13 ≤ x ≤ n.    (9.22)

Figure 9.3: Membership Function of Quantifier “about 10”

Example 9.10: In many cases, it is more convenient for us to use a percentage than an absolute quantity. For example, we may use the uncertain quantifier Q of “about 70%”. In this case, a possible membership function of Q is

λ(x) =
  0,           if 0 ≤ x ≤ 0.6
  20(x − 0.6), if 0.6 ≤ x ≤ 0.65
  1,           if 0.65 ≤ x ≤ 0.75
  20(0.8 − x), if 0.75 ≤ x ≤ 0.8
  0,           if 0.8 ≤ x ≤ 1.    (9.23)

Figure 9.4: Membership Function of Quantifier “about 70%”
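In computations, quantifiers such as (9.20)-(9.23) are conveniently encoded as trapezoidal membership functions determined by four corner points; the helper below is an illustrative sketch, with “about 70%” of (9.23) corresponding to the corners (0.6, 0.65, 0.75, 0.8).

    def trapezoid(a, b, c, d):
        """Membership function rising on [a, b], equal to 1 on [b, c],
        falling on [c, d], and zero elsewhere."""
        def lam(x):
            if x <= a or x >= d: return 0.0
            if x < b:  return (x - a) / (b - a)
            if x <= c: return 1.0
            return (d - x) / (d - c)
        return lam

    about_70 = trapezoid(0.6, 0.65, 0.75, 0.8)
    print(about_70(0.62), about_70(0.70), about_70(0.78))   # 0.4 1.0 0.4
    # The negated quantifier of Definition 9.4 below is then simply
    # lambda x: 1 - about_70(x).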

Definition 9.2 An uncertain quantifier is said to be unimodal if its mem-


bership function is unimodal.

Example 9.11: The uncertain quantifiers “almost all”, “almost none”,


“about 10” and “about 70%” are unimodal.

Definition 9.3 An uncertain quantifier is said to be monotone if its mem-


bership function is monotone. Especially, an uncertain quantifier is said to be
increasing if its membership function is increasing; and an uncertain quanti-
fier is said to be decreasing if its membership function is decreasing.

The uncertain quantifiers “almost all” and “almost none” are monotone, but “about 10” and “about 70%” are not monotone. Note that both increasing uncertain quantifiers and decreasing uncertain quantifiers are monotone. In addition, any monotone uncertain quantifier is unimodal.

Negated Quantifier
What is the negation of an uncertain quantifier? The following definition
gives a formal answer.
Definition 9.4 (Liu [85]) Let Q be an uncertain quantifier. Then the negated
quantifier ¬Q is the complement of Q in the sense of uncertain set, i.e.,
¬Q = Q^c.    (9.24)

Example 9.12: Let ∀ = {n} be the universal quantifier. Then its negated
quantifier
¬∀ ≡ {0, 1, 2, · · · , n − 1}. (9.25)

Example 9.13: Let ∃ = {1, 2, · · · , n} be the existential quantifier. Then its


negated quantifier is
¬∃ ≡ {0}. (9.26)
Theorem 9.1 Let Q be an uncertain quantifier whose membership function
is λ. Then the negated quantifier ¬Q has a membership function
¬λ(x) = 1 − λ(x). (9.27)
Proof: This theorem follows from the operational law of uncertain set im-
mediately.

Example 9.14: Let Q be the uncertain quantifier “almost all ” defined by


(9.20). Then its negated quantifier ¬Q has a membership function


¬λ(x) =
  1,             if 0 ≤ x ≤ n − 5
  (n − x − 2)/3, if n − 5 ≤ x ≤ n − 2
  0,             if n − 2 ≤ x ≤ n.    (9.28)

Example 9.15: Let Q be the uncertain quantifier “about 70% ” defined by


(9.23). Then its negated quantifier ¬Q has a membership function


¬λ(x) =
  1,            if 0 ≤ x ≤ 0.6
  20(0.65 − x), if 0.6 ≤ x ≤ 0.65
  0,            if 0.65 ≤ x ≤ 0.75
  20(x − 0.75), if 0.75 ≤ x ≤ 0.8
  1,            if 0.8 ≤ x ≤ 1.    (9.29)


Figure 9.5: Membership Function of Negated Quantifier of “almost all”

Figure 9.6: Membership Function of Negated Quantifier of “about 70%”

Theorem 9.2 Let Q be an uncertain quantifier. Then we have ¬¬Q = Q.

Proof: This theorem follows from ¬¬Q = ¬Q^c = (Q^c)^c = Q.

Theorem 9.3 If Q is a monotone uncertain quantifier, then ¬Q is also


monotone. Especially, if Q is increasing, then ¬Q is decreasing; if Q is de-
creasing, then ¬Q is increasing.

Proof: Assume λ is the membership function of Q. Then ¬Q has a member-


ship function 1 − λ(x). The theorem follows from this fact immediately.

Dual Quantifier
Definition 9.5 (Liu [85]) Let Q be an uncertain quantifier. Then the dual
quantifier of Q is
Q∗ = ∀ − Q. (9.30)

Remark 9.1: Note that Q and Q∗ are dependent uncertain sets such that
Q + Q∗ ≡ ∀. Since the cardinality of the universe A is n, we also have

Q∗ = {n} − Q. (9.31)

Example 9.16: Since ∀ ≡ {n}, we immediately have ∀∗ = {0} = ¬∃. That


is
∀∗ ≡ ¬∃. (9.32)

Example 9.17: Since ¬∀ = {0, 1, 2, · · · , n−1}, we immediately have (¬∀)∗ =


{1, 2, · · · , n} = ∃. That is,
(¬∀)∗ ≡ ∃. (9.33)

Example 9.18: Since ∃ ≡ {1, 2, · · · , n}, we have ∃∗ = {0, 1, 2, · · · , n − 1} =


¬∀. That is,
∃∗ ≡ ¬∀. (9.34)

Example 9.19: Since ¬∃ = {0}, we immediately have (¬∃)∗ = {n} = ∀.


That is,
(¬∃)∗ = ∀. (9.35)

Theorem 9.4 Let Q be an uncertain quantifier whose membership function


is λ. Then the dual quantifier Q∗ has a membership function

λ∗ (x) = λ(n − x) (9.36)

where n is the cardinality of the universe A.

Proof: This theorem follows from the operational law of uncertain set im-
mediately.

Example 9.20: Let Q be the uncertain quantifier “almost all ” defined by


(9.20). Then its dual quantifier Q∗ has a membership function


λ∗(x) =
  1,         if 0 ≤ x ≤ 2
  (5 − x)/3, if 2 ≤ x ≤ 5
  0,         if 5 ≤ x ≤ n.    (9.37)

Figure 9.7: Membership Function of Dual Quantifier of “almost all”



Example 9.21: Let Q be the uncertain quantifier “about 70% ” defined by


(9.23). Then its dual quantifier Q∗ has a membership function


λ∗(x) =
  0,           if 0 ≤ x ≤ 0.2
  20(x − 0.2), if 0.2 ≤ x ≤ 0.25
  1,           if 0.25 ≤ x ≤ 0.35
  20(0.4 − x), if 0.35 ≤ x ≤ 0.4
  0,           if 0.4 ≤ x ≤ 1.    (9.38)

Figure 9.8: Membership Function of Dual Quantifier of “about 70%”

Theorem 9.5 Let Q be an uncertain quantifier. Then we have Q∗∗ = Q.

Proof: The theorem follows from Q∗∗ = ∀ − Q∗ = ∀ − (∀ − Q) = Q.

Theorem 9.6 If Q is a unimodal uncertain quantifier, then Q∗ is also unimodal. Especially, if Q is monotone, then Q∗ is monotone; if Q is increasing, then Q∗ is decreasing; if Q is decreasing, then Q∗ is increasing.

Proof: Assume λ is the membership function of Q. Then Q∗ has a member-


ship function λ(n − x). The theorem follows from this fact immediately.

9.3 Uncertain Subject


Sometimes, we are interested in a subset of the universe of individuals, for
example, “warm days”, “young students” and “tall sportsmen”. This section
will model them by the concept of uncertain subject.

Definition 9.6 (Liu [85]) Uncertain subject is an uncertain set containing


some specified individuals in the universe.

Example 9.22: “Warm days are here again” is a statement in which “warm
days” is an uncertain subject that is an uncertain set on the universe of “all

days”, whose membership function may be defined by

ν(x) =
  0,          if x ≤ 15
  (x − 15)/3, if 15 ≤ x ≤ 18
  1,          if 18 ≤ x ≤ 24
  (28 − x)/4, if 24 ≤ x ≤ 28
  0,          if 28 ≤ x.    (9.39)

Figure 9.9: Membership Function of Subject “warm days”

Example 9.23: “Young students are tall” is a statement in which “young


students” is an uncertain subject that is an uncertain set on the universe of
“all students”, whose membership function may be defined by


ν(x) =
  0,           if x ≤ 15
  (x − 15)/5,  if 15 ≤ x ≤ 20
  1,           if 20 ≤ x ≤ 35
  (45 − x)/10, if 35 ≤ x ≤ 45
  0,           if x ≥ 45.    (9.40)

Example 9.24: “Tall students are heavy” is a statement in which “tall


students” is an uncertain subject that is an uncertain set on the universe of
“all students”, whose membership function may be defined by


ν(x) =
  0,           if x ≤ 180
  (x − 180)/5, if 180 ≤ x ≤ 185
  1,           if 185 ≤ x ≤ 195
  (200 − x)/5, if 195 ≤ x ≤ 200
  0,           if x ≥ 200.    (9.41)

Figure 9.10: Membership Function of Subject “young students”

Figure 9.11: Membership Function of Subject “tall students”

Let S be an uncertain subject with membership function ν on the universe A = {a1, a2, · · · , an} of individuals. Then S is an uncertain set of A such that

M{ai ∈ S} = ν(ai),   i = 1, 2, · · · , n.    (9.42)

In many cases, we are interested in the individuals a with ν(a) ≥ ω, where ω is a confidence level. Thus we have a subuniverse

Sω = {a ∈ A | ν(a) ≥ ω}    (9.43)

that will serve as the new universe of individuals we are talking about; the individuals outside Sω are ignored at the confidence level ω.

Theorem 9.7 Let ω1 and ω2 be confidence levels with ω1 > ω2, and let Sω1 and Sω2 be subuniverses with confidence levels ω1 and ω2, respectively. Then

Sω1 ⊂ Sω2 . (9.44)

That is, Sω is a decreasing sequence of sets with respect to ω.

Proof: If a ∈ Sω1 , then ν(a) ≥ ω1 > ω2 . Thus a ∈ Sω2 . It follows that


Sω1 ⊂ Sω2 . Note that Sω1 and Sω2 may be empty.

9.4 Uncertain Predicate


There are numerous imprecise predicates in human language, for example,
warm, cold, hot, young, old, tall, small, and big. This section will model them
by the concept of uncertain predicate.

Definition 9.7 (Liu [85]) Uncertain predicate is an uncertain set represent-


ing a property that the individuals have in common.

Example 9.25: “Today is warm” is a statement in which “warm” is an


uncertain predicate that may be represented by a membership function


µ(x) =
  0,          if x ≤ 15
  (x − 15)/3, if 15 ≤ x ≤ 18
  1,          if 18 ≤ x ≤ 24
  (28 − x)/4, if 24 ≤ x ≤ 28
  0,          if 28 ≤ x.    (9.45)

Figure 9.12: Membership Function of Predicate “warm”

Example 9.26: “John is young” is a statement in which “young” is an


uncertain predicate that may be represented by a membership function


µ(x) =
  0,           if x ≤ 15
  (x − 15)/5,  if 15 ≤ x ≤ 20
  1,           if 20 ≤ x ≤ 35
  (45 − x)/10, if 35 ≤ x ≤ 45
  0,           if x ≥ 45.    (9.46)

Figure 9.13: Membership Function of Predicate “young”

Example 9.27: “Tom is tall” is a statement in which “tall” is an uncertain predicate that may be represented by a membership function

µ(x) =
  0,           if x ≤ 180
  (x − 180)/5, if 180 ≤ x ≤ 185
  1,           if 185 ≤ x ≤ 195
  (200 − x)/5, if 195 ≤ x ≤ 200
  0,           if x ≥ 200.    (9.47)

Figure 9.14: Membership Function of Predicate “tall”

Negated Predicate
Definition 9.8 (Liu [85]) Let P be an uncertain predicate. Then its negated
predicate ¬P is the complement of P in the sense of uncertain set, i.e.,

¬P = P^c.    (9.48)

Theorem 9.8 Let P be an uncertain predicate with membership function µ.


Then its negated predicate ¬P has a membership function

¬µ(x) = 1 − µ(x). (9.49)



Proof: The theorem follows from the definition of negated predicate and the
operational law of uncertain set immediately.

Example 9.28: Let P be the uncertain predicate “warm” defined by (9.45).


Then its negated predicate ¬P has a membership function


¬µ(x) =
  1,          if x ≤ 15
  (18 − x)/3, if 15 ≤ x ≤ 18
  0,          if 18 ≤ x ≤ 24
  (x − 24)/4, if 24 ≤ x ≤ 28
  1,          if 28 ≤ x.    (9.50)

Figure 9.15: Membership Function of Negated Predicate of “warm”

Example 9.29: Let P be the uncertain predicate “young” defined by (9.46).


Then its negated predicate ¬P has a membership function


¬µ(x) =
  1,           if x ≤ 15
  (20 − x)/5,  if 15 ≤ x ≤ 20
  0,           if 20 ≤ x ≤ 35
  (x − 35)/10, if 35 ≤ x ≤ 45
  1,           if x ≥ 45.    (9.51)

Example 9.30: Let P be the uncertain predicate “tall ” defined by (9.47).


Then its negated predicate ¬P has a membership function


¬µ(x) =
  1,           if x ≤ 180
  (185 − x)/5, if 180 ≤ x ≤ 185
  0,           if 185 ≤ x ≤ 195
  (x − 195)/5, if 195 ≤ x ≤ 200
  1,           if x ≥ 200.    (9.52)


Figure 9.16: Membership Function of Negated Predicate of “young”

Figure 9.17: Membership Function of Negated Predicate of “tall”

Theorem 9.9 Let P be an uncertain predicate. Then we have ¬¬P = P .

Proof: The theorem follows from ¬¬P = ¬P^c = (P^c)^c = P.

9.5 Uncertain Proposition


Definition 9.9 (Liu [85]) Assume that Q is an uncertain quantifier, S is an
uncertain subject, and P is an uncertain predicate. Then the triplet

(Q, S, P ) =“ Q of S are P ” (9.53)

is called an uncertain proposition.

Remark 9.2: Let A be the universe of individuals. Then (Q, A, P ) is a


special uncertain proposition because A itself is a special uncertain subject.

Remark 9.3: Let ∀ be the universal quantifier. Then (∀, A, P ) is an uncer-


tain proposition representing “all of A are P ”.

Remark 9.4: Let ∃ be the existential quantifier. Then (∃, A, P ) is an un-


certain proposition representing “at least one of A is P ”.

Example 9.31: “Almost all students are young” is an uncertain proposition


in which the uncertain quantifier Q is “almost all”, the uncertain subject S
is “students” (the universe itself) and the uncertain predicate P is “young”.

Example 9.32: “Most young students are tall” is an uncertain proposition


in which the uncertain quantifier Q is “most”, the uncertain subject S is
“young students” and the uncertain predicate P is “tall”.
Theorem 9.10 (Liu [85], Logical Equivalence Theorem) Let (Q, S, P ) be an
uncertain proposition. Then
(Q∗ , S, P ) = (Q, S, ¬P ) (9.54)
where Q∗ is the dual quantifier of Q and ¬P is the negated predicate of P .
Proof: Note that (Q∗ , S, P ) represents “Q∗ of S are P ”. In fact, the state-
ment “Q∗ of S are P ” implies “Q∗∗ of S are not P ”. Since Q∗∗ = Q, we obtain
(Q, S, ¬P ). Conversely, the statement “Q of S are not P ” implies “Q∗ of S
are P ”, i.e., (Q∗ , S, P ). Thus (9.54) is verified.

Example 9.33: When Q∗ = ¬∀, we have Q = ∃. If S = A, then (9.54)


becomes the classical equivalence
(¬∀, A, P ) = (∃, A, ¬P ). (9.55)

Example 9.34: When Q∗ = ¬∃, we have Q = ∀. If S = A, then (9.54)


becomes the classical equivalence
(¬∃, A, P ) = (∀, A, ¬P ). (9.56)

9.6 Truth Value


Let (Q, S, P ) be an uncertain proposition. The truth value of (Q, S, P ) should
be the uncertain measure that “Q of S are P ”. That is,
T (Q, S, P ) = M{Q of S are P }. (9.57)
However, it is impossible for us to deduce the value of M{Q of S are P } from
the information of Q, S and P within the framework of uncertain set theory.
Thus we need an additional formula to compose Q, S and P .
Definition 9.10 (Liu [85]) Let (Q, S, P ) be an uncertain proposition in which
Q is a unimodal uncertain quantifier with membership function λ, S is an un-
certain subject with membership function ν, and P is an uncertain predicate
with membership function µ. Then the truth value of (Q, S, P ) with respect
to the universe A is
T(Q, S, P) = sup_{0≤ω≤1} ( ω ∧ sup_{K∈Kω} inf_{a∈K} µ(a) ∧ sup_{K∈K∗ω} inf_{a∈K} ¬µ(a) )    (9.58)

where

Kω = {K ⊂ Sω | λ(|K|) ≥ ω},    (9.59)

K∗ω = {K ⊂ Sω | λ(|Sω| − |K|) ≥ ω},    (9.60)

Sω = {a ∈ A | ν(a) ≥ ω}.    (9.61)

Remark 9.5: Keep in mind that the truth value formula (9.58) is vacuous
if the individual feature data of the universe A are not available.

Remark 9.6: The symbol |K| represents the cardinality of the set K. For
example, |∅| = 0 and |{2, 5, 6}| = 3.

Remark 9.7: Note that ¬µ is the membership function of the negated predicate of P, and

¬µ(a) = 1 − µ(a).    (9.62)

Remark 9.8: When the subset K of individuals becomes an empty set ∅, we set

inf_{a∈∅} µ(a) = inf_{a∈∅} ¬µ(a) = 1.    (9.63)

Remark 9.9: If Q is an uncertain percentage rather than an absolute quantity, then

Kω = {K ⊂ Sω | λ(|K|/|Sω|) ≥ ω},    (9.64)

K∗ω = {K ⊂ Sω | λ(1 − |K|/|Sω|) ≥ ω}.    (9.65)

Remark 9.10: If the uncertain subject S is identical to the universe A itself


(i.e., S = A), then
Kω = {K ⊂ A | λ(|K|) ≥ ω} , (9.66)
K∗ω = {K ⊂ A | λ(|A| − |K|) ≥ ω} . (9.67)

Exercise 9.1: If the uncertain quantifier Q = ∀ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {A}, K∗ω = {∅}. (9.68)

Show that
T(∀, A, P) = inf_{a∈A} µ(a).    (9.69)

Exercise 9.2: If the uncertain quantifier Q = ∃ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {any nonempty subsets of A}, (9.70)



K∗ω = {any proper subsets of A}. (9.71)


Show that
T(∃, A, P) = sup_{a∈A} µ(a).    (9.72)

Exercise 9.3: If the uncertain quantifier Q = ¬∀ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {any proper subsets of A}, (9.73)

K∗ω = {any nonempty subsets of A}. (9.74)


Show that
T(¬∀, A, P) = 1 − inf_{a∈A} µ(a).    (9.75)

Exercise 9.4: If the uncertain quantifier Q = ¬∃ and the uncertain subject


S = A, then for any ω > 0, we have

Kω = {∅}, K∗ω = {A}. (9.76)

Show that
T(¬∃, A, P) = 1 − sup_{a∈A} µ(a).    (9.77)

Theorem 9.11 (Liu [85], Truth Value Theorem) Let (Q, S, P ) be an uncer-
tain proposition in which Q is a unimodal uncertain quantifier with member-
ship function λ, S is an uncertain subject with membership function ν, and P
is an uncertain predicate with membership function µ. Then the truth value
of (Q, S, P ) is

T(Q, S, P) = sup_{0≤ω≤1} (ω ∧ ∆(kω) ∧ ∆∗(k∗ω))    (9.78)

where

kω = min{x | λ(x) ≥ ω},    (9.79)

∆(kω) = kω-max{µ(ai) | ai ∈ Sω},    (9.80)

k∗ω = |Sω| − max{x | λ(x) ≥ ω},    (9.81)

∆∗(k∗ω) = k∗ω-max{1 − µ(ai) | ai ∈ Sω}.    (9.82)

Proof: Since the supremum is achieved at the subset with minimum cardinality, we have

sup_{K∈Kω} inf_{a∈K} µ(a) = sup_{K⊂Sω, |K|=kω} inf_{a∈K} µ(a) = ∆(kω),

sup_{K∈K∗ω} inf_{a∈K} ¬µ(a) = sup_{K⊂Sω, |K|=k∗ω} inf_{a∈K} ¬µ(a) = ∆∗(k∗ω).

The theorem is thus verified. Please note that ∆(0) = ∆∗ (0) = 1.

Remark 9.11: If Q is an uncertain percentage rather than an absolute quantity, then

kω = min{x | λ(x/|Sω|) ≥ ω},    (9.83)

k∗ω = |Sω| − max{x | λ(x/|Sω|) ≥ ω}.    (9.84)

Remark 9.12: If the uncertain subject S is identical to the universe A itself (i.e., S = A), then

kω = min{x | λ(x) ≥ ω},    (9.85)

∆(kω) = kω-max{µ(a1), µ(a2), · · · , µ(an)},    (9.86)

k∗ω = n − max{x | λ(x) ≥ ω},    (9.87)

∆∗(k∗ω) = k∗ω-max{1 − µ(a1), 1 − µ(a2), · · · , 1 − µ(an)}.    (9.88)
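Theorem 9.11 translates directly into a small program. The sketch below handles the case of Remark 9.12 (S = A) with an absolute uncertain quantifier on {0, 1, · · · , n}; for a percentage quantifier one would rescale the argument of λ as in Remark 9.11. All names are illustrative, and the grid over ω is an approximation of the supremum.

    import numpy as np

    def k_max(values, k):
        """The k-th largest element; by convention the 0-max is 1."""
        if k == 0:
            return 1.0
        return sorted(values, reverse=True)[k - 1]

    def truth_value(data, lam, mu, n_grid=1001):
        """T(Q, A, P) by (9.78) with S = A; lam is the membership function
        of the quantifier on {0, 1, ..., n}, mu that of the predicate."""
        n = len(data)
        mus = [mu(a) for a in data]
        neg = [1 - m for m in mus]
        best = 0.0
        for omega in np.linspace(1e-9, 1.0, n_grid):
            xs = [x for x in range(n + 1) if lam(x) >= omega]
            if not xs:
                continue
            k, k_star = min(xs), n - max(xs)   # (9.85) and (9.87)
            best = max(best, min(omega, k_max(mus, k), k_max(neg, k_star)))
        return best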

Exercise 9.5: If the uncertain quantifier Q = {m, m + 1, · · · , n} (i.e., “there


exist at least m”) with m ≥ 1, then we have kω = m and kω∗ = 0. Show that

T (Q, A, P ) = m-max{µ(a1 ), µ(a2 ), · · · , µ(an )}. (9.89)

Exercise 9.6: If the uncertain quantifier Q = {0, 1, 2, . . . , m} (i.e., “there


exist at most m”) with m < n, then we have kω = 0 and kω∗ = n − m. Show
that

T (Q, A, P ) = (n − m)-max{1 − µ(a1 ), 1 − µ(a2 ), · · · , 1 − µ(an )}. (9.90)

Example 9.35: Assume that the daily temperatures of some week from
Monday to Sunday are

22, 23, 25, 28, 30, 32, 36 (9.91)

in centigrades. Consider an uncertain proposition

(Q, A, P ) = “two or three days are warm”. (9.92)

Note that the uncertain quantifier is Q = {2, 3}. We also suppose that the
uncertain predicate P = “warm” has a membership function


µ(x) =
  0,          if x ≤ 15
  (x − 15)/3, if 15 ≤ x ≤ 18
  1,          if 18 ≤ x ≤ 24
  (28 − x)/4, if 24 ≤ x ≤ 28
  0,          if 28 ≤ x.    (9.93)


It is clear that Monday and Tuesday are warm with truth value 1, and
Wednesday is warm with truth value 0.75. But Thursday to Sunday are
not “warm” at all (in fact, they are “hot”). Intuitively, the uncertain propo-
sition “two or three days are warm” should be completely true. The truth
value formula (9.58) yields that the truth value is

T (“two or three days are warm”) = 1. (9.94)

This is an intuitively expected result. In addition, we also have

T (“two days are warm”) = 0.25, (9.95)

T (“three days are warm”) = 0.75. (9.96)
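With the truth_value sketch given after Remark 9.12, these three truth values can be reproduced directly; here the quantifiers “two or three”, “two” and “three” are the crisp sets {2, 3}, {2} and {3}.

    temps = [22, 23, 25, 28, 30, 32, 36]

    def warm(x):                      # membership function (9.93)
        if x <= 15: return 0.0
        if x <= 18: return (x - 15) / 3
        if x <= 24: return 1.0
        if x <= 28: return (28 - x) / 4
        return 0.0

    print(truth_value(temps, lambda x: float(x in (2, 3)), warm))  # 1.0
    print(truth_value(temps, lambda x: float(x == 2), warm))       # 0.25
    print(truth_value(temps, lambda x: float(x == 3), warm))       # 0.75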

Example 9.36: Assume that in a class there are 15 students whose ages are

21, 22, 22, 23, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, 40 (9.97)

in years. Consider an uncertain proposition

(Q, A, P ) = “almost all students are young”. (9.98)

Suppose the uncertain quantifier Q = “almost all” has a membership function




λ(x) =
  0,          if 0 ≤ x ≤ 10
  (x − 10)/3, if 10 ≤ x ≤ 13
  1,          if 13 ≤ x ≤ 15,    (9.99)

and the uncertain predicate P = “young” has a membership function




µ(x) =
  0,           if x ≤ 15
  (x − 15)/5,  if 15 ≤ x ≤ 20
  1,           if 20 ≤ x ≤ 35
  (45 − x)/10, if 35 ≤ x ≤ 45
  0,           if x ≥ 45.    (9.100)

The truth value formula (9.58) yields that the uncertain proposition has a
truth value
T (“almost all students are young”) = 0.9. (9.101)

Example 9.37: Assume that in a team there are 16 sportsmen whose heights
are
175, 178, 178, 180, 183, 184, 186, 186, 188, 190, 192, 192, 193, 194, 195, 196    (9.102)
in centimeters. Consider an uncertain proposition

(Q, A, P ) = “about 70% of sportsmen are tall”. (9.103)



Suppose the uncertain quantifier Q = “about 70%” has a membership function

λ(x) =
  0,           if 0 ≤ x ≤ 0.6
  20(x − 0.6), if 0.6 ≤ x ≤ 0.65
  1,           if 0.65 ≤ x ≤ 0.75
  20(0.8 − x), if 0.75 ≤ x ≤ 0.8
  0,           if 0.8 ≤ x ≤ 1    (9.104)

and the uncertain predicate P = “tall” has a membership function

µ(x) =
  0,           if x ≤ 180
  (x − 180)/5, if 180 ≤ x ≤ 185
  1,           if 185 ≤ x ≤ 195
  (200 − x)/5, if 195 ≤ x ≤ 200
  0,           if x ≥ 200.    (9.105)

The truth value formula (9.58) yields that the uncertain proposition has a
truth value
T (“about 70% of sportsmen are tall”) = 0.8. (9.106)

Example 9.38: Assume that in a class there are 18 students whose ages
and heights are

(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188),
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188),
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)    (9.107)

in years and centimeters. Consider an uncertain proposition

(Q, S, P ) = “most young students are tall”. (9.108)

Suppose the uncertain quantifier (percentage) Q = “most” has a membership function

λ(x) =
  0,           if 0 ≤ x ≤ 0.7
  20(x − 0.7), if 0.7 ≤ x ≤ 0.75
  1,           if 0.75 ≤ x ≤ 0.85
  20(0.9 − x), if 0.85 ≤ x ≤ 0.9
  0,           if 0.9 ≤ x ≤ 1.    (9.109)

Note that each individual is described by a feature data (y, z), where y rep-
resents ages and z represents heights. In this case, the uncertain subject

S = “young students” has a membership function

ν(y) =
  0,           if y ≤ 15
  (y − 15)/5,  if 15 ≤ y ≤ 20
  1,           if 20 ≤ y ≤ 35
  (45 − y)/10, if 35 ≤ y ≤ 45
  0,           if y ≥ 45    (9.110)

and the uncertain predicate P = “tall” has a membership function

µ(z) =
  0,           if z ≤ 180
  (z − 180)/5, if 180 ≤ z ≤ 185
  1,           if 185 ≤ z ≤ 195
  (200 − z)/5, if 195 ≤ z ≤ 200
  0,           if z ≥ 200.    (9.111)

The truth value formula (9.58) yields that the uncertain proposition has a
truth value
T (“most young students are tall”) = 0.8. (9.112)

9.7 Linguistic Summarizer


A linguistic summary is a concise human-language statement that is easy to understand. For example, “most young students are tall” is a linguistic summary of students’ ages and heights. Thus a linguistic summary is a special uncertain proposition whose uncertain quantifier, uncertain subject and uncertain predicate are linguistic terms. Uncertain logic provides a flexible means of extracting linguistic summaries from a collection of raw data.
What inputs does uncertain logic need? First, we should have some raw data (i.e., the individual feature data),

A = {a1 , a2 , · · · , an }. (9.113)

Next, we should have some linguistic terms to represent quantifiers, for exam-
ple, “most” and “all”. Denote them by a collection of uncertain quantifiers,

Q = {Q1 , Q2 , · · · , Qm }. (9.114)

Then, we should have some linguistic terms to represent subjects, for exam-
ple, “young students” and “old students”. Denote them by a collection of
uncertain subjects,
S = {S1 , S2 , · · · , Sn }. (9.115)

Last, we should have some linguistic terms to represent predicates, for exam-
ple, “short” and “tall”. Denote them by a collection of uncertain predicates,

P = {P1 , P2 , · · · , Pk }. (9.116)

One problem of data mining is to choose an uncertain quantifier Q ∈ Q, an uncertain subject S ∈ S and an uncertain predicate P ∈ P such that the truth value of the extracted linguistic summary “Q of S are P” is at least β, i.e.,

T(Q, S, P) ≥ β                                         (9.117)
for the universe A = {a1, a2, · · · , an}, where β is a confidence level. In order to solve this problem, Liu [85] proposed the following linguistic summarizer,

Find Q, S and P
subject to:
    Q ∈ Q
    S ∈ S
    P ∈ P
    T(Q, S, P) ≥ β.                                    (9.118)

Each solution (Q, S, P) of the linguistic summarizer (9.118) produces a linguistic summary “Q of S are P”.
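Because the collections Q, S and P are finite, the summarizer (9.118) is just an exhaustive search. The sketch below shows this loop in Python; the helper truth_value(Q, S, P, data) is hypothetical and stands for an implementation of the truth value formula (9.58).

    # A minimal sketch of the linguistic summarizer (9.118): enumerate all
    # candidate triples and keep those whose truth value reaches beta.
    # Assumption: truth_value(Q, S, P, data) implements formula (9.58).

    def linguistic_summarizer(quantifiers, subjects, predicates, data, beta, truth_value):
        summaries = []
        for Q in quantifiers:
            for S in subjects:
                for P in predicates:
                    if truth_value(Q, S, P, data) >= beta:
                        summaries.append((Q, S, P))   # reads "Q of S are P"
        return summaries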

Example 9.39: Assume that in a class there are 18 students whose ages
and heights are

(24, 185), (25, 190), (26, 184), (26, 170), (27, 187), (27, 188)
(28, 160), (30, 190), (32, 185), (33, 176), (35, 185), (36, 188) (9.119)
(38, 164), (38, 178), (39, 182), (40, 186), (42, 165), (44, 170)

in years and centimeters. Suppose we have three linguistic terms “about half”, “most” and “all” as uncertain quantifiers whose membership functions are

λhalf(x) = 0,             if 0 ≤ x ≤ 0.4
           20(x − 0.4),   if 0.4 ≤ x ≤ 0.45
           1,             if 0.45 ≤ x ≤ 0.55           (9.120)
           20(0.6 − x),   if 0.55 ≤ x ≤ 0.6
           0,             if 0.6 ≤ x ≤ 1,

λmost(x) = 0,             if 0 ≤ x ≤ 0.7
           20(x − 0.7),   if 0.7 ≤ x ≤ 0.75
           1,             if 0.75 ≤ x ≤ 0.85           (9.121)
           20(0.9 − x),   if 0.85 ≤ x ≤ 0.9
           0,             if 0.9 ≤ x ≤ 1,


λall(x) = 1,  if x = 1                                 (9.122)
          0,  if 0 ≤ x < 1,
respectively. Denote the collection of uncertain quantifiers by
Q = {“about half ”, “most”,“all”}. (9.123)
We also have three linguistic terms “young students”, “middle-aged students” and “old students” as uncertain subjects whose membership functions are

νyoung(y) = 0,            if y ≤ 15
            (y − 15)/5,   if 15 ≤ y ≤ 20
            1,            if 20 ≤ y ≤ 35               (9.124)
            (45 − y)/10,  if 35 ≤ y ≤ 45
            0,            if y ≥ 45,

νmiddle(y) = 0,           if y ≤ 40
             (y − 40)/5,  if 40 ≤ y ≤ 45
             1,           if 45 ≤ y ≤ 55               (9.125)
             (60 − y)/5,  if 55 ≤ y ≤ 60
             0,           if y ≥ 60,

νold(y) = 0,            if y ≤ 55
          (y − 55)/5,   if 55 ≤ y ≤ 60
          1,            if 60 ≤ y ≤ 80                 (9.126)
          (85 − y)/5,   if 80 ≤ y ≤ 85
          0,            if y ≥ 85,

respectively. Denote the collection of uncertain subjects by


S = {“young students”, “middle-aged students”, “old students”}. (9.127)
Finally, we suppose that there are two linguistic terms “short” and “tall” as uncertain predicates whose membership functions are

µshort(z) = 0,            if z ≤ 145
            (z − 145)/5,  if 145 ≤ z ≤ 150
            1,            if 150 ≤ z ≤ 155             (9.128)
            (160 − z)/5,  if 155 ≤ z ≤ 160
            0,            if z ≥ 160,

µtall(z) = 0,            if z ≤ 180
           (z − 180)/5,  if 180 ≤ z ≤ 185
           1,            if 185 ≤ z ≤ 195              (9.129)
           (200 − z)/5,  if 195 ≤ z ≤ 200
           0,            if z ≥ 200,


respectively. Denote the collection of uncertain predicates by

P = {“short”, “tall”}. (9.130)

We would like to extract an uncertain quantifier Q ∈ Q, an uncertain subject S ∈ S and an uncertain predicate P ∈ P such that the truth value of the extracted linguistic summary “Q of S are P” is at least 0.8, i.e.,

T(Q, S, P) ≥ 0.8                                       (9.131)

where 0.8 is a predetermined confidence level. The linguistic summarizer (9.118) yields

Q = “most”, S = “young students”, P = “tall”

and then extracts a linguistic summary “most young students are tall”.

9.8 Bibliographic Notes


Based on uncertain set theory, uncertain logic was designed by Liu [85] in 2011 for dealing with human language by means of the truth value formula for uncertain propositions. As an application of uncertain logic, Liu [85] also proposed a linguistic summarizer that provides a means of extracting linguistic summaries from a collection of raw data.
Chapter 10

Uncertain Inference

Uncertain inference is a process of deriving consequences from human knowledge via uncertain set theory. This chapter will introduce a family of uncertain inference rules, uncertain systems, and uncertain control with an application to an inverted pendulum system.

10.1 Uncertain Inference Rule

Let X and Y be two concepts. It is assumed that we only have a single if-then
rule,
“if X is ξ then Y is η” (10.1)

where ξ and η are two uncertain sets. We first introduce the following infer-
ence rule.

Inference Rule 10.1 (Liu [82]) Let X and Y be two concepts. Assume a
rule “if X is an uncertain set ξ then Y is an uncertain set η”. From X is a
constant a we infer that Y is an uncertain set

η ∗ = η|a∈ξ (10.2)

which is the conditional uncertain set η given a ∈ ξ. The inference rule is represented by

    Rule:  If X is ξ then Y is η
    From:  X is a constant a                           (10.3)
    Infer: Y is η∗ = η|a∈ξ

Theorem 10.1 (Liu [82]) In Inference Rule 10.1, if ξ and η are independent
uncertain sets with membership functions µ and ν, respectively, then η ∗ has

a membership function

ν∗(y) = ν(y)/µ(a),               if ν(y) < µ(a)/2
        (ν(y) + µ(a) − 1)/µ(a),  if ν(y) > 1 − µ(a)/2  (10.4)
        0.5,                     otherwise.
Proof: It follows from Inference Rule 10.1 that η ∗ is the conditional uncer-
tain set η given a ∈ ξ. By applying Theorem 8.46, the membership function
of η ∗ is just ν ∗ .
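As a sketch, formula (10.4) is easy to evaluate pointwise; the function below is a direct transcription with illustrative names.

    # A direct transcription of formula (10.4): the membership function of
    # the conditional uncertain set eta* = eta | (a in xi), evaluated at y.

    def conditional_membership(nu_y, mu_a):
        """nu_y = nu(y); mu_a = mu(a), assumed to satisfy 0 < mu_a <= 1."""
        if nu_y < mu_a / 2:
            return nu_y / mu_a
        if nu_y > 1 - mu_a / 2:
            return (nu_y + mu_a - 1) / mu_a
        return 0.5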

Inference Rule 10.2 (Gao-Gao-Ralescu [42]) Let X, Y and Z be three concepts. Assume a rule “if X is an uncertain set ξ and Y is an uncertain set η then Z is an uncertain set τ”. From X is a constant a and Y is a constant b we infer that Z is an uncertain set

τ∗ = τ|(a∈ξ)∩(b∈η)                                     (10.5)

which is the conditional uncertain set τ given a ∈ ξ and b ∈ η. The inference rule is represented by

    Rule:  If X is ξ and Y is η then Z is τ
    From:  X is a and Y is b                           (10.6)
    Infer: Z is τ∗ = τ|(a∈ξ)∩(b∈η)

Theorem 10.2 (Gao-Gao-Ralescu [42]) In Inference Rule 10.2, if ξ, η, τ are independent uncertain sets with membership functions µ, ν, λ, respectively, then τ∗ has a membership function

λ∗(z) = λ(z)/(µ(a) ∧ ν(b)),                      if λ(z) < (µ(a) ∧ ν(b))/2
        (λ(z) + µ(a) ∧ ν(b) − 1)/(µ(a) ∧ ν(b)),  if λ(z) > 1 − (µ(a) ∧ ν(b))/2   (10.7)
        0.5,                                     otherwise.
Proof: It follows from Inference Rule 10.2 that τ ∗ is the conditional uncer-
tain set τ given a ∈ ξ and b ∈ η. By applying Theorem 8.46, the membership
function of τ ∗ is just λ∗ .

Inference Rule 10.3 (Gao-Gao-Ralescu [42]) Let X and Y be two concepts. Assume two rules “if X is an uncertain set ξ1 then Y is an uncertain set η1” and “if X is an uncertain set ξ2 then Y is an uncertain set η2”. From X is a constant a we infer that Y is an uncertain set

η∗ = (M{a ∈ ξ1} · η1|a∈ξ1)/(M{a ∈ ξ1} + M{a ∈ ξ2}) + (M{a ∈ ξ2} · η2|a∈ξ2)/(M{a ∈ ξ1} + M{a ∈ ξ2}).   (10.8)

The inference rule is represented by

    Rule 1: If X is ξ1 then Y is η1
    Rule 2: If X is ξ2 then Y is η2
    From:   X is a constant a                          (10.9)
    Infer:  Y is η∗ determined by (10.8)

Theorem 10.3 (Gao-Gao-Ralescu [42]) In Inference Rule 10.3, if ξ1, ξ2, η1, η2 are independent uncertain sets with membership functions µ1, µ2, ν1, ν2, respectively, then

η∗ = (µ1(a)/(µ1(a) + µ2(a))) · η1∗ + (µ2(a)/(µ1(a) + µ2(a))) · η2∗     (10.10)

where η1∗ and η2∗ are uncertain sets whose membership functions are respectively given by

ν1∗(y) = ν1(y)/µ1(a),                if ν1(y) < µ1(a)/2
         (ν1(y) + µ1(a) − 1)/µ1(a),  if ν1(y) > 1 − µ1(a)/2            (10.11)
         0.5,                        otherwise,

ν2∗(y) = ν2(y)/µ2(a),                if ν2(y) < µ2(a)/2
         (ν2(y) + µ2(a) − 1)/µ2(a),  if ν2(y) > 1 − µ2(a)/2            (10.12)
         0.5,                        otherwise.

Proof: It follows from Inference Rule 10.3 that the uncertain set η∗ is just

η∗ = (M{a ∈ ξ1} · η1|a∈ξ1)/(M{a ∈ ξ1} + M{a ∈ ξ2}) + (M{a ∈ ξ2} · η2|a∈ξ2)/(M{a ∈ ξ1} + M{a ∈ ξ2}).

The theorem follows from M{a ∈ ξ1} = µ1(a) and M{a ∈ ξ2} = µ2(a) immediately.

Inference Rule 10.4 (Gao-Gao-Ralescu [42]) Let X1, X2, · · · , Xm be concepts. Assume rules “if X1 is ξi1 and · · · and Xm is ξim then Y is ηi” for i = 1, 2, · · · , k. From X1 is a1 and · · · and Xm is am we infer that Y is an uncertain set

η∗ = Σ_{i=1}^{k} ci · ηi|(a1∈ξi1)∩(a2∈ξi2)∩···∩(am∈ξim) / (c1 + c2 + · · · + ck)   (10.13)

where the coefficients are determined by

ci = M {(a1 ∈ ξi1 ) ∩ (a2 ∈ ξi2 ) ∩ · · · ∩ (am ∈ ξim )} (10.14)

for i = 1, 2, · · · , k. The inference rule is represented by

    Rule 1: If X1 is ξ11 and · · · and Xm is ξ1m then Y is η1
    Rule 2: If X1 is ξ21 and · · · and Xm is ξ2m then Y is η2
    · · ·                                              (10.15)
    Rule k: If X1 is ξk1 and · · · and Xm is ξkm then Y is ηk
    From:   X1 is a1 and · · · and Xm is am
    Infer:  Y is η∗ determined by (10.13)

Theorem 10.4 (Gao-Gao-Ralescu [42]) In Inference Rule 10.4, if ξi1, ξi2, · · · , ξim, ηi are independent uncertain sets with membership functions µi1, µi2, · · · , µim, νi, i = 1, 2, · · · , k, respectively, then

η∗ = Σ_{i=1}^{k} ci · ηi∗ / (c1 + c2 + · · · + ck)     (10.16)

where ηi∗ are uncertain sets whose membership functions are given by

νi∗(y) = νi(y)/ci,             if νi(y) < ci/2
         (νi(y) + ci − 1)/ci,  if νi(y) > 1 − ci/2     (10.17)
         0.5,                  otherwise

and ci are constants determined by

ci = min_{1≤l≤m} µil(al)                               (10.18)

for i = 1, 2, · · · , k, respectively.

Proof: For each i, since {a1 ∈ ξi1}, {a2 ∈ ξi2}, · · · , {am ∈ ξim} are independent events, we immediately have

M{ ∩_{j=1}^{m} (aj ∈ ξij) } = min_{1≤j≤m} M{aj ∈ ξij} = min_{1≤l≤m} µil(al)

for i = 1, 2, · · · , k. From those equations, we may prove the theorem by Inference Rule 10.4 immediately.

10.2 Uncertain System

Uncertain system, proposed by Liu [82], is a function from its inputs to outputs based on the uncertain inference rule. Usually, an uncertain system consists of 5 parts:

1. inputs that are crisp data to be fed into the uncertain system;

2. a rule-base that contains a set of if-then rules provided by the experts;

3. an uncertain inference rule that infers uncertain consequents from the uncertain antecedents;

4. an expected value operator that converts the uncertain consequents to crisp values;

5. outputs that are crisp data yielded from the expected value operator.

Now let us consider an uncertain system in which there are m crisp inputs
α1 , α2 , · · · , αm , and n crisp outputs β1 , β2 , · · · , βn . At first, we infer n un-
certain sets η1∗ , η2∗ , · · · , ηn∗ from the m crisp inputs by the rule-base (i.e., a set
of if-then rules),

If ξ11 and ξ12 and· · · and ξ1m then η11 and η12 and· · · and η1n
If ξ21 and ξ22 and· · · and ξ2m then η21 and η22 and· · · and η2n
(10.19)
···
If ξk1 and ξk2 and· · · and ξkm then ηk1 and ηk2 and· · · and ηkn

and the uncertain inference rule

ηj∗ = Σ_{i=1}^{k} ci · ηij|(α1∈ξi1)∩(α2∈ξi2)∩···∩(αm∈ξim) / (c1 + c2 + · · · + ck)   (10.20)

for j = 1, 2, · · · , n, where the coefficients are determined by

ci = M {(α1 ∈ ξi1 ) ∩ (α2 ∈ ξi2 ) ∩ · · · ∩ (αm ∈ ξim )} (10.21)

for i = 1, 2, · · · , k. Thus by using the expected value operator, we obtain

βj = E[ηj∗ ] (10.22)

for j = 1, 2, · · · , n. Until now we have constructed a function from inputs α1, α2, · · · , αm to outputs β1, β2, · · · , βn. Write this function as f, i.e.,

(β1 , β2 , · · · , βn ) = f (α1 , α2 , · · · , αm ). (10.23)

Then we get an uncertain system f .


Figure 10.1: An Uncertain System (the crisp inputs α1, · · · , αm are fed through the rule base and the inference rule to produce η1∗, · · · , ηn∗, whose expected values βj = E[ηj∗] are the crisp outputs)

Theorem 10.5 Assume ξi1, ξi2, · · · , ξim, ηi1, ηi2, · · · , ηin are independent uncertain sets with membership functions µi1, µi2, · · · , µim, νi1, νi2, · · · , νin, i = 1, 2, · · · , k, respectively. Then the uncertain system from (α1, α2, · · · , αm) to (β1, β2, · · · , βn) is

βj = Σ_{i=1}^{k} ci · E[ηij∗] / (c1 + c2 + · · · + ck)     (10.24)

for j = 1, 2, · · · , n, where ηij∗ are uncertain sets whose membership functions are given by

νij∗(y) = νij(y)/ci,             if νij(y) < ci/2
          (νij(y) + ci − 1)/ci,  if νij(y) > 1 − ci/2      (10.25)
          0.5,                   otherwise

and ci are constants determined by

ci = min_{1≤l≤m} µil(αl)                                   (10.26)

for i = 1, 2, · · · , k, j = 1, 2, · · · , n, respectively.

Proof: It follows from Inference Rule 10.4 that the uncertain sets ηj∗ are

ηj∗ = Σ_{i=1}^{k} ci · ηij∗ / (c1 + c2 + · · · + ck)

for j = 1, 2, · · · , n. Since ηij∗, i = 1, 2, · · · , k, j = 1, 2, · · · , n are independent uncertain sets, we get the theorem immediately by the linearity of the expected value operator.

Remark 10.1: The uncertain system allows the uncertain sets ηij in the rule-base (10.19) to become constants bij, i.e.,

ηij = bij                                              (10.27)

for i = 1, 2, · · · , k and j = 1, 2, · · · , n. In this case, the uncertain system (10.24) becomes

βj = Σ_{i=1}^{k} ci · bij / (c1 + c2 + · · · + ck)     (10.28)

for j = 1, 2, · · · , n.

Remark 10.2: The uncertain system allows the uncertain sets ηij in the rule-base (10.19) to become functions hij of the inputs α1, α2, · · · , αm, i.e.,

ηij = hij(α1, α2, · · · , αm)                          (10.29)

for i = 1, 2, · · · , k and j = 1, 2, · · · , n. In this case, the uncertain system (10.24) becomes

βj = Σ_{i=1}^{k} ci · hij(α1, α2, · · · , αm) / (c1 + c2 + · · · + ck)   (10.30)

for j = 1, 2, · · · , n.
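In this special case the uncertain system is a plain weighted average and can be coded directly. Below is a minimal Python sketch of (10.26) and (10.30); the rule representation is illustrative and not part of the original text.

    # A minimal sketch of the uncertain system of Remark 10.2.  Each rule is
    # a pair (memberships, h): memberships is the list of antecedent
    # membership functions mu_i1, ..., mu_im, and h maps the crisp inputs to
    # the consequent value h_ij(alpha_1, ..., alpha_m).

    def uncertain_system(rules, alphas):
        weights, values = [], []
        for memberships, h in rules:
            c = min(mu(a) for mu, a in zip(memberships, alphas))  # (10.26)
            weights.append(c)
            values.append(h(*alphas))
        total = sum(weights)
        if total == 0:
            raise ValueError("no rule is activated by the given inputs")
        return sum(c * v for c, v in zip(weights, values)) / total  # (10.30)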

Uncertain Systems are Universal Approximators

Uncertain systems are capable of approximating any continuous function on a compact set (i.e., a bounded and closed set) to arbitrary accuracy. This is the reason why uncertain systems may serve as controllers. The following theorem shows this fact.

Theorem 10.6 (Peng-Chen [124]) For any given continuous function g on a compact set D ⊂ ℜm and any given ε > 0, there exists an uncertain system f such that

‖f(α1, α2, · · · , αm) − g(α1, α2, · · · , αm)‖ < ε    (10.31)

for any (α1, α2, · · · , αm) ∈ D.

Proof: Without loss of generality, we assume that the function g is a real-valued function with only two variables α1 and α2, and the compact set is a unit rectangle D = [0, 1] × [0, 1]. Since g is continuous on D and then is uniformly continuous, for any given number ε > 0, there is a number δ > 0 such that

|g(α1, α2) − g(α1′, α2′)| < ε                          (10.32)

whenever ‖(α1, α2) − (α1′, α2′)‖ < δ. Let k be an integer larger than 2/δ, and write

Dij = { (α1, α2) | (i − 1)/k < α1 ≤ i/k, (j − 1)/k < α2 ≤ j/k }   (10.33)

for i, j = 1, 2, · · · , k. Note that {Dij} is a sequence of disjoint rectangles whose “diameter” is less than δ. Define uncertain sets

ξi = ((i − 1)/k, i/k],   i = 1, 2, · · · , k,          (10.34)
ηj = ((j − 1)/k, j/k],   j = 1, 2, · · · , k.          (10.35)

Then we assume a rule-base with k × k if-then rules,

Rule ij: If ξi and ηj then g(i/k, j/k),   i, j = 1, 2, · · · , k.   (10.36)

According to the uncertain inference rule, the corresponding uncertain system from D to ℜ is

f(α1, α2) = g(i/k, j/k),  if (α1, α2) ∈ Dij,  i, j = 1, 2, · · · , k.   (10.37)

It follows from (10.32) that for any (α1 , α2 ) ∈ Dij ⊂ D, we have

|f (α1 , α2 ) − g(α1 , α2 )| = |g(i/k, j/k) − g(α1 , α2 )| < ε. (10.38)

The theorem is thus verified. Hence uncertain systems are universal approximators.
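The construction in the proof can also be tested numerically. The sketch below builds the piecewise-constant system (10.37) on a k × k grid for an illustrative g; the function and parameters are assumptions chosen for the demonstration.

    # A sketch of the piecewise-constant uncertain system (10.37) built in
    # the proof of Theorem 10.6, for a continuous g on the unit square.
    import math

    def grid_system(g, k):
        def f(a1, a2):
            i = min(max(math.ceil(a1 * k), 1), k)   # a1 lies in ((i-1)/k, i/k]
            j = min(max(math.ceil(a2 * k), 1), k)
            return g(i / k, j / k)
        return f

    g = lambda a1, a2: math.sin(a1) * a2            # illustrative continuous g
    f = grid_system(g, k=100)
    print(abs(f(0.333, 0.707) - g(0.333, 0.707)))   # shrinks as k grows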

10.3 Uncertain Control


Uncertain controller, designed by Liu [82], is a special uncertain system that maps the state variables of a process under control to the action variables. Thus an uncertain controller consists of the same 5 parts as an uncertain system: inputs, a rule-base, an uncertain inference rule, an expected value operator, and outputs. The distinguishing point is that the inputs of the uncertain controller are the state variables of the process under control, and the outputs are the action variables.
Figure 10.2 shows an uncertain control system consisting of an uncertain
controller and a process. Note that t represents time, α1 (t), α2 (t), · · · , αm (t)
are not only the inputs of uncertain controller but also the outputs of process,
and β1 (t), β2 (t), · · · , βn (t) are not only the outputs of uncertain controller but
also the inputs of process.

10.4 Inverted Pendulum


The inverted pendulum is a nonlinear unstable system that is widely used as a benchmark for testing control algorithms. Many good techniques already exist for balancing an inverted pendulum. Among others, Gao [46] successfully balanced an inverted pendulum by an uncertain controller with 5 × 5 if-then rules.

Figure 10.2: An Uncertain Control System (the outputs of the process are the inputs of the uncertain controller, whose outputs βj(t) = E[ηj∗(t)] are in turn the inputs of the process)


....
.........
....
...................
...
A(t)
........... ... ...
.......
........... .. ..........
... ................ ...
... ..
.. ..
.
.
... ... ...
... ... ...
... ... ...
... ..........
... .. ..
... ...
... ... ...
... ... ...
... .
.........
... .. ..
... ...
... ..........
... ........
... ........
.. .
... ...
...
..

...................................................................................................................
...
..
............................... ..
F (t) ...
... ............................. ... .
.
....
... .... ..
.
.........
. .. .
......................................................................................................................................
• •
.. .... . . . . . .. ...... .
... . ..
........................................................................................................................................................................................................

Figure 10.3: An Inverted Pendulum in which A(t) represents the angular


position and F (t) represents the force that moves the cart at time t.

The uncertain controller has two inputs (“angle” and “angular velocity”) and one output (“force”). All three are represented by uncertain sets labeled

    “negative large”   NL
    “negative small”   NS
    “zero”             Z
    “positive small”   PS
    “positive large”   PL
The membership functions of those uncertain sets are shown in Figures 10.4,
10.5 and 10.6.
Intuitively, when the inverted pendulum has a large clockwise angle and
a large clockwise angular velocity, we should give it a large force to the right.
Thus we have an if-then rule,
If the angle is negative large
and the angular velocity is negative large,
then the force is positive large.
Similarly, when the inverted pendulum has a large counterclockwise angle

Figure 10.4: Membership Functions of “Angle” (axis from −π/2 to π/2 rad)

Figure 10.5: Membership Functions of “Angular Velocity” (axis from −π/4 to π/4 rad/sec)

and a large counterclockwise angular velocity, we should give it a large force to the left. Thus we have an if-then rule,

If the angle is positive large
and the angular velocity is positive large,
then the force is negative large.

Note that each input or output has 5 states and each state is represented by
an uncertain set. This implies that the rule-base contains 5 × 5 if-then rules.
In order to balance the inverted pendulum, the 25 if-then rules in Table 10.1
are accepted.
Extensive simulation results show that the uncertain controller can balance the inverted pendulum successfully.

Figure 10.6: Membership Functions of “Force” (axis from −60 to 60 N)



Table 10.1: Rule Base with 5 × 5 If-Then Rules

angle \ velocity   NL    NS    Z     PS    PL
NL                 PL    PL    PL    PS    Z
NS                 PL    PL    PS    Z     NS
Z                  PL    PS    Z     NS    NL
PS                 PS    Z     NS    NL    NL
PL                 Z     NS    NL    NL    NL
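As a hedged sketch, the rule weights ci of (10.26) for the 25 rules can be computed from the membership functions of Figures 10.4 and 10.5. The triangle centers below (−π/2, −π/4, 0, π/4, π/2 for the angle and −π/4, −π/8, 0, π/8, π/4 for the angular velocity) are read off the figure axes; evenly spaced triangular shapes are an assumption for this illustration.

    # A sketch computing the activation weight c_i (formula (10.26)) of each
    # of the 25 pendulum rules.  Assumption: the five labels are evenly
    # spaced triangular membership functions as suggested by Figures 10.4
    # and 10.5.
    import math

    def triangle(center, half_width):
        return lambda x: max(0.0, 1.0 - abs(x - center) / half_width)

    LABELS = ["NL", "NS", "Z", "PS", "PL"]
    angle_mf = {lab: triangle(c, math.pi / 4)
                for lab, c in zip(LABELS, [-math.pi/2, -math.pi/4, 0, math.pi/4, math.pi/2])}
    velocity_mf = {lab: triangle(c, math.pi / 8)
                   for lab, c in zip(LABELS, [-math.pi/4, -math.pi/8, 0, math.pi/8, math.pi/4])}

    # Table 10.1: force label for each (angle label, velocity label) pair.
    TABLE = {
        "NL": ["PL", "PL", "PL", "PS", "Z"],
        "NS": ["PL", "PL", "PS", "Z", "NS"],
        "Z":  ["PL", "PS", "Z", "NS", "NL"],
        "PS": ["PS", "Z", "NS", "NL", "NL"],
        "PL": ["Z", "NS", "NL", "NL", "NL"],
    }

    def active_rules(angle, velocity):
        """Return the rules fired by the crisp inputs, with weights (10.26)."""
        rules = []
        for a_lab in LABELS:
            for v_lab, f_lab in zip(LABELS, TABLE[a_lab]):
                c = min(angle_mf[a_lab](angle), velocity_mf[v_lab](velocity))
                if c > 0:
                    rules.append((c, a_lab, v_lab, f_lab))
        return rules

    print(active_rules(0.1, -0.05))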

10.5 Bibliographic Notes

The basic uncertain inference rule was initialized by Liu [82] in 2010 using the tool of conditional uncertain sets. After that, Gao-Gao-Ralescu [42] extended the uncertain inference rule to the case with multiple antecedents and multiple if-then rules.

Based on the uncertain inference rules, Liu [82] suggested the concept of uncertain system, and then presented the tool of uncertain controller. As an important contribution, Peng-Chen [124] proved that uncertain systems are universal approximators and thereby demonstrated that the uncertain controller is a reasonable tool. As a successful application, Gao [46] balanced an inverted pendulum by using the uncertain controller.
Chapter 11

Uncertain Process

The study of uncertain process was started by Liu [78] in 2008 for modelling
the evolution of uncertain phenomena. This chapter will give the concept of
uncertain process, and introduce sample path, uncertainty distribution, in-
dependent increment process, extreme value, first hitting time, time integral,
and stationary increment process.

11.1 Uncertain Process


An uncertain process is essentially a sequence of uncertain variables indexed
by time. A formal definition is given below.

Definition 11.1 (Liu [78]) Let (Γ, L, M) be an uncertainty space and let T
be a totally ordered set (e.g. time). An uncertain process is a function Xt (γ)
from T × (Γ, L, M) to the set of real numbers such that {Xt ∈ B} is an event
for any Borel set B of real numbers at each time t.

Remark 11.1: The above definition says Xt is an uncertain process if and


only if it is an uncertain variable at each time t.

Example 11.1: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 } with


power set and M{γ1 } = 0.6, M{γ2 } = 0.4. Then
Xt(γ) = t,      if γ = γ1                              (11.1)
        t + 1,  if γ = γ2

is an uncertain process.

Example 11.2: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Then

Xt (γ) = t − γ, ∀γ ∈ Γ (11.2)

is an uncertain process.

Example 11.3: A real-valued function f (t) with respect to time t may be


regarded as a special uncertain process on an uncertainty space (Γ, L, M),
i.e.,
Xt (γ) = f (t), ∀γ ∈ Γ. (11.3)

Sample Path
Definition 11.2 (Liu [78]) Let Xt be an uncertain process. Then for each
γ ∈ Γ, the function Xt (γ) is called a sample path of Xt .

Note that each sample path is a real-valued function of time t. In addition,


an uncertain process may also be regarded as a function from an uncertainty
space to a collection of sample paths.

Figure 11.1: A Sample Path of an Uncertain Process

Definition 11.3 An uncertain process Xt is said to be sample-continuous if


almost all sample paths are continuous functions with respect to time t.

11.2 Uncertainty Distribution


An uncertainty distribution of uncertain process is a sequence of uncertainty
distributions of uncertain variables indexed by time. Thus an uncertainty
distribution of uncertain process is a surface rather than a curve. A formal
definition is given below.

Definition 11.4 (Liu [94]) The uncertainty distribution Φt (x) of an uncer-


tain process Xt is defined by

Φt (x) = M {Xt ≤ x} (11.4)

for any time t and any number x.



That is, the uncertain process Xt has an uncertainty distribution Φt (x)


if at each time t, the uncertain variable Xt has the uncertainty distribution
Φt (x).

Example 11.4: The linear uncertain process Xt ∼ L(at, bt) has an uncertainty distribution,

Φt(x) = 0,                    if x ≤ at
        (x − at)/((b − a)t),  if at ≤ x ≤ bt           (11.5)
        1,                    if x ≥ bt.

Example 11.5: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an uncertainty distribution,

Φt(x) = 0,                           if x ≤ at
        (x − at)/(2(b − a)t),        if at ≤ x ≤ bt
        (x + ct − 2bt)/(2(c − b)t),  if bt ≤ x ≤ ct    (11.6)
        1,                           if x ≥ ct.

Example 11.6: The normal uncertain process Xt ∼ N(et, σt) has an uncertainty distribution,

Φt(x) = ( 1 + exp( π(et − x) / (√3 σt) ) )⁻¹.          (11.7)

Example 11.7: The lognormal uncertain process Xt ∼ LOGN(et, σt) has an uncertainty distribution,

Φt(x) = ( 1 + exp( π(et − ln x) / (√3 σt) ) )⁻¹.       (11.8)
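As a sketch, the surface (11.7) can be evaluated directly; the function below is only a transcription of the formula, with illustrative parameter values.

    # A direct transcription of (11.7): uncertainty distribution of the
    # normal uncertain process Xt ~ N(e*t, sigma*t), evaluated at (t, x).
    # Note: t > 0 is assumed.
    import math

    def normal_process_cdf(t, x, e, sigma):
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) / (math.sqrt(3) * sigma * t)))

    print(normal_process_cdf(t=2.0, x=1.5, e=1.0, sigma=0.5))  # illustrative values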

Exercise 11.1: Take an uncertainty space (Γ, L, M) to be {γ1 , γ2 } with


power set and M{γ1 } = 0.6, M{γ2 } = 0.4. Derive the uncertainty distribu-
tion of the uncertain process
Xt(γ) = t,      if γ = γ1                              (11.9)
        t + 1,  if γ = γ2.

Exercise 11.2: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Derive the uncertainty distribution of the
uncertain process
Xt (γ) = t − γ, ∀γ ∈ Γ. (11.10)

Exercise 11.3: A real-valued function f (t) with respect to time t is a special


uncertain process. What is the uncertainty distribution of f (t)?
Theorem 11.1 (Liu [94], Sufficient and Necessary Condition) A function
Φt (x) : T × < → [0, 1] is an uncertainty distribution of uncertain process if
and only if at each time t, it is a monotone increasing function with respect
to x except Φt (x) ≡ 0 and Φt (x) ≡ 1.
Proof: If Φt (x) is an uncertainty distribution of some uncertain process
Xt , then at each time t, Φt (x) is the uncertainty distribution of uncertain
variable Xt . It follows from Peng-Iwamura theorem that Φt (x) is a monotone
increasing function with respect to x and Φt (x) 6≡ 0, Φt (x) 6≡ 1. Conversely,
if at each time t, Φt (x) is a monotone increasing function except Φt (x) ≡ 0
and Φt (x) ≡ 1, it follows from Peng-Iwamura theorem that there exists an
uncertain variable ξt whose uncertainty distribution is just Φt (x). Define
Xt = ξt , ∀t ∈ T.
Then Xt is an uncertain process and has the uncertainty distribution Φt (x).
The theorem is verified.
Theorem 11.2 Let Xt be an uncertain process with uncertainty distribution
Φt (x), and let f (x) be a continuous function. Then f (Xt ) is also an uncertain
process. Furthermore, (i) if f (x) is a strictly increasing function, then f (Xt )
has an uncertainty distribution
Ψt (x) = Φt (f −1 (x)); (11.11)
and (ii) if f (x) is a strictly decreasing function and Φt (x) is continuous with
respect to x, then f (Xt ) has an uncertainty distribution
Ψt (x) = 1 − Φt (f −1 (x)). (11.12)
Proof: At each time t, since Xt is an uncertain variable, it follows from
Theorem 2.1 that f (Xt ) is also an uncertain variable. Thus f (Xt ) is an
uncertain process. The equations (11.11) and (11.12) may be verified by the
operational law of uncertain variables immediately.

Example 11.8: Let Xt be an uncertain process with uncertainty distri-


bution Φt (x). Show that the uncertain process aXt + b has an uncertainty
distribution,

Ψt(x) = Φt((x − b)/a),      if a > 0                   (11.13)
        1 − Φt((x − b)/a),  if a < 0.

Regular Uncertainty Distribution


Definition 11.5 (Liu [94]) An uncertainty distribution Φt (x) is said to be
regular if at each time t, it is a continuous and strictly increasing function
with respect to x at which 0 < Φt (x) < 1, and

lim_{x→−∞} Φt(x) = 0,    lim_{x→+∞} Φt(x) = 1.         (11.14)

It is clear that linear uncertainty distribution, zigzag uncertainty distribu-


tion, normal uncertainty distribution and lognormal uncertainty distribution
of uncertain process are all regular.

Inverse Uncertainty Distribution


Definition 11.6 (Liu [94]) Let Xt be an uncertain process with regular uncertainty distribution Φt(x). Then the inverse function Φt⁻¹(α) is called the inverse uncertainty distribution of Xt.

Note that at each time t, the inverse uncertainty distribution Φt⁻¹(α) is well defined on the open interval (0, 1). If needed, we may extend the domain to [0, 1] via

Φt⁻¹(0) = lim_{α↓0} Φt⁻¹(α),    Φt⁻¹(1) = lim_{α↑1} Φt⁻¹(α).    (11.15)

Figure 11.2: Inverse Uncertainty Distribution of an Uncertain Process (curves Φt⁻¹(α) indexed by α = 0.1, 0.2, · · · , 0.9)

Example 11.9: The linear uncertain process Xt ∼ L(at, bt) has an inverse uncertainty distribution,

Φt⁻¹(α) = (1 − α)at + αbt.                             (11.16)

Example 11.10: The zigzag uncertain process Xt ∼ Z(at, bt, ct) has an inverse uncertainty distribution,

Φt⁻¹(α) = (1 − 2α)at + 2αbt,        if α < 0.5         (11.17)
          (2 − 2α)bt + (2α − 1)ct,  if α ≥ 0.5.

Example 11.11: The normal uncertain process Xt ∼ N(et, σt) has an inverse uncertainty distribution,

Φt⁻¹(α) = et + (σt√3/π) ln(α/(1 − α)).                 (11.18)

Example 11.12: The lognormal uncertain process Xt ∼ LOGN(et, σt) has an inverse uncertainty distribution,

Φt⁻¹(α) = exp( et + (σt√3/π) ln(α/(1 − α)) ).          (11.19)
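Likewise, the inverse distribution (11.18) is a one-liner; the sketch below is a direct transcription with illustrative parameter values, for α ∈ (0, 1).

    # A direct transcription of (11.18): inverse uncertainty distribution of
    # the normal uncertain process Xt ~ N(e*t, sigma*t).
    import math

    def normal_process_inv(t, alpha, e, sigma):
        return e * t + (sigma * t * math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))

    print(normal_process_inv(t=2.0, alpha=0.9, e=1.0, sigma=0.5))  # illustrative values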

Exercise 11.4: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure. Derive the inverse uncertainty distribution
of the uncertain process

Xt (γ) = t − γ, ∀γ ∈ Γ. (11.20)

Theorem 11.3 (Liu [94]) A function Φt⁻¹(α) : T × (0, 1) → ℜ is an inverse uncertainty distribution of uncertain process if at each time t, it is a continuous and strictly increasing function with respect to α.

Proof: At each time t, since Φt⁻¹(α) is a continuous and strictly increasing function with respect to α, it follows from Theorem 2.5 that there exists an uncertain variable ξt whose inverse uncertainty distribution is just Φt⁻¹(α). Define Xt = ξt for all t ∈ T. Then Xt is an uncertain process and has the inverse uncertainty distribution Φt⁻¹(α). The theorem is proved.

11.3 Independence and Operational Law


Definition 11.7 (Liu [94]) Uncertain processes X1t , X2t , · · · , Xnt are said
to be independent if for any positive integer k and any times t1 , t2 , · · · , tk ,
the uncertain vectors

ξ i = (Xit1 , Xit2 , · · · , Xitk ), i = 1, 2, · · · , n (11.21)


are independent, i.e., for any Borel sets B1, B2, · · · , Bn of k-dimensional real vectors, we have

M{ ∩_{i=1}^{n} (ξi ∈ Bi) } = min_{1≤i≤n} M{ξi ∈ Bi}.   (11.22)

Exercise 11.5: Let X1t , X2t , · · · , Xnt be independent uncertain processes,


and let t1 , t2 , · · · , tn be any times. Show that

X1t1 , X2t2 , · · · , Xntn (11.23)

are independent uncertain variables.

Exercise 11.6: Let Xt and Yt be independent uncertain processes. For any


times t1 , t2 , · · · , tk and s1 , s2 , · · · , sm , show that

(Xt1 , Xt2 , · · · , Xtk ) and (Ys1 , Ys2 , · · · , Ysm ) (11.24)

are independent uncertain vectors.

Theorem 11.4 (Liu [94]) Uncertain processes X1t, X2t, · · · , Xnt are independent if and only if for any positive integer k, any times t1, t2, · · · , tk, and any Borel sets B1, B2, · · · , Bn of k-dimensional real vectors, we have

M{ ∪_{i=1}^{n} (ξi ∈ Bi) } = max_{1≤i≤n} M{ξi ∈ Bi}    (11.25)

where ξi = (Xit1, Xit2, · · · , Xitk) for i = 1, 2, · · · , n.

Proof: It follows from Theorem 2.59 that ξ 1 , ξ 2 , · · · , ξ n are independent


uncertain vectors if and only if (11.25) holds. The theorem is thus verified.

Theorem 11.5 (Liu [94], Operational Law) Let X1t, X2t, · · · , Xnt be independent uncertain processes with regular uncertainty distributions Φ1t, Φ2t, · · · , Φnt, respectively. If the function f(x1, x2, · · · , xn) is strictly increasing with respect to x1, x2, · · · , xm and strictly decreasing with respect to xm+1, xm+2, · · · , xn, then

Xt = f(X1t, X2t, · · · , Xnt)                          (11.26)

has an inverse uncertainty distribution

Φt⁻¹(α) = f(Φ1t⁻¹(α), · · · , Φmt⁻¹(α), Φm+1,t⁻¹(1 − α), · · · , Φnt⁻¹(1 − α)).   (11.27)

Proof: At any time t, it is clear that X1t, X2t, · · · , Xnt are independent uncertain variables with inverse uncertainty distributions Φ1t⁻¹(α), Φ2t⁻¹(α), · · · , Φnt⁻¹(α), respectively. The theorem follows from the operational law of uncertain variables immediately.

Theorem 11.6 (Operational Law) Let X1t, X2t, · · · , Xnt be independent uncertain processes with continuous uncertainty distributions Φ1t, Φ2t, · · · , Φnt, respectively. If f(x1, x2, · · · , xn) is continuous, strictly increasing with respect to x1, x2, · · · , xm and strictly decreasing with respect to xm+1, xm+2, · · · , xn, then

Xt = f(X1t, X2t, · · · , Xnt)                          (11.28)

has an uncertainty distribution

Φt(x) = sup_{f(x1,x2,··· ,xn)=x} [ min_{1≤i≤m} Φit(xi) ∧ min_{m+1≤i≤n} (1 − Φit(xi)) ].   (11.29)

Proof: At any time t, it is clear that X1t, X2t, · · · , Xnt are independent uncertain variables. The theorem follows from the operational law of uncertain variables immediately.

11.4 Independent Increment Process


An independent increment process is an uncertain process that has indepen-
dent increments. A formal definition is given below.

Definition 11.8 (Liu [78]) An uncertain process Xt is said to have inde-


pendent increments if

Xt1 , Xt2 − Xt1 , Xt3 − Xt2 , · · · , Xtk − Xtk−1 (11.30)

are independent uncertain variables where t1 , t2 , · · · , tk are any times with


t1 < t2 < · · · < tk .

That is, an independent increment process means that its increments are
independent uncertain variables whenever the time intervals do not overlap.
Please note that the increments are also independent of the initial state.

Theorem 11.7 (Liu [94]) Let Φt⁻¹(α) be the inverse uncertainty distribution of an independent increment process. Then (i) Φt⁻¹(α) is a continuous and strictly increasing function with respect to α at each time t, and (ii) Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α for any times s < t.

Proof: Since Φt⁻¹(α) is the inverse uncertainty distribution of the independent increment process Xt, it follows from Theorem 11.3 that Φt⁻¹(α) is a continuous and strictly increasing function with respect to α. Since Xt = Xs + (Xt − Xs), for any α < β, we immediately have

Φt⁻¹(β) − Φt⁻¹(α) ≥ Φs⁻¹(β) − Φs⁻¹(α).

That is,

Φt⁻¹(β) − Φs⁻¹(β) ≥ Φt⁻¹(α) − Φs⁻¹(α).

Hence Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α. The theorem is verified.

Remark 11.2: It follows from Theorem 11.7 that the uncertainty distribu-
tion of independent increment process has a horn-like shape. See Figure 11.3.

Figure 11.3: Inverse Uncertainty Distribution of an Independent Increment Process: A Horn-like Family of Functions of t Indexed by α

Theorem 11.8 (Liu [94]) Let Φt⁻¹(α) : T × (0, 1) → ℜ be a function. If (i) Φt⁻¹(α) is a continuous and strictly increasing function with respect to α at each time t, and (ii) Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α for any times s < t, then there exists an independent increment process whose inverse uncertainty distribution is just Φt⁻¹(α).

Proof: Without loss of generality, we only consider the range of t ∈ [0, 1]. Let n be a positive integer. Since Φt⁻¹(α) is a continuous and strictly increasing function and Φt⁻¹(α) − Φs⁻¹(α) is a monotone increasing function with respect to α, there exist independent uncertain variables ξ0n, ξ1n, · · · , ξnn such that ξ0n has an inverse uncertainty distribution

Υ0n⁻¹(α) = Φ0⁻¹(α)

and ξin have uncertainty distributions

Υin(x) = sup{ α | Φ_{i/n}⁻¹(α) − Φ_{(i−1)/n}⁻¹(α) = x },   i = 1, 2, · · · , n,

respectively. Define an uncertain process

Xtⁿ = Σ_{i=0}^{k} ξin,  if t = k/n (k = 0, 1, · · · , n)
      linear,           otherwise.

It may be proved that Xtⁿ converges in distribution as n → ∞. Furthermore, we may verify that the limit is indeed an independent increment process and has the inverse uncertainty distribution Φt⁻¹(α). The theorem is verified.

Theorem 11.9 Let Xt be a sample-continuous independent increment process with regular uncertainty distribution Φt(x). Then for any α ∈ (0, 1), we have

M{Xt ≤ Φt⁻¹(α), ∀t} = α,                               (11.31)
M{Xt > Φt⁻¹(α), ∀t} = 1 − α.                           (11.32)

Proof: It is still a conjecture.

Remark 11.3: It can also be shown that for any α ∈ (0, 1), the following two equations are true,

M{Xt < Φt⁻¹(α), ∀t} = α,                               (11.33)
M{Xt ≥ Φt⁻¹(α), ∀t} = 1 − α.                           (11.34)

Please note that {Xt < Φt⁻¹(α), ∀t} and {Xt ≥ Φt⁻¹(α), ∀t} are disjoint events but not opposite. Although it is always true that

M{Xt < Φt⁻¹(α), ∀t} + M{Xt ≥ Φt⁻¹(α), ∀t} ≡ 1,         (11.35)

the union of {Xt < Φt⁻¹(α), ∀t} and {Xt ≥ Φt⁻¹(α), ∀t} does not make the universal set, and it is possible that

M{(Xt < Φt⁻¹(α), ∀t) ∪ (Xt ≥ Φt⁻¹(α), ∀t)} < 1.        (11.36)

11.5 Extreme Value Theorem

This section will present a series of extreme value theorems for sample-continuous independent increment processes.

Theorem 11.10 (Liu [90], Extreme Value Theorem) Let Xt be a sample-continuous independent increment process with uncertainty distribution Φt(x). Then the supremum

sup_{0≤t≤s} Xt                                         (11.37)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(x);                              (11.38)

and the infimum

inf_{0≤t≤s} Xt                                         (11.39)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(x).                              (11.40)

Proof: Let 0 = t1 < t2 < · · · < tn = s be a partition of the closed interval [0, s]. It is clear that

Xti = Xt1 + (Xt2 − Xt1) + · · · + (Xti − Xti−1)

for i = 1, 2, · · · , n. Since the increments

Xt1, Xt2 − Xt1, · · · , Xtn − Xtn−1

are independent uncertain variables, it follows from Theorem 2.18 that the maximum

max_{1≤i≤n} Xti

has an uncertainty distribution

min_{1≤i≤n} Φti(x).

Since Xt is sample-continuous, we have

max_{1≤i≤n} Xti → sup_{0≤t≤s} Xt    and    min_{1≤i≤n} Φti(x) → inf_{0≤t≤s} Φt(x)

as n → ∞. Thus (11.38) is proved. Similarly, it follows from Theorem 2.18 that the minimum

min_{1≤i≤n} Xti

has an uncertainty distribution

max_{1≤i≤n} Φti(x).

Since Xt is sample-continuous, we have

min_{1≤i≤n} Xti → inf_{0≤t≤s} Xt    and    max_{1≤i≤n} Φti(x) → sup_{0≤t≤s} Φt(x)

as n → ∞. Thus (11.40) is verified.
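Numerically, the extreme value theorem says that Ψ(x) is obtained by minimizing (or maximizing) Φt(x) over t. A grid-based sketch, using the normal process of Example 11.6 with illustrative parameters:

    # A grid sketch of Theorem 11.10 for the supremum of a normal uncertain
    # process: Psi(x) = inf over 0 <= t <= s of Phi_t(x).
    import math

    def phi(t, x, e=1.0, sigma=0.5):            # (11.7), illustrative values
        if t == 0:
            return 0.0 if x < 0 else 1.0        # X0 = 0 for this process
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - x) / (math.sqrt(3) * sigma * t)))

    def sup_distribution(x, s, n=1000):
        return min(phi(i * s / n, x) for i in range(n + 1))

    print(sup_distribution(x=1.2, s=2.0))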

Example 11.13: The sample-continuity condition in Theorem 11.10 cannot be removed. For example, take an uncertainty space (Γ, L, M) to be [0, 1] with Borel algebra and Lebesgue measure. Define a sample-discontinuous uncertain process

Xt(γ) = 0,  if γ ≠ t                                   (11.41)
        1,  if γ = t.

Since all increments are 0 almost surely, Xt is an independent increment process. On the one hand, Xt has an uncertainty distribution

Φt(x) = 0,  if x < 0                                   (11.42)
        1,  if x ≥ 0.

On the other hand, the supremum

sup_{0≤t≤1} Xt(γ) ≡ 1                                  (11.43)

has an uncertainty distribution

Ψ(x) = 0,  if x < 1                                    (11.44)
       1,  if x ≥ 1.

Thus

Ψ(x) ≠ inf_{0≤t≤1} Φt(x).                              (11.45)

Therefore, the sample-continuity condition cannot be removed.

Exercise 11.7: Let Xt be a sample-continuous independent increment process with uncertainty distribution Φt(x). Assume f is a continuous and strictly increasing function. Show that the supremum

sup_{0≤t≤s} f(Xt)                                      (11.46)

has an uncertainty distribution

Ψ(x) = inf_{0≤t≤s} Φt(f⁻¹(x));                         (11.47)

and the infimum

inf_{0≤t≤s} f(Xt)                                      (11.48)

has an uncertainty distribution

Ψ(x) = sup_{0≤t≤s} Φt(f⁻¹(x)).                         (11.49)

Exercise 11.8: Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution Φt(x). Assume f is a continuous and strictly decreasing function. Show that the supremum

sup_{0≤t≤s} f(Xt)                                      (11.50)

has an uncertainty distribution

Ψ(x) = 1 − sup_{0≤t≤s} Φt(f⁻¹(x));                     (11.51)

and the infimum

inf_{0≤t≤s} f(Xt)                                      (11.52)

has an uncertainty distribution

Ψ(x) = 1 − inf_{0≤t≤s} Φt(f⁻¹(x)).                     (11.53)

11.6 First Hitting Time

Definition 11.9 (Liu [90]) Let Xt be an uncertain process and let z be a given level. Then the uncertain variable

τz = inf{ t ≥ 0 | Xt = z }                             (11.54)

is called the first hitting time that Xt reaches the level z.

Figure 11.4: First Hitting Time

Theorem 11.11 (Liu [90]) Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution Φt(x). Then the first hitting time τz that Xt reaches the level z has an uncertainty distribution,

Υ(s) = 1 − inf_{0≤t≤s} Φt(z),  if z > X0               (11.55)
       sup_{0≤t≤s} Φt(z),      if z < X0.

Proof: When X0 < z, it follows from the definition of first hitting time that

τz ≤ s  if and only if  sup_{0≤t≤s} Xt ≥ z.

Thus the uncertainty distribution of τz is

Υ(s) = M{τz ≤ s} = M{ sup_{0≤t≤s} Xt ≥ z }.

By using the extreme value theorem, we obtain

Υ(s) = 1 − inf_{0≤t≤s} Φt(z).

When X0 > z, it follows from the definition of first hitting time that

τz ≤ s  if and only if  inf_{0≤t≤s} Xt ≤ z.

Thus the uncertainty distribution of τz is

Υ(s) = M{τz ≤ s} = M{ inf_{0≤t≤s} Xt ≤ z } = sup_{0≤t≤s} Φt(z).

The theorem is verified.
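A sketch of (11.55) for the normal process of Example 11.6 (with X0 = 0 and a level z > 0; the parameters are illustrative):

    # A grid sketch of Theorem 11.11: distribution of the first hitting time
    # of a level z > X0 for the normal uncertain process (here X0 = 0).
    import math

    def phi(t, z, e=1.0, sigma=0.5):            # (11.7), illustrative values
        if t == 0:
            return 0.0 if z < 0 else 1.0
        return 1.0 / (1.0 + math.exp(math.pi * (e * t - z) / (math.sqrt(3) * sigma * t)))

    def hitting_time_cdf(s, z, n=1000):
        """Upsilon(s) = 1 - inf over 0 <= t <= s of Phi_t(z), for z > 0."""
        return 1.0 - min(phi(i * s / n, z) for i in range(n + 1))

    print(hitting_time_cdf(s=2.0, z=1.2))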

Exercise 11.9: Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution Φt(x). Assume f is a continuous and strictly increasing function. Show that the first hitting time τz that f(Xt) reaches the level z has an uncertainty distribution,

Υ(s) = 1 − inf_{0≤t≤s} Φt(f⁻¹(z)),  if z > f(X0)       (11.56)
       sup_{0≤t≤s} Φt(f⁻¹(z)),      if z < f(X0).

Exercise 11.10: Let Xt be a sample-continuous independent increment process with continuous uncertainty distribution Φt(x). Assume f is a continuous and strictly decreasing function. Show that the first hitting time τz that f(Xt) reaches the level z has an uncertainty distribution,

Υ(s) = sup_{0≤t≤s} Φt(f⁻¹(z)),      if z > f(X0)       (11.57)
       1 − inf_{0≤t≤s} Φt(f⁻¹(z)),  if z < f(X0).

Exercise 11.11: Show that the sample-continuity condition in Theorem 11.11


cannot be removed.

11.7 Time Integral


This section will give a definition of time integral that is an integral of un-
certain process with respect to time.

Definition 11.10 (Liu [78]) Let Xt be an uncertain process. For any par-
tition of closed interval [a, b] with a = t1 < t2 < · · · < tk+1 = b, the mesh is
written as
∆ = max |ti+1 − ti |. (11.58)
1≤i≤k

Then the time integral of Xt with respect to t is

∫_a^b Xt dt = lim_{∆→0} Σ_{i=1}^{k} Xti · (ti+1 − ti)  (11.59)

provided that the limit exists almost surely and is finite. In this case, the uncertain process Xt is said to be time integrable.

Since Xt is an uncertain variable at each time t, the limit in (11.59) is


also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is time integrable if and only if the
limit in (11.59) is an uncertain variable.

Theorem 11.12 If Xt is a sample-continuous uncertain process on [a, b], then it is time integrable on [a, b].

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval [a, b]. Since the uncertain process Xt is sample-continuous, almost all sample paths are continuous functions with respect to t. Hence the limit

lim_{∆→0} Σ_{i=1}^{k} Xti(ti+1 − ti)

exists almost surely and is finite. On the other hand, since Xt is an uncertain variable at each time t, the above limit is also a measurable function. Hence the limit is an uncertain variable and then Xt is time integrable.

Theorem 11.13 If Xt is a time integrable uncertain process on [a, b], then it is time integrable on each subinterval of [a, b]. Moreover, if c ∈ [a, b], then

∫_a^b Xt dt = ∫_a^c Xt dt + ∫_c^b Xt dt.               (11.60)

Proof: Let [a′, b′] be a subinterval of [a, b]. Since Xt is a time integrable uncertain process on [a, b], for any partition

a = t1 < · · · < tm = a′ < tm+1 < · · · < tn = b′ < tn+1 < · · · < tk+1 = b,

the limit

lim_{∆→0} Σ_{i=1}^{k} Xti(ti+1 − ti)

exists almost surely and is finite. Thus the limit

lim_{∆→0} Σ_{i=m}^{n−1} Xti(ti+1 − ti)

exists almost surely and is finite. Hence Xt is time integrable on the subinterval [a′, b′]. Next, for the partition

a = t1 < · · · < tm = c < tm+1 < · · · < tk+1 = b,

we have

Σ_{i=1}^{k} Xti(ti+1 − ti) = Σ_{i=1}^{m−1} Xti(ti+1 − ti) + Σ_{i=m}^{k} Xti(ti+1 − ti).

Note that

∫_a^b Xt dt = lim_{∆→0} Σ_{i=1}^{k} Xti(ti+1 − ti),
∫_a^c Xt dt = lim_{∆→0} Σ_{i=1}^{m−1} Xti(ti+1 − ti),
∫_c^b Xt dt = lim_{∆→0} Σ_{i=m}^{k} Xti(ti+1 − ti).

Hence the equation (11.60) is proved.

Theorem 11.14 (Linearity of Time Integral) Let Xt and Yt be time integrable
uncertain processes on [a, b], and let α and β be real numbers. Then
\[
\int_a^b (\alpha X_t + \beta Y_t) \, \mathrm{d}t = \alpha \int_a^b X_t \, \mathrm{d}t + \beta \int_a^b Y_t \, \mathrm{d}t. \tag{11.61}
\]

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval
[a, b]. It follows from the definition of time integral that
\[
\begin{aligned}
\int_a^b (\alpha X_t + \beta Y_t) \, \mathrm{d}t
&= \lim_{\Delta \to 0} \sum_{i=1}^{k} (\alpha X_{t_i} + \beta Y_{t_i})(t_{i+1} - t_i) \\
&= \lim_{\Delta \to 0} \alpha \sum_{i=1}^{k} X_{t_i}(t_{i+1} - t_i) + \lim_{\Delta \to 0} \beta \sum_{i=1}^{k} Y_{t_i}(t_{i+1} - t_i) \\
&= \alpha \int_a^b X_t \, \mathrm{d}t + \beta \int_a^b Y_t \, \mathrm{d}t.
\end{aligned}
\]
Hence the equation (11.61) is proved.

Theorem 11.15 (Yao [189]) Let Xt be a sample-continuous independent
increment process with regular uncertainty distribution Φt(x). Then the time
integral
\[
Y_s = \int_0^s X_t \, \mathrm{d}t \tag{11.62}
\]
has an inverse uncertainty distribution
\[
\Psi_s^{-1}(\alpha) = \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t. \tag{11.63}
\]

Proof: For any given time s > 0, it follows from the basic property of time
integral that
\[
\left\{ \int_0^s X_t \, \mathrm{d}t \le \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t \right\} \supset \left\{ X_t \le \Phi_t^{-1}(\alpha), \forall t \right\}.
\]
By using Theorem 11.9, we obtain
\[
\mathcal{M}\left\{ \int_0^s X_t \, \mathrm{d}t \le \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t \right\} \ge \mathcal{M}\left\{ X_t \le \Phi_t^{-1}(\alpha), \forall t \right\} = \alpha.
\]
Similarly, since
\[
\left\{ \int_0^s X_t \, \mathrm{d}t > \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t \right\} \supset \left\{ X_t > \Phi_t^{-1}(\alpha), \forall t \right\},
\]
we have
\[
\mathcal{M}\left\{ \int_0^s X_t \, \mathrm{d}t > \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t \right\} \ge \mathcal{M}\left\{ X_t > \Phi_t^{-1}(\alpha), \forall t \right\} = 1 - \alpha.
\]
It follows from the above two inequalities and the duality axiom that
\[
\mathcal{M}\left\{ \int_0^s X_t \, \mathrm{d}t \le \int_0^s \Phi_t^{-1}(\alpha) \, \mathrm{d}t \right\} = \alpha.
\]
Thus the time integral Ys has the inverse uncertainty distribution $\Psi_s^{-1}(\alpha)$.
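
The formula (11.63) is easy to evaluate numerically. The sketch below assumes
a hypothetical inverse uncertainty distribution of the linear form
Φt⁻¹(α) = µ(α) + ν(α)t (the shape established in Theorem 11.18 of the next
section), so the quadrature can be checked against the closed form
µ(α)s + ν(α)s²/2.

    # A sketch of (11.63): integrate Phi_t^{-1}(alpha) over [0, s].
    from math import log

    def mu(alpha):                # hypothetical inverse distribution of X_0
        return log(alpha / (1.0 - alpha))

    def nu(alpha):                # hypothetical inverse distribution of X_1 - X_0
        return 1.0 + alpha

    def psi_inv(s, alpha, k=100000):
        """Midpoint-rule approximation of the integral in (11.63)."""
        dt = s / k
        return sum((mu(alpha) + nu(alpha) * (i + 0.5) * dt) * dt
                   for i in range(k))

    s, alpha = 2.0, 0.8
    print(psi_inv(s, alpha))                        # numerical value
    print(mu(alpha) * s + nu(alpha) * s * s / 2)    # closed form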

Exercise 11.12: Let Xt be a sample-continuous independent increment process
with regular uncertainty distribution Φt(x), and let J(x) be a strictly
increasing function. Show that the time integral
\[
Y_s = \int_0^s J(X_t) \, \mathrm{d}t \tag{11.64}
\]
has an inverse uncertainty distribution
\[
\Psi_s^{-1}(\alpha) = \int_0^s J(\Phi_t^{-1}(\alpha)) \, \mathrm{d}t. \tag{11.65}
\]

Exercise 11.13: Let Xt be a sample-continuous independent increment process
with regular uncertainty distribution Φt(x), and let J(x) be a strictly
decreasing function. Show that the time integral
\[
Y_s = \int_0^s J(X_t) \, \mathrm{d}t \tag{11.66}
\]
has an inverse uncertainty distribution
\[
\Psi_s^{-1}(\alpha) = \int_0^s J(\Phi_t^{-1}(1 - \alpha)) \, \mathrm{d}t. \tag{11.67}
\]

11.8 Stationary Increment Process


An uncertain process Xt is said to have stationary increments if its increments
are identically distributed uncertain variables whenever the time intervals
have the same length, i.e., for any given t > 0, the increments Xs+t − Xs are
identically distributed uncertain variables for all s > 0.

Definition 11.11 (Liu [78]) An uncertain process is said to be a stationary


independent increment process if it has not only stationary increments but
also independent increments.

It is clear that a stationary independent increment process is a special


independent increment process.

Theorem 11.16 Let Xt be a stationary independent increment process. Then


for any real numbers a and b, the uncertain process

Yt = aXt + b (11.68)

is also a stationary independent increment process.

Proof: Since Xt is an independent increment process, the uncertain variables

Xt1 , Xt2 − Xt1 , Xt3 − Xt2 , · · · , Xtk − Xtk−1

are independent. It follows from Yt = aXt + b and Theorem 2.7 that

Yt1 , Yt2 − Yt1 , Yt3 − Yt2 , · · · , Ytk − Ytk−1

are also independent. That is, Yt is an independent increment process. On


the other hand, since Xt is a stationary increment process, the increments
Xs+t − Xs are identically distributed uncertain variables for all s > 0. Thus

Ys+t − Ys = a(Xs+t − Xs )

are also identically distributed uncertain variables for all s > 0, and Yt is a
stationary increment process. Hence Yt is a stationary independent increment
process.

Remark 11.4: Generally speaking, a nonlinear function of stationary inde-


pendent increment process is not necessarily a stationary independent incre-
ment process. A typical example is the square of a stationary independent
increment process.

Theorem 11.17 (Chen [10]) Suppose Xt is a stationary independent in-


crement process. Then Xt and (1 − t)X0 + tX1 are identically distributed
uncertain variables for any time t ≥ 0.

Proof: We first prove the theorem when t is a rational number. Assume
t = q/p where p and q are positive integers with q/p in lowest terms. Let Φ
be the common uncertainty distribution of the increments
\[
X_{1/p} - X_{0/p},\ X_{2/p} - X_{1/p},\ X_{3/p} - X_{2/p},\ \cdots
\]
Then
\[
X_t - X_0 = (X_{1/p} - X_{0/p}) + (X_{2/p} - X_{1/p}) + \cdots + (X_{q/p} - X_{(q-1)/p})
\]
has an uncertainty distribution
\[
\Psi(x) = \Phi(x/q). \tag{11.69}
\]
In addition,
\[
t(X_1 - X_0) = t\left( (X_{1/p} - X_{0/p}) + (X_{2/p} - X_{1/p}) + \cdots + (X_{p/p} - X_{(p-1)/p}) \right)
\]
has an uncertainty distribution
\[
\Upsilon(x) = \Phi\left(\frac{x}{pt}\right) = \Phi\left(\frac{x}{p \cdot q/p}\right) = \Phi(x/q). \tag{11.70}
\]
It follows from (11.69) and (11.70) that Xt − X0 and t(X1 − X0) are
identically distributed, and so are Xt and (1 − t)X0 + tX1. Finally, the case
of irrational t can be obtained by approximating t with rational numbers.

Remark 11.5: If Xt is a stationary independent increment process with
X0 = 0, then Xt/t and X1 are identically distributed uncertain variables. In
other words, there is an uncertainty distribution Φ such that
\[
\frac{X_t}{t} \sim \Phi(x) \tag{11.71}
\]
or equivalently,
\[
X_t \sim \Phi\left(\frac{x}{t}\right) \tag{11.72}
\]
for any time t > 0. Note that Φ is just the uncertainty distribution of X1.

Theorem 11.18 (Liu [94]) Let Xt be a stationary independent increment
process whose initial value and increments have inverse uncertainty
distributions. Then there exist two continuous and strictly increasing
functions µ and ν such that Xt has an inverse uncertainty distribution
\[
\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha) t. \tag{11.73}
\]

Proof: Note that X0 and X1 − X0 are independent uncertain variables whose
inverse uncertainty distributions exist and are denoted by µ(α) and ν(α),
respectively. It is clear that µ(α) and ν(α) are continuous and strictly
increasing functions. Furthermore, it follows from Theorem 11.17 that Xt
and X0 + (X1 − X0)t are identically distributed uncertain variables. Hence
[Figure 11.5: Inverse Uncertainty Distribution of Stationary Independent Increment Process]

Xt has the inverse uncertainty distribution $\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha) t$.
The theorem is verified.

Remark 11.6: The inverse uncertainty distribution of a stationary independent
increment process is a family of linear functions of t indexed by α. See
Figure 11.5.

Theorem 11.19 (Liu [94]) Let µ and ν be continuous and strictly increasing
functions on (0, 1). Then there exists a stationary independent increment
process Xt whose inverse uncertainty distribution is
\[
\Phi_t^{-1}(\alpha) = \mu(\alpha) + \nu(\alpha) t. \tag{11.74}
\]
Furthermore, Xt has a Lipschitz continuous version.

Proof: Without loss of generality, we only consider the range of t ∈ [0, 1].
Let
\[
\left\{ \xi(r) \mid r \text{ represents rational numbers in } [0, 1] \right\}
\]
be a countable sequence of independent uncertain variables, where ξ(0) has
an inverse uncertainty distribution µ(α) and ξ(r) have a common inverse
uncertainty distribution ν(α) for all rational numbers r in (0, 1]. For each
positive integer n, we define an uncertain process
\[
X_t^n = \begin{cases}
\xi(0) + \dfrac{1}{n} \displaystyle\sum_{i=1}^{k} \xi\left(\dfrac{i}{n}\right), & \text{if } t = \dfrac{k}{n}\ (k = 1, 2, \cdots, n) \\[3mm]
\text{linear}, & \text{otherwise}.
\end{cases}
\]
It may be proved that $X_t^n$ converges in distribution as n → ∞. Furthermore,
we may verify that the limit is a stationary independent increment process
and has the inverse uncertainty distribution $\Phi_t^{-1}(\alpha)$. The theorem is
verified.

Theorem 11.20 (Liu [84]) Let Xt be a stationary independent increment


process. Then there exist two real numbers a and b such that

E[Xt ] = a + bt (11.75)

for any time t ≥ 0.

Proof: It follows from Theorem 11.17 that Xt and X0 + (X1 − X0 )t are


identically distributed uncertain variables. Thus we have

E[Xt ] = E[X0 + (X1 − X0 )t].

Since X0 and X1 − X0 are independent uncertain variables, we obtain

E[Xt ] = E[X0 ] + E[X1 − X0 ]t.

Hence (11.75) holds for a = E[X0 ] and b = E[X1 − X0 ].

Theorem 11.21 (Liu [84]) Let Xt be a stationary independent increment


process with an initial value 0. Then for any times s and t, we have

E[Xs+t ] = E[Xs ] + E[Xt ]. (11.76)

Proof: It follows from Theorem 11.20 that there exists a real number b such
that E[Xt ] = bt for any time t ≥ 0. Hence

E[Xs+t ] = b(s + t) = bs + bt = E[Xs ] + E[Xt ].

Theorem 11.22 (Chen [10]) Let Xt be a stationary independent increment
process with a crisp initial value X0. Then there exists a real number b such
that
\[
V[X_t] = b t^2 \tag{11.77}
\]
for any time t ≥ 0.

Proof: It follows from Theorem 11.17 that Xt and (1 − t)X0 + tX1 are
identically distributed uncertain variables. Since X0 is a constant, we have
\[
V[X_t] = V[(1 - t)X_0 + t X_1] = t^2 V[X_1].
\]
Hence (11.77) holds for b = V[X1].

Theorem 11.23 (Chen [10]) Let Xt be a stationary independent increment
process with a crisp initial value X0. Then for any times s and t, we have
\[
\sqrt{V[X_{s+t}]} = \sqrt{V[X_s]} + \sqrt{V[X_t]}. \tag{11.78}
\]

Proof: It follows from Theorem 11.22 that there exists a real number b such
that V[Xt] = bt² for any time t ≥ 0. Hence
\[
\sqrt{V[X_{s+t}]} = \sqrt{b}(s + t) = \sqrt{b}\, s + \sqrt{b}\, t = \sqrt{V[X_s]} + \sqrt{V[X_t]}.
\]

11.9 Bibliographic Notes


The study of uncertain process was started by Liu [78] in 2008 for modelling
the evolution of uncertain phenomena. In order to describe uncertain pro-
cess, Liu [94] proposed the uncertainty distribution and inverse uncertainty
distribution. In addition, the independence concept of uncertain processes
was introduced by Liu [94].
Independent increment process was initiated by Liu [78], and a sufficient
and necessary condition was proved by Liu [94] for its inverse uncertainty
distribution. In addition, Liu [90] presented an extreme value theorem and
obtained the uncertainty distribution of first hitting time, and Yao [189]
provided a formula for calculating the inverse uncertainty distribution of
time integral of independent increment process.
Stationary independent increment process was initiated by Liu [78], and
its inverse uncertainty distribution was investigated by Liu [94]. Furthermore,
Liu [84] showed that the expected value is a linear function of time, and Chen
[10] verified that the variance is proportional to the square of time.
Chapter 12

Uncertain Renewal Process

Uncertain renewal process is an uncertain process in which events occur
continuously and independently of one another at uncertain times. This chapter
will introduce uncertain renewal process, renewal reward process, and alter-
nating renewal process. This chapter will also provide block replacement
policy, age replacement policy, and an uncertain insurance model.

12.1 Uncertain Renewal Process


Definition 12.1 (Liu [78]) Let ξ1, ξ2, · · · be iid uncertain interarrival times.
Define S0 = 0 and Sn = ξ1 + ξ2 + · · · + ξn for n ≥ 1. Then the uncertain
process
\[
N_t = \max_{n \ge 0} \left\{ n \mid S_n \le t \right\} \tag{12.1}
\]
is called an uncertain renewal process.

It is clear that Sn is a stationary independent increment process with re-


spect to n. Since ξ1 , ξ2 , · · · denote the interarrival times of successive events,
Sn can be regarded as the waiting time until the occurrence of the nth event.
In this case, the renewal process Nt is the number of renewals in (0, t]. Note
that Nt is not sample-continuous, but each sample path of Nt is a right-
continuous and increasing step function taking only nonnegative integer val-
ues. Furthermore, since the interarrival times are always assumed to be
positive uncertain variables, the size of each jump of Nt is always 1. In other
words, Nt has at most one renewal at each time. In particular, Nt does not
jump at time 0.

Theorem 12.1 (Fundamental Relationship) Let Nt be a renewal process


with uncertain interarrival times ξ1 , ξ2 , · · · , and Sn = ξ1 + ξ2 + · · · + ξn .
[Figure 12.1: A Sample Path of Renewal Process]

Then we have
\[
N_t \ge n \iff S_n \le t \tag{12.2}
\]
for any time t and integer n. Furthermore, we also have
\[
N_t \le n \iff S_{n+1} > t. \tag{12.3}
\]

Proof: Since Nt is the largest n such that Sn ≤ t, we have $S_{N_t} \le t < S_{N_t+1}$.
If Nt ≥ n, then $S_n \le S_{N_t} \le t$. Conversely, if Sn ≤ t, then $S_n < S_{N_t+1}$,
which implies Nt ≥ n. Thus (12.2) is verified. Similarly, if Nt ≤ n, then
Nt + 1 ≤ n + 1 and $S_{n+1} \ge S_{N_t+1} > t$. Conversely, if Sn+1 > t, then
$S_{n+1} > S_{N_t}$, which implies Nt ≤ n. Thus (12.3) is verified.

Exercise 12.1: Let Nt be a renewal process with uncertain interarrival times


ξ1 , ξ2 , · · · , and Sn = ξ1 + ξ2 + · · · + ξn . Show that

M{Nt ≥ n} = M{Sn ≤ t}, (12.4)

M{Nt ≤ n} = 1 − M{Sn+1 ≤ t}. (12.5)

Theorem 12.2 (Liu [84]) Let Nt be a renewal process with iid uncertain
interarrival times ξ1, ξ2, · · · If Φ is the common uncertainty distribution of
those interarrival times, then Nt has an uncertainty distribution
\[
\Upsilon_t(x) = 1 - \Phi\left( \frac{t}{\lfloor x \rfloor + 1} \right), \quad \forall x \ge 0 \tag{12.6}
\]
where ⌊x⌋ represents the maximal integer less than or equal to x.

Proof: Note that Sn+1 has an uncertainty distribution Φ(x/(n + 1)). It
follows from (12.5) that
\[
\mathcal{M}\{N_t \le n\} = 1 - \mathcal{M}\{S_{n+1} \le t\} = 1 - \Phi\left( \frac{t}{n+1} \right).
\]
Since Nt takes integer values, for any x ≥ 0, we have
\[
\Upsilon_t(x) = \mathcal{M}\{N_t \le x\} = \mathcal{M}\{N_t \le \lfloor x \rfloor\} = 1 - \Phi\left( \frac{t}{\lfloor x \rfloor + 1} \right).
\]
The theorem is verified.
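
For a concrete picture of this step function, the following Python sketch
evaluates (12.6) under hypothetical linear L(1, 3) interarrival times.

    # A sketch of (12.6), assuming hypothetical L(1, 3) interarrival times.
    from math import floor

    def phi(x, lo=1.0, hi=3.0):
        """Uncertainty distribution of a linear uncertain variable L(lo, hi)."""
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    def upsilon(t, x):
        """Uncertainty distribution of N_t at x >= 0, by Theorem 12.2."""
        return 1.0 - phi(t / (floor(x) + 1))

    # The distribution is a right-continuous increasing step function of x.
    print([round(upsilon(10.0, n), 3) for n in range(8)])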
[Figure 12.2: Uncertainty Distribution Υt(x) of Renewal Process Nt]

Theorem 12.3 (Liu [84], Elementary Renewal Theorem) Let Nt be a renewal
process with iid uncertain interarrival times ξ1, ξ2, · · · Then the average
renewal number
\[
\frac{N_t}{t} \to \frac{1}{\xi_1} \tag{12.7}
\]
in the sense of convergence in distribution as t → ∞.

Proof: The uncertainty distribution Υt of Nt has been given by Theorem 12.2
as follows,
\[
\Upsilon_t(x) = 1 - \Phi\left( \frac{t}{\lfloor x \rfloor + 1} \right)
\]
where Φ is the uncertainty distribution of ξ1. It follows from the operational
law that the uncertainty distribution of Nt/t is
\[
\Psi_t(x) = 1 - \Phi\left( \frac{t}{\lfloor tx \rfloor + 1} \right)
\]
where ⌊tx⌋ represents the maximal integer less than or equal to tx. Thus at
each continuity point x of 1 − Φ(1/x), we have
\[
\lim_{t \to \infty} \Psi_t(x) = 1 - \Phi\left( \frac{1}{x} \right)
\]
which is just the uncertainty distribution of 1/ξ1. Hence Nt/t converges in
distribution to 1/ξ1 as t → ∞.

Theorem 12.4 (Liu [84], Elementary Renewal Theorem) Let Nt be a renewal
process with iid uncertain interarrival times ξ1, ξ2, · · · Then
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = E\left[ \frac{1}{\xi_1} \right]. \tag{12.8}
\]
If Φ is the common uncertainty distribution of those interarrival times, then
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = \int_0^{+\infty} \Phi\left( \frac{1}{x} \right) \mathrm{d}x. \tag{12.9}
\]
If the uncertainty distribution Φ is regular, then
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = \int_0^1 \frac{1}{\Phi^{-1}(\alpha)} \, \mathrm{d}\alpha. \tag{12.10}
\]

Proof: Write the uncertainty distributions of Nt/t and 1/ξ1 by Ψt(x) and
G(x), respectively. Theorem 12.3 says that Ψt(x) → G(x) as t → ∞ at each
continuity point x of G(x). Note that Ψt(x) ≥ G(x). It follows from the
Lebesgue dominated convergence theorem and the existence of E[1/ξ1] that
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = \lim_{t \to \infty} \int_0^{+\infty} (1 - \Psi_t(x)) \mathrm{d}x = \int_0^{+\infty} (1 - G(x)) \mathrm{d}x = E\left[ \frac{1}{\xi_1} \right].
\]
Since 1/ξ1 has an uncertainty distribution 1 − Φ(1/x), we have
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = E\left[ \frac{1}{\xi_1} \right] = \int_0^{+\infty} \Phi\left( \frac{1}{x} \right) \mathrm{d}x.
\]
Furthermore, since 1/ξ1 has an inverse uncertainty distribution
\[
G^{-1}(\alpha) = \frac{1}{\Phi^{-1}(1 - \alpha)},
\]
we get
\[
E\left[ \frac{1}{\xi_1} \right] = \int_0^1 \frac{1}{\Phi^{-1}(1 - \alpha)} \, \mathrm{d}\alpha = \int_0^1 \frac{1}{\Phi^{-1}(\alpha)} \, \mathrm{d}\alpha.
\]
The theorem is proved.

Exercise 12.2: A renewal process Nt is called linear if ξ1, ξ2, · · · are iid
linear uncertain variables L(a, b) with a > 0. Show that
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = \frac{\ln b - \ln a}{b - a}. \tag{12.11}
\]

Exercise 12.3: A renewal process Nt is called zigzag if ξ1, ξ2, · · · are iid
zigzag uncertain variables Z(a, b, c) with a > 0. Show that
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = \frac{1}{2}\left( \frac{\ln b - \ln a}{b - a} + \frac{\ln c - \ln b}{c - b} \right). \tag{12.12}
\]

Exercise 12.4: A renewal process Nt is called lognormal if ξ1, ξ2, · · · are iid
lognormal uncertain variables LOGN(e, σ). Show that
\[
\lim_{t \to \infty} \frac{E[N_t]}{t} = \begin{cases}
\sqrt{3}\,\sigma \exp(-e) \csc(\sqrt{3}\,\sigma), & \text{if } \sigma < \pi/\sqrt{3} \\
+\infty, & \text{if } \sigma \ge \pi/\sqrt{3}.
\end{cases} \tag{12.13}
\]
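
The limit (12.10) can also be checked numerically; the sketch below compares
a midpoint quadrature with the closed form (12.11) for hypothetical linear
L(1, 3) interarrival times.

    # A numerical check of (12.10) against (12.11) for L(1, 3) interarrivals.
    from math import log

    a, b = 1.0, 3.0

    def phi_inv(alpha):
        """Inverse uncertainty distribution of the linear variable L(a, b)."""
        return a + (b - a) * alpha

    k = 1000000
    rate = sum(1.0 / phi_inv((i + 0.5) / k) for i in range(k)) / k
    print(rate, (log(b) - log(a)) / (b - a))    # both approximately 0.5493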

12.2 Block Replacement Policy


Block replacement policy means that an element is always replaced at fail-
ure or periodically with time s. Assume that the lifetimes of elements are
iid uncertain variables ξ1 , ξ2 , · · · with a common uncertainty distribution Φ.
Then the replacement times form an uncertain renewal process Nt . Let a
denote the “failure replacement” cost of replacing an element when it fails
earlier than s, and b the “planned replacement” cost of replacing an element
at planned time s. Note that a > b > 0 is always assumed. It is clear that
the cost of one period is aNs + b and the average cost is
\[
\frac{a N_s + b}{s}. \tag{12.14}
\]
Theorem 12.5 (Ke-Yao [67]) Assume the lifetimes of elements are iid
uncertain variables ξ1, ξ2, · · · with a common uncertainty distribution Φ, and
Nt is the uncertain renewal process representing the replacement times. Then
the average cost has an expected value
\[
E\left[ \frac{a N_s + b}{s} \right] = \frac{1}{s}\left( a \sum_{n=1}^{\infty} \Phi\left( \frac{s}{n} \right) + b \right). \tag{12.15}
\]

Proof: Note that the uncertainty distribution of Nt is a step function. It
follows from Theorem 12.2 that
\[
E[N_s] = \int_0^{+\infty} \Phi\left( \frac{s}{\lfloor x \rfloor + 1} \right) \mathrm{d}x = \sum_{n=1}^{\infty} \Phi\left( \frac{s}{n} \right).
\]
Thus (12.15) is verified by
\[
E\left[ \frac{a N_s + b}{s} \right] = \frac{a E[N_s] + b}{s}. \tag{12.16}
\]

What is the optimal time s?

When the block replacement policy is accepted, one problem is concerned
with finding an optimal time s in order to minimize the average cost, i.e.,
\[
\min_{s} \frac{1}{s}\left( a \sum_{n=1}^{\infty} \Phi\left( \frac{s}{n} \right) + b \right). \tag{12.17}
\]
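
A crude way to solve (12.17) is a grid search over s. The sketch below assumes
hypothetical linear L(1, 3) lifetimes and costs a = 5 and b = 1; truncating the
series is harmless here because Φ(s/n) vanishes once s/n drops below the lower
endpoint of the lifetime distribution.

    # A sketch of the block replacement optimization (12.17).

    def phi(x, lo=1.0, hi=3.0):
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    def average_cost(s, a=5.0, b=1.0, terms=1000):
        """The objective in (12.17) with the series (12.15) truncated."""
        return (a * sum(phi(s / n) for n in range(1, terms + 1)) + b) / s

    grid = [0.1 * i for i in range(5, 51)]      # candidate times in [0.5, 5]
    s_best = min(grid, key=average_cost)
    print(s_best, round(average_cost(s_best), 4))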

12.3 Renewal Reward Process


Let (ξ1 , η1 ), (ξ2 , η2 ), · · · be a sequence of pairs of uncertain variables. We
shall interpret ηi as the rewards (or costs) associated with the i-th interarrival
times ξi for i = 1, 2, · · · , respectively.
Definition 12.2 (Liu [84]) Let ξ1, ξ2, · · · be iid uncertain interarrival times,
and let η1, η2, · · · be iid uncertain rewards. Then
\[
R_t = \sum_{i=1}^{N_t} \eta_i \tag{12.18}
\]
is called a renewal reward process, where Nt is the renewal process with
uncertain interarrival times ξ1, ξ2, · · ·
A renewal reward process Rt denotes the total reward earned by time t.
In addition, if ηi ≡ 1, then Rt degenerates to a renewal process Nt . Please
also note that Rt = 0 whenever Nt = 0.
Theorem 12.6 (Liu [84]) Let Rt be a renewal reward process with iid
uncertain interarrival times ξ1, ξ2, · · · and iid uncertain rewards η1, η2, · · ·
Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · ) are independent uncertain vectors, and
those interarrival times and rewards have uncertainty distributions Φ and Ψ,
respectively. Then Rt has an uncertainty distribution
\[
\Upsilon_t(x) = \max_{k \ge 0}\left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{x}{k} \right). \tag{12.19}
\]
Here we set x/k = +∞ and Ψ(x/k) = 1 when k = 0.

Proof: It follows from the definition of renewal reward process that the
renewal process Nt is independent of the uncertain rewards η1, η2, · · · , and
Rt has an uncertainty distribution
\[
\begin{aligned}
\Upsilon_t(x) &= \mathcal{M}\left\{ \sum_{i=1}^{N_t} \eta_i \le x \right\} = \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} (N_t = k) \cap \left( \sum_{i=1}^{k} \eta_i \le x \right) \right\} \\
&= \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} (N_t \le k) \cap \left( \sum_{i=1}^{k} \eta_i \le x \right) \right\} \quad \text{(this is a polyrectangle)} \\
&= \max_{k \ge 0} \mathcal{M}\left\{ (N_t \le k) \cap \left( \sum_{i=1}^{k} \eta_i \le x \right) \right\} \quad \text{(polyrectangular theorem)} \\
&= \max_{k \ge 0} \mathcal{M}\{N_t \le k\} \wedge \mathcal{M}\left\{ \sum_{i=1}^{k} \eta_i \le x \right\} \quad \text{(independence)} \\
&= \max_{k \ge 0}\left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{x}{k} \right).
\end{aligned}
\]
[Figure 12.3: Uncertainty Distribution Υt(x) of Renewal Reward Process Rt, in which the dashed horizontal lines are 1 − Φ(t/(k + 1)) and the dashed curves are Ψ(x/k) for k = 0, 1, 2, · · · ]

The theorem is proved.
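
The maximum in (12.19) is straightforward to evaluate numerically; the sketch
below assumes hypothetical linear L(1, 3) interarrival times and L(2, 4)
rewards.

    # A sketch of (12.19) for hypothetical linear distributions.

    def lin(lo, hi):
        """Uncertainty distribution of a linear uncertain variable L(lo, hi)."""
        def F(x):
            if x <= lo:
                return 0.0
            if x >= hi:
                return 1.0
            return (x - lo) / (hi - lo)
        return F

    phi, psi = lin(1.0, 3.0), lin(2.0, 4.0)

    def upsilon(t, x, kmax=1000):
        """Uncertainty distribution of R_t at x, by Theorem 12.6."""
        best = 1.0 - phi(t)               # the k = 0 term, with Psi(x/0) = 1
        for k in range(1, kmax + 1):
            best = max(best, min(1.0 - phi(t / (k + 1)), psi(x / k)))
        return best

    print(round(upsilon(10.0, 20.0), 4))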

Theorem 12.7 (Liu [84], Renewal Reward Theorem) Let Rt be a renewal
reward process with iid uncertain interarrival times ξ1, ξ2, · · · and iid
uncertain rewards η1, η2, · · · Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · ) are
independent uncertain vectors. Then the reward rate
\[
\frac{R_t}{t} \to \frac{\eta_1}{\xi_1} \tag{12.20}
\]
in the sense of convergence in distribution as t → ∞.

Proof: Assume those interarrival times and rewards have uncertainty
distributions Φ and Ψ, respectively. It follows from Theorem 12.6 that the
uncertainty distribution of Rt is
\[
\Upsilon_t(x) = \max_{k \ge 0}\left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{x}{k} \right).
\]
Then Rt/t has an uncertainty distribution
\[
\Psi_t(x) = \max_{k \ge 0}\left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{tx}{k} \right).
\]
When t → ∞, we have
\[
\Psi_t(x) \to \sup_{y \ge 0} (1 - \Phi(y)) \wedge \Psi(xy)
\]
which is just the uncertainty distribution of η1/ξ1. Hence Rt/t converges in
distribution to η1/ξ1 as t → ∞.

Theorem 12.8 (Liu [84], Renewal Reward Theorem) Let Rt be a renewal
reward process with iid uncertain interarrival times ξ1, ξ2, · · · and iid
uncertain rewards η1, η2, · · · Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · ) are
independent uncertain vectors. Then
\[
\lim_{t \to \infty} \frac{E[R_t]}{t} = E\left[ \frac{\eta_1}{\xi_1} \right]. \tag{12.21}
\]
If those interarrival times and rewards have regular uncertainty distributions
Φ and Ψ, respectively, then
\[
\lim_{t \to \infty} \frac{E[R_t]}{t} = \int_0^1 \frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1 - \alpha)} \, \mathrm{d}\alpha. \tag{12.22}
\]

Proof: It follows from Theorem 12.6 that Rt/t has an uncertainty distribution
\[
F_t(x) = \max_{k \ge 0}\left( 1 - \Phi\left( \frac{t}{k+1} \right) \right) \wedge \Psi\left( \frac{tx}{k} \right)
\]
and η1/ξ1 has an uncertainty distribution
\[
G(x) = \sup_{y \ge 0} (1 - \Phi(y)) \wedge \Psi(xy).
\]
Note that Ft(x) → G(x) and Ft(x) ≥ G(x). It follows from the Lebesgue
dominated convergence theorem and the existence of E[η1/ξ1] that
\[
\lim_{t \to \infty} \frac{E[R_t]}{t} = \lim_{t \to \infty} \int_0^{+\infty} (1 - F_t(x)) \mathrm{d}x = \int_0^{+\infty} (1 - G(x)) \mathrm{d}x = E\left[ \frac{\eta_1}{\xi_1} \right].
\]
Finally, since η1/ξ1 has an inverse uncertainty distribution
\[
G^{-1}(\alpha) = \frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1 - \alpha)},
\]
we get
\[
E\left[ \frac{\eta_1}{\xi_1} \right] = \int_0^1 \frac{\Psi^{-1}(\alpha)}{\Phi^{-1}(1 - \alpha)} \, \mathrm{d}\alpha.
\]
The theorem is proved.
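
The long-run reward rate (12.22) reduces to a one-dimensional quadrature;
the sketch below reuses the hypothetical linear L(1, 3) interarrival times and
L(2, 4) rewards from the previous example.

    # A numerical sketch of the long-run reward rate (12.22).

    def phi_inv(alpha):
        return 1.0 + 2.0 * alpha          # inverse of L(1, 3)

    def psi_inv(alpha):
        return 2.0 + 2.0 * alpha          # inverse of L(2, 4)

    k = 1000000
    total = 0.0
    for i in range(k):
        alpha = (i + 0.5) / k
        total += psi_inv(alpha) / phi_inv(1.0 - alpha)
    print(total / k)                      # expected reward per unit time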

12.4 Uncertain Insurance Model

Liu [90] assumed that a is the initial capital of an insurance company, b is
the premium rate, bt is the total income up to time t, and the uncertain
claim process is a renewal reward process
\[
R_t = \sum_{i=1}^{N_t} \eta_i \tag{12.23}
\]
with iid uncertain interarrival times ξ1, ξ2, · · · and iid uncertain claim
amounts η1, η2, · · · Then the capital of the insurance company at time t is
\[
Z_t = a + bt - R_t \tag{12.24}
\]
and Zt is called an insurance risk process.

[Figure 12.4: An Insurance Risk Process]

Ruin Index

Ruin index is the uncertain measure that the capital of the insurance company
becomes negative.

Definition 12.3 (Liu [90]) Let Zt be an insurance risk process. Then the
ruin index is defined as the uncertain measure that Zt eventually becomes
negative, i.e.,
\[
\text{Ruin} = \mathcal{M}\left\{ \inf_{t \ge 0} Z_t < 0 \right\}. \tag{12.25}
\]

It is clear that the ruin index is a special case of the risk index in the
sense of Liu [83].

Theorem 12.9 (Liu [90], Ruin Index Theorem) Let Zt = a + bt − Rt be
an insurance risk process where a and b are positive numbers, and Rt is a
renewal reward process with iid uncertain interarrival times ξ1, ξ2, · · · and
iid uncertain claim amounts η1, η2, · · · Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · )
are independent uncertain vectors, and those interarrival times and claim
amounts have continuous uncertainty distributions Φ and Ψ, respectively.
Then the ruin index is
\[
\text{Ruin} = \max_{k \ge 1} \sup_{x \ge 0} \Phi\left( \frac{x - a}{kb} \right) \wedge \left( 1 - \Psi\left( \frac{x}{k} \right) \right). \tag{12.26}
\]

Proof: For each positive integer k, it is clear that the arrival time of the
kth claim is
\[
S_k = \xi_1 + \xi_2 + \cdots + \xi_k
\]
whose uncertainty distribution is Φ(s/k). Define an uncertain process indexed
by k as follows,
\[
Y_k = a + b S_k - (\eta_1 + \eta_2 + \cdots + \eta_k).
\]
It is easy to verify that Yk is an independent increment process with respect
to k. In addition, Yk is just the capital at the arrival time Sk and has an
uncertainty distribution
\[
F_k(z) = \sup_{x \ge 0} \Phi\left( \frac{z + x - a}{kb} \right) \wedge \left( 1 - \Psi\left( \frac{x}{k} \right) \right).
\]
Since a ruin occurs only at the arrival times, we have
\[
\text{Ruin} = \mathcal{M}\left\{ \inf_{t \ge 0} Z_t < 0 \right\} = \mathcal{M}\left\{ \min_{k \ge 1} Y_k < 0 \right\}.
\]
It follows from the extreme value theorem that
\[
\text{Ruin} = \max_{k \ge 1} F_k(0) = \max_{k \ge 1} \sup_{x \ge 0} \Phi\left( \frac{x - a}{kb} \right) \wedge \left( 1 - \Psi\left( \frac{x}{k} \right) \right).
\]
The theorem is proved.
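
The ruin index (12.26) can be approximated by a grid over k and x. All
numbers in the sketch below (initial capital, premium rate, and the linear
distributions) are hypothetical.

    # A sketch of the ruin index (12.26) with hypothetical data.

    def lin(lo, hi):
        def F(x):
            if x <= lo:
                return 0.0
            if x >= hi:
                return 1.0
            return (x - lo) / (hi - lo)
        return F

    phi, psi = lin(1.0, 3.0), lin(2.0, 4.0)   # interarrival times and claims
    a, b = 10.0, 2.0                          # initial capital, premium rate

    def ruin_index(kmax=200, xmax=200.0, steps=2000):
        """Grid evaluation of max_k sup_x Phi((x-a)/(kb)) ^ (1 - Psi(x/k))."""
        best = 0.0
        for k in range(1, kmax + 1):
            for i in range(steps + 1):
                x = xmax * i / steps
                best = max(best, min(phi((x - a) / (k * b)),
                                     1.0 - psi(x / k)))
        return best

    print(round(ruin_index(), 4))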

Ruin Time

Definition 12.4 (Liu [90]) Let Zt be an insurance risk process. Then the
ruin time is defined as the first hitting time that the total capital Zt
becomes negative, i.e.,
\[
\tau = \inf\left\{ t \ge 0 \mid Z_t < 0 \right\}. \tag{12.27}
\]

Theorem 12.10 (Yao [185]) Let Zt = a + bt − Rt be an insurance risk
process where a and b are positive numbers, and Rt is a renewal reward
process with iid uncertain interarrival times ξ1, ξ2, · · · and iid uncertain
claim amounts η1, η2, · · · Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · ) are
independent uncertain vectors, and those interarrival times and claim amounts
have continuous uncertainty distributions Φ and Ψ, respectively. Then the
ruin time has an uncertainty distribution
\[
\Upsilon(t) = \max_{k \ge 1} \sup_{x \le t} \Phi\left( \frac{x}{k} \right) \wedge \left( 1 - \Psi\left( \frac{a + bx}{k} \right) \right). \tag{12.28}
\]

Proof: For each positive integer k, let us write Sk = ξ1 + ξ2 + · · · + ξk,
Yk = a + bSk − (η1 + η2 + · · · + ηk) and
\[
\alpha_k = \sup_{x \le t} \Phi\left( \frac{x}{k} \right) \wedge \left( 1 - \Psi\left( \frac{a + bx}{k} \right) \right).
\]
Then
\[
\alpha_k = \sup\left\{ \alpha \mid k\Phi^{-1}(\alpha) \le t \right\} \wedge \sup\left\{ \alpha \mid a + kb\,\Phi^{-1}(\alpha) - k\Psi^{-1}(1 - \alpha) < 0 \right\}.
\]
On the one hand, it follows from the definition of the ruin time τ that for
each t, we have
\[
\tau \le t \quad \text{if and only if} \quad \inf_{0 \le s \le t} Z_s < 0.
\]
Thus
\[
\begin{aligned}
\mathcal{M}\{\tau \le t\} &= \mathcal{M}\left\{ \inf_{0 \le s \le t} Z_s < 0 \right\} = \mathcal{M}\left\{ \bigcup_{k=1}^{\infty} (S_k \le t, Y_k < 0) \right\} \\
&= \mathcal{M}\left\{ \bigcup_{k=1}^{\infty} \left( \sum_{i=1}^{k} \xi_i \le t,\ a + b\sum_{i=1}^{k} \xi_i - \sum_{i=1}^{k} \eta_i < 0 \right) \right\} \\
&\ge \mathcal{M}\left\{ \bigcup_{k=1}^{\infty} \bigcap_{i=1}^{k} (\xi_i \le \Phi^{-1}(\alpha_k)) \cap (\eta_i > \Psi^{-1}(1 - \alpha_k)) \right\} \\
&\ge \bigvee_{k=1}^{\infty} \mathcal{M}\left\{ \bigcap_{i=1}^{k} (\xi_i \le \Phi^{-1}(\alpha_k)) \cap (\eta_i > \Psi^{-1}(1 - \alpha_k)) \right\} \\
&= \bigvee_{k=1}^{\infty} \bigwedge_{i=1}^{k} \mathcal{M}\left\{ (\xi_i \le \Phi^{-1}(\alpha_k)) \cap (\eta_i > \Psi^{-1}(1 - \alpha_k)) \right\} \\
&= \bigvee_{k=1}^{\infty} \bigwedge_{i=1}^{k} \mathcal{M}\left\{ \xi_i \le \Phi^{-1}(\alpha_k) \right\} \wedge \mathcal{M}\left\{ \eta_i > \Psi^{-1}(1 - \alpha_k) \right\} \\
&= \bigvee_{k=1}^{\infty} \bigwedge_{i=1}^{k} \alpha_k \wedge \alpha_k = \bigvee_{k=1}^{\infty} \alpha_k.
\end{aligned}
\]
On the other hand, we have
\[
\begin{aligned}
\mathcal{M}\{\tau \le t\} &= \mathcal{M}\left\{ \bigcup_{k=1}^{\infty} \left( \sum_{i=1}^{k} \xi_i \le t,\ a + b\sum_{i=1}^{k} \xi_i - \sum_{i=1}^{k} \eta_i < 0 \right) \right\} \\
&\le \mathcal{M}\left\{ \bigcup_{k=1}^{\infty} \bigcup_{i=1}^{k} (\xi_i \le \Phi^{-1}(\alpha_k)) \cup (\eta_i > \Psi^{-1}(1 - \alpha_k)) \right\} \\
&= \mathcal{M}\left\{ \bigcup_{i=1}^{\infty} \bigcup_{k=i}^{\infty} (\xi_i \le \Phi^{-1}(\alpha_k)) \cup (\eta_i > \Psi^{-1}(1 - \alpha_k)) \right\} \\
&\le \mathcal{M}\left\{ \bigcup_{i=1}^{\infty} \left( \xi_i \le \bigvee_{k=i}^{\infty} \Phi^{-1}(\alpha_k) \right) \cup \left( \eta_i > \bigwedge_{k=i}^{\infty} \Psi^{-1}(1 - \alpha_k) \right) \right\} \\
&= \bigvee_{i=1}^{\infty} \mathcal{M}\left\{ \xi_i \le \bigvee_{k=i}^{\infty} \Phi^{-1}(\alpha_k) \right\} \vee \mathcal{M}\left\{ \eta_i > \bigwedge_{k=i}^{\infty} \Psi^{-1}(1 - \alpha_k) \right\} \\
&= \bigvee_{i=1}^{\infty} \left( \bigvee_{k=i}^{\infty} \alpha_k \right) \vee \left( 1 - \bigwedge_{k=i}^{\infty} (1 - \alpha_k) \right) = \bigvee_{k=1}^{\infty} \alpha_k.
\end{aligned}
\]
Thus we obtain
\[
\mathcal{M}\{\tau \le t\} = \bigvee_{k=1}^{\infty} \alpha_k
\]
and the theorem is verified.

12.5 Age Replacement Policy

Age replacement means that an element is always replaced at failure or at
an age s. Assume that the lifetimes of the elements are iid uncertain
variables ξ1, ξ2, · · · with a common uncertainty distribution Φ. Then the
actual lifetimes of the elements are iid uncertain variables
\[
\xi_1 \wedge s,\ \xi_2 \wedge s,\ \cdots \tag{12.29}
\]
which may generate an uncertain renewal process
\[
N_t = \max_{n \ge 0} \left\{ n \;\middle|\; \sum_{i=1}^{n} (\xi_i \wedge s) \le t \right\}. \tag{12.30}
\]
Let a denote the “failure replacement” cost of replacing an element when
it fails earlier than s, and b the “planned replacement” cost of replacing an
element at the age s. Note that a > b > 0 is always assumed. Define
\[
f(x) = \begin{cases} a, & \text{if } x < s \\ b, & \text{if } x = s. \end{cases} \tag{12.31}
\]
Then f(ξi ∧ s) is just the cost of replacing the ith element, and the average
replacement cost before the time t is
\[
\frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s). \tag{12.32}
\]

Theorem 12.11 (Yao-Ralescu [172]) Assume ξ1, ξ2, · · · are iid uncertain
lifetimes and s is a positive number. Then
\[
\frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \to \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \tag{12.33}
\]
in the sense of convergence in distribution as t → ∞.

Proof: At first, the average replacement cost before time t may be rewritten
as
\[
\frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) = \frac{\displaystyle\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\displaystyle\sum_{i=1}^{N_t} (\xi_i \wedge s)} \times \frac{\displaystyle\sum_{i=1}^{N_t} (\xi_i \wedge s)}{t}. \tag{12.34}
\]
For any real number x, on the one hand, we have
\[
\begin{aligned}
\left\{ \sum_{i=1}^{N_t} f(\xi_i \wedge s) \Big/ \sum_{i=1}^{N_t} (\xi_i \wedge s) \le x \right\}
&= \bigcup_{n=1}^{\infty} \left\{ (N_t = n) \cap \left( \sum_{i=1}^{n} f(\xi_i \wedge s) \Big/ \sum_{i=1}^{n} (\xi_i \wedge s) \le x \right) \right\} \\
&\supset \bigcup_{n=1}^{\infty} \left\{ (N_t = n) \cap \bigcap_{i=1}^{n} \left( f(\xi_i \wedge s)/(\xi_i \wedge s) \le x \right) \right\} \\
&\supset \bigcup_{n=1}^{\infty} \left\{ (N_t = n) \cap \bigcap_{i=1}^{\infty} \left( f(\xi_i \wedge s)/(\xi_i \wedge s) \le x \right) \right\} \\
&\supset \bigcap_{i=1}^{\infty} \left( f(\xi_i \wedge s)/(\xi_i \wedge s) \le x \right)
\end{aligned}
\]
and
\[
\mathcal{M}\left\{ \frac{\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\sum_{i=1}^{N_t} (\xi_i \wedge s)} \le x \right\} \ge \mathcal{M}\left\{ \bigcap_{i=1}^{\infty} \frac{f(\xi_i \wedge s)}{\xi_i \wedge s} \le x \right\} = \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\}.
\]
On the other hand, we have
\[
\begin{aligned}
\left\{ \sum_{i=1}^{N_t} f(\xi_i \wedge s) \Big/ \sum_{i=1}^{N_t} (\xi_i \wedge s) \le x \right\}
&= \bigcup_{n=1}^{\infty} \left\{ (N_t = n) \cap \left( \sum_{i=1}^{n} f(\xi_i \wedge s) \Big/ \sum_{i=1}^{n} (\xi_i \wedge s) \le x \right) \right\} \\
&\subset \bigcup_{n=1}^{\infty} \left\{ (N_t = n) \cap \bigcup_{i=1}^{n} \left( f(\xi_i \wedge s)/(\xi_i \wedge s) \le x \right) \right\} \\
&\subset \bigcup_{n=1}^{\infty} \left\{ (N_t = n) \cap \bigcup_{i=1}^{\infty} \left( f(\xi_i \wedge s)/(\xi_i \wedge s) \le x \right) \right\} \\
&\subset \bigcup_{i=1}^{\infty} \left( f(\xi_i \wedge s)/(\xi_i \wedge s) \le x \right)
\end{aligned}
\]
and
\[
\mathcal{M}\left\{ \frac{\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\sum_{i=1}^{N_t} (\xi_i \wedge s)} \le x \right\} \le \mathcal{M}\left\{ \bigcup_{i=1}^{\infty} \frac{f(\xi_i \wedge s)}{\xi_i \wedge s} \le x \right\} = \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\}.
\]
Thus for any real number x, we have
\[
\mathcal{M}\left\{ \frac{\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\sum_{i=1}^{N_t} (\xi_i \wedge s)} \le x \right\} = \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\}.
\]
Hence
\[
\frac{\sum_{i=1}^{N_t} f(\xi_i \wedge s)}{\sum_{i=1}^{N_t} (\xi_i \wedge s)} \quad \text{and} \quad \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s}
\]
are identically distributed uncertain variables. Since
\[
\frac{1}{t} \sum_{i=1}^{N_t} (\xi_i \wedge s) \to 1
\]
as t → ∞, it follows from (12.34) that (12.33) holds. The theorem is verified.

Theorem 12.12 (Yao-Ralescu [172]) Assume ξ1, ξ2, · · · are iid uncertain
lifetimes with a common continuous uncertainty distribution Φ, and s is a
positive number. Then the long-run average replacement cost is
\[
\lim_{t \to \infty} E\left[ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \right] = \frac{b}{s} + \frac{a - b}{s} \Phi(s) + a \int_0^s \frac{\Phi(x)}{x^2} \, \mathrm{d}x. \tag{12.35}
\]

Proof: Let Ψ(x) be the uncertainty distribution of f(ξ1 ∧ s)/(ξ1 ∧ s). It
follows from (12.31) that f(ξ1 ∧ s) ≥ b and ξ1 ∧ s ≤ s. Thus we have
\[
\frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \ge \frac{b}{s}
\]
almost surely. If x < b/s, then
\[
\Psi(x) = \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\} = 0.
\]
If b/s ≤ x < a/s, then
\[
\Psi(x) = \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\} = \mathcal{M}\{\xi_1 \ge s\} = 1 - \Phi(s).
\]
If x ≥ a/s, then
\[
\Psi(x) = \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\} = \mathcal{M}\left\{ \frac{a}{\xi_1} \le x \right\} = \mathcal{M}\left\{ \xi_1 \ge \frac{a}{x} \right\} = 1 - \Phi\left( \frac{a}{x} \right).
\]
Hence we have
\[
\Psi(x) = \begin{cases}
0, & \text{if } x < b/s \\
1 - \Phi(s), & \text{if } b/s \le x < a/s \\
1 - \Phi(a/x), & \text{if } x \ge a/s
\end{cases}
\]
and
\[
E\left[ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \right] = \int_0^{+\infty} (1 - \Psi(x)) \mathrm{d}x = \frac{b}{s} + \frac{a - b}{s} \Phi(s) + a \int_0^s \frac{\Phi(x)}{x^2} \, \mathrm{d}x.
\]
Since
\[
\frac{1}{t} \sum_{i=1}^{N_t} (\xi_i \wedge s) \le 1,
\]
it follows from (12.34) that
\[
\mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \le x \right\} \ge \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\}
\]
for any real number x. By using the Lebesgue dominated convergence theorem,
we get
\[
\begin{aligned}
\lim_{t \to \infty} E\left[ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \right]
&= \lim_{t \to \infty} \int_0^{+\infty} \left( 1 - \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t} f(\xi_i \wedge s) \le x \right\} \right) \mathrm{d}x \\
&= \int_0^{+\infty} \left( 1 - \mathcal{M}\left\{ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \le x \right\} \right) \mathrm{d}x \\
&= E\left[ \frac{f(\xi_1 \wedge s)}{\xi_1 \wedge s} \right].
\end{aligned}
\]
Hence the theorem is proved.

What is the optimal age s?

When the age replacement policy is accepted, one problem is to find the
optimal age s such that the average replacement cost is minimized. That is,
the optimal age s should solve
\[
\min_{s \ge 0} \left\{ \frac{b}{s} + \frac{a - b}{s} \Phi(s) + a \int_0^s \frac{\Phi(x)}{x^2} \, \mathrm{d}x \right\}. \tag{12.36}
\]
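
As with the block replacement policy, a grid search gives a crude solution of
(12.36); the linear L(1, 3) lifetimes and the costs a = 5, b = 1 below are
hypothetical.

    # A sketch of the age replacement optimization (12.36).

    def phi(x, lo=1.0, hi=3.0):
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    def long_run_cost(s, a=5.0, b=1.0, steps=20000):
        """The objective in (12.36); the integral uses a midpoint rule, and
        the integrand phi(x)/x^2 vanishes on [0, 1] for these lifetimes."""
        dx = s / steps
        integral = sum(phi((i + 0.5) * dx) / ((i + 0.5) * dx) ** 2 * dx
                       for i in range(steps))
        return b / s + (a - b) / s * phi(s) + a * integral

    grid = [0.05 * i for i in range(10, 101)]   # candidate ages in [0.5, 5]
    s_best = min(grid, key=long_run_cost)
    print(s_best, round(long_run_cost(s_best), 4))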

12.6 Alternating Renewal Process

Let (ξ1, η1), (ξ2, η2), · · · be a sequence of pairs of uncertain variables. We
shall interpret ξi as the “on-times” and ηi as the “off-times” for i = 1, 2, · · · ,
respectively. In this case, the i-th cycle consists of an on-time ξi followed by
an off-time ηi.

Definition 12.5 (Yao-Li [169]) Let ξ1, ξ2, · · · be iid uncertain on-times, and
let η1, η2, · · · be iid uncertain off-times. Then
\[
A_t = \begin{cases}
t - \displaystyle\sum_{i=1}^{N_t} \eta_i, & \text{if } \displaystyle\sum_{i=1}^{N_t} (\xi_i + \eta_i) \le t < \sum_{i=1}^{N_t} (\xi_i + \eta_i) + \xi_{N_t+1} \\[4mm]
\displaystyle\sum_{i=1}^{N_t+1} \xi_i, & \text{if } \displaystyle\sum_{i=1}^{N_t} (\xi_i + \eta_i) + \xi_{N_t+1} \le t < \sum_{i=1}^{N_t+1} (\xi_i + \eta_i)
\end{cases} \tag{12.37}
\]
is called an alternating renewal process, where Nt is the renewal process with
uncertain interarrival times ξ1 + η1, ξ2 + η2, · · ·

Note that the alternating renewal process At is just the total time at which
the system is on up to time t. It is clear that
\[
\sum_{i=1}^{N_t} \xi_i \le A_t \le \sum_{i=1}^{N_t+1} \xi_i \tag{12.38}
\]

for each time t. We are interested in the limit property of the rate at which
the system is on.

Theorem 12.13 (Yao-Li [169], Alternating Renewal Theorem) Let At be
an alternating renewal process with iid uncertain on-times ξ1, ξ2, · · · and iid
uncertain off-times η1, η2, · · · Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · ) are
independent uncertain vectors. Then the availability rate
\[
\frac{A_t}{t} \to \frac{\xi_1}{\xi_1 + \eta_1} \tag{12.39}
\]
in the sense of convergence in distribution as t → ∞.

Proof: Write the uncertainty distributions of ξ1 and η1 by Φ and Ψ,
respectively. Then the uncertainty distribution of ξ1/(ξ1 + η1) is
\[
\Upsilon(x) = \sup_{y > 0} \Phi(xy) \wedge (1 - \Psi(y - xy)). \tag{12.40}
\]
On the one hand, we have
\[
\begin{aligned}
\mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t} \xi_i \le x \right\}
&= \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} (N_t = k) \cap \left( \frac{1}{t} \sum_{i=1}^{k} \xi_i \le x \right) \right\} \\
&\le \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( \sum_{i=1}^{k+1} (\xi_i + \eta_i) > t \right) \cap \left( \frac{1}{t} \sum_{i=1}^{k} \xi_i \le x \right) \right\} \\
&\le \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( tx + \xi_{k+1} + \sum_{i=1}^{k+1} \eta_i > t \right) \cap \left( \frac{1}{t} \sum_{i=1}^{k} \xi_i \le x \right) \right\} \\
&= \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( \frac{\xi_{k+1}}{t} + \frac{1}{t} \sum_{i=1}^{k+1} \eta_i > 1 - x \right) \cap \left( \frac{1}{t} \sum_{i=1}^{k} \xi_i \le x \right) \right\}.
\end{aligned}
\]
Since
\[
\frac{\xi_{k+1}}{t} \to 0, \quad \text{as } t \to \infty
\]
and
\[
\sum_{i=1}^{k+1} \eta_i \sim (k+1)\eta_1, \quad \sum_{i=1}^{k} \xi_i \sim k\xi_1,
\]
we have
\[
\begin{aligned}
\lim_{t \to \infty} \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t} \xi_i \le x \right\}
&\le \lim_{t \to \infty} \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( \eta_1 > \frac{t(1-x)}{k+1} \right) \cap \left( \xi_1 \le \frac{tx}{k} \right) \right\} \\
&= \lim_{t \to \infty} \sup_{k \ge 0} \mathcal{M}\left\{ \eta_1 > \frac{t(1-x)}{k+1} \right\} \wedge \mathcal{M}\left\{ \xi_1 \le \frac{tx}{k} \right\} \\
&= \lim_{t \to \infty} \sup_{k \ge 0} \left( 1 - \Psi\left( \frac{t(1-x)}{k+1} \right) \right) \wedge \Phi\left( \frac{tx}{k} \right) \\
&= \sup_{y > 0} \Phi(xy) \wedge (1 - \Psi(y - xy)) = \Upsilon(x).
\end{aligned}
\]
That is,
\[
\lim_{t \to \infty} \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t} \xi_i \le x \right\} \le \Upsilon(x). \tag{12.41}
\]
On the other hand, we have
\[
\begin{aligned}
\mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i > x \right\}
&= \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} (N_t = k) \cap \left( \frac{1}{t} \sum_{i=1}^{k+1} \xi_i > x \right) \right\} \\
&\le \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( \sum_{i=1}^{k} (\xi_i + \eta_i) \le t \right) \cap \left( \frac{1}{t} \sum_{i=1}^{k+1} \xi_i > x \right) \right\} \\
&\le \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( tx - \xi_{k+1} + \sum_{i=1}^{k} \eta_i \le t \right) \cap \left( \frac{1}{t} \sum_{i=1}^{k+1} \xi_i > x \right) \right\} \\
&= \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( \frac{1}{t} \sum_{i=1}^{k} \eta_i - \frac{\xi_{k+1}}{t} \le 1 - x \right) \cap \left( \frac{1}{t} \sum_{i=1}^{k+1} \xi_i > x \right) \right\}.
\end{aligned}
\]
Since
\[
\frac{\xi_{k+1}}{t} \to 0, \quad \text{as } t \to \infty
\]
and
\[
\sum_{i=1}^{k} \eta_i \sim k\eta_1, \quad \sum_{i=1}^{k+1} \xi_i \sim (k+1)\xi_1,
\]
we have
\[
\begin{aligned}
\lim_{t \to \infty} \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i > x \right\}
&\le \lim_{t \to \infty} \mathcal{M}\left\{ \bigcup_{k=0}^{\infty} \left( \eta_1 \le \frac{t(1-x)}{k} \right) \cap \left( \xi_1 > \frac{tx}{k+1} \right) \right\} \\
&= \lim_{t \to \infty} \sup_{k \ge 0} \mathcal{M}\left\{ \eta_1 \le \frac{t(1-x)}{k} \right\} \wedge \mathcal{M}\left\{ \xi_1 > \frac{tx}{k+1} \right\} \\
&= \lim_{t \to \infty} \sup_{k \ge 0} \Psi\left( \frac{t(1-x)}{k} \right) \wedge \left( 1 - \Phi\left( \frac{tx}{k+1} \right) \right) \\
&= \sup_{y > 0} (1 - \Phi(xy)) \wedge \Psi(y - xy).
\end{aligned}
\]
By using the duality of uncertain measure, we get
\[
\lim_{t \to \infty} \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i \le x \right\} \ge 1 - \sup_{y > 0} (1 - \Phi(xy)) \wedge \Psi(y - xy)
= \inf_{y > 0} \Phi(xy) \vee (1 - \Psi(y - xy)) = \Upsilon(x).
\]
That is,
\[
\lim_{t \to \infty} \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i \le x \right\} \ge \Upsilon(x). \tag{12.42}
\]
Since
\[
\frac{1}{t} \sum_{i=1}^{N_t} \xi_i \le \frac{A_t}{t} \le \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i,
\]
we obtain
\[
\mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t} \xi_i \le x \right\} \ge \mathcal{M}\left\{ \frac{A_t}{t} \le x \right\} \ge \mathcal{M}\left\{ \frac{1}{t} \sum_{i=1}^{N_t+1} \xi_i \le x \right\}.
\]
It follows from (12.41) and (12.42) that for any real number x, we have
\[
\lim_{t \to \infty} \mathcal{M}\left\{ \frac{A_t}{t} \le x \right\} = \Upsilon(x).
\]
Hence the availability rate At/t converges in distribution to ξ1/(ξ1 + η1).
The theorem is proved.

Theorem 12.14 (Yao-Li [169], Alternating Renewal Theorem) Let At be
an alternating renewal process with iid uncertain on-times ξ1, ξ2, · · · and iid
uncertain off-times η1, η2, · · · Assume (ξ1, ξ2, · · · ) and (η1, η2, · · · ) are
independent uncertain vectors. Then
\[
\lim_{t \to \infty} \frac{E[A_t]}{t} = E\left[ \frac{\xi_1}{\xi_1 + \eta_1} \right]. \tag{12.43}
\]
If those on-times and off-times have regular uncertainty distributions Φ and
Ψ, respectively, then
\[
\lim_{t \to \infty} \frac{E[A_t]}{t} = \int_0^1 \frac{\Phi^{-1}(\alpha)}{\Phi^{-1}(\alpha) + \Psi^{-1}(1 - \alpha)} \, \mathrm{d}\alpha. \tag{12.44}
\]

Proof: Write the uncertainty distributions of At/t and ξ1/(ξ1 + η1) by Ft(x)
and G(x), respectively. Since At/t converges in distribution to ξ1/(ξ1 + η1),
we have Ft(x) → G(x) as t → ∞. It follows from the Lebesgue dominated
convergence theorem that
\[
\lim_{t \to \infty} \frac{E[A_t]}{t} = \lim_{t \to \infty} \int_0^1 (1 - F_t(x)) \mathrm{d}x = \int_0^1 (1 - G(x)) \mathrm{d}x = E\left[ \frac{\xi_1}{\xi_1 + \eta_1} \right].
\]
Finally, since the uncertain variable ξ1/(ξ1 + η1) is strictly increasing with
respect to ξ1 and strictly decreasing with respect to η1, it has an inverse
uncertainty distribution
\[
G^{-1}(\alpha) = \frac{\Phi^{-1}(\alpha)}{\Phi^{-1}(\alpha) + \Psi^{-1}(1 - \alpha)}.
\]
The equation (12.44) is thus obtained.
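
The availability rate (12.44) is again a one-dimensional quadrature; the
linear on-time and off-time distributions below are hypothetical.

    # A numerical sketch of the availability rate (12.44).

    def phi_inv(alpha):
        return 1.0 + 2.0 * alpha          # inverse of on-times L(1, 3)

    def psi_inv(alpha):
        return 0.5 + 1.0 * alpha          # inverse of off-times L(0.5, 1.5)

    k = 1000000
    total = 0.0
    for i in range(k):
        alpha = (i + 0.5) / k
        total += phi_inv(alpha) / (phi_inv(alpha) + psi_inv(1.0 - alpha))
    print(total / k)                      # long-run fraction of on-time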

12.7 Bibliographic Notes


Uncertain renewal process was first proposed by Liu [78] in 2008. Two years
later, Liu [84] proved some elementary renewal theorems for determining the
average renewal number. Liu [84] also provided uncertain renewal reward
process and verified some renewal reward theorems for determining the long-
run reward rate. In addition, Yao-Li [169] presented uncertain alternating
renewal process and proved some alternating renewal theorems for determin-
ing the availability rate.
Based on the theory of uncertain renewal process, Liu [90] presented an
uncertain insurance model by assuming the claim is an uncertain renewal
reward process, and proved a formula for calculating ruin index. In addition,
Yao [185] derived the uncertainty distribution of ruin time. Furthermore,
Ke-Yao [67] and Zhang-Guo [199] discussed the uncertain block replacement
policy, and Yao-Ralescu [172] investigated the uncertain age replacement pol-
icy and obtained the long-run average replacement cost.
Chapter 13

Uncertain Calculus

Uncertain calculus is a branch of mathematics that deals with differentiation


and integration of uncertain processes. This chapter will introduce Liu pro-
cess, Liu integral, fundamental theorem, chain rule, change of variables, and
integration by parts.

13.1 Liu Process


In 2009, Liu [80] investigated a type of stationary independent increment
process whose increments are normal uncertain variables. This process was
later named Liu process by the academic community due to its importance
and usefulness. A formal definition is given below.

Definition 13.1 (Liu [80]) An uncertain process Ct is said to be a Liu
process if
(i) C0 = 0 and almost all sample paths are Lipschitz continuous,
(ii) Ct has stationary and independent increments,
(iii) every increment Cs+t − Cs is a normal uncertain variable with expected
value 0 and variance t².

It is clear that a Liu process Ct is a stationary independent increment
process and has a normal uncertainty distribution with expected value 0 and
variance t². The uncertainty distribution of Ct is
\[
\Phi_t(x) = \left( 1 + \exp\left( -\frac{\pi x}{\sqrt{3}\, t} \right) \right)^{-1} \tag{13.1}
\]
and the inverse uncertainty distribution is
\[
\Phi_t^{-1}(\alpha) = \frac{\sqrt{3}\, t}{\pi} \ln \frac{\alpha}{1 - \alpha}. \tag{13.2}
\]

[Figure 13.1: Inverse Uncertainty Distribution of Liu Process]

For any given α, the inverse uncertainty distribution (13.2) is a homogeneous
linear function of time t; see Figure 13.1.

A Liu process is described by three properties in the above definition. Does
such an uncertain process exist? The following theorem will answer this
question.
Theorem 13.1 (Liu [84], Existence Theorem) There exists a Liu process.

Proof: It follows from Theorem 11.19 that there exists a stationary
independent increment process Ct whose inverse uncertainty distribution is
\[
\Phi_t^{-1}(\alpha) = \frac{\sqrt{3}}{\pi} \ln \frac{\alpha}{1 - \alpha} \cdot t.
\]
Furthermore, Ct has a Lipschitz continuous version. It is also easy to verify
that every increment Cs+t − Cs is a normal uncertain variable with expected
value 0 and variance t². Hence there exists a Liu process.
Theorem 13.2 Let Ct be a Liu process. Then for each time t > 0, the ratio
Ct/t is a normal uncertain variable with expected value 0 and variance 1.
That is,
\[
\frac{C_t}{t} \sim \mathcal{N}(0, 1) \tag{13.3}
\]
for any t > 0.

Proof: Since Ct is a normal uncertain variable N(0, t), the operational law
tells us that Ct/t has an uncertainty distribution
\[
\Psi(x) = \Phi_t(tx) = \left( 1 + \exp\left( -\frac{\pi x}{\sqrt{3}} \right) \right)^{-1}.
\]
Hence Ct/t is a normal uncertain variable with expected value 0 and variance
1. The theorem is verified.
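
The formulas (13.1) and (13.2) are easy to evaluate; the short Python sketch
below checks that they are inverse to each other, with purely illustrative
numbers.

    # A sketch of the uncertainty distribution (13.1) of a Liu process
    # and its inverse (13.2).
    from math import exp, log, sqrt, pi

    def phi(t, x):
        """Uncertainty distribution of C_t, equation (13.1)."""
        return 1.0 / (1.0 + exp(-pi * x / (sqrt(3.0) * t)))

    def phi_inv(t, alpha):
        """Inverse uncertainty distribution of C_t, equation (13.2)."""
        return sqrt(3.0) * t / pi * log(alpha / (1.0 - alpha))

    t = 2.0
    print(phi(t, phi_inv(t, 0.9)))   # ~0.9: the two maps are inverses
    print(phi(t, 0.0))               # 0.5: the median of C_t is zero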

Theorem 13.3 (Liu [84]) Let Ct be a Liu process. Then for each time t, we
have
\[
\frac{t^2}{2} \le E[C_t^2] \le t^2. \tag{13.4}
\]

Proof: Note that Ct is a normal uncertain variable and has an uncertainty
distribution Φt(x) in (13.1). It follows from the definition of expected value
that
\[
E[C_t^2] = \int_0^{+\infty} \mathcal{M}\{C_t^2 \ge x\} \mathrm{d}x = \int_0^{+\infty} \mathcal{M}\{(C_t \ge \sqrt{x}) \cup (C_t \le -\sqrt{x})\} \mathrm{d}x.
\]
On the one hand, we have
\[
E[C_t^2] \le \int_0^{+\infty} \left( \mathcal{M}\{C_t \ge \sqrt{x}\} + \mathcal{M}\{C_t \le -\sqrt{x}\} \right) \mathrm{d}x
= \int_0^{+\infty} \left( 1 - \Phi_t(\sqrt{x}) + \Phi_t(-\sqrt{x}) \right) \mathrm{d}x = t^2.
\]
On the other hand, we have
\[
E[C_t^2] \ge \int_0^{+\infty} \mathcal{M}\{C_t \ge \sqrt{x}\} \mathrm{d}x = \int_0^{+\infty} \left( 1 - \Phi_t(\sqrt{x}) \right) \mathrm{d}x = \frac{t^2}{2}.
\]
Hence (13.4) is proved.

Theorem 13.4 (Iwamura-Xu [59]) Let Ct be a Liu process. Then for each
time t, we have
\[
1.24\, t^4 < V[C_t^2] < 4.31\, t^4. \tag{13.5}
\]

Proof: Let q be the expected value of Ct². On the one hand, it follows from
the definition of variance that
\[
\begin{aligned}
V[C_t^2] &= \int_0^{+\infty} \mathcal{M}\{(C_t^2 - q)^2 \ge x\} \mathrm{d}x \\
&\le \int_0^{+\infty} \mathcal{M}\left\{ C_t \ge \sqrt{q + \sqrt{x}} \right\} \mathrm{d}x
+ \int_0^{+\infty} \mathcal{M}\left\{ C_t \le -\sqrt{q + \sqrt{x}} \right\} \mathrm{d}x \\
&\quad + \int_0^{+\infty} \mathcal{M}\left\{ -\sqrt{q - \sqrt{x}} \le C_t \le \sqrt{q - \sqrt{x}} \right\} \mathrm{d}x,
\end{aligned}
\]
where the events are understood to be empty when the square roots are
undefined. Since t²/2 ≤ q ≤ t², we have
\[
\begin{aligned}
\text{First Term} &= \int_0^{+\infty} \mathcal{M}\left\{ C_t \ge \sqrt{q + \sqrt{x}} \right\} \mathrm{d}x
\le \int_0^{+\infty} \mathcal{M}\left\{ C_t \ge \sqrt{t^2/2 + \sqrt{x}} \right\} \mathrm{d}x \\
&= \int_0^{+\infty} \left( 1 - \left( 1 + \exp\left( -\frac{\pi \sqrt{t^2/2 + \sqrt{x}}}{\sqrt{3}\, t} \right) \right)^{-1} \right) \mathrm{d}x \le 1.725\, t^4,
\end{aligned}
\]
\[
\begin{aligned}
\text{Second Term} &= \int_0^{+\infty} \mathcal{M}\left\{ C_t \le -\sqrt{q + \sqrt{x}} \right\} \mathrm{d}x
\le \int_0^{+\infty} \mathcal{M}\left\{ C_t \le -\sqrt{t^2/2 + \sqrt{x}} \right\} \mathrm{d}x \\
&= \int_0^{+\infty} \left( 1 + \exp\left( \frac{\pi \sqrt{t^2/2 + \sqrt{x}}}{\sqrt{3}\, t} \right) \right)^{-1} \mathrm{d}x \le 1.725\, t^4,
\end{aligned}
\]
\[
\begin{aligned}
\text{Third Term} &= \int_0^{+\infty} \mathcal{M}\left\{ -\sqrt{q - \sqrt{x}} \le C_t \le \sqrt{q - \sqrt{x}} \right\} \mathrm{d}x \\
&\le \int_0^{+\infty} \mathcal{M}\left\{ C_t \le \sqrt{q - \sqrt{x}} \right\} \mathrm{d}x
\le \int_0^{+\infty} \mathcal{M}\left\{ C_t \le \sqrt{t^2 - \sqrt{x}} \right\} \mathrm{d}x \\
&= \int_0^{t^4} \left( 1 + \exp\left( -\frac{\pi \sqrt{t^2 - \sqrt{x}}}{\sqrt{3}\, t} \right) \right)^{-1} \mathrm{d}x < 0.86\, t^4.
\end{aligned}
\]
It follows from the above three upper bounds that
\[
V[C_t^2] < 1.725\, t^4 + 1.725\, t^4 + 0.86\, t^4 = 4.31\, t^4.
\]
On the other hand, we have
\[
\begin{aligned}
V[C_t^2] &= \int_0^{+\infty} \mathcal{M}\{(C_t^2 - q)^2 \ge x\} \mathrm{d}x
\ge \int_0^{+\infty} \mathcal{M}\left\{ C_t \ge \sqrt{q + \sqrt{x}} \right\} \mathrm{d}x \\
&\ge \int_0^{+\infty} \mathcal{M}\left\{ C_t \ge \sqrt{t^2 + \sqrt{x}} \right\} \mathrm{d}x \\
&= \int_0^{+\infty} \left( 1 - \left( 1 + \exp\left( -\frac{\pi \sqrt{t^2 + \sqrt{x}}}{\sqrt{3}\, t} \right) \right)^{-1} \right) \mathrm{d}x > 1.24\, t^4.
\end{aligned}
\]
The theorem is thus verified. An open problem is to improve the bounds of
the variance of the square of Liu process.

Definition 13.2 Let Ct be a Liu process. Then for any real numbers e and
σ > 0, the uncertain process

At = et + σCt (13.6)

is called an arithmetic Liu process, where e is called the drift and σ is called
the diffusion.

It is clear that the arithmetic Liu process At is a type of stationary
independent increment process. In addition, the arithmetic Liu process At
has a normal uncertainty distribution with expected value et and variance
σ²t², i.e.,
\[
A_t \sim \mathcal{N}(et, \sigma t) \tag{13.7}
\]
whose uncertainty distribution is
\[
\Phi_t(x) = \left( 1 + \exp\left( \frac{\pi (et - x)}{\sqrt{3}\, \sigma t} \right) \right)^{-1} \tag{13.8}
\]
and inverse uncertainty distribution is
\[
\Phi_t^{-1}(\alpha) = et + \frac{\sqrt{3}\, \sigma t}{\pi} \ln \frac{\alpha}{1 - \alpha}. \tag{13.9}
\]
Definition 13.3 Let Ct be a Liu process. Then for any real numbers e and
σ > 0, the uncertain process

Gt = exp(et + σCt ) (13.10)

is called a geometric Liu process, where e is called the log-drift and σ is called
the log-diffusion.

Note that the geometric Liu process Gt has a lognormal uncertainty
distribution, i.e.,
\[
G_t \sim \mathcal{LOGN}(et, \sigma t) \tag{13.11}
\]
whose uncertainty distribution is
\[
\Phi_t(x) = \left( 1 + \exp\left( \frac{\pi (et - \ln x)}{\sqrt{3}\, \sigma t} \right) \right)^{-1} \tag{13.12}
\]
and inverse uncertainty distribution is
\[
\Phi_t^{-1}(\alpha) = \exp\left( et + \frac{\sqrt{3}\, \sigma t}{\pi} \ln \frac{\alpha}{1 - \alpha} \right). \tag{13.13}
\]
Furthermore, the geometric Liu process Gt has an expected value
\[
E[G_t] = \begin{cases}
\sqrt{3}\, \sigma t \exp(et) \csc(\sqrt{3}\, \sigma t), & \text{if } t < \pi/(\sigma\sqrt{3}) \\
+\infty, & \text{if } t \ge \pi/(\sigma\sqrt{3}).
\end{cases} \tag{13.14}
\]

13.2 Liu Integral


As the most popular topic of uncertain integral, Liu integral allows us to
integrate an uncertain process (the integrand) with respect to Liu process
(the integrator). The result of Liu integral is another uncertain process.

Definition 13.4 (Liu [80]) Let Xt be an uncertain process and let Ct be a
Liu process. For any partition of closed interval [a, b] with a = t1 < t2 <
· · · < tk+1 = b, the mesh is written as
\[
\Delta = \max_{1 \le i \le k} |t_{i+1} - t_i|. \tag{13.15}
\]
Then Liu integral of Xt with respect to Ct is defined as
\[
\int_a^b X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} \cdot (C_{t_{i+1}} - C_{t_i}) \tag{13.16}
\]
provided that the limit exists almost surely and is finite. In this case, the
uncertain process Xt is said to be integrable.

Since Xt and Ct are uncertain variables at each time t, the limit in (13.16)
is also an uncertain variable provided that the limit exists almost surely and
is finite. Hence an uncertain process Xt is integrable with respect to Ct if
and only if the limit in (13.16) is an uncertain variable.

Example 13.1: For any partition 0 = t1 < t2 < · · · < tk+1 = s, it follows
from (13.16) that
\[
\int_0^s \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} (C_{t_{i+1}} - C_{t_i}) \equiv C_s - C_0 = C_s.
\]
That is,
\[
\int_0^s \mathrm{d}C_t = C_s. \tag{13.17}
\]

Example 13.2: For any partition 0 = t1 < t2 < · · · < tk+1 = s, it follows
from (13.16) that
\[
C_s^2 = \sum_{i=1}^{k} \left( C_{t_{i+1}}^2 - C_{t_i}^2 \right)
= \sum_{i=1}^{k} \left( C_{t_{i+1}} - C_{t_i} \right)^2 + 2 \sum_{i=1}^{k} C_{t_i} \left( C_{t_{i+1}} - C_{t_i} \right)
\to 0 + 2 \int_0^s C_t \, \mathrm{d}C_t
\]
as ∆ → 0. That is,
\[
\int_0^s C_t \, \mathrm{d}C_t = \frac{1}{2} C_s^2. \tag{13.18}
\]

Example 13.3: For any partition 0 = t1 < t2 < · · · < tk+1 = s, it follows
from (13.16) that
\[
s C_s = \sum_{i=1}^{k} \left( t_{i+1} C_{t_{i+1}} - t_i C_{t_i} \right)
= \sum_{i=1}^{k} C_{t_{i+1}} (t_{i+1} - t_i) + \sum_{i=1}^{k} t_i (C_{t_{i+1}} - C_{t_i})
\to \int_0^s C_t \, \mathrm{d}t + \int_0^s t \, \mathrm{d}C_t
\]
as ∆ → 0. That is,
\[
\int_0^s C_t \, \mathrm{d}t + \int_0^s t \, \mathrm{d}C_t = s C_s. \tag{13.19}
\]

Theorem 13.5 If Xt is a sample-continuous uncertain process on [a, b], then
it is integrable with respect to Ct on [a, b].

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval
[a, b]. Since the uncertain process Xt is sample-continuous, almost all sample
paths are continuous functions with respect to t. Hence the limit
\[
\lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i})
\]
exists almost surely and is finite. On the other hand, since Xt and Ct are
uncertain variables at each time t, the above limit is also a measurable
function. Hence the limit is an uncertain variable and then Xt is integrable
with respect to Ct.

Theorem 13.6 If Xt is an integrable uncertain process on [a, b], then it is
integrable on each subinterval of [a, b]. Moreover, if c ∈ [a, b], then
\[
\int_a^b X_t \, \mathrm{d}C_t = \int_a^c X_t \, \mathrm{d}C_t + \int_c^b X_t \, \mathrm{d}C_t. \tag{13.20}
\]

Proof: Let [a′, b′] be a subinterval of [a, b]. Since Xt is an integrable
uncertain process on [a, b], for any partition
\[
a = t_1 < \cdots < t_m = a' < t_{m+1} < \cdots < t_n = b' < t_{n+1} < \cdots < t_{k+1} = b,
\]
the limit
\[
\lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i})
\]
exists almost surely and is finite. Thus the limit
\[
\lim_{\Delta \to 0} \sum_{i=m}^{n-1} X_{t_i} (C_{t_{i+1}} - C_{t_i})
\]
exists almost surely and is finite. Hence Xt is integrable on the subinterval
[a′, b′]. Next, for the partition
\[
a = t_1 < \cdots < t_m = c < t_{m+1} < \cdots < t_{k+1} = b,
\]
we have
\[
\sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}) = \sum_{i=1}^{m-1} X_{t_i} (C_{t_{i+1}} - C_{t_i}) + \sum_{i=m}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}).
\]
Note that
\[
\int_a^b X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}),
\]
\[
\int_a^c X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{m-1} X_{t_i} (C_{t_{i+1}} - C_{t_i}),
\]
\[
\int_c^b X_t \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=m}^{k} X_{t_i} (C_{t_{i+1}} - C_{t_i}).
\]
Hence the equation (13.20) is proved.
Theorem 13.7 (Linearity of Liu Integral) Let Xt and Yt be integrable
uncertain processes on [a, b], and let α and β be real numbers. Then
\[
\int_a^b (\alpha X_t + \beta Y_t) \, \mathrm{d}C_t = \alpha \int_a^b X_t \, \mathrm{d}C_t + \beta \int_a^b Y_t \, \mathrm{d}C_t. \tag{13.21}
\]

Proof: Let a = t1 < t2 < · · · < tk+1 = b be a partition of the closed interval
[a, b]. It follows from the definition of Liu integral that
\[
\begin{aligned}
\int_a^b (\alpha X_t + \beta Y_t) \, \mathrm{d}C_t
&= \lim_{\Delta \to 0} \sum_{i=1}^{k} (\alpha X_{t_i} + \beta Y_{t_i})(C_{t_{i+1}} - C_{t_i}) \\
&= \lim_{\Delta \to 0} \alpha \sum_{i=1}^{k} X_{t_i}(C_{t_{i+1}} - C_{t_i}) + \lim_{\Delta \to 0} \beta \sum_{i=1}^{k} Y_{t_i}(C_{t_{i+1}} - C_{t_i}) \\
&= \alpha \int_a^b X_t \, \mathrm{d}C_t + \beta \int_a^b Y_t \, \mathrm{d}C_t.
\end{aligned}
\]
Hence the equation (13.21) is proved.


Theorem 13.8 Let f(t) be an integrable function with respect to t. Then the
Liu integral
\[
\int_0^s f(t) \, \mathrm{d}C_t \tag{13.22}
\]
is a normal uncertain variable at each time s, and
\[
\int_0^s f(t) \, \mathrm{d}C_t \sim \mathcal{N}\left( 0, \int_0^s |f(t)| \, \mathrm{d}t \right). \tag{13.23}
\]

Proof: Since the increments of Ct are stationary and independent normal
uncertain variables, for any partition of closed interval [0, s] with 0 = t1 <
t2 < · · · < tk+1 = s, it follows from Theorem 2.11 that
\[
\sum_{i=1}^{k} f(t_i)(C_{t_{i+1}} - C_{t_i}) \sim \mathcal{N}\left( 0, \sum_{i=1}^{k} |f(t_i)|(t_{i+1} - t_i) \right).
\]
That is, the sum is also a normal uncertain variable. Since f is an integrable
function, we have
\[
\sum_{i=1}^{k} |f(t_i)|(t_{i+1} - t_i) \to \int_0^s |f(t)| \, \mathrm{d}t
\]
as the mesh ∆ → 0. Hence we obtain
\[
\int_0^s f(t) \, \mathrm{d}C_t = \lim_{\Delta \to 0} \sum_{i=1}^{k} f(t_i)(C_{t_{i+1}} - C_{t_i}) \sim \mathcal{N}\left( 0, \int_0^s |f(t)| \, \mathrm{d}t \right).
\]
The theorem is proved.
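
As an illustration, take f(t) = t, the setting of Exercise 13.1 below: the
integral of |f| over [0, s] is s²/2, so by Theorem 13.8 the Liu integral is a
normal uncertain variable N(0, s²/2). The sketch evaluates that distribution
at a few points.

    # A sketch of Theorem 13.8 with f(t) = t on [0, s].
    from math import exp, sqrt, pi

    def normal_cdf(x, e, sigma):
        """Uncertainty distribution of a normal uncertain variable N(e, sigma)."""
        return 1.0 / (1.0 + exp(pi * (e - x) / (sqrt(3.0) * sigma)))

    s = 2.0
    sigma = s * s / 2.0              # the integral of |t| dt over [0, s]
    print([round(normal_cdf(x, 0.0, sigma), 4) for x in (-2.0, 0.0, 2.0)])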

Exercise 13.1: Let s be a given time with s > 0. Show that the Liu integral
\[
\int_0^s t \, \mathrm{d}C_t \tag{13.24}
\]
is a normal uncertain variable N(0, s²/2) and has an uncertainty distribution
\[
\Phi_s(x) = \left( 1 + \exp\left( -\frac{2\pi x}{\sqrt{3}\, s^2} \right) \right)^{-1}. \tag{13.25}
\]

Exercise 13.2: For any real number α with 0 < α < 1, the uncertain process
\[
F_s = \int_0^s (s - t)^{-\alpha} \, \mathrm{d}C_t \tag{13.26}
\]
is called a fractional Liu process with index α. Show that Fs is a normal
uncertain variable and
\[
F_s \sim \mathcal{N}\left( 0, \frac{s^{1-\alpha}}{1 - \alpha} \right) \tag{13.27}
\]
whose uncertainty distribution is
\[
\Phi_s(x) = \left( 1 + \exp\left( -\frac{\pi (1 - \alpha) x}{\sqrt{3}\, s^{1-\alpha}} \right) \right)^{-1}. \tag{13.28}
\]
Definition 13.5 (Chen-Ralescu [13]) Let Ct be a Liu process and let Zt be
an uncertain process. If there exist uncertain processes µt and σt such that
\[
Z_t = Z_0 + \int_0^t \mu_s \, \mathrm{d}s + \int_0^t \sigma_s \, \mathrm{d}C_s \tag{13.29}
\]
for any t ≥ 0, then Zt is called a general Liu process with drift µt and
diffusion σt. Furthermore, Zt has an uncertain differential
\[
\mathrm{d}Z_t = \mu_t \, \mathrm{d}t + \sigma_t \, \mathrm{d}C_t. \tag{13.30}
\]

Example 13.4: It follows from the equation (13.17) that Liu process Ct can
be written as
\[
C_t = \int_0^t \mathrm{d}C_s.
\]
Thus Ct is a general Liu process with drift 0 and diffusion 1, and has an
uncertain differential dCt.

Example 13.5: It follows from the equation (13.18) that Ct² can be written
as
\[
C_t^2 = 2 \int_0^t C_s \, \mathrm{d}C_s.
\]
Thus Ct² is a general Liu process with drift 0 and diffusion 2Ct, and has an
uncertain differential
\[
\mathrm{d}(C_t^2) = 2 C_t \, \mathrm{d}C_t.
\]

Example 13.6: It follows from the equation (13.19) that tCt can be written
as
\[
t C_t = \int_0^t C_s \, \mathrm{d}s + \int_0^t s \, \mathrm{d}C_s.
\]
Thus tCt is a general Liu process with drift Ct and diffusion t, and has an
uncertain differential
\[
\mathrm{d}(t C_t) = C_t \, \mathrm{d}t + t \, \mathrm{d}C_t.
\]

Theorem 13.9 (Chen-Ralescu [13]) Any general Liu process is a sample-continuous uncertain process.

Proof: Let Zt be a general Liu process with drift µt and diffusion σt . Then we immediately have

Zt = Z0 + ∫_0^t µs ds + ∫_0^t σs dCs .

For each γ ∈ Γ, it is obvious that

|Zt (γ) − Zr (γ)| = |∫_r^t µs (γ)ds + ∫_r^t σs (γ)dCs (γ)| → 0

as r → t. Hence Zt is sample-continuous and the theorem is proved.

13.3 Fundamental Theorem


Theorem 13.10 (Liu [80], Fundamental Theorem of Uncertain Calculus) Let h(t, c) be a continuously differentiable function. Then Zt = h(t, Ct ) is a general Liu process and has an uncertain differential

dZt = (∂h/∂t)(t, Ct )dt + (∂h/∂c)(t, Ct )dCt . (13.31)

Proof: Write ∆Ct = Ct+∆t − Ct = C∆t . It follows from Theorems 13.3 and 13.4 that ∆t and ∆Ct are infinitesimals of the same order. Since the function h is continuously differentiable, by using Taylor series expansion, the infinitesimal increment of Zt has a first-order approximation,

∆Zt = (∂h/∂t)(t, Ct )∆t + (∂h/∂c)(t, Ct )∆Ct .

Hence we obtain the uncertain differential (13.31) because it yields

Zs = Z0 + ∫_0^s (∂h/∂t)(t, Ct )dt + ∫_0^s (∂h/∂c)(t, Ct )dCt . (13.32)

This formula is an integral form of the fundamental theorem.

Example 13.7: Let us calculate the uncertain differential of tCt . In this case, we have h(t, c) = tc whose partial derivatives are

(∂h/∂t)(t, c) = c, (∂h/∂c)(t, c) = t.

It follows from the fundamental theorem of uncertain calculus that

d(tCt ) = Ct dt + tdCt . (13.33)

Thus tCt is a general Liu process with drift Ct and diffusion t.

Example 13.8: Let us calculate the uncertain differential of the arithmetic Liu process At = et + σCt . In this case, we have h(t, c) = et + σc whose partial derivatives are

(∂h/∂t)(t, c) = e, (∂h/∂c)(t, c) = σ.

It follows from the fundamental theorem of uncertain calculus that

dAt = edt + σdCt . (13.34)

Thus At is a general Liu process with drift e and diffusion σ.

Example 13.9: Let us calculate the uncertain differential of the geometric Liu process Gt = exp(et + σCt ). In this case, we have h(t, c) = exp(et + σc) whose partial derivatives are

(∂h/∂t)(t, c) = eh(t, c), (∂h/∂c)(t, c) = σh(t, c).

It follows from the fundamental theorem of uncertain calculus that

dGt = eGt dt + σGt dCt . (13.35)

Thus Gt is a general Liu process with drift eGt and diffusion σGt .
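Since the fundamental theorem reduces computing an uncertain differential to two partial derivatives, the computation can be checked symbolically. The following is a minimal Python sketch using sympy; the helper name uncertain_differential is hypothetical.

import sympy as sp

t, c, e, sigma = sp.symbols('t c e sigma')

def uncertain_differential(h):
    # by (13.31), Zt = h(t, Ct) has drift dh/dt and diffusion dh/dc,
    # both evaluated at (t, Ct)
    return sp.diff(h, t), sp.diff(h, c)

# Example 13.9: geometric Liu process with h(t, c) = exp(e*t + sigma*c)
drift, diffusion = uncertain_differential(sp.exp(e*t + sigma*c))
print(sp.simplify(drift / sp.exp(e*t + sigma*c)))      # e, i.e. drift = e*Gt
print(sp.simplify(diffusion / sp.exp(e*t + sigma*c)))  # sigma, i.e. diffusion = sigma*Gt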

13.4 Chain Rule


Chain rule is a special case of the fundamental theorem of uncertain calculus.

Theorem 13.11 (Liu [80], Chain Rule) Let f (c) be a continuously differentiable function. Then f (Ct ) has an uncertain differential

df (Ct ) = f ′(Ct )dCt . (13.36)

Proof: Since f (c) is a continuously differentiable function, we immediately have

(∂/∂t)f (c) = 0, (∂/∂c)f (c) = f ′(c).

It follows from the fundamental theorem of uncertain calculus that the equation (13.36) holds.

Example 13.10: Let us calculate the uncertain differential of Ct2 . In this case, we have f (c) = c2 and f ′(c) = 2c. It follows from the chain rule that

d(Ct2 ) = 2Ct dCt . (13.37)

Example 13.11: Let us calculate the uncertain differential of sin(Ct ). In this case, we have f (c) = sin(c) and f ′(c) = cos(c). It follows from the chain rule that

d sin(Ct ) = cos(Ct )dCt . (13.38)

Example 13.12: Let us calculate the uncertain differential of exp(Ct ). In this case, we have f (c) = exp(c) and f ′(c) = exp(c). It follows from the chain rule that

d exp(Ct ) = exp(Ct )dCt . (13.39)

13.5 Change of Variables


Theorem 13.12 (Liu [80], Change of Variables) Let f be a continuously differentiable function. Then for any s > 0, we have

∫_0^s f ′(Ct )dCt = ∫_{C0}^{Cs} f ′(c)dc. (13.40)

That is,

∫_0^s f ′(Ct )dCt = f (Cs ) − f (C0 ). (13.41)

Proof: Since f is a continuously differentiable function, it follows from the chain rule that

df (Ct ) = f ′(Ct )dCt .

This formula implies that

f (Cs ) = f (C0 ) + ∫_0^s f ′(Ct )dCt .

Hence the theorem is verified.

Example 13.13: Since the function f ′(c) = c has an antiderivative f (c) = c2 /2, it follows from the change of variables of integral that

∫_0^s Ct dCt = (1/2)Cs2 − (1/2)C02 = (1/2)Cs2 .

Example 13.14: Since the function f ′(c) = c2 has an antiderivative f (c) = c3 /3, it follows from the change of variables of integral that

∫_0^s Ct2 dCt = (1/3)Cs3 − (1/3)C03 = (1/3)Cs3 .

Example 13.15: Since the function f ′(c) = exp(c) has an antiderivative f (c) = exp(c), it follows from the change of variables of integral that

∫_0^s exp(Ct )dCt = exp(Cs ) − exp(C0 ) = exp(Cs ) − 1.

13.6 Integration by Parts


Theorem 13.13 (Liu [80], Integration by Parts) Suppose Xt and Yt are general Liu processes. Then

d(Xt Yt ) = Yt dXt + Xt dYt . (13.42)

Proof: Note that ∆Xt and ∆Yt are infinitesimals of the same order. Since the function xy is a continuously differentiable function with respect to x and y, by using Taylor series expansion, the infinitesimal increment of Xt Yt has a first-order approximation,

∆(Xt Yt ) = Yt ∆Xt + Xt ∆Yt .

Hence we obtain the uncertain differential (13.42) because it yields

Xs Ys = X0 Y0 + ∫_0^s Yt dXt + ∫_0^s Xt dYt . (13.43)

The theorem is thus proved.



Example 13.16: In order to illustrate the integration by parts, let us calculate the uncertain differential of

Zt = exp(t)Ct2 .

In this case, we define

Xt = exp(t), Yt = Ct2 .

Then

dXt = exp(t)dt, dYt = 2Ct dCt .

It follows from the integration by parts that

dZt = exp(t)Ct2 dt + 2 exp(t)Ct dCt .

Example 13.17: The integration by parts may also calculate the uncertain differential of

Zt = sin(t + 1) ∫_0^t s dCs .

In this case, we define

Xt = sin(t + 1), Yt = ∫_0^t s dCs .

Then

dXt = cos(t + 1)dt, dYt = tdCt .

It follows from the integration by parts that

dZt = (∫_0^t s dCs) cos(t + 1)dt + sin(t + 1)t dCt .

Example 13.18: Let f and g be continuously differentiable functions. It is clear that

Zt = f (t)g(Ct )

is an uncertain process. In order to calculate the uncertain differential of Zt , we define

Xt = f (t), Yt = g(Ct ).

Then

dXt = f ′(t)dt, dYt = g ′(Ct )dCt .

It follows from the integration by parts that

dZt = f ′(t)g(Ct )dt + f (t)g ′(Ct )dCt .



13.7 Bibliographic Notes


Uncertain integral was proposed by Liu [78] in 2008 in order to integrate
uncertain processes with respect to Liu process. One year later, Liu [80]
presented the fundamental theorem of uncertain calculus from which the
techniques of chain rule, change of variables, and integration by parts were
derived.
Note that uncertain integral may also be defined with respect to other
integrators. For example, Yao [168] defined an uncertain integral with respect
to uncertain renewal process, and Chen [16] investigated an uncertain integral
with respect to finite variation processes. Since then, the theory of uncertain calculus has been well developed.
Chapter 14

Uncertain Differential
Equation

Uncertain differential equation is a type of differential equation involving


uncertain processes. This chapter will discuss the existence, uniqueness and
stability of solutions of uncertain differential equations, and introduce Yao-
Chen formula that represents the solution of an uncertain differential equation
by a family of solutions of ordinary differential equations. On the basis of
this formula, some formulas to calculate extreme value, first hitting time, and
time integral of solution are provided. Furthermore, some numerical methods
for solving general uncertain differential equations are designed.

14.1 Uncertain Differential Equation


Definition 14.1 (Liu [78]) Suppose Ct is a Liu process, and f and g are
two functions. Then
dXt = f (t, Xt )dt + g(t, Xt )dCt (14.1)
is called an uncertain differential equation. A solution is an uncertain process
Xt that satisfies (14.1) identically in t.

Remark 14.1: The uncertain differential equation (14.1) is equivalent to the uncertain integral equation

Xs = X0 + ∫_0^s f (t, Xt )dt + ∫_0^s g(t, Xt )dCt . (14.2)

Theorem 14.1 Let ut and vt be two integrable uncertain processes. Then the uncertain differential equation

dXt = ut dt + vt dCt (14.3)

has a solution

Xt = X0 + ∫_0^t us ds + ∫_0^t vs dCs . (14.4)

Proof: This theorem is essentially the definition of uncertain differential or


a direct deduction of the fundamental theorem of uncertain calculus.

Example 14.1: Let a and b be real numbers. Consider the uncertain differential equation

dXt = adt + bdCt . (14.5)

It follows from Theorem 14.1 that the solution is

Xt = X0 + ∫_0^t a ds + ∫_0^t b dCs .

That is,

Xt = X0 + at + bCt . (14.6)

Theorem 14.2 Let ut and vt be two integrable uncertain processes. Then the uncertain differential equation

dXt = ut Xt dt + vt Xt dCt (14.7)

has a solution

Xt = X0 exp(∫_0^t us ds + ∫_0^t vs dCs). (14.8)

Proof: At first, the original uncertain differential equation is equivalent to

dXt /Xt = ut dt + vt dCt .

It follows from the fundamental theorem of uncertain calculus that

d ln Xt = dXt /Xt = ut dt + vt dCt

and then

ln Xt = ln X0 + ∫_0^t us ds + ∫_0^t vs dCs .

Therefore the uncertain differential equation has a solution (14.8).

Example 14.2: Let a and b be real numbers. Consider the uncertain differential equation

dXt = aXt dt + bXt dCt . (14.9)

It follows from Theorem 14.2 that the solution is

Xt = X0 exp(∫_0^t a ds + ∫_0^t b dCs).

That is,

Xt = X0 exp (at + bCt ) . (14.10)
Linear Uncertain Differential Equation

Theorem 14.3 (Chen-Liu [5]) Let u1t , u2t , v1t , v2t be integrable uncertain processes. Then the linear uncertain differential equation

dXt = (u1t Xt + u2t )dt + (v1t Xt + v2t )dCt (14.11)

has a solution

Xt = Ut (X0 + ∫_0^t (u2s /Us )ds + ∫_0^t (v2s /Us )dCs) (14.12)

where

Ut = exp(∫_0^t u1s ds + ∫_0^t v1s dCs). (14.13)

Proof: At first, we define two uncertain processes Ut and Vt via uncertain differential equations,

dUt = u1t Ut dt + v1t Ut dCt , dVt = (u2t /Ut )dt + (v2t /Ut )dCt .

It follows from the integration by parts that

d(Ut Vt ) = Vt dUt + Ut dVt = (u1t Ut Vt + u2t )dt + (v1t Ut Vt + v2t )dCt .

That is, the uncertain process Xt = Ut Vt is a solution of the uncertain differential equation (14.11). Note that

Ut = U0 exp(∫_0^t u1s ds + ∫_0^t v1s dCs),

Vt = V0 + ∫_0^t (u2s /Us )ds + ∫_0^t (v2s /Us )dCs .

Taking U0 = 1 and V0 = X0 , we get the solution (14.12). The theorem is proved.

Example 14.3: Let m, a, σ be real numbers. Consider a linear uncertain differential equation

dXt = (m − aXt )dt + σdCt . (14.14)

At first, we have

Ut = exp(∫_0^t (−a)ds + ∫_0^t 0 dCs) = exp(−at).

It follows from Theorem 14.3 that the solution is

Xt = exp(−at)(X0 + ∫_0^t m exp(as)ds + ∫_0^t σ exp(as)dCs).

That is,

Xt = m/a + exp(−at)(X0 − m/a) + σ exp(−at) ∫_0^t exp(as)dCs (14.15)

provided that a ≠ 0. Note that Xt is a normal uncertain variable, i.e.,

Xt ∼ N(m/a + exp(−at)(X0 − m/a), σ/a − (σ/a) exp(−at)). (14.16)
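Since Xt in (14.16) is normal, its quantiles follow from the standard inverse uncertainty distribution of N(e, σ), namely e + (σ√3/π) ln(α/(1 − α)). A minimal Python sketch, with illustrative parameter values and the hypothetical helper name inverse_distribution:

import math

def inverse_distribution(t, m, a, sigma, x0, alpha):
    # inverse uncertainty distribution of Xt in (14.16), assuming a != 0
    e_t = m / a + math.exp(-a * t) * (x0 - m / a)
    s_t = sigma / a - (sigma / a) * math.exp(-a * t)
    return e_t + (s_t * math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))

# e.g. the 0.9-quantile of X1 for dXt = (2 - Xt)dt + 0.5 dCt with X0 = 0
print(inverse_distribution(t=1.0, m=2.0, a=1.0, sigma=0.5, x0=0.0, alpha=0.9))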
a a a a

Example 14.4: Let m and σ be real numbers. Consider a linear uncertain differential equation

dXt = mdt + σXt dCt . (14.17)

At first, we have

Ut = exp(∫_0^t 0 ds + ∫_0^t σ dCs) = exp(σCt ).

It follows from Theorem 14.3 that the solution is

Xt = exp(σCt )(X0 + ∫_0^t m exp(−σCs )ds + ∫_0^t 0 dCs).

That is,

Xt = exp(σCt )(X0 + m ∫_0^t exp(−σCs )ds). (14.18)

14.2 Analytic Methods


This section will provide two analytic methods for solving some nonlinear
uncertain differential equations.

First Analytic Method


This subsection will introduce an analytic method for solving nonlinear un-
certain differential equations like

dXt = f (t, Xt )dt + σt Xt dCt (14.19)

and
dXt = αt Xt dt + g(t, Xt )dCt . (14.20)

Theorem 14.4 (Liu [105]) Let f be a function of two variables and let σt be an integrable uncertain process. Then the uncertain differential equation

dXt = f (t, Xt )dt + σt Xt dCt (14.21)

has a solution

Xt = Yt⁻¹ Zt (14.22)

where

Yt = exp(− ∫_0^t σs dCs) (14.23)

and Zt is the solution of the uncertain differential equation

dZt = Yt f (t, Yt⁻¹ Zt )dt (14.24)

with initial value Z0 = X0 .

Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential

dYt = − exp(− ∫_0^t σs dCs) σt dCt = −Yt σt dCt .

It follows from the integration by parts that

d(Xt Yt ) = Xt dYt + Yt dXt = −Xt Yt σt dCt + Yt f (t, Xt )dt + Yt σt Xt dCt .

That is,

d(Xt Yt ) = Yt f (t, Xt )dt.

Defining Zt = Xt Yt , we obtain Xt = Yt⁻¹ Zt and dZt = Yt f (t, Yt⁻¹ Zt )dt. Furthermore, since Y0 = 1, the initial value Z0 is just X0 . The theorem is thus verified.

Example 14.5: Let α and σ be real numbers with α ≠ 1. Consider the uncertain differential equation

dXt = Xtα dt + σXt dCt . (14.25)

At first, we have Yt = exp(−σCt ) and Zt satisfies the uncertain differential equation,

dZt = exp(−σCt )(exp(σCt )Zt )α dt = exp((α − 1)σCt )Ztα dt.

Since α ≠ 1, we have

dZt1−α = (1 − α) exp((α − 1)σCt )dt.

It follows from the fundamental theorem of uncertain calculus that

Zt1−α = Z01−α + (1 − α) ∫_0^t exp((α − 1)σCs )ds.

Since the initial value Z0 is just X0 , we have

Zt = (X01−α + (1 − α) ∫_0^t exp((α − 1)σCs )ds)^{1/(1−α)} .

Theorem 14.4 says the uncertain differential equation (14.25) has a solution Xt = Yt⁻¹ Zt , i.e.,

Xt = exp(σCt )(X01−α + (1 − α) ∫_0^t exp((α − 1)σCs )ds)^{1/(1−α)} .

Theorem 14.5 (Liu [105]) Let g be a function of two variables and let αt be an integrable uncertain process. Then the uncertain differential equation

dXt = αt Xt dt + g(t, Xt )dCt (14.26)

has a solution

Xt = Yt⁻¹ Zt (14.27)

where

Yt = exp(− ∫_0^t αs ds) (14.28)

and Zt is the solution of the uncertain differential equation

dZt = Yt g(t, Yt⁻¹ Zt )dCt (14.29)

with initial value Z0 = X0 .

Proof: At first, by using the chain rule, the uncertain process Yt has an uncertain differential

dYt = − exp(− ∫_0^t αs ds) αt dt = −Yt αt dt.

It follows from the integration by parts that

d(Xt Yt ) = Xt dYt + Yt dXt = −Xt Yt αt dt + Yt αt Xt dt + Yt g(t, Xt )dCt .

That is,

d(Xt Yt ) = Yt g(t, Xt )dCt .

Defining Zt = Xt Yt , we obtain Xt = Yt⁻¹ Zt and dZt = Yt g(t, Yt⁻¹ Zt )dCt . Furthermore, since Y0 = 1, the initial value Z0 is just X0 . The theorem is thus verified.

Example 14.6: Let α and β be real numbers with β ≠ 1. Consider the uncertain differential equation

dXt = αXt dt + Xtβ dCt . (14.30)

At first, we have Yt = exp(−αt) and Zt satisfies the uncertain differential equation,

dZt = exp(−αt)(exp(αt)Zt )β dCt = exp((β − 1)αt)Ztβ dCt .

Since β ≠ 1, we have

dZt1−β = (1 − β) exp((β − 1)αt)dCt .

It follows from the fundamental theorem of uncertain calculus that

Zt1−β = Z01−β + (1 − β) ∫_0^t exp((β − 1)αs)dCs .

Since the initial value Z0 is just X0 , we have

Zt = (X01−β + (1 − β) ∫_0^t exp((β − 1)αs)dCs)^{1/(1−β)} .

Theorem 14.5 says the uncertain differential equation (14.30) has a solution Xt = Yt⁻¹ Zt , i.e.,

Xt = exp(αt)(X01−β + (1 − β) ∫_0^t exp((β − 1)αs)dCs)^{1/(1−β)} .

Second Analytic Method


This subsection will introduce an analytic method for solving nonlinear un-
certain differential equations like

dXt = f (t, Xt )dt + σt dCt (14.31)

and
dXt = αt dt + g(t, Xt )dCt . (14.32)

Theorem 14.6 (Yao [174]) Let f be a function of two variables and let σt
be an integrable uncertain process. Then the uncertain differential equation

dXt = f (t, Xt )dt + σt dCt (14.33)

has a solution

Xt = Yt + Zt (14.34)

where

Yt = ∫_0^t σs dCs (14.35)

and Zt is the solution of the uncertain differential equation

dZt = f (t, Yt + Zt )dt (14.36)

with initial value Z0 = X0 .



Proof: At first, Yt has an uncertain differential dYt = σt dCt . It follows that


d(Xt − Yt ) = dXt − dYt = f (t, Xt )dt + σt dCt − σt dCt .
That is,
d(Xt − Yt ) = f (t, Xt )dt.
Defining Zt = Xt − Yt , we obtain Xt = Yt + Zt and dZt = f (t, Yt + Zt )dt.
Furthermore, since Y0 = 0, the initial value Z0 is just X0 . The theorem is
proved.

Example 14.7: Let α and σ be real numbers with α ≠ 0. Consider the uncertain differential equation

dXt = α exp(Xt )dt + σdCt . (14.37)

At first, we have Yt = σCt and Zt satisfies the uncertain differential equation,

dZt = α exp(σCt + Zt )dt.

Since α ≠ 0, we have

d exp(−Zt ) = −α exp(σCt )dt.

It follows from the fundamental theorem of uncertain calculus that

exp(−Zt ) = exp(−Z0 ) − α ∫_0^t exp(σCs )ds.

Since the initial value Z0 is just X0 , we have

Zt = X0 − ln(1 − α ∫_0^t exp(X0 + σCs )ds).

Hence

Xt = X0 + σCt − ln(1 − α ∫_0^t exp(X0 + σCs )ds).

Theorem 14.7 (Yao [174]) Let g be a function of two variables and let αt
be an integrable uncertain process. Then the uncertain differential equation
dXt = αt dt + g(t, Xt )dCt (14.38)
has a solution

Xt = Yt + Zt (14.39)

where

Yt = ∫_0^t αs ds (14.40)
and Zt is the solution of the uncertain differential equation
dZt = g(t, Yt + Zt )dCt (14.41)
with initial value Z0 = X0 .

Proof: The uncertain process Yt has an uncertain differential dYt = αt dt. It


follows that

d(Xt − Yt ) = dXt − dYt = αt dt + g(t, Xt )dCt − αt dt.

That is,
d(Xt − Yt ) = g(t, Xt )dCt .
Defining Zt = Xt − Yt , we obtain Xt = Yt + Zt and dZt = g(t, Yt + Zt )dCt .
Furthermore, since Y0 = 0, the initial value Z0 is just X0 . The theorem is
proved.

Example 14.8: Let α and σ be real numbers with σ ≠ 0. Consider the uncertain differential equation

dXt = αdt + σ exp(Xt )dCt . (14.42)

At first, we have Yt = αt and Zt satisfies the uncertain differential equation,

dZt = σ exp(αt + Zt )dCt .

Since σ ≠ 0, we have

d exp(−Zt ) = −σ exp(αt)dCt .

It follows from the fundamental theorem of uncertain calculus that

exp(−Zt ) = exp(−Z0 ) − σ ∫_0^t exp(αs)dCs .

Since the initial value Z0 is just X0 , we have

Zt = X0 − ln(1 − σ ∫_0^t exp(X0 + αs)dCs).

Hence

Xt = X0 + αt − ln(1 − σ ∫_0^t exp(X0 + αs)dCs).

14.3 Existence and Uniqueness


Theorem 14.8 (Chen-Liu [5], Existence and Uniqueness Theorem) The un-
certain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (14.43)

has a unique solution if the coefficients f (t, x) and g(t, x) satisfy the linear
growth condition

|f (t, x)| + |g(t, x)| ≤ L(1 + |x|), ∀x ∈ ℜ, t ≥ 0 (14.44)

and Lipschitz condition

|f (t, x) − f (t, y)| + |g(t, x) − g(t, y)| ≤ L|x − y|, ∀x, y ∈ ℜ, t ≥ 0 (14.45)

for some constant L. Moreover, the solution is sample-continuous.

Proof: We first prove the existence of solution by a successive approximation method. Define Xt(0) = X0 , and

Xt(n) = X0 + ∫_0^t f(s, Xs(n−1)) ds + ∫_0^t g(s, Xs(n−1)) dCs

for n = 1, 2, · · · and write

Dt(n) (γ) = max_{0≤s≤t} |Xs(n+1) (γ) − Xs(n) (γ)|

for each γ ∈ Γ. It follows from the linear growth condition and Lipschitz condition that

Dt(0) (γ) = max_{0≤s≤t} |∫_0^s f (v, X0 )dv + ∫_0^s g(v, X0 )dCv (γ)|
  ≤ ∫_0^t |f (v, X0 )| dv + Kγ ∫_0^t |g(v, X0 )| dv
  ≤ (1 + |X0 |)L(1 + Kγ )t

where Kγ is the Lipschitz constant of the sample path Ct (γ). In fact, by using the induction method, we may verify

Dt(n) (γ) ≤ (1 + |X0 |) L^{n+1} (1 + Kγ )^{n+1} t^{n+1} / (n + 1)!

for each n. This means that, for each γ ∈ Γ, the sample paths Xt(k) (γ) converge uniformly on any given time interval. Write the limit as Xt (γ), which is just a solution of the uncertain differential equation because

Xt = X0 + ∫_0^t f (s, Xs )ds + ∫_0^t g(s, Xs )dCs .

Next we prove that the solution is unique. Assume that both Xt and Xt∗
are solutions of the uncertain differential equation. Then for each γ ∈ Γ, it
follows from the linear growth condition and Lipschitz condition that
|Xt (γ) − Xt∗ (γ)| ≤ L(1 + Kγ ) ∫_0^t |Xv (γ) − Xv∗ (γ)|dv.

By using Gronwall inequality, we obtain

|Xt (γ) − Xt∗ (γ)| ≤ 0 · exp(L(1 + Kγ )t) = 0.


Hence Xt = Xt∗ . The uniqueness is verified. Finally, for each γ ∈ Γ, we have

|Xt (γ) − Xr (γ)| = |∫_r^t f (s, Xs (γ))ds + ∫_r^t g(s, Xs (γ))dCs (γ)| → 0

as r → t. Thus Xt is sample-continuous and the theorem is proved.

14.4 Stability
Definition 14.2 (Liu [80]) An uncertain differential equation is said to be
stable if for any two solutions Xt and Yt , we have
lim_{|X0 −Y0 |→0} M{|Xt − Yt | < ε for all t ≥ 0} = 1 (14.46)

for any given number ε > 0.

Example 14.9: In order to illustrate the concept of stability, let us consider


the uncertain differential equation
dXt = adt + bdCt . (14.47)
It is clear that two solutions with initial values X0 and Y0 are
Xt = X0 + at + bCt ,
Yt = Y0 + at + bCt .
Then for any given number ε > 0, we have
lim_{|X0 −Y0 |→0} M{|Xt − Yt | < ε for all t ≥ 0} = lim_{|X0 −Y0 |→0} M{|X0 − Y0 | < ε} = 1.

Hence the uncertain differential equation (14.47) is stable.

Example 14.10: Some uncertain differential equations are not stable. For
example, consider
dXt = Xt dt + bdCt . (14.48)
It is clear that two solutions with different initial values X0 and Y0 are
Xt = exp(t)X0 + b exp(t) ∫_0^t exp(−s)dCs ,
Yt = exp(t)Y0 + b exp(t) ∫_0^t exp(−s)dCs .
Then for any given number ε > 0, we have
lim_{|X0 −Y0 |→0} M{|Xt − Yt | < ε for all t ≥ 0} = lim_{|X0 −Y0 |→0} M{exp(t)|X0 − Y0 | < ε for all t ≥ 0} = 0.

Hence the uncertain differential equation (14.48) is unstable.



Theorem 14.9 (Yao-Gao-Gao [170], Stability Theorem) The uncertain dif-


ferential equation
dXt = f (t, Xt )dt + g(t, Xt )dCt (14.49)
is stable if the coefficients f (t, x) and g(t, x) satisfy the linear growth condition

|f (t, x)| + |g(t, x)| ≤ K(1 + |x|), ∀x ∈ ℜ, t ≥ 0 (14.50)

for some constant K and strong Lipschitz condition

|f (t, x) − f (t, y)| + |g(t, x) − g(t, y)| ≤ L(t)|x − y|, ∀x, y ∈ ℜ, t ≥ 0 (14.51)

for some bounded and integrable function L(t) on [0, +∞).

Proof: Since L(t) is bounded on [0, +∞), there is a constant R such that
L(t) ≤ R for any t. Then the strong Lipschitz condition (14.51) implies the
following Lipschitz condition,

|f (t, x) − f (t, y)| + |g(t, x) − g(t, y)| ≤ R|x − y|, ∀x, y ∈ ℜ, t ≥ 0. (14.52)

It follows from the linear growth condition (14.50), the Lipschitz condition
(14.52) and the existence and uniqueness theorem that the uncertain differ-
ential equation (14.49) has a unique solution. Let Xt and Yt be two solutions
with initial values X0 and Y0 , respectively. Then for each γ, we have

d|Xt (γ) − Yt (γ)| ≤ |f (t, Xt (γ)) − f (t, Yt (γ))|dt + |g(t, Xt (γ)) − g(t, Yt (γ))|K(γ)dt
  ≤ L(t)|Xt (γ) − Yt (γ)|dt + L(t)K(γ)|Xt (γ) − Yt (γ)|dt
  = L(t)(1 + K(γ))|Xt (γ) − Yt (γ)|dt

where K(γ) is the Lipschitz constant of the sample path Ct (γ). It follows
that
|Xt (γ) − Yt (γ)| ≤ |X0 − Y0 | exp((1 + K(γ)) ∫_0^{+∞} L(s)ds).

Thus for any given ε > 0, we always have

M{|Xt − Yt | < ε for all t ≥ 0} ≥ M{|X0 − Y0 | exp((1 + K(γ)) ∫_0^{+∞} L(s)ds) < ε}.

Since

M{|X0 − Y0 | exp((1 + K(γ)) ∫_0^{+∞} L(s)ds) < ε} → 1

as |X0 − Y0 | → 0, we obtain

lim_{|X0 −Y0 |→0} M{|Xt − Yt | < ε for all t ≥ 0} = 1.

Hence the uncertain differential equation is stable.

Exercise 14.1: Suppose u1t , u2t , v1t , v2t are bounded functions with respect
to t such that
∫_0^{+∞} |u1t |dt < +∞, ∫_0^{+∞} |v1t |dt < +∞. (14.53)

Show that the linear uncertain differential equation

dXt = (u1t Xt + u2t )dt + (v1t Xt + v2t )dCt (14.54)

is stable.

14.5 α-Path
Definition 14.3 (Yao-Chen [173]) Let α be a number with 0 < α < 1. An
uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (14.55)

is said to have an α-path Xtα if it solves the corresponding ordinary differen-


tial equation
dXtα = f (t, Xtα )dt + |g(t, Xtα )|Φ−1 (α)dt (14.56)
where Φ−1 (α) is the inverse standard normal uncertainty distribution, i.e.,

Φ−1 (α) = (√3/π) ln(α/(1 − α)). (14.57)

Remark 14.2: Note that each α-path Xtα is a real-valued function of time t,
but is not necessarily one of sample paths. Furthermore, almost all α-paths
are continuous functions with respect to time t.

Example 14.11: The uncertain differential equation dXt = adt + bdCt with
X0 = 0 has an α-path
Xtα = at + |b|Φ−1 (α)t (14.58)
where Φ−1 is the inverse standard normal uncertainty distribution.

Example 14.12: The uncertain differential equation dXt = aXt dt+bXt dCt
with X0 = 1 has an α-path

Xtα = exp(at + |b|Φ−1 (α)t) (14.59)

where Φ−1 is the inverse standard normal uncertainty distribution.



[Figure 14.1: A Spectrum of α-Paths of dXt = aXt dt + bXt dCt ]

14.6 Yao-Chen Formula


Yao-Chen formula relates uncertain differential equations and ordinary dif-
ferential equations, just as the Feynman-Kac formula relates stochastic
differential equations and partial differential equations.

Theorem 14.10 (Yao-Chen Formula [173]) Let Xt and Xtα be the solution
and α-path of the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.60)

respectively. Then
M{Xt ≤ Xtα , ∀t} = α, (14.61)
M{Xt > Xtα , ∀t} = 1 − α. (14.62)

Proof: At first, for each α-path Xtα , we divide the time interval into two parts,

T+ = {t | g(t, Xtα ) ≥ 0},
T− = {t | g(t, Xtα ) < 0}.

It is obvious that T+ ∩ T− = ∅ and T+ ∪ T− = [0, +∞). Write

Λ1+ = {γ | dCt (γ)/dt ≤ Φ−1 (α) for any t ∈ T+},
Λ1− = {γ | dCt (γ)/dt ≥ Φ−1 (1 − α) for any t ∈ T−}

where Φ−1 is the inverse standard normal uncertainty distribution. Since T+ and T− are disjoint sets and Ct has independent increments, we get

M{Λ1+} = α, M{Λ1−} = α, M{Λ1+ ∩ Λ1−} = α.


For any γ ∈ Λ1+ ∩ Λ1−, we always have

g(t, Xt (γ)) dCt (γ)/dt ≤ |g(t, Xtα )|Φ−1 (α), ∀t.

Hence Xt (γ) ≤ Xtα for all t and

M{Xt ≤ Xtα , ∀t} ≥ M{Λ1+ ∩ Λ1−} = α. (14.63)

On the other hand, let us define

Λ2+ = {γ | dCt (γ)/dt > Φ−1 (α) for any t ∈ T+},
Λ2− = {γ | dCt (γ)/dt < Φ−1 (1 − α) for any t ∈ T−}.

Since T+ and T− are disjoint sets and Ct has independent increments, we obtain

M{Λ2+} = 1 − α, M{Λ2−} = 1 − α, M{Λ2+ ∩ Λ2−} = 1 − α.

For any γ ∈ Λ2+ ∩ Λ2−, we always have

g(t, Xt (γ)) dCt (γ)/dt > |g(t, Xtα )|Φ−1 (α), ∀t.

Hence Xt (γ) > Xtα for all t and

M{Xt > Xtα , ∀t} ≥ M{Λ2+ ∩ Λ2−} = 1 − α. (14.64)

Note that {Xt ≤ Xtα , ∀t} and {Xt ≰ Xtα , ∀t} are opposite events of each other. By using the duality axiom, we obtain

M{Xt ≤ Xtα , ∀t} + M{Xt ≰ Xtα , ∀t} = 1.

It follows from {Xt > Xtα , ∀t} ⊂ {Xt ≰ Xtα , ∀t} and the monotonicity theorem
that
M{Xt ≤ Xtα , ∀t} + M{Xt > Xtα , ∀t} ≤ 1. (14.65)
Thus (14.61) and (14.62) follow from (14.63), (14.64) and (14.65) immedi-
ately.

Remark 14.3: It can also be shown that for any α ∈ (0, 1), the following two
equations are true,
M{Xt < Xtα , ∀t} = α, (14.66)
M{Xt ≥ Xtα , ∀t} = 1 − α. (14.67)
Note that {Xt < Xtα , ∀t} and {Xt ≥ Xtα , ∀t} are disjoint but not opposite events. Although it is always true that
M{Xt < Xtα , ∀t} + M{Xt ≥ Xtα , ∀t} ≡ 1, (14.68)
the union of {Xt < Xtα , ∀t} and {Xt ≥ Xtα , ∀t} does not make the universal
set, and it is possible that
M{(Xt < Xtα , ∀t) ∪ (Xt ≥ Xtα , ∀t)} < 1. (14.69)

Uncertainty Distribution of Solution


Theorem 14.11 (Yao-Chen [173]) Let Xt and Xtα be the solution and α-
path of the uncertain differential equation
dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.70)
respectively. Then the solution Xt has an inverse uncertainty distribution
Ψt−1 (α) = Xtα . (14.71)
Proof: Note that {Xt ≤ Xtα } ⊃ {Xs ≤ Xsα , ∀s} holds. By using the
monotonicity theorem and Yao-Chen formula, we obtain
M{Xt ≤ Xtα } ≥ M{Xs ≤ Xsα , ∀s} = α. (14.72)
Similarly, we also have
M{Xt > Xtα } ≥ M{Xs > Xsα , ∀s} = 1 − α. (14.73)
In addition, since {Xt ≤ Xtα }
and {Xt > Xtα }
are opposite events, the duality
axiom makes
M{Xt ≤ Xtα } + M{Xt > Xtα } = 1. (14.74)
It follows from (14.72), (14.73) and (14.74) that M{Xt ≤ Xtα } = α. The
theorem is thus verified.

Exercise 14.2: Show that the solution of the uncertain differential equation
dXt = adt + bdCt with X0 = 0 has an inverse uncertainty distribution
Ψt−1 (α) = at + |b|Φ−1 (α)t (14.75)

where Φ−1 is the inverse standard normal uncertainty distribution.

Exercise 14.3: Show that the solution of the uncertain differential equation
dXt = aXt dt + bXt dCt with X0 = 1 has an inverse uncertainty distribution
Ψt−1 (α) = exp(at + |b|Φ−1 (α)t) (14.76)
where Φ−1 is the inverse standard normal uncertainty distribution.
Section 14.6 - Yao-Chen Formula 347

Expected Value of Solution


Theorem 14.12 (Yao-Chen [173]) Let Xt and Xtα be the solution and α-
path of the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.77)

respectively. Then for any monotone (increasing or decreasing) function J,


we have

E[J(Xt )] = ∫_0^1 J(Xtα )dα. (14.78)

Proof: At first, it follows from Yao-Chen formula that Xt has an uncertainty


distribution Ψt−1 (α) = Xtα . Next, a monotone function can be made strictly monotone by a small perturbation. When J is a strictly increasing function, it follows from Theorem 2.8 that J(Xt ) has an inverse uncertainty distribution

Υt−1 (α) = J(Xtα ).

Thus we have

E[J(Xt )] = ∫_0^1 Υt−1 (α)dα = ∫_0^1 J(Xtα )dα.

When J is a strictly decreasing function, it follows from Theorem 2.13 that J(Xt ) has an inverse uncertainty distribution

Υt−1 (α) = J(Xt1−α ).

Thus we have

E[J(Xt )] = ∫_0^1 Υt−1 (α)dα = ∫_0^1 J(Xt1−α )dα = ∫_0^1 J(Xtα )dα.

The theorem is thus proved.

Exercise 14.4: Let Xt and Xtα be the solution and α-path of some uncertain differential equation. Show that

E[Xt ] = ∫_0^1 Xtα dα, (14.79)

E[(Xt − K)+ ] = ∫_0^1 (Xtα − K)+ dα, (14.80)

E[(K − Xt )+ ] = ∫_0^1 (K − Xtα )+ dα. (14.81)
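For a model with a closed-form α-path, formula (14.79) can be evaluated by integrating the α-path over α. Below is a minimal Python sketch for dXt = aXt dt + bXt dCt using the α-path (14.59); the midpoint rule and the function names are illustrative, and the result is only approximate since the integrand grows quickly near α = 1.

import math

def alpha_path(a, b, t, alpha, x0=1.0):
    # alpha-path (14.59) of dXt = a*Xt dt + b*Xt dCt with X0 = x0
    phi_inv = (math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))
    return x0 * math.exp(a * t + abs(b) * phi_inv * t)

def expected_value(a, b, t, n=9999):
    # E[Xt] via (14.79), midpoint rule over alpha in (0, 1)
    return sum(alpha_path(a, b, t, (i + 0.5) / n) for i in range(n)) / n

print(expected_value(a=0.1, b=0.2, t=1.0))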

Extreme Value of Solution


Theorem 14.13 (Yao [171]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.82)

respectively. Then for any time s > 0 and strictly increasing function J(x), the supremum

sup_{0≤t≤s} J(Xt ) (14.83)

has an inverse uncertainty distribution

Ψs−1 (α) = sup_{0≤t≤s} J(Xtα ); (14.84)

and the infimum

inf_{0≤t≤s} J(Xt ) (14.85)

has an inverse uncertainty distribution

Ψs−1 (α) = inf_{0≤t≤s} J(Xtα ). (14.86)

Proof: Since J(x) is a strictly increasing function with respect to x, it is always true that

{sup_{0≤t≤s} J(Xt ) ≤ sup_{0≤t≤s} J(Xtα )} ⊃ {Xt ≤ Xtα , ∀t}.

By using Yao-Chen formula, we obtain

M{sup_{0≤t≤s} J(Xt ) ≤ sup_{0≤t≤s} J(Xtα )} ≥ M{Xt ≤ Xtα , ∀t} = α. (14.87)

Similarly, we have

M{sup_{0≤t≤s} J(Xt ) > sup_{0≤t≤s} J(Xtα )} ≥ M{Xt > Xtα , ∀t} = 1 − α. (14.88)

It follows from (14.87), (14.88) and the duality axiom that

M{sup_{0≤t≤s} J(Xt ) ≤ sup_{0≤t≤s} J(Xtα )} = α (14.89)

which proves (14.84). Next, it is easy to verify that

{inf_{0≤t≤s} J(Xt ) ≤ inf_{0≤t≤s} J(Xtα )} ⊃ {Xt ≤ Xtα , ∀t}.

By using Yao-Chen formula, we obtain

M{inf_{0≤t≤s} J(Xt ) ≤ inf_{0≤t≤s} J(Xtα )} ≥ M{Xt ≤ Xtα , ∀t} = α. (14.90)

Similarly, we have

M{inf_{0≤t≤s} J(Xt ) > inf_{0≤t≤s} J(Xtα )} ≥ M{Xt > Xtα , ∀t} = 1 − α. (14.91)

It follows from (14.90), (14.91) and the duality axiom that

M{inf_{0≤t≤s} J(Xt ) ≤ inf_{0≤t≤s} J(Xtα )} = α (14.92)

which proves (14.86). The theorem is thus verified.

Exercise 14.5: Let r and K be real numbers. Show that the supremum

sup_{0≤t≤s} exp(−rt)(Xt − K)

has an inverse uncertainty distribution

Ψs−1 (α) = sup_{0≤t≤s} exp(−rt)(Xtα − K)

for any given time s > 0.

Theorem 14.14 (Yao [171]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.93)

respectively. Then for any time s > 0 and strictly decreasing function J(x), the supremum

sup_{0≤t≤s} J(Xt ) (14.94)

has an inverse uncertainty distribution

Ψs−1 (α) = sup_{0≤t≤s} J(Xt1−α ); (14.95)

and the infimum

inf_{0≤t≤s} J(Xt ) (14.96)

has an inverse uncertainty distribution

Ψs−1 (α) = inf_{0≤t≤s} J(Xt1−α ). (14.97)

Proof: Since J(x) is a strictly decreasing function with respect to x, it is always true that

{sup_{0≤t≤s} J(Xt ) ≤ sup_{0≤t≤s} J(Xt1−α )} ⊃ {Xt ≥ Xt1−α , ∀t}.

By using Yao-Chen formula, we obtain

M{sup_{0≤t≤s} J(Xt ) ≤ sup_{0≤t≤s} J(Xt1−α )} ≥ M{Xt ≥ Xt1−α , ∀t} = α. (14.98)

Similarly, we have

M{sup_{0≤t≤s} J(Xt ) > sup_{0≤t≤s} J(Xt1−α )} ≥ M{Xt < Xt1−α , ∀t} = 1 − α. (14.99)

It follows from (14.98), (14.99) and the duality axiom that

M{sup_{0≤t≤s} J(Xt ) ≤ sup_{0≤t≤s} J(Xt1−α )} = α (14.100)

which proves (14.95). Next, it is easy to verify that

{inf_{0≤t≤s} J(Xt ) ≤ inf_{0≤t≤s} J(Xt1−α )} ⊃ {Xt ≥ Xt1−α , ∀t}.

By using Yao-Chen formula, we obtain

M{inf_{0≤t≤s} J(Xt ) ≤ inf_{0≤t≤s} J(Xt1−α )} ≥ M{Xt ≥ Xt1−α , ∀t} = α. (14.101)

Similarly, we have

M{inf_{0≤t≤s} J(Xt ) > inf_{0≤t≤s} J(Xt1−α )} ≥ M{Xt < Xt1−α , ∀t} = 1 − α. (14.102)

It follows from (14.101), (14.102) and the duality axiom that

M{inf_{0≤t≤s} J(Xt ) ≤ inf_{0≤t≤s} J(Xt1−α )} = α (14.103)

which proves (14.97). The theorem is thus verified.

Exercise 14.6: Let r and K be real numbers. Show that the supremum

sup_{0≤t≤s} exp(−rt)(K − Xt )

has an inverse uncertainty distribution

Ψs−1 (α) = sup_{0≤t≤s} exp(−rt)(K − Xt1−α )

for any given time s > 0.



First Hitting Time of Solution


Theorem 14.15 (Yao [171]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (14.104)

with an initial value X0 , respectively. Then for any given level z and strictly
increasing function J(x), the first hitting time τz that J(Xt ) reaches z has
an uncertainty distribution

Ψ(s) = 1 − inf{α | sup_{0≤t≤s} J(Xtα ) ≥ z} if z > J(X0 ),
Ψ(s) = sup{α | inf_{0≤t≤s} J(Xtα ) ≤ z} if z < J(X0 ). (14.105)

Proof: At first, assume z > J(X0 ) and write

α0 = inf{α | sup_{0≤t≤s} J(Xtα ) ≥ z}.

Then we have

sup_{0≤t≤s} J(Xtα0 ) = z,

{τz ≤ s} = {sup_{0≤t≤s} J(Xt ) ≥ z} ⊃ {Xt ≥ Xtα0 , ∀t},

{τz > s} = {sup_{0≤t≤s} J(Xt ) < z} ⊃ {Xt < Xtα0 , ∀t}.

By using Yao-Chen formula, we obtain

M{τz ≤ s} ≥ M{Xt ≥ Xtα0 , ∀t} = 1 − α0 ,
M{τz > s} ≥ M{Xt < Xtα0 , ∀t} = α0 .

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = 1 − α0 . Hence the first hitting time τz has an uncertainty distribution

Ψ(s) = M{τz ≤ s} = 1 − inf{α | sup_{0≤t≤s} J(Xtα ) ≥ z}.

Similarly, assume z < J(X0 ) and write

α0 = sup{α | inf_{0≤t≤s} J(Xtα ) ≤ z}.

Then we have

inf_{0≤t≤s} J(Xtα0 ) = z,

{τz ≤ s} = {inf_{0≤t≤s} J(Xt ) ≤ z} ⊃ {Xt ≤ Xtα0 , ∀t},

{τz > s} = {inf_{0≤t≤s} J(Xt ) > z} ⊃ {Xt > Xtα0 , ∀t}.

By using Yao-Chen formula, we obtain

M{τz ≤ s} ≥ M{Xt ≤ Xtα0 , ∀t} = α0 ,
M{τz > s} ≥ M{Xt > Xtα0 , ∀t} = 1 − α0 .

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = α0 . Hence the first hitting time τz has an uncertainty distribution

Ψ(s) = M{τz ≤ s} = sup{α | inf_{0≤t≤s} J(Xtα ) ≤ z}.

The theorem is verified.
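When the α-path is available in closed form and J is the identity, the distribution in Theorem 14.15 can be computed by bisection over α, since sup_{0≤t≤s} Xtα is increasing in α. A minimal Python sketch under these assumptions (function names, grid sizes and parameters are illustrative):

import math

def sup_alpha_path(a, b, s, alpha, x0=1.0, n=1000):
    # sup over a time grid of the alpha-path of dXt = a*Xt dt + b*Xt dCt
    phi_inv = (math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))
    return max(x0 * math.exp((a + abs(b) * phi_inv) * i * s / n)
               for i in range(n + 1))

def hitting_distribution(a, b, s, z, x0=1.0, tol=1e-6):
    # Psi(s) = 1 - inf{alpha | sup X_t^alpha >= z} for z > x0, see (14.105);
    # bisection works because the sup is increasing in alpha
    lo, hi = tol, 1 - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sup_alpha_path(a, b, s, mid, x0) >= z:
            hi = mid
        else:
            lo = mid
    return 1 - hi

print(hitting_distribution(a=0.1, b=0.3, s=1.0, z=1.5))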

Theorem 14.16 (Yao [171]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt (14.106)

with an initial value X0 , respectively. Then for any given level z and strictly
decreasing function J(x), the first hitting time τz that J(Xt ) reaches z has
an uncertainty distribution

Ψ(s) = sup{α | sup_{0≤t≤s} J(Xtα ) ≥ z} if z > J(X0 ),
Ψ(s) = 1 − inf{α | inf_{0≤t≤s} J(Xtα ) ≤ z} if z < J(X0 ). (14.107)

Proof: At first, assume z > J(X0 ) and write

α0 = sup{α | sup_{0≤t≤s} J(Xtα ) ≥ z}.

Then we have

sup_{0≤t≤s} J(Xtα0 ) = z,

{τz ≤ s} = {sup_{0≤t≤s} J(Xt ) ≥ z} ⊃ {Xt ≤ Xtα0 , ∀t},

{τz > s} = {sup_{0≤t≤s} J(Xt ) < z} ⊃ {Xt > Xtα0 , ∀t}.

By using Yao-Chen formula, we obtain

M{τz ≤ s} ≥ M{Xt ≤ Xtα0 , ∀t} = α0 ,
M{τz > s} ≥ M{Xt > Xtα0 , ∀t} = 1 − α0 .

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = α0 . Hence the first hitting time τz has an uncertainty distribution

Ψ(s) = M{τz ≤ s} = sup{α | sup_{0≤t≤s} J(Xtα ) ≥ z}.

Similarly, assume z < J(X0 ) and write

α0 = inf{α | inf_{0≤t≤s} J(Xtα ) ≤ z}.

Then we have

inf_{0≤t≤s} J(Xtα0 ) = z,

{τz ≤ s} = {inf_{0≤t≤s} J(Xt ) ≤ z} ⊃ {Xt ≥ Xtα0 , ∀t},

{τz > s} = {inf_{0≤t≤s} J(Xt ) > z} ⊃ {Xt < Xtα0 , ∀t}.

By using Yao-Chen formula, we obtain

M{τz ≤ s} ≥ M{Xt ≥ Xtα0 , ∀t} = 1 − α0 ,
M{τz > s} ≥ M{Xt < Xtα0 , ∀t} = α0 .

It follows from M{τz ≤ s} + M{τz > s} = 1 that M{τz ≤ s} = 1 − α0 . Hence the first hitting time τz has an uncertainty distribution

Ψ(s) = M{τz ≤ s} = 1 − inf{α | inf_{0≤t≤s} J(Xtα ) ≤ z}.

The theorem is verified.

Time Integral of Solution


Theorem 14.17 (Yao [171]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation
dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.108)
respectively. Then for any time s > 0 and strictly increasing function J(x),
the time integral

∫_0^s J(Xt )dt (14.109)

has an inverse uncertainty distribution

Ψs−1 (α) = ∫_0^s J(Xtα )dt. (14.110)

Proof: Since J(x) is a strictly increasing function with respect to x, it is always true that

{∫_0^s J(Xt )dt ≤ ∫_0^s J(Xtα )dt} ⊃ {J(Xt ) ≤ J(Xtα ), ∀t} ⊃ {Xt ≤ Xtα , ∀t}.

By using Yao-Chen formula, we obtain

M{∫_0^s J(Xt )dt ≤ ∫_0^s J(Xtα )dt} ≥ M{Xt ≤ Xtα , ∀t} = α. (14.111)

Similarly, we have

M{∫_0^s J(Xt )dt > ∫_0^s J(Xtα )dt} ≥ M{Xt > Xtα , ∀t} = 1 − α. (14.112)

It follows from (14.111), (14.112) and the duality axiom that

M{∫_0^s J(Xt )dt ≤ ∫_0^s J(Xtα )dt} = α. (14.113)

The theorem is thus verified.

Exercise 14.7: Let r and K be real numbers. Show that the time integral

∫_0^s exp(−rt)(Xt − K)dt

has an inverse uncertainty distribution

Ψs−1 (α) = ∫_0^s exp(−rt)(Xtα − K)dt

for any given time s > 0.

Theorem 14.18 (Yao [171]) Let Xt and Xtα be the solution and α-path of
the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt , (14.114)

respectively. Then for any time s > 0 and strictly decreasing function J(x),
the time integral

∫_0^s J(Xt )dt (14.115)

has an inverse uncertainty distribution

Ψs−1 (α) = ∫_0^s J(Xt1−α )dt. (14.116)

Proof: Since J(x) is a strictly decreasing function with respect to x, it is always true that

{∫_0^s J(Xt )dt ≤ ∫_0^s J(Xt1−α )dt} ⊃ {Xt ≥ Xt1−α , ∀t}.

By using Yao-Chen formula, we obtain

M{∫_0^s J(Xt )dt ≤ ∫_0^s J(Xt1−α )dt} ≥ M{Xt ≥ Xt1−α , ∀t} = α. (14.117)

Similarly, we have

M{∫_0^s J(Xt )dt > ∫_0^s J(Xt1−α )dt} ≥ M{Xt < Xt1−α , ∀t} = 1 − α. (14.118)

It follows from (14.117), (14.118) and the duality axiom that

M{∫_0^s J(Xt )dt ≤ ∫_0^s J(Xt1−α )dt} = α. (14.119)

The theorem is thus verified.

Exercise 14.8: Let r and K be real numbers. Show that the time integral

∫_0^s exp(−rt)(K − Xt )dt

has an inverse uncertainty distribution

Ψs−1 (α) = ∫_0^s exp(−rt)(K − Xt1−α )dt

for any given time s > 0.

14.7 Numerical Methods


It is almost impossible to find analytic solutions for general uncertain differ-
ential equations. This fact provides a motivation to design some numerical
methods to solve the uncertain differential equation

dXt = f (t, Xt )dt + g(t, Xt )dCt . (14.120)

In order to do so, a key point is to obtain a spectrum of α-paths of the


uncertain differential equation. For this purpose, Yao-Chen [173] designed
an Euler method:

Step 1. Fix α on (0, 1).


356 Chapter 14 - Uncertain Differential Equation

Step 2. Solve dXtα = f (t, Xtα )dt + |g(t, Xtα )|Φ−1 (α)dt by any method for ordinary differential equations and obtain the α-path Xtα , for example, by using the recursion formula

Xi+1α = Xiα + f (ti , Xiα )h + |g(ti , Xiα )|Φ−1 (α)h (14.121)

where Φ−1 is the inverse standard normal uncertainty distribution and h is the step length (see the sketch after Step 3).
Step 3. The α-path Xtα is obtained.
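A minimal Python implementation of this Euler scheme, applied to Example 14.13 below; the function name euler_alpha_path, the step count, and the midpoint integration over α are illustrative choices, not part of the method itself.

import math

def euler_alpha_path(f, g, x0, s, alpha, n=1000):
    # terminal value of the alpha-path of dXt = f(t,Xt)dt + g(t,Xt)dCt
    # computed by the recursion (14.121)
    phi_inv = (math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))
    h = s / n
    x = x0
    for i in range(n):
        t = i * h
        x = x + f(t, x) * h + abs(g(t, x)) * phi_inv * h
    return x

# Example 14.13: dXt = (t - Xt)dt + sqrt(1 + Xt)dCt with X0 = 1;
# E[X1] by averaging terminal values of the alpha-paths, see (14.79)
f = lambda t, x: t - x
g = lambda t, x: math.sqrt(1 + x)
m = 99
print(sum(euler_alpha_path(f, g, 1.0, 1.0, (k + 0.5) / m) for k in range(m)) / m)
# should be close to the value 0.870 reported in (14.128)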

Remark 14.4: Yang-Shen [161] designed a Runge-Kutta method that replaces the recursion formula (14.121) with

Xi+1α = Xiα + (h/6)(k1 + 2k2 + 2k3 + k4 ) (14.122)

where
where
k1 = f (ti , Xiα ) + |g(ti , Xiα )|Φ−1 (α), (14.123)

k2 = f (ti + h/2, Xiα + hk1 /2) + |g(ti + h/2, Xiα + hk1 /2)|Φ−1 (α), (14.124)

k3 = f (ti + h/2, Xiα + hk2 /2) + |g(ti + h/2, Xiα + hk2 /2)|Φ−1 (α), (14.125)

k4 = f (ti + h, Xiα + hk3 ) + |g(ti + h, Xiα + hk3 )|Φ−1 (α). (14.126)
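A corresponding Python sketch of the Runge-Kutta recursion; the wrapper F(t, x) = f (t, x) + |g(t, x)|Φ−1(α) and the function name are illustrative choices.

import math

def runge_kutta_alpha_path(f, g, x0, s, alpha, n=1000):
    # terminal value of the alpha-path by the recursion (14.122)-(14.126)
    phi_inv = (math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))
    F = lambda t, x: f(t, x) + abs(g(t, x)) * phi_inv
    h = s / n
    x = x0
    for i in range(n):
        t = i * h
        k1 = F(t, x)
        k2 = F(t + h / 2, x + h * k1 / 2)
        k3 = F(t + h / 2, x + h * k2 / 2)
        k4 = F(t + h, x + h * k3)
        x = x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x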

Example 14.13: In order to illustrate the numerical method, let us consider an uncertain differential equation

dXt = (t − Xt )dt + √(1 + Xt ) dCt , X0 = 1. (14.127)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may


solve this equation successfully and obtain all α-paths of the uncertain dif-
ferential equation. Furthermore, we may get

E[X1 ] ≈ 0.870. (14.128)

Example 14.14: Now we consider a nonlinear uncertain differential equation

dXt = √Xt dt + (1 − t)Xt dCt , X0 = 1. (14.129)
Note that (1 − t)Xt takes not only positive values but also negative values.
The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may
obtain all α-paths of the uncertain differential equation. Furthermore, we
may get
E[(X2 − 3)+ ] ≈ 2.845. (14.130)

14.8 Bibliographic Notes


The study of uncertain differential equation was pioneered by Liu [78] in 2008.
This work was immediately followed up by many researchers. Nowadays,
the uncertain differential equation has achieved fruitful results in both theory
and practice.
The existence and uniqueness theorem of solution of uncertain differential
equation was first proved by Chen-Liu [5] under linear growth condition and
Lipschitz condition. Later, the theorem was verified again by Gao [47] under
local linear growth condition and local Lipschitz condition.
The first concept of stability of uncertain differential equation was pre-
sented by Liu [80], and some stability theorems were proved by Yao-Gao-
Gao [170]. Following that, different types of stability of uncertain differential
equations were explored, for example, stability in mean (Yao-Ke-Sheng [177]),
stability in moment (Sheng-Wang [137]), stability in distribution (Yang-Ni-
Zhang [163]), almost sure stability (Liu-Ke-Fei [101]), and exponential sta-
bility (Sheng-Gao [141]).
In order to solve uncertain differential equations, Chen-Liu [5] obtained
an analytic solution to linear uncertain differential equations. In addition,
Liu [105] and Yao [174] presented a spectrum of analytic methods to solve
some special classes of nonlinear uncertain differential equations.
More importantly, Yao-Chen [173] showed that the solution of an uncer-
tain differential equation can be represented by a family of solutions of ordi-
nary differential equations, thus relating uncertain differential equations and
ordinary differential equations. On the basis of Yao-Chen formula, Yao [171]
presented some formulas to calculate extreme value, first hitting time, and
time integral of solution of uncertain differential equation. Furthermore, some
numerical methods for solving general uncertain differential equations were
designed among others by Yao-Chen [173], Yang-Shen [161], Yang-Ralescu
[160], Gao [32], and Zhang-Gao-Huang [202].
Uncertain differential equation has been successfully extended in many
directions, including uncertain delay differential equation (Barbacioru [2],
Ge-Zhu [50] and Liu-Fei [100]), higher-order uncertain differential equation
(Yao [184]), multifactor uncertain differential equation (Li-Peng-Zhang [71]),
uncertain differential equation with jumps (Yao [168]), and uncertain partial
differential equation (Yang-Yao [164]).
Uncertain differential equation has been widely applied in many fields such
as finance (Liu [89]), optimal control (Zhu [207]), differential game (Yang-
Gao [158]), heat conduction (Yang-Yao [164]), population growth (Sheng-
Gao-Zhang [143]), string vibration (Gao [37]), and spring vibration (Jia-Dai
[63]).
For further explorations on the development and applications of uncertain
differential equation, the interested reader may consult Yao’s book [184].
Chapter 15

Uncertain Finance

This chapter will introduce uncertain stock model, uncertain interest rate
model, and uncertain currency model by using the tool of uncertain differen-
tial equation.

15.1 Uncertain Stock Model

In 2009 Liu [80] first supposed that the stock price follows an uncertain
differential equation and presented an uncertain stock model in which the
bond price Xt and the stock price Yt are determined by
dXt = rXt dt,
dYt = eYt dt + σYt dCt (15.1)

where r is the riskless interest rate, e is the log-drift, σ is the log-diffusion,


and Ct is a Liu process. Note that the bond price is Xt = X0 exp(rt) and
the stock price is
Yt = Y0 exp(et + σCt ) (15.2)

whose inverse uncertainty distribution is


Φt−1 (α) = Y0 exp(et + (σt√3/π) ln(α/(1 − α))). (15.3)

15.2 European Options

This section will price European call and put options for the financial market
determined by the uncertain stock model (15.1).

European Call Option


Definition 15.1 A European call option is a contract that gives the holder
the right to buy a stock at an expiration time s for a strike price K.

Let fc represent the price of this contract. Then the investor pays fc for
buying the contract at time 0, and has a payoff (Ys − K)+ at time s since
the option is rationally exercised if and only if Ys > K. Considering the time
value of money resulting from the bond, the present value of the payoff is
exp(−rs)(Ys − K)+ . Thus the net return of the investor at time 0 is

− fc + exp(−rs)(Ys − K)+ . (15.4)

On the other hand, the bank receives fc for selling the contract at time 0,
and pays (Ys − K)+ at the expiration time s. Thus the net return of the
bank at the time 0 is

fc − exp(−rs)(Ys − K)+ . (15.5)

The fair price of this contract should make the investor and the bank have an
identical expected return (we will call it fair price principle hereafter), i.e.,

− fc + exp(−rs)E[(Ys − K)+ ] = fc − exp(−rs)E[(Ys − K)+ ]. (15.6)

Thus fc = exp(−rs)E[(Ys − K)+ ]. That is, the European call option price is
just the expected present value of the payoff.

Definition 15.2 (Liu [80]) Assume a European call option has a strike price
K and an expiration time s. Then the European call option price is

fc = exp(−rs)E[(Ys − K)+ ]. (15.7)

[Figure 15.1: Payoff (Ys − K)+ from European Call Option]



Theorem 15.1 (Liu [80]) Assume a European call option for the uncertain
stock model (15.1) has a strike price K and an expiration time s. Then the
European call option price is

fc = exp(−rs) ∫_0^1 (Y0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+ dα. (15.8)

Proof: Since (Ys − K)+ is an increasing function with respect to Ys , it has an inverse uncertainty distribution

Ψs−1 (α) = (Y0 exp(es + (σs√3/π) ln(α/(1 − α))) − K)+ .

It follows from Definition 15.2 that the European call option price formula is just (15.8).

Remark 15.1: It is clear that the European call option price is a decreasing
function of interest rate r. That is, the European call option will devaluate
if the interest rate is raised; and the European call option will appreciate in
value if the interest rate is reduced. In addition, the European call option
price is also a decreasing function of the strike price K.

Example 15.1: Assume the interest rate r = 0.08, the log-drift e =


0.06, the log-diffusion σ = 0.32, the initial price Y0 = 20, the strike price
K = 25 and the expiration time s = 2. The Matlab Uncertainty Toolbox
(http://orsc.edu.cn/liu/resources.htm) yields the European call option price
fc = 6.91.
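The price formula (15.8) is a one-dimensional integral over α and is easy to evaluate numerically. A minimal Python sketch with the parameters of Example 15.1; the function name and the midpoint rule are illustrative, and the put option of the next subsection only changes the payoff to max(K − y, 0).

import math

def european_call(r, e, sigma, y0, K, s, n=99999):
    # Theorem 15.1: fc = exp(-r*s) times the integral over alpha
    # of the positive part of (Y_s^alpha - K)
    total = 0.0
    for i in range(n):
        alpha = (i + 0.5) / n
        y_s = y0 * math.exp(e * s + (sigma * s * math.sqrt(3) / math.pi)
                            * math.log(alpha / (1 - alpha)))
        total += max(y_s - K, 0.0)
    return math.exp(-r * s) * total / n

print(european_call(r=0.08, e=0.06, sigma=0.32, y0=20.0, K=25.0, s=2.0))
# should be close to the reported price fc = 6.91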

European Put Option


Definition 15.3 A European put option is a contract that gives the holder
the right to sell a stock at an expiration time s for a strike price K.

Let fp represent the price of this contract. Then the investor pays fp for
buying the contract at time 0, and has a payoff (K − Ys )+ at time s since
the option is rationally exercised if and only if Ys < K. Considering the time
value of money resulting from the bond, the present value of the payoff is
exp(−rs)(K − Ys )+ . Thus the net return of the investor at time 0 is

− fp + exp(−rs)(K − Ys )+ . (15.9)

On the other hand, the bank receives fp for selling the contract at time 0,
and pays (K − Ys )+ at the expiration time s. Thus the net return of the
bank at the time 0 is

fp − exp(−rs)(K − Ys )+ . (15.10)

The fair price of this contract should make the investor and the bank have
an identical expected return, i.e.,

− fp + exp(−rs)E[(K − Ys )+ ] = fp − exp(−rs)E[(K − Ys )+ ]. (15.11)

Thus fp = exp(−rs)E[(K − Ys )+ ]. That is, the European put option price is


just the expected present value of the payoff.

Definition 15.4 (Liu [80]) Assume a European put option has a strike price
K and an expiration time s. Then the European put option price is

fp = exp(−rs)E[(K − Ys )+ ]. (15.12)

Theorem 15.2 (Liu [80]) Assume a European put option for the uncertain
stock model (15.1) has a strike price K and an expiration time s. Then the
European put option price is

fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1 − α))))+ dα. (15.13)

Proof: Since (K − Ys )+ is a decreasing function with respect to Ys , it has an inverse uncertainty distribution

Ψs−1 (α) = (K − Y0 exp(es + (σs√3/π) ln((1 − α)/α)))+ .

It follows from Definition 15.4 that the European put option price is

fp = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln((1 − α)/α)))+ dα
   = exp(−rs) ∫_0^1 (K − Y0 exp(es + (σs√3/π) ln(α/(1 − α))))+ dα.

The European put option price formula is verified.

Remark 15.2: It is easy to verify that the option price is a decreasing


function of the interest rate r, and is an increasing function of the strike
price K.

Example 15.2: Assume the interest rate r = 0.08, the log-drift e =


0.06, the log-diffusion σ = 0.32, the initial price Y0 = 20, the strike price
K = 25 and the expiration time s = 2. The Matlab Uncertainty Toolbox
(http://orsc.edu.cn/liu/resources.htm) yields the European put option price
fp = 4.40.

15.3 American Options


This section will price American call and put options for the financial market
determined by the uncertain stock model (15.1).

American Call Option


Definition 15.5 An American call option is a contract that gives the holder
the right to buy a stock at any time prior to an expiration time s for a strike
price K.
Let fc represent the price of this contract. Then the investor pays fc for buying the contract at time 0, and has a present value of the payoff,

sup_{0≤t≤s} exp(−rt)(Yt − K)+ . (15.14)

Thus the net return of the investor at time 0 is

− fc + sup_{0≤t≤s} exp(−rt)(Yt − K)+ . (15.15)

On the other hand, the bank receives fc for selling the contract at time 0, and pays

sup_{0≤t≤s} exp(−rt)(Yt − K)+ . (15.16)

Thus the net return of the bank at the time 0 is

fc − sup_{0≤t≤s} exp(−rt)(Yt − K)+ . (15.17)

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

−fc + E[sup_{0≤t≤s} exp(−rt)(Yt − K)+ ] = fc − E[sup_{0≤t≤s} exp(−rt)(Yt − K)+ ].

Thus the American call option price is just the expected present value of the payoff.

Definition 15.6 (Chen [6]) Assume an American call option has a strike price K and an expiration time s. Then the American call option price is

fc = E[sup_{0≤t≤s} exp(−rt)(Yt − K)+ ]. (15.18)

Theorem 15.3 (Chen [6]) Assume an American call option for the uncer-
tain stock model (15.1) has a strike price K and an expiration time s. Then
the American call option price is

fc = ∫_0^1 sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1 − α))) − K)+ dα.

Proof: It follows from Theorem 14.13 that sup_{0≤t≤s} exp(−rt)(Yt − K)+ has an inverse uncertainty distribution

Ψs−1 (α) = sup_{0≤t≤s} exp(−rt)(Y0 exp(et + (σt√3/π) ln(α/(1 − α))) − K)+ .
Hence the American call option price formula follows from Definition 15.6
immediately.

Remark 15.3: It is easy to verify that the option price is a decreasing


function with respect to either the interest rate r or the strike price K.

Example 15.3: Assume the interest rate r = 0.08, the log-drift e =


0.06, the log-diffusion σ = 0.32, the initial price Y0 = 40, the strike price
K = 38 and the expiration time s = 2. The Matlab Uncertainty Toolbox
(http://orsc.edu.cn/liu/resources.htm) yields the American call option price
fc = 19.8.
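Numerically, the American call price replaces the terminal payoff by the supremum of the discounted payoff along each α-path. A minimal Python sketch with the parameters of Example 15.3 (grid sizes and function name are illustrative choices):

import math

def american_call(r, e, sigma, y0, K, s, n_alpha=999, n_t=500):
    # Theorem 15.3: integrate over alpha the sup over a time grid of
    # the discounted payoff along the alpha-path
    total = 0.0
    for i in range(n_alpha):
        alpha = (i + 0.5) / n_alpha
        phi_inv = (math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))
        best = 0.0
        for j in range(n_t + 1):
            t = j * s / n_t
            y_t = y0 * math.exp(e * t + sigma * t * phi_inv)
            best = max(best, math.exp(-r * t) * max(y_t - K, 0.0))
        total += best
    return total / n_alpha

print(american_call(r=0.08, e=0.06, sigma=0.32, y0=40.0, K=38.0, s=2.0))
# should be close to the reported price fc = 19.8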

American Put Option


Definition 15.7 An American put option is a contract that gives the holder
the right to sell a stock at any time prior to an expiration time s for a strike
price K.
Let fp represent the price of this contract. Then the investor pays fp for
buying the contract at time 0, and has a present value of the payoff,
$$\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+. \qquad (15.19)$$

Thus the net return of the investor at time 0 is

$$-f_p+\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+. \qquad (15.20)$$

On the other hand, the bank receives fp for selling the contract at time 0, and pays

$$\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+. \qquad (15.21)$$

Thus the net return of the bank at time 0 is

$$f_p-\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+. \qquad (15.22)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f_p+E\left[\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+\right]=f_p-E\left[\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+\right].$$

Thus the American put option price is just the expected present value of the
payoff.

Definition 15.8 (Chen [6]) Assume an American put option has a strike
price K and an expiration time s. Then the American put option price is

$$f_p=E\left[\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+\right]. \qquad (15.23)$$

Theorem 15.4 (Chen [6]) Assume an American put option for the uncer-
tain stock model (15.1) has a strike price K and an expiration time s. Then
the American put option price is

$$f_p=\int_0^1\sup_{0\le t\le s}\exp(-rt)\left(K-Y_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+d\alpha.$$

Proof: It follows from Theorem 14.14 that $\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+$ has an inverse uncertainty distribution

$$\Psi_s^{-1}(\alpha)=\sup_{0\le t\le s}\exp(-rt)\left(K-Y_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{1-\alpha}{\alpha}\right)\right)^+.$$

Hence the American put option price formula follows from Definition 15.8
immediately.

Remark 15.4: It is easy to verify that the option price is a decreasing function of the interest rate r, and is an increasing function of the strike price K.

Example 15.4: Assume the interest rate r = 0.08, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial price Y0 = 40, the strike price K = 38 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the American put option price fp = 3.90.

15.4 Asian Options


This section will price Asian call and put options for the financial market
determined by the uncertain stock model (15.1).

Asian Call Option


Definition 15.9 An Asian call option is a contract whose payoff at the expiration time s is

$$\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+ \qquad (15.24)$$

where K is a strike price.

Let fc represent the price of this contract. Then the investor pays fc for buying the contract at time 0, and has a payoff

$$\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+ \qquad (15.25)$$

at time s. Considering the time value of money resulting from the bond, the present value of the payoff is

$$\exp(-rs)\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+. \qquad (15.26)$$

Thus the net return of the investor at time 0 is

$$-f_c+\exp(-rs)\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+. \qquad (15.27)$$

On the other hand, the bank receives fc for selling the contract at time 0, and pays

$$\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+ \qquad (15.28)$$

at the expiration time s. Thus the net return of the bank at time 0 is

$$f_c-\exp(-rs)\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+. \qquad (15.29)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f_c+\exp(-rs)E\left[\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+\right]=f_c-\exp(-rs)E\left[\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+\right]. \qquad (15.30)$$
Thus the Asian call option price is just the expected present value of the
payoff.
Definition 15.10 (Sun-Chen [144]) Assume an Asian call option has a strike price K and an expiration time s. Then the Asian call option price is

$$f_c=\exp(-rs)E\left[\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+\right]. \qquad (15.31)$$

Theorem 15.5 (Sun-Chen [144]) Assume an Asian call option for the un-
certain stock model (15.1) has a strike price K and an expiration time s.
Then the Asian call option price is

$$f_c=\exp(-rs)\int_0^1\left(\frac{Y_0}{s}\int_0^s\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)dt-K\right)^+d\alpha.$$

Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral

$$\int_0^s Y_t\,dt$$

is

$$\Psi_s^{-1}(\alpha)=Y_0\int_0^s\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)dt.$$

Hence the Asian call option price formula follows from Definition 15.10 im-
mediately.
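The double integral in Theorem 15.5 can be approximated with two nested midpoint rules. The sketch below is our own illustration; the parameter values in the sample call are taken from Example 15.3 and are merely for demonstration.

from math import exp, log, pi, sqrt

def asian_call(r, e, sigma, Y0, K, s, n_alpha=1000, n_t=1000):
    # f_c = exp(-r s) * integral_0^1 ((Y0/s) integral_0^s exp(c t) dt - K)^+ dalpha,
    # with c = e + (sigma sqrt(3)/pi) ln(alpha/(1-alpha))
    total = 0.0
    for i in range(1, n_alpha + 1):
        a = (i - 0.5) / n_alpha
        c = e + sigma * sqrt(3) / pi * log(a / (1 - a))
        # inner time integral by the midpoint rule
        inner = sum(exp(c * s * (j - 0.5) / n_t) for j in range(1, n_t + 1)) * s / n_t
        total += max(Y0 / s * inner - K, 0.0)
    return exp(-r * s) * total / n_alpha

price = asian_call(0.08, 0.06, 0.32, 40, 38, 2)  # illustrative parameters only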

Asian Put Option


Definition 15.11 An Asian put option is a contract whose payoff at the expiration time s is

$$\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+ \qquad (15.32)$$

where K is a strike price.

Let fp represent the price of this contract. Then the investor pays fp for buying the contract at time 0, and has a payoff

$$\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+ \qquad (15.33)$$

at time s. Considering the time value of money resulting from the bond, the present value of the payoff is

$$\exp(-rs)\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+. \qquad (15.34)$$

Thus the net return of the investor at time 0 is

$$-f_p+\exp(-rs)\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+. \qquad (15.35)$$

On the other hand, the bank receives fp for selling the contract at time 0, and pays

$$\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+ \qquad (15.36)$$

at the expiration time s. Thus the net return of the bank at time 0 is

$$f_p-\exp(-rs)\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+. \qquad (15.37)$$
s 0

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f_p+\exp(-rs)E\left[\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+\right]=f_p-\exp(-rs)E\left[\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+\right]. \qquad (15.38)$$

Thus the Asian put option price should be the expected present value of the
payoff.
Definition 15.12 (Sun-Chen [144]) Assume an Asian put option has a strike price K and an expiration time s. Then the Asian put option price is

$$f_p=\exp(-rs)E\left[\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+\right]. \qquad (15.39)$$

Theorem 15.6 (Sun-Chen [144]) Assume an Asian put option for the un-
certain stock model (15.1) has a strike price K and an expiration time s.
Then the Asian put option price is

$$f_p=\exp(-rs)\int_0^1\left(K-\frac{Y_0}{s}\int_0^s\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)dt\right)^+d\alpha.$$

Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral

$$\int_0^s Y_t\,dt$$

is

$$\Psi_s^{-1}(\alpha)=Y_0\int_0^s\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)dt.$$

Hence the Asian put option price formula follows from Definition 15.12 immediately.

15.5 General Stock Model


Generally, we may assume the stock price follows a general uncertain differ-
ential equation and obtain a general stock model in which the bond price Xt
and the stock price Yt are determined by
$$\begin{cases}dX_t=rX_t\,dt\\ dY_t=F(t,Y_t)\,dt+G(t,Y_t)\,dC_t\end{cases} \qquad (15.40)$$
where r is the riskless interest rate, F and G are two functions, and Ct is a
Liu process.

Theorem 15.7 (Liu [95]) Assume a European option for the uncertain stock
model (15.40) has a strike price K and an expiration time s. Then the
European call option price is

$$f_c=\exp(-rs)\int_0^1(Y_s^\alpha-K)^+\,d\alpha \qquad (15.41)$$

and the European put option price is

$$f_p=\exp(-rs)\int_0^1(K-Y_s^\alpha)^+\,d\alpha \qquad (15.42)$$

where Ysα is the α-path of the corresponding uncertain differential equation.


Proof: It follows from the fair price principle that the European call option price is

$$f_c=\exp(-rs)E[(Y_s-K)^+]. \qquad (15.43)$$

By using Theorem 14.12, we get the formula (15.41). Similarly, it follows from the fair price principle that the European put option price is

$$f_p=\exp(-rs)E[(K-Y_s)^+]. \qquad (15.44)$$

By using Theorem 14.12, we get the formula (15.42).
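Theorem 15.7 reduces pricing under the general model (15.40) to computing the α-paths Y_s^α. A minimal numerical sketch, assuming the Yao-Chen characterization that the α-path solves the ordinary differential equation dY_t^α = F(t, Y_t^α)dt + |G(t, Y_t^α)|Φ^{-1}(α)dt with Φ^{-1}(α) = (√3/π)ln(α/(1−α)), is given below; the Euler scheme and grid sizes are our own choices.

from math import exp, log, pi, sqrt

def alpha_path(F, G, Y0, s, alpha, n=1000):
    # Euler scheme for the alpha-path ODE dY = (F(t, Y) + |G(t, Y)| * phi_inv) dt
    phi_inv = sqrt(3) / pi * log(alpha / (1 - alpha))
    dt, y, t = s / n, Y0, 0.0
    for _ in range(n):
        y += (F(t, y) + abs(G(t, y)) * phi_inv) * dt
        t += dt
    return y

def european_call_general(F, G, Y0, K, r, s, n_alpha=99):
    # formula (15.41): f_c = exp(-r s) * integral_0^1 (Y_s^alpha - K)^+ dalpha
    total = sum(max(alpha_path(F, G, Y0, s, (i - 0.5) / n_alpha) - K, 0.0)
                for i in range(1, n_alpha + 1))
    return exp(-r * s) * total / n_alpha

# with F(t, y) = e*y and G(t, y) = sigma*y this recovers the stock model (15.1);
# the numbers below are illustrative only
price = european_call_general(lambda t, y: 0.06 * y, lambda t, y: 0.32 * y,
                              Y0=40, K=38, r=0.08, s=2)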
Theorem 15.8 (Liu [95]) Assume an American option for the uncertain
stock model (15.40) has a strike price K and an expiration time s. Then the
American call option price is

$$f_c=\int_0^1\sup_{0\le t\le s}\exp(-rt)(Y_t^\alpha-K)^+\,d\alpha \qquad (15.45)$$

and the American put option price is

$$f_p=\int_0^1\sup_{0\le t\le s}\exp(-rt)(K-Y_t^\alpha)^+\,d\alpha \qquad (15.46)$$

where Ytα is the α-path of the corresponding uncertain differential equation.


Proof: It follows from the fair price principle that the American call option price is

$$f_c=E\left[\sup_{0\le t\le s}\exp(-rt)(Y_t-K)^+\right]. \qquad (15.47)$$

By using Theorem 14.13, we get the formula (15.45). Similarly, it follows from the fair price principle that the American put option price is

$$f_p=E\left[\sup_{0\le t\le s}\exp(-rt)(K-Y_t)^+\right]. \qquad (15.48)$$

By using Theorem 14.14, we get the formula (15.46).



Theorem 15.9 (Liu [95]) Assume an Asian option for the uncertain stock
model (15.40) has a strike price K and an expiration time s. Then the Asian
call option price is

$$f_c=\exp(-rs)\int_0^1\left(\frac{1}{s}\int_0^s Y_t^\alpha\,dt-K\right)^+d\alpha \qquad (15.49)$$

and the Asian put option price is

$$f_p=\exp(-rs)\int_0^1\left(K-\frac{1}{s}\int_0^s Y_t^\alpha\,dt\right)^+d\alpha \qquad (15.50)$$

where Ytα is the α-path of the corresponding uncertain differential equation.

Proof: It follows from the fair price principle that the Asian call option price is

$$f_c=\exp(-rs)E\left[\left(\frac{1}{s}\int_0^s Y_t\,dt-K\right)^+\right]. \qquad (15.51)$$

By using Theorem 14.17, we get the formula (15.49). Similarly, it follows from the fair price principle that the Asian put option price is

$$f_p=\exp(-rs)E\left[\left(K-\frac{1}{s}\int_0^s Y_t\,dt\right)^+\right]. \qquad (15.52)$$

By using Theorem 14.18, we get the formula (15.50).

15.6 Multifactor Stock Model


Now we assume that there are multiple stocks whose prices are determined
by multiple Liu processes. In this case, we have a multifactor stock model in
which the bond price Xt and the stock prices Yit are determined by

$$\begin{cases}dX_t=rX_t\,dt\\ dY_{it}=e_iY_{it}\,dt+\displaystyle\sum_{j=1}^n\sigma_{ij}Y_{it}\,dC_{jt}, & i=1,2,\cdots,m\end{cases} \qquad (15.53)$$

where r is the riskless interest rate, ei are the log-drifts, σij are the log-
diffusions, and Cjt are independent Liu processes, i = 1, 2, · · · , m, j =
1, 2, · · · , n.

Portfolio Selection
For the multifactor stock model (15.53), we have the choice of m + 1 different investments. At each time t we may choose a portfolio (βt, β1t, · · · , βmt) (i.e., the investment fractions meeting βt + β1t + · · · + βmt = 1). Then the wealth Zt at time t should follow the uncertain differential equation

$$dZ_t=r\beta_tZ_t\,dt+\sum_{i=1}^m e_i\beta_{it}Z_t\,dt+\sum_{i=1}^m\sum_{j=1}^n\sigma_{ij}\beta_{it}Z_t\,dC_{jt}. \qquad (15.54)$$

That is,

$$Z_t=Z_0\exp(rt)\exp\left(\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds+\sum_{j=1}^n\int_0^t\sum_{i=1}^m\sigma_{ij}\beta_{is}\,dC_{js}\right).$$

The portfolio selection problem is to find an optimal portfolio (βt, β1t, · · · , βmt) such that the wealth Zs is maximized in the sense of expected value.

No-Arbitrage
The stock model (15.53) is said to be no-arbitrage if there is no portfolio
(βt , β1t , · · · , βmt ) such that for some time s > 0, we have

M{exp(−rs)Zs ≥ Z0 } = 1 (15.55)

and
M{exp(−rs)Zs > Z0 } > 0 (15.56)
where Zt is determined by (15.54) and represents the wealth at time t.

Theorem 15.10 (Yao’s No-Arbitrage Theorem [176]) The multifactor stock


model (15.53) is no-arbitrage if and only if the system of linear equations

$$\begin{pmatrix}\sigma_{11}&\sigma_{12}&\cdots&\sigma_{1n}\\ \sigma_{21}&\sigma_{22}&\cdots&\sigma_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ \sigma_{m1}&\sigma_{m2}&\cdots&\sigma_{mn}\end{pmatrix}\begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}=\begin{pmatrix}e_1-r\\e_2-r\\\vdots\\e_m-r\end{pmatrix} \qquad (15.57)$$

has a solution, i.e., (e1 −r, e2 −r, · · · , em −r) is a linear combination of column
vectors (σ11 , σ21 , · · · , σm1 ), (σ12 , σ22 , · · · , σm2 ), · · · , (σ1n , σ2n , · · · , σmn ).

Proof: When the portfolio (βt, β1t, · · · , βmt) is accepted, the wealth at each time t is

$$Z_t=Z_0\exp(rt)\exp\left(\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds+\sum_{j=1}^n\int_0^t\sum_{i=1}^m\sigma_{ij}\beta_{is}\,dC_{js}\right).$$

Thus

$$\ln(\exp(-rt)Z_t)-\ln Z_0=\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds+\sum_{j=1}^n\int_0^t\sum_{i=1}^m\sigma_{ij}\beta_{is}\,dC_{js}$$

is a normal uncertain variable with expected value

$$\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds$$

and variance

$$\left(\sum_{j=1}^n\int_0^t\left|\sum_{i=1}^m\sigma_{ij}\beta_{is}\right|ds\right)^2.$$

Assume the system (15.57) has a solution. The argument breaks down into two cases. Case I: for any given time t and portfolio (βt, β1t, · · · , βmt), suppose

$$\sum_{j=1}^n\int_0^t\left|\sum_{i=1}^m\sigma_{ij}\beta_{is}\right|ds=0.$$

Then

$$\sum_{i=1}^m\sigma_{ij}\beta_{is}=0,\quad j=1,2,\cdots,n,\ s\in(0,t].$$

Since the system (15.57) has a solution, we have

$$\sum_{i=1}^m(e_i-r)\beta_{is}=0,\quad s\in(0,t]$$

and

$$\int_0^t\sum_{i=1}^m(e_i-r)\beta_{is}\,ds=0.$$

This fact implies that

$$\ln(\exp(-rt)Z_t)-\ln Z_0=0$$

and

$$\mathcal{M}\{\exp(-rt)Z_t>Z_0\}=0.$$

That is, the stock model (15.53) is no-arbitrage. Case II: for any given time t and portfolio (βt, β1t, · · · , βmt), suppose

$$\sum_{j=1}^n\int_0^t\left|\sum_{i=1}^m\sigma_{ij}\beta_{is}\right|ds\ne 0.$$

Then ln(exp(−rt)Zt) − ln Z0 is a normal uncertain variable with nonzero variance and

$$\mathcal{M}\{\ln(\exp(-rt)Z_t)-\ln Z_0\ge 0\}<1.$$

That is,

$$\mathcal{M}\{\exp(-rt)Z_t\ge Z_0\}<1$$

and the multifactor stock model (15.53) is no-arbitrage.


Conversely, assume the system (15.57) has no solution. Then there exist real numbers α1, α2, · · · , αm such that

$$\sum_{i=1}^m\sigma_{ij}\alpha_i=0,\quad j=1,2,\cdots,n$$

and

$$\sum_{i=1}^m(e_i-r)\alpha_i>0.$$

Now we take a portfolio

$$(\beta_t,\beta_{1t},\cdots,\beta_{mt})\equiv(1-(\alpha_1+\alpha_2+\cdots+\alpha_m),\alpha_1,\alpha_2,\cdots,\alpha_m).$$

Then

$$\ln(\exp(-rt)Z_t)-\ln Z_0=\int_0^t\sum_{i=1}^m(e_i-r)\alpha_i\,ds>0.$$

Thus we have

$$\mathcal{M}\{\exp(-rt)Z_t>Z_0\}=1.$$
Hence the multifactor stock model (15.53) is arbitrage. The theorem is thus
proved.
Theorem 15.11 The multifactor stock model (15.53) is no-arbitrage if its
log-diffusion matrix

$$\begin{pmatrix}\sigma_{11}&\sigma_{12}&\cdots&\sigma_{1n}\\ \sigma_{21}&\sigma_{22}&\cdots&\sigma_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ \sigma_{m1}&\sigma_{m2}&\cdots&\sigma_{mn}\end{pmatrix} \qquad (15.58)$$
has rank m, i.e., the row vectors are linearly independent.
Proof: If the log-diffusion matrix (15.58) has rank m, then the system of
equations (15.57) has a solution. It follows from Theorem 15.10 that the
multifactor stock model (15.53) is no-arbitrage.
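The condition of Theorem 15.10 can be checked numerically: the model is no-arbitrage if and only if (e1 − r, e2 − r, · · · , em − r) lies in the column space of the log-diffusion matrix. A small sketch with NumPy follows; the two-stock data are hypothetical.

import numpy as np

def is_no_arbitrage(sigma, e, r, tol=1e-10):
    # Theorem 15.10: no-arbitrage iff sigma x = e - r has a solution, i.e.
    # appending the column e - r does not raise the rank of sigma
    sigma = np.asarray(sigma, dtype=float)
    b = np.asarray(e, dtype=float) - r
    return np.linalg.matrix_rank(np.column_stack([sigma, b]), tol) == \
           np.linalg.matrix_rank(sigma, tol)

# hypothetical two-stock, two-factor market; the matrix has full row rank,
# so Theorem 15.11 also applies and the answer is True
print(is_no_arbitrage([[0.3, 0.1], [0.2, 0.4]], e=[0.06, 0.05], r=0.04))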
Theorem 15.12 The multifactor stock model (15.53) is no-arbitrage if its
log-drifts are all equal to the interest rate r, i.e.,
ei = r, i = 1, 2, · · · , m. (15.59)
Proof: Since the log-drifts ei = r for any i = 1, 2, · · · , m, we immediately
have
(e1 − r, e2 − r, · · · , em − r) ≡ (0, 0, · · · , 0)
that is a linear combination of (σ11 , σ21 , · · · , σm1 ), (σ12 , σ22 , · · · , σm2 ), · · · ,
(σ1n , σ2n , · · · , σmn ). It follows from Theorem 15.10 that the multifactor stock
model (15.53) is no-arbitrage.

15.7 Uncertain Interest Rate Model


Real interest rates do not remain unchanged. Chen-Gao [14] assumed that
the interest rate follows an uncertain differential equation and presented an
uncertain interest rate model,
dXt = (m − aXt )dt + σdCt (15.60)
where m, a, σ are positive numbers. Besides, Jiao-Yao [64] investigated the
uncertain interest rate model,
$$dX_t=(m-aX_t)\,dt+\sigma\sqrt{X_t}\,dC_t. \qquad (15.61)$$
More generally, we may assume the interest rate Xt follows a general uncer-
tain differential equation and obtain a general interest rate model,
dXt = F (t, Xt )dt + G(t, Xt )dCt (15.62)
where F and G are two functions, and Ct is a Liu process.

Zero-Coupon Bond
A zero-coupon bond is a bond bought at a price lower than its face value, where the face value is the amount it promises to pay at the maturity date. For simplicity, we assume the face value is always 1 dollar.
Let f represent the price of this zero-coupon bond. Then the investor
pays f for buying it at time 0, and receives 1 dollar at the maturity date s.
Since the interest rate is Xt, the present value of 1 dollar is

$$\exp\left(-\int_0^s X_t\,dt\right). \qquad (15.63)$$

Thus the net return of the investor at time 0 is

$$-f+\exp\left(-\int_0^s X_t\,dt\right). \qquad (15.64)$$

On the other hand, the bank receives f for selling the zero-coupon bond at time 0, and pays 1 dollar at the maturity date s. Thus the net return of the bank at time 0 is

$$f-\exp\left(-\int_0^s X_t\,dt\right). \qquad (15.65)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f+E\left[\exp\left(-\int_0^s X_t\,dt\right)\right]=f-E\left[\exp\left(-\int_0^s X_t\,dt\right)\right]. \qquad (15.66)$$

Thus the price of the zero-coupon bond is just the expected present value of
its face value.

Definition 15.13 (Chen-Gao [14]) Let Xt be the uncertain interest rate. Then the price of a zero-coupon bond with a maturity date s is

$$f=E\left[\exp\left(-\int_0^s X_t\,dt\right)\right]. \qquad (15.67)$$

Theorem 15.13 (Jiao-Yao [64]) Assume the uncertain interest rate Xt fol-
lows the uncertain differential equation (15.62). Then the price of a zero-
coupon bond with maturity date s is

$$f=\int_0^1\exp\left(-\int_0^s X_t^\alpha\,dt\right)d\alpha \qquad (15.68)$$

where Xtα is the α-path of the corresponding uncertain differential equation.

Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral

$$\int_0^s X_t\,dt$$

is

$$\Psi_s^{-1}(\alpha)=\int_0^s X_t^\alpha\,dt.$$

Hence the price formula of the zero-coupon bond follows from Theorem 2.26 immediately.
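Formula (15.68) is straightforward to evaluate once the α-paths are available. A sketch for the Chen-Gao model (15.60) is given below, again assuming the Yao-Chen α-path equation, which here reads dX_t^α = (m − aX_t^α + σΦ^{-1}(α))dt since σ > 0; the grid sizes are our own choices.

from math import exp, log, pi, sqrt

def bond_price(m, a, sigma, X0, s, n_alpha=99, n_t=1000):
    # formula (15.68): f = integral_0^1 exp(-integral_0^s X_t^alpha dt) dalpha
    total = 0.0
    dt = s / n_t
    for i in range(1, n_alpha + 1):
        alpha = (i - 0.5) / n_alpha
        drift_shift = sigma * sqrt(3) / pi * log(alpha / (1 - alpha))
        x, integral = X0, 0.0
        for _ in range(n_t):
            integral += x * dt          # accumulates the time integral of X_t^alpha
            x += (m - a * x + drift_shift) * dt
        total += exp(-integral)
    return total / n_alpha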

Interest Rate Ceiling


An interest rate ceiling is a derivative contract in which the borrower will not
pay any more than a predetermined level of interest on his loan. Assume K
is the maximum interest rate and s is the maturity date. For simplicity, we
also assume the amount of loan is always 1 dollar.
Let f represent the price of this contract. Then the borrower pays f for buying the contract at time 0, and has a payoff

$$\exp\left(\int_0^s X_t\,dt\right)-\exp\left(\int_0^s X_t\wedge K\,dt\right) \qquad (15.69)$$

at the maturity date s. Considering the time value of money, the present value of the payoff is

$$\exp\left(-\int_0^s X_t\,dt\right)\left(\exp\left(\int_0^s X_t\,dt\right)-\exp\left(\int_0^s X_t\wedge K\,dt\right)\right)$$
$$=1-\exp\left(-\int_0^s X_t\,dt+\int_0^s X_t\wedge K\,dt\right)$$
$$=1-\exp\left(-\int_0^s(X_t-K)^+\,dt\right).$$

Thus the net return of the borrower at time 0 is

$$-f+1-\exp\left(-\int_0^s(X_t-K)^+\,dt\right). \qquad (15.70)$$

Similarly, we may verify that the net return of the bank at time 0 is

$$f-1+\exp\left(-\int_0^s(X_t-K)^+\,dt\right). \qquad (15.71)$$

The fair price of this contract should make the borrower and the bank have an identical expected return, i.e.,

$$-f+1-E\left[\exp\left(-\int_0^s(X_t-K)^+\,dt\right)\right]=f-1+E\left[\exp\left(-\int_0^s(X_t-K)^+\,dt\right)\right].$$

Thus we have the following definition of the price of interest rate ceiling.

Definition 15.14 (Zhang-Ralescu-Liu [205]) Assume an interest rate ceiling has a maximum interest rate K and a maturity date s. Then the price of the interest rate ceiling is

$$f=1-E\left[\exp\left(-\int_0^s(X_t-K)^+\,dt\right)\right]. \qquad (15.72)$$

Theorem 15.14 (Zhang-Ralescu-Liu [205]) Assume the uncertain interest rate Xt follows the uncertain differential equation (15.62). Then the price of the interest rate ceiling with a maximum interest rate K and a maturity date s is

$$f=1-\int_0^1\exp\left(-\int_0^s(X_t^\alpha-K)^+\,dt\right)d\alpha \qquad (15.73)$$

where $X_t^\alpha$ is the α-path of the corresponding uncertain differential equation.

Proof: It follows from Theorem 14.17 that the inverse uncertainty distribution of the time integral

$$\int_0^s(X_t-K)^+\,dt$$

is

$$\Psi_s^{-1}(\alpha)=\int_0^s(X_t^\alpha-K)^+\,dt.$$

Hence the price formula of the interest rate ceiling follows from Theorem 2.26 immediately.

Interest Rate Floor


An interest rate floor is a derivative contract in which the investor will not
receive any less than a predetermined level of interest on his investment.
Assume K is the minimum interest rate and s is the maturity date. For
simplicity, we also assume the amount of investment is always 1 dollar.
Let f represent the price of this contract. Then the investor pays f for buying the contract at time 0, and has a payoff

$$\exp\left(\int_0^s X_t\vee K\,dt\right)-\exp\left(\int_0^s X_t\,dt\right) \qquad (15.74)$$

at the maturity date s. Considering the time value of money, the present value of the payoff is

$$\exp\left(-\int_0^s X_t\,dt\right)\left(\exp\left(\int_0^s X_t\vee K\,dt\right)-\exp\left(\int_0^s X_t\,dt\right)\right)$$
$$=\exp\left(-\int_0^s X_t\,dt+\int_0^s X_t\vee K\,dt\right)-1$$
$$=\exp\left(\int_0^s(K-X_t)^+\,dt\right)-1.$$

Thus the net return of the investor at time 0 is

$$-f+\exp\left(\int_0^s(K-X_t)^+\,dt\right)-1. \qquad (15.75)$$

Similarly, we may verify that the net return of the bank at time 0 is

$$f-\exp\left(\int_0^s(K-X_t)^+\,dt\right)+1. \qquad (15.76)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f+E\left[\exp\left(\int_0^s(K-X_t)^+\,dt\right)\right]-1=f-E\left[\exp\left(\int_0^s(K-X_t)^+\,dt\right)\right]+1.$$
0 0

Thus we have the following definition of the price of interest rate floor.

Definition 15.15 (Zhang-Ralescu-Liu [205]) Assume an interest rate floor has a minimum interest rate K and a maturity date s. Then the price of the interest rate floor is

$$f=E\left[\exp\left(\int_0^s(K-X_t)^+\,dt\right)\right]-1. \qquad (15.77)$$
0

Theorem 15.15 (Zhang-Ralescu-Liu [205]) Assume the uncertain interest rate Xt follows the uncertain differential equation (15.62). Then the price of the interest rate floor with a minimum interest rate K and a maturity date s is

$$f=\int_0^1\exp\left(\int_0^s(K-X_t^\alpha)^+\,dt\right)d\alpha-1 \qquad (15.78)$$

where $X_t^\alpha$ is the α-path of the corresponding uncertain differential equation.

Proof: It follows from Theorem 14.18 that the inverse uncertainty distribution of the time integral

$$\int_0^s(K-X_t)^+\,dt$$

is

$$\Psi_s^{-1}(\alpha)=\int_0^s(K-X_t^{1-\alpha})^+\,dt.$$

Hence the price formula of the interest rate floor follows from Theorem 2.26 immediately.

15.8 Uncertain Currency Model


Liu-Chen-Ralescu [109] assumed that the exchange rate follows an uncertain
differential equation and proposed an uncertain currency model,

$$\begin{cases}dX_t=uX_t\,dt & \text{(Domestic Currency)}\\ dY_t=vY_t\,dt & \text{(Foreign Currency)}\\ dZ_t=eZ_t\,dt+\sigma Z_t\,dC_t & \text{(Exchange Rate)}\end{cases} \qquad (15.79)$$

where Xt represents the domestic currency with domestic interest rate u, Yt represents the foreign currency with foreign interest rate v, and Zt represents the exchange rate, that is, the domestic currency price of one unit of foreign currency at time t. Note that the domestic currency price is Xt = X0 exp(ut), the foreign currency price is Yt = Y0 exp(vt), and the exchange rate is

Zt = Z0 exp(et + σCt ) (15.80)

whose inverse uncertainty distribution is

$$\Phi_t^{-1}(\alpha)=Z_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right). \qquad (15.81)$$

European Currency Option


Definition 15.16 A European currency option is a contract that gives the
holder the right to exchange one unit of foreign currency at an expiration
time s for K units of domestic currency.

Suppose that the price of this contract is f in domestic currency. Then the investor pays f for buying the contract at time 0, and receives (Zs − K)+ in domestic currency at the expiration time s. Thus the net return of the investor at time 0 is

$$-f+\exp(-us)(Z_s-K)^+. \qquad (15.82)$$

On the other hand, the bank receives f for selling the contract at time 0, and pays (1 − K/Zs)+ in foreign currency at the expiration time s. Thus the net return of the bank at time 0 is

$$f-\exp(-vs)Z_0(1-K/Z_s)^+. \qquad (15.83)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f+\exp(-us)E[(Z_s-K)^+]=f-\exp(-vs)Z_0E[(1-K/Z_s)^+]. \qquad (15.84)$$

Thus the European currency option price is given by the definition below.

Definition 15.17 (Liu-Chen-Ralescu [109]) Assume a European currency option has a strike price K and an expiration time s. Then the European currency option price is

$$f=\frac{1}{2}\exp(-us)E[(Z_s-K)^+]+\frac{1}{2}\exp(-vs)Z_0E[(1-K/Z_s)^+]. \qquad (15.85)$$
Theorem 15.16 (Liu-Chen-Ralescu [109]) Assume a European currency op-
tion for the uncertain currency model (15.79) has a strike price K and an
expiration time s. Then the European currency option price is

$$f=\frac{1}{2}\exp(-us)\int_0^1\left(Z_0\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+d\alpha$$
$$\quad+\frac{1}{2}\exp(-vs)\int_0^1\left(Z_0-K\Big/\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+d\alpha.$$

Proof: Since (Zs − K)+ and Z0(1 − K/Zs)+ are increasing functions with respect to Zs, they have inverse uncertainty distributions

$$\Psi_s^{-1}(\alpha)=\left(Z_0\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+,$$

$$\Upsilon_s^{-1}(\alpha)=\left(Z_0-K\Big/\exp\left(es+\frac{\sigma s\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$

respectively. Thus the European currency option price formula follows from
Definition 15.17 immediately.

Remark 15.5: The European currency option price of the uncertain currency model (15.79) is a decreasing function of K, u and v.

Example 15.5: Assume the domestic interest rate u = 0.08, the foreign interest rate v = 0.07, the log-drift e = 0.06, the log-diffusion σ = 0.32, the initial exchange rate Z0 = 5, the strike price K = 6 and the expiration time s = 2. The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) yields the European currency option price f = 0.977.
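The two integrals in Theorem 15.16 can be approximated on an α grid just as before. The sketch below is our own illustration; the sample call uses the parameters of Example 15.5, and the grid size is an arbitrary choice.

from math import exp, log, pi, sqrt

def european_currency_option(u, v, e, sigma, Z0, K, s, n=9999):
    # Theorem 15.16: average of the investor-side and bank-side present values,
    # with ratio = exp(e s + (sigma s sqrt(3)/pi) ln(a/(1-a))) so that Zs = Z0 * ratio
    total = 0.0
    for i in range(1, n + 1):
        a = (i - 0.5) / n
        ratio = exp(e * s + sigma * s * sqrt(3) / pi * log(a / (1 - a)))
        total += 0.5 * exp(-u * s) * max(Z0 * ratio - K, 0.0) \
               + 0.5 * exp(-v * s) * max(Z0 - K / ratio, 0.0)
    return total / n

price = european_currency_option(0.08, 0.07, 0.06, 0.32, 5, 6, 2)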

American Currency Option


Definition 15.18 An American currency option is a contract that gives the
holder the right to exchange one unit of foreign currency at any time prior
to an expiration time s for K units of domestic currency.

Suppose that the price of this contract is f in domestic currency. Then the investor pays f for buying the contract, and receives

$$\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+ \qquad (15.86)$$

in domestic currency. Thus the net return of the investor at time 0 is

$$-f+\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+. \qquad (15.87)$$

On the other hand, the bank receives f for selling the contract, and pays

$$\sup_{0\le t\le s}\exp(-vt)(1-K/Z_t)^+ \qquad (15.88)$$

in foreign currency. Thus the net return of the bank at time 0 is

$$f-\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+. \qquad (15.89)$$

The fair price of this contract should make the investor and the bank have an identical expected return, i.e.,

$$-f+E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+\right]=f-E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+\right]. \qquad (15.90)$$

Thus the American currency option price is given by the definition below.

Definition 15.19 (Liu-Chen-Ralescu [109]) Assume an American currency option has a strike price K and an expiration time s. Then the American currency option price is

$$f=\frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+\right]+\frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+\right].$$
Theorem 15.17 (Liu-Chen-Ralescu [109]) Assume an American currency
option for the uncertain currency model (15.79) has a strike price K and an
expiration time s. Then the American currency option price is

$$f=\frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+d\alpha$$
$$\quad+\frac{1}{2}\int_0^1\sup_{0\le t\le s}\exp(-vt)\left(Z_0-K\Big/\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+d\alpha.$$

Proof: It follows from Theorem 14.13 that $\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+$ and $\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+$ have inverse uncertainty distributions

$$\Psi_s^{-1}(\alpha)=\sup_{0\le t\le s}\exp(-ut)\left(Z_0\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)-K\right)^+,$$

$$\Upsilon_s^{-1}(\alpha)=\sup_{0\le t\le s}\exp(-vt)\left(Z_0-K\Big/\exp\left(et+\frac{\sigma t\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}\right)\right)^+,$$
respectively. Thus the American currency option price formula follows from
Definition 15.19 immediately.

General Currency Model


If the exchange rate follows a general uncertain differential equation, then we
have a general currency model,

$$\begin{cases}dX_t=uX_t\,dt & \text{(Domestic Currency)}\\ dY_t=vY_t\,dt & \text{(Foreign Currency)}\\ dZ_t=F(t,Z_t)\,dt+G(t,Z_t)\,dC_t & \text{(Exchange Rate)}\end{cases} \qquad (15.91)$$

where u and v are interest rates, F and G are two functions, and Ct is a Liu
process.
Theorem 15.18 (Liu [95]) Assume a European currency option for the un-
certain currency model (15.91) has a strike price K and an expiration time
s. Then the European currency option price is

$$f=\frac{1}{2}\int_0^1\left(\exp(-us)(Z_s^\alpha-K)^++\exp(-vs)Z_0(1-K/Z_s^\alpha)^+\right)d\alpha \qquad (15.92)$$

where Ztα is the α-path of the corresponding uncertain differential equation.

Proof: It follows from the fair price principle that the European option price is

$$f=\frac{1}{2}\exp(-us)E[(Z_s-K)^+]+\frac{1}{2}\exp(-vs)Z_0E[(1-K/Z_s)^+]. \qquad (15.93)$$

By using Theorem 14.12, we get the equation (15.92).

Theorem 15.19 (Liu [95]) Assume an American currency option for the
uncertain currency model (15.91) has a strike price K and an expiration
time s. Then the American currency option price is

$$f=\frac{1}{2}\int_0^1\left(\sup_{0\le t\le s}\exp(-ut)(Z_t^\alpha-K)^++\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t^\alpha)^+\right)d\alpha$$

where Ztα is the α-path of the corresponding uncertain differential equation.

Proof: It follows from the fair price principle that the American option price is

$$f=\frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-ut)(Z_t-K)^+\right]+\frac{1}{2}E\left[\sup_{0\le t\le s}\exp(-vt)Z_0(1-K/Z_t)^+\right].$$

By using Theorem 14.13, we get the result.

15.9 Bibliographic Notes


Classical finance theory assumed that stock prices, interest rates, and exchange rates follow stochastic differential equations. However, this presumption was challenged among others by Liu [89], in which a convincing paradox was presented to show why a real stock price cannot follow any stochastic differential equation (see also Appendix C.9). As an alternative, Liu [89] suggested developing a theory of uncertain finance.
Uncertain differential equations were first introduced into finance by Liu
[80] in 2009 in which an uncertain stock model was proposed and European
option price formulas were provided. Besides, Chen [6] derived American
option price formulas, Sun-Chen [144] and Zhang-Liu [204] verified Asian
option price formulas, and Yao [176] proved a no-arbitrage theorem for this
type of uncertain stock model. It is emphasized that uncertain stock models
were also actively investigated among others by Peng-Yao [120], Yu [191],
Chen-Liu-Ralescu [12], Yao [181], and Ji-Zhou [62].
Uncertain differential equations were used to simulate floating interest
rate by Chen-Gao [14] in 2013. Following that, Jiao-Yao [64] presented a
price formula of zero-coupon bond, and Zhang-Ralescu-Liu [205] discussed
the valuation of interest rate ceiling and floor.

Uncertain differential equations were employed to model currency exchange rates by Liu-Chen-Ralescu [109] in 2015, in which some currency option price formulas were derived for uncertain currency markets. Afterwards, uncertain currency models were also actively investigated among others by Liu [95], Shen-Yao [136] and Wang-Ning [150].
For further explorations on the development of the theory of uncertain
finance, the interested reader may consult Chen’s book [18].
Chapter 16

Uncertain Statistics

The study of uncertain statistics was started by Liu [84] in 2010. It is a methodology for collecting and interpreting expert's experimental data by uncertainty theory. This chapter will design a questionnaire survey for collecting expert's experimental data, and introduce the linear interpolation method, the principle of least squares, the method of moments, and the Delphi method for determining uncertainty distributions and membership functions from the expert's experimental data. In addition, uncertain regression analysis and uncertain time series analysis are also documented in this chapter.

16.1 Expert’s Experimental Data


Uncertain statistics is based on expert’s experimental data rather than histor-
ical data. How do we obtain expert’s experimental data? Liu [84] proposed a
questionnaire survey for collecting expert’s experimental data. The starting
point is to invite one or more domain experts who are asked to complete
a questionnaire about the meaning of an uncertain variable ξ like “how far
from Beijing to Tianjin”.
We first ask the domain expert to choose a possible value x (say 110km)
that the uncertain variable ξ may take, and then quiz him

“How likely is ξ less than or equal to x?” (16.1)

Denote the expert’s belief degree by α (say 0.6). Note that the expert’s belief
degree of ξ greater than x must be 1 − α due to the self-duality of uncertain
measure. An expert’s experimental data

(x, α) = (110, 0.6) (16.2)

is thus acquired from the domain expert.



Figure 16.1: Expert's Experimental Data (x, α)

Repeating the above process, the following expert’s experimental data are
obtained by the questionnaire,

(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ). (16.3)

Remark 16.1: None of x, α and n could be assigned a value in the questionnaire before asking the domain expert. Otherwise, the domain expert may not have enough knowledge or experience to answer the questions.

16.2 Questionnaire Survey


Beijing is the capital of China, and Tianjin is a coastal city. Assume that
the real distance between them is not exactly known for us, and is regarded
as an uncertain variable. Chen-Ralescu [11] employed uncertain statistics to
estimate the travel distance between Beijing and Tianjin. The consultation
process is as follows:

Q1: May I ask you how far it is from Beijing to Tianjin? What do you think is the minimum distance?
A1: 100km. (an expert’s experimental data (100, 0) is acquired)
Q2: What do you think is the maximum distance?
A2: 150km. (an expert’s experimental data (150, 1) is acquired)
Q3: What do you think is a likely distance?
A3: 130km.
Q4: To what degree do you think that the real distance is less than 130km?
A4: 60%. (an expert’s experimental data (130, 0.6) is acquired)
Q5: Is there another number this distance may be? If yes, what is it?
A5: 140km.
Q6: To what degree do you think that the real distance is less than 140km?
A6: 90%. (an expert’s experimental data (140, 0.9) is acquired)

Q7: Is there another number this distance may be? If yes, what is it?

A7: 120km.

Q8: To what degree do you think that the real distance is less than 120km?

A8: 30%. (an expert’s experimental data (120, 0.3) is acquired)

Q9: Is there another number this distance may be? If yes, what is it?

A9: No idea.

By using the questionnaire survey, five expert’s experimental data of the


travel distance between Beijing and Tianjin are acquired from the domain
expert,
(100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1). (16.4)

Exercise 16.1: Please do a questionnaire survey on the height of some friend


of yours.

16.3 Determining Uncertainty Distribution


In order to determine the uncertainty distribution of an uncertain variable, this section will introduce the empirical uncertainty distribution (i.e., the linear interpolation method), the principle of least squares, the method of moments, and the Delphi method.

Empirical Uncertainty Distribution


How do we determine the uncertainty distribution for an uncertain variable?
Assume that we have obtained a set of expert’s experimental data

(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (16.5)

that meet the following consistency condition (perhaps after a rearrangement)

x1 < x2 < · · · < xn , 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1. (16.6)

Based on those expert’s experimental data, Liu [84] suggested an empirical


uncertainty distribution,


 0, if x < x1
(αi+1 − αi )(x − xi )


Φ(x) = αi + , if xi ≤ x ≤ xi+1 , 1 ≤ i < n (16.7)

 xi+1 − xi

1, if x > xn .

Essentially, it is a type of linear interpolation method.
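A direct Python transcription of (16.7) is given below; the function name is our own choice.

def empirical_distribution(data):
    # data: expert's experimental data [(x1, a1), ..., (xn, an)] with
    # x1 < x2 < ... < xn and nondecreasing belief degrees, as in (16.6)
    def phi(x):
        xs, alphas = zip(*data)
        if x < xs[0]:
            return 0.0
        if x > xs[-1]:
            return 1.0
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                return alphas[i] + (alphas[i + 1] - alphas[i]) * (x - xs[i]) / (xs[i + 1] - xs[i])
        return alphas[-1]
    return phi

# the five data points of Section 16.2
Phi = empirical_distribution([(100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1)])
print(Phi(125))  # 0.45 by linear interpolation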



Figure 16.2: Empirical Uncertainty Distribution Φ(x)

The empirical uncertainty distribution Φ determined by (16.7) has an expected value

$$E[\xi]=\frac{\alpha_1+\alpha_2}{2}x_1+\sum_{i=2}^{n-1}\frac{\alpha_{i+1}-\alpha_{i-1}}{2}x_i+\left(1-\frac{\alpha_{n-1}+\alpha_n}{2}\right)x_n. \qquad (16.8)$$

If all $x_i$'s are nonnegative, then the k-th empirical moments are

$$E[\xi^k]=\alpha_1x_1^k+\frac{1}{k+1}\sum_{i=1}^{n-1}\sum_{j=0}^{k}(\alpha_{i+1}-\alpha_i)x_i^jx_{i+1}^{k-j}+(1-\alpha_n)x_n^k. \qquad (16.9)$$

Example 16.1: Recall that the five expert’s experimental data (100, 0),
(120, 0.3), (130, 0.6), (140, 0.9), (150, 1) of the travel distance between Beijing
and Tianjin have been acquired in Section 16.2. Based on those expert’s
experimental data, an empirical uncertainty distribution of travel distance is
shown in Figure 16.3.
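Formula (16.8) can be checked on the travel-distance data; the sketch below reproduces the empirical expected value of 125.5km quoted in Figure 16.3.

def empirical_expected_value(data):
    # formula (16.8)
    xs, a = zip(*data)
    n = len(xs)
    value = (a[0] + a[1]) / 2 * xs[0] + (1 - (a[n - 2] + a[n - 1]) / 2) * xs[n - 1]
    value += sum((a[i + 1] - a[i - 1]) / 2 * xs[i] for i in range(1, n - 1))
    return value

data = [(100, 0), (120, 0.3), (130, 0.6), (140, 0.9), (150, 1)]
print(empirical_expected_value(data))  # 125.5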

Principle of Least Squares


Assume that an uncertainty distribution to be determined has a known func-
tional form Φ(x|θ) with an unknown parameter θ. In order to estimate the
parameter θ, Liu [84] employed the principle of least squares that minimizes
the sum of the squares of the distance of the expert’s experimental data to
the uncertainty distribution. This minimization can be performed in either
the vertical or horizontal direction. If the expert’s experimental data

(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (16.10)



Figure 16.3: Empirical Uncertainty Distribution of Travel Distance between Beijing and Tianjin. Note that the empirical expected distance is 125.5km while the real distance is 127km in Google Earth.

are obtained and the vertical direction is accepted, then we have

$$\min_{\theta}\sum_{i=1}^n(\Phi(x_i|\theta)-\alpha_i)^2. \qquad (16.11)$$

The optimal solution θ̂ of (16.11) is called the least squares estimate of θ, and then the least squares uncertainty distribution is Φ(x|θ̂).

Figure 16.4: Principle of Least Squares



Example 16.2: Assume that an uncertainty distribution has a linear form with two unknown parameters a and b, i.e.,

$$\Phi(x|a,b)=\begin{cases}0, & \text{if }x\le a\\ \dfrac{x-a}{b-a}, & \text{if }a\le x\le b\\ 1, & \text{if }x\ge b.\end{cases} \qquad (16.12)$$

We also assume the following expert’s experimental data,

(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95). (16.13)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that â = 0.2273, b̂ = 4.7727 and the least squares uncertainty distribution is

$$\Phi(x)=\begin{cases}0, & \text{if }x\le 0.2273\\ (x-0.2273)/4.5454, & \text{if }0.2273\le x\le 4.7727\\ 1, & \text{if }x\ge 4.7727.\end{cases} \qquad (16.14)$$
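A sketch of the minimization (16.11) for the linear form (16.12), using scipy.optimize.minimize, is given below; the starting values are our own guesses, and the clipping implements the boundary pieces of (16.12).

import numpy as np
from scipy.optimize import minimize

data = [(1, 0.15), (2, 0.45), (3, 0.55), (4, 0.85), (5, 0.95)]

def linear_cdf(x, a, b):
    # linear uncertainty distribution (16.12)
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def objective(theta):
    a, b = theta
    return sum((linear_cdf(x, a, b) - alpha) ** 2 for x, alpha in data)

res = minimize(objective, x0=[0.0, 5.0], method="Nelder-Mead")
print(res.x)  # should be close to (0.2273, 4.7727), the estimates reported above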

Example 16.3: Assume that an uncertainty distribution has a lognormal form with two unknown parameters e and σ, i.e.,

$$\Phi(x|e,\sigma)=\left(1+\exp\left(\frac{\pi(e-\ln x)}{\sqrt{3}\,\sigma}\right)\right)^{-1}. \qquad (16.15)$$
We also assume the following expert’s experimental data,

(0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9). (16.16)

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that ê = 0.4825, σ̂ = 0.7852 and the least squares uncertainty distribution is

$$\Phi(x)=\left(1+\exp\left(\frac{0.4825-\ln x}{0.4329}\right)\right)^{-1}. \qquad (16.17)$$

Method of Moments
Assume that a nonnegative uncertain variable has an uncertainty distribution

Φ(x|θ1 , θ2 , · · · , θp ) (16.18)

with unknown parameters θ1 , θ2 , · · · , θp . Given a set of expert’s experimental


data
(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (16.19)

with
0 ≤ x1 < x2 < · · · < xn , 0 ≤ α1 ≤ α2 ≤ · · · ≤ αn ≤ 1, (16.20)
Wang-Peng [154] proposed a method of moments to estimate the unknown parameters of an uncertainty distribution. At first, the k-th empirical moments of the expert's experimental data are defined as those of the corresponding empirical uncertainty distribution, i.e.,

$$\overline{\xi^k}=\alpha_1x_1^k+\frac{1}{k+1}\sum_{i=1}^{n-1}\sum_{j=0}^{k}(\alpha_{i+1}-\alpha_i)x_i^jx_{i+1}^{k-j}+(1-\alpha_n)x_n^k. \qquad (16.21)$$

The moment estimates θ̂1, θ̂2, · · · , θ̂p are then obtained by equating the first p moments of Φ(x|θ1, θ2, · · · , θp) to the corresponding first p empirical moments. In other words, the moment estimates θ̂1, θ̂2, · · · , θ̂p should solve the system of equations,

$$\int_0^{+\infty}\left(1-\Phi\left(\sqrt[k]{x}\,\big|\,\theta_1,\theta_2,\cdots,\theta_p\right)\right)dx=\overline{\xi^k},\quad k=1,2,\cdots,p \qquad (16.22)$$

where $\overline{\xi^1},\overline{\xi^2},\cdots,\overline{\xi^p}$ are the empirical moments determined by (16.21).
where ξ 1 , ξ 2 , · · · , ξ p are empirical moments determined by (16.21).

Example 16.4: Assume that a questionnaire survey has successfully ac-


quired the following expert’s experimental data,
(1.2, 0.1), (1.5, 0.3), (1.8, 0.4), (2.5, 0.6), (3.9, 0.8), (4.6, 0.9). (16.23)
Then the first three empirical moments are 2.5100, 7.7226 and 29.4936. We also assume that the uncertainty distribution to be determined has a zigzag form with three unknown parameters a, b and c, i.e.,

$$\Phi(x|a,b,c)=\begin{cases}0, & \text{if }x\le a\\ \dfrac{x-a}{2(b-a)}, & \text{if }a\le x\le b\\ \dfrac{x+c-2b}{2(c-b)}, & \text{if }b\le x\le c\\ 1, & \text{if }x\ge c.\end{cases} \qquad (16.24)$$

From the expert’s experimental data, we may believe that the unknown pa-
rameters must be positive numbers. Thus the first three moments of the
zigzag uncertainty distribution Φ(x|a, b, c) are
a + 2b + c
,
4
a2 + ab + 2b2 + bc + c2
,
6
a3 + a2 b + ab2 + 2b3 + b2 c + bc2 + c3
.
8

It follows from the method of moments that the unknown parameters a, b, c should solve the system of equations,

$$\begin{cases}a+2b+c=4\times 2.5100\\ a^2+ab+2b^2+bc+c^2=6\times 7.7226\\ a^3+a^2b+ab^2+2b^3+b^2c+bc^2+c^3=8\times 29.4936.\end{cases} \qquad (16.25)$$

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the moment estimates are (â, b̂, ĉ) = (0.9804, 2.0303, 4.9991) and the corresponding uncertainty distribution is

$$\Phi(x)=\begin{cases}0, & \text{if }x\le 0.9804\\ (x-0.9804)/2.0998, & \text{if }0.9804\le x\le 2.0303\\ (x+0.9385)/5.9376, & \text{if }2.0303\le x\le 4.9991\\ 1, & \text{if }x\ge 4.9991.\end{cases} \qquad (16.26)$$
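The system (16.25) can be solved with a standard root finder; the sketch below uses scipy.optimize.fsolve with a starting point that is our own guess.

from scipy.optimize import fsolve

m1, m2, m3 = 2.5100, 7.7226, 29.4936  # empirical moments from (16.23)

def equations(theta):
    # system (16.25) for the zigzag distribution (16.24)
    a, b, c = theta
    return [a + 2 * b + c - 4 * m1,
            a**2 + a * b + 2 * b**2 + b * c + c**2 - 6 * m2,
            a**3 + a**2 * b + a * b**2 + 2 * b**3 + b**2 * c + b * c**2 + c**3 - 8 * m3]

print(fsolve(equations, x0=[1.0, 2.0, 5.0]))  # close to (0.9804, 2.0303, 4.9991)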

Multiple Domain Experts


Assume there are m domain experts and each produces an uncertainty distri-
bution. Then we may get m uncertainty distributions Φ1 (x), Φ2 (x), · · ·, Φm (x).
It was suggested by Liu [84] that the m uncertainty distributions should be
aggregated to an uncertainty distribution

Φ(x) = w1 Φ1 (x) + w2 Φ2 (x) + · · · + wm Φm (x) (16.27)

where w1, w2, · · · , wm are convex combination coefficients (i.e., they are nonnegative numbers and w1 + w2 + · · · + wm = 1) representing weights of the domain experts. For example, we may set

$$w_i=\frac{1}{m},\quad \forall i=1,2,\cdots,m. \qquad (16.28)$$
Since Φ1 (x), Φ2 (x), · · ·, Φm (x) are uncertainty distributions, they are increas-
ing functions taking values in [0, 1] and are not identical to either 0 or 1. It
is easy to verify that their convex combination Φ(x) is also an increasing
function taking values in [0, 1] and Φ(x) 6≡ 0, Φ(x) 6≡ 1. Hence Φ(x) is also
an uncertainty distribution by Peng-Iwamura theorem.

Delphi Method
The Delphi method was originally developed in the 1950s by the RAND Corporation based on the assumption that group experience is more valid than individual experience. This method asks the domain experts to answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the answers from the previous round as well as the reasons that the domain experts provided for their opinions. Then the

domain experts are encouraged to revise their earlier answers in light of the
summary. It is believed that during this process the opinions of domain
experts will converge to an appropriate answer. Wang-Gao-Guo [152] recast
Delphi method as a process to determine uncertainty distributions. The main
steps are listed as follows:

Step 1. The m domain experts provide their expert’s experimental data,

(xij , αij ), j = 1, 2, · · · , ni , i = 1, 2, · · · , m. (16.29)

Step 2. Use the i-th expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · ,
(xini , αini ) to generate the uncertainty distributions Φi of the i-
th domain experts, i = 1, 2, · · · , m, respectively.
Step 3. Compute Φ(x) = w1 Φ1 (x) + w2 Φ2 (x) + · · · + wm Φm (x) where
w1 , w2 , · · · , wm are convex combination coefficients representing
weights of the domain experts.
Step 4. If |αij − Φ(xij )| are less than a given level ε > 0 for all i and j, then
go to Step 5. Otherwise, the i-th domain experts receive the sum-
mary (for example, the function Φ obtained in the previous round
and the reasons of other experts), and then provide a set of revised
expert’s experimental data (xi1 , αi1 ), (xi2 , αi2 ), · · · , (xini , αini ) for
i = 1, 2, · · · , m. Go to Step 2.
Step 5. The last function Φ is the uncertainty distribution to be determined.

16.4 Determining Membership Function


In order to determine the membership function of uncertain set, this sec-
tion will introduce empirical membership function (i.e., linear interpolation
method) and principle of least squares.

Expert’s Experimental Data


Expert’s experimental data were suggested by Liu [85] to represent expert’s
knowledge about the membership function to be determined. The first step
is to ask the domain expert to choose a possible point x that the uncertain
set ξ may contain, and then quiz him
“How likely does x belong to ξ?” (16.30)
Assume the expert’s belief degree is α in uncertain measure. Note that the
expert’s belief degree of x not belonging to ξ must be 1 − α due to the duality
of uncertain measure. An expert’s experimental data (x, α) is thus acquired
from the domain expert. Repeating the above process, the following expert’s
experimental data are obtained by the questionnaire,
(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ). (16.31)

Empirical Membership Function


How do we determine the membership function for an uncertain set? The
first method is the linear interpolation method developed by Liu [85]. Assume
that we have obtained a set of expert’s experimental data

(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ). (16.32)

Without loss of generality, we also assume x1 < x2 < · · · < xn. Based on those expert's experimental data, an empirical membership function is determined as follows,

$$\mu(x)=\begin{cases}\alpha_i+\dfrac{(\alpha_{i+1}-\alpha_i)(x-x_i)}{x_{i+1}-x_i}, & \text{if }x_i\le x\le x_{i+1},\ 1\le i<n\\ 0, & \text{otherwise.}\end{cases}$$

Figure 16.5: Empirical Membership Function µ(x)

Principle of Least Squares


Principle of least squares was first employed to determine membership func-
tion by Liu [85]. Assume that a membership function to be determined has
a known functional form µ(x|θ) with an unknown parameter θ. In order to
estimate the parameter θ, we may employ the principle of least squares that
minimizes the sum of the squares of the distance of the expert’s experimental
data to the membership function. If the expert’s experimental data

(x1 , α1 ), (x2 , α2 ), · · · , (xn , αn ) (16.33)

are obtained, then we have

$$\min_{\theta}\sum_{i=1}^n(\mu(x_i|\theta)-\alpha_i)^2. \qquad (16.34)$$

The optimal solution θ̂ of (16.34) is called the least squares estimate of θ, and then the least squares membership function is µ(x|θ̂).

Example 16.5: Assume that a membership function has a trapezoidal form (a, b, c, d). We also assume the following expert's experimental data,

$$(1, 0.15),\ (2, 0.45),\ (3, 0.90),\ (6, 0.85),\ (7, 0.60),\ (8, 0.20). \qquad (16.35)$$

The Matlab Uncertainty Toolbox (http://orsc.edu.cn/liu/resources.htm) may yield that the least squares membership function has a trapezoidal form (0.6667, 3.3333, 5.6154, 8.6923).

What is “about 100km”?


Let us pay attention to the concept of “about 100km”. When we are inter-
ested in what distances can be considered “about 100km”, it is reasonable to
regard such a concept as an uncertain set. In order to determine the mem-
bership function of “about 100km”, a questionnaire survey was made for
collecting expert’s experimental data. The consultation process is as follows:

Q1: May I ask you what distances belong to “about 100km”? What do you
think is the minimum distance?

A1: 80km. (an expert’s experimental data (80, 0) is acquired)

Q2: What do you think is the maximum distance?

A2: 120km. (an expert’s experimental data (120, 0) is acquired)

Q3: What distance do you think belongs to “about 100km”?

A3: 95km.

Q4: To what degree do you think that 95km belongs to “about 100km”?

A4: 100%. (an expert’s experimental data (95, 1) is acquired)

Q5: Is there another distance that belongs to “about 100km”? If yes, what
is it?

A5: 105km.

Q6: To what degree do you think that 105km belongs to “about 100km”?

A6: 100%. (an expert’s experimental data (105, 1) is acquired)

Q7: Is there another distance that belongs to “about 100km”? If yes, what
is it?

A7: 90km.

Q8: To what degree do you think that 90km belongs to “about 100km”?

A8: 50%. (an expert’s experimental data (90, 0.5) is acquired)

Q9: Is there another distance that belongs to “about 100km”? If yes, what
is it?

A9: 110km.

Q10: To what degree do you think that 110km belongs to “about 100km”?

A10: 50%. (an expert’s experimental data (110, 0.5) is acquired)

Q11: Is there another distance that belongs to “about 100km”? If yes, what
is it?

A11: No idea.

Until now six expert’s experimental data (80, 0), (90, 0.5), (95, 1), (105, 1),
(110, 0.5), (120, 0) are acquired from the domain expert. Based on those
expert’s experimental data, an empirical membership function of “about
100km” is produced and shown by Figure 16.6.

Figure 16.6: Empirical Membership Function of “about 100km”

16.5 Uncertain Regression Analysis


Let (x1 , x2 , · · · , xp ) be a vector of explanatory variables, and let y be a re-
sponse variable. Assume the functional relationship between (x1 , x2 , · · · , xp )
and y is expressed by a regression model

y = f (x1 , x2 , · · · , xp |β) + ε (16.36)



where β is an unknown vector of parameters, and ε is a disturbance term. In particular, we will call

$$y=\beta_0+\beta_1x_1+\beta_2x_2+\cdots+\beta_px_p+\varepsilon \qquad (16.37)$$

a linear regression model, and call

y = β0 − β1 exp(−β2 x) + ε, β1 > 0, β2 > 0 (16.38)

an asymptotic regression model.


Traditionally, it is assumed that (x1 , x2 , · · · , xp , y) are able to be precisely
observed. However, in many cases, the observations of those data are impre-
cise and characterized in terms of uncertain variables. It is thus assumed
that we have a set of imprecisely observed data,

(x̃i1 , x̃i2 , · · · , x̃ip , ỹi ), i = 1, 2, · · · , n (16.39)

where x̃i1 , x̃i2 , · · · , x̃ip , ỹi are uncertain variables with uncertainty distribu-
tions Φi1 , Φi2 , · · · , Φip , Ψi , i = 1, 2, · · · , n, respectively.
Based on the imprecisely observed data (16.39), Yao-Liu [187] suggested
that the least squares estimate of β in the regression model

y = f (x1 , x2 , · · · , xp |β) + ε (16.40)

is the solution of the minimization problem,

$$\min_{\beta}\sum_{i=1}^nE\left[(\tilde{y}_i-f(\tilde{x}_{i1},\tilde{x}_{i2},\cdots,\tilde{x}_{ip}|\beta))^2\right]. \qquad (16.41)$$

If the minimization solution is β∗, then the fitted regression model is determined by

$$y=f(x_1,x_2,\cdots,x_p|\beta^*). \qquad (16.42)$$

Theorem 16.1 Let (x̃i1, x̃i2, · · · , x̃ip, ỹi), i = 1, 2, · · · , n be a set of imprecisely observed data, where x̃i1, x̃i2, · · · , x̃ip, ỹi are independent uncertain variables with regular uncertainty distributions Φi1, Φi2, · · · , Φip, Ψi, i = 1, 2, · · · , n, respectively. Then the least squares estimate of β0, β1, · · · , βp in the linear regression model

$$y=\beta_0+\sum_{j=1}^p\beta_jx_j+\varepsilon \qquad (16.43)$$

solves the minimization problem,

$$\min_{\beta_0,\beta_1,\cdots,\beta_p}\sum_{i=1}^n\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0-\sum_{j=1}^p\beta_j\Upsilon_{ij}^{-1}(\alpha,\beta_j)\right)^2d\alpha \qquad (16.44)$$

where

$$\Upsilon_{ij}^{-1}(\alpha,\beta_j)=\begin{cases}\Phi_{ij}^{-1}(1-\alpha), & \text{if }\beta_j\ge 0\\ \Phi_{ij}^{-1}(\alpha), & \text{if }\beta_j<0\end{cases} \qquad (16.45)$$

for i = 1, 2, · · · , n and j = 1, 2, · · · , p.

Proof: Note that the least squares estimate of β0, β1, · · · , βp in the linear regression model is the solution of the minimization problem,

$$\min_{\beta_0,\beta_1,\cdots,\beta_p}\sum_{i=1}^nE\left[\left(\tilde{y}_i-\beta_0-\sum_{j=1}^p\beta_j\tilde{x}_{ij}\right)^2\right]. \qquad (16.46)$$

For each index i, the inverse uncertainty distribution of the uncertain variable

$$\tilde{y}_i-\beta_0-\sum_{j=1}^p\beta_j\tilde{x}_{ij}$$

is just

$$F_i^{-1}(\alpha)=\Psi_i^{-1}(\alpha)-\beta_0-\sum_{j=1}^p\beta_j\Upsilon_{ij}^{-1}(\alpha,\beta_j).$$

It follows from Theorem 2.42 that

$$E\left[\left(\tilde{y}_i-\beta_0-\sum_{j=1}^p\beta_j\tilde{x}_{ij}\right)^2\right]=\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0-\sum_{j=1}^p\beta_j\Upsilon_{ij}^{-1}(\alpha,\beta_j)\right)^2d\alpha.$$

Hence the minimization problem (16.44) is equivalent to (16.46). The theorem is thus proved.
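For linear uncertain observations L(l, u), the inverse distribution is Φ^{-1}(α) = l + α(u − l), so the objective (16.44) can be evaluated by numerical integration over α. The sketch below is our own illustration with two hypothetical observations and p = 2.

import numpy as np
from scipy.optimize import minimize

# each observation: ([(l, u) for x1..xp], (l, u) for y), all linear uncertain variables
observations = [([(3, 4), (9, 10)], (33, 36)),
                ([(5, 6), (20, 22)], (40, 43))]
alphas = (np.arange(100) + 0.5) / 100  # midpoint grid for the integral over alpha

def inv(lu, alpha):
    # inverse distribution of a linear uncertain variable L(l, u)
    return lu[0] + alpha * (lu[1] - lu[0])

def objective(beta):
    b0, bs = beta[0], beta[1:]
    total = 0.0
    for xs, y in observations:
        # formula (16.44): use 1 - alpha for nonnegative coefficients, per (16.45)
        vals = inv(y, alphas) - b0 - sum(
            b * inv(x, 1 - alphas if b >= 0 else alphas) for b, x in zip(bs, xs))
        total += np.mean(vals ** 2)  # approximates the integral over (0, 1)
    return total

beta_hat = minimize(objective, x0=np.zeros(3), method="Nelder-Mead").x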

Exercise 16.2: Let (x̃i, ỹi), i = 1, 2, · · · , n be a set of imprecisely observed data, where x̃i and ỹi are independent uncertain variables with regular uncertainty distributions Φi and Ψi, i = 1, 2, · · · , n, respectively. Show that the least squares estimate of β0, β1, β2 in the asymptotic regression model

$$y=\beta_0-\beta_1\exp(-\beta_2x)+\varepsilon,\quad \beta_1>0,\ \beta_2>0 \qquad (16.47)$$

solves the minimization problem,

$$\min_{\beta_0,\beta_1>0,\beta_2>0}\sum_{i=1}^n\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0+\beta_1\exp(-\beta_2\Phi_i^{-1}(1-\alpha))\right)^2d\alpha. \qquad (16.48)$$

Residual Analysis

Definition 16.1 (Lio-Liu [74]) Let (x̃i1, x̃i2, · · · , x̃ip, ỹi), i = 1, 2, · · · , n be a set of imprecisely observed data, and let the fitted regression model be

$$y=f(x_1,x_2,\cdots,x_p|\beta^*). \qquad (16.49)$$

Then for each index i (i = 1, 2, · · · , n), the term

$$\hat{\varepsilon}_i=\tilde{y}_i-f(\tilde{x}_{i1},\tilde{x}_{i2},\cdots,\tilde{x}_{ip}|\beta^*) \qquad (16.50)$$

is called the i-th residual.

If the disturbance term ε is assumed to be an uncertain variable, then its expected value can be estimated as the average of the expected values of the residuals, i.e.,

$$\hat{e}=\frac{1}{n}\sum_{i=1}^nE[\hat{\varepsilon}_i] \qquad (16.51)$$

and the variance can be estimated as

$$\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^nE[(\hat{\varepsilon}_i-\hat{e})^2] \qquad (16.52)$$

where ε̂i are the i-th residuals, i = 1, 2, · · · , n, respectively.


Theorem 16.2 (Lio-Liu [74]) Let (x̃i1, x̃i2, · · · , x̃ip, ỹi), i = 1, 2, · · · , n be a set of imprecisely observed data, where x̃i1, x̃i2, · · · , x̃ip, ỹi are independent uncertain variables with regular uncertainty distributions Φi1, Φi2, · · · , Φip, Ψi, i = 1, 2, · · · , n, respectively, and let the fitted linear regression model be

$$y=\beta_0^*+\sum_{j=1}^p\beta_j^*x_j. \qquad (16.53)$$

Then the estimated expected value of the disturbance term ε is

$$\hat{e}=\frac{1}{n}\sum_{i=1}^n\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0^*-\sum_{j=1}^p\beta_j^*\Upsilon_{ij}^{-1}(\alpha,\beta_j^*)\right)d\alpha \qquad (16.54)$$

and the estimated variance is

$$\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^n\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0^*-\sum_{j=1}^p\beta_j^*\Upsilon_{ij}^{-1}(\alpha,\beta_j^*)-\hat{e}\right)^2d\alpha \qquad (16.55)$$

where

$$\Upsilon_{ij}^{-1}(\alpha,\beta_j^*)=\begin{cases}\Phi_{ij}^{-1}(1-\alpha), & \text{if }\beta_j^*\ge 0\\ \Phi_{ij}^{-1}(\alpha), & \text{if }\beta_j^*<0\end{cases} \qquad (16.56)$$

for i = 1, 2, · · · , n and j = 1, 2, · · · , p.

Proof: For each index i, the inverse uncertainty distribution of the uncertain variable

$$\tilde{y}_i-\beta_0^*-\sum_{j=1}^p\beta_j^*\tilde{x}_{ij}$$

is just

$$F_i^{-1}(\alpha)=\Psi_i^{-1}(\alpha)-\beta_0^*-\sum_{j=1}^p\beta_j^*\Upsilon_{ij}^{-1}(\alpha,\beta_j^*).$$

It follows from Theorems 2.25 and 2.42 that (16.54) and (16.55) hold.

Exercise 16.3: Let (x̃i, ỹi), i = 1, 2, · · · , n be a set of imprecisely observed data, where x̃i and ỹi are independent uncertain variables with regular uncertainty distributions Φi and Ψi, i = 1, 2, · · · , n, respectively, and let the fitted asymptotic regression model be

$$y=\beta_0^*-\beta_1^*\exp(-\beta_2^*x),\quad \beta_1^*>0,\ \beta_2^*>0. \qquad (16.57)$$

Show that the estimated expected value of the disturbance term ε is

$$\hat{e}=\frac{1}{n}\sum_{i=1}^n\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0^*+\beta_1^*\exp(-\beta_2^*\Phi_i^{-1}(1-\alpha))\right)d\alpha \qquad (16.58)$$

and the estimated variance is

$$\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^n\int_0^1\left(\Psi_i^{-1}(\alpha)-\beta_0^*+\beta_1^*\exp(-\beta_2^*\Phi_i^{-1}(1-\alpha))-\hat{e}\right)^2d\alpha. \qquad (16.59)$$

Forecast Value and Confidence Interval
Now let (x̃1 , x̃2 , · · · , x̃p ) be a new explanatory vector, where x̃1 , x̃2 , · · · , x̃p are independent uncertain variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φp , respectively. Assume (i) the fitted linear regression model is
\[
y=\beta_0^*+\sum_{j=1}^{p}\beta_j^*x_j, \tag{16.60}
\]
and (ii) the disturbance term ε has expected value ê and variance σ̂ 2 , and is independent of x̃1 , x̃2 , · · · , x̃p . Lio-Liu [74] suggested that the forecast uncertain variable of response variable y with respect to x̃1 , x̃2 , · · · , x̃p is determined by
\[
\hat{y}=\beta_0^*+\sum_{j=1}^{p}\beta_j^*\tilde{x}_j+\varepsilon, \tag{16.61}
\]

and the forecast value is defined as the expected value of the forecast uncertain variable ŷ, i.e.,
\[
\mu=\beta_0^*+\sum_{j=1}^{p}\beta_j^*E[\tilde{x}_j]+\hat{e}. \tag{16.62}
\]
If we suppose further that the disturbance term ε follows normal uncertainty distribution, then the inverse uncertainty distribution of the forecast uncertain variable ŷ is
\[
\hat{\Psi}^{-1}(\alpha)=\beta_0^*+\sum_{j=1}^{p}\beta_j^*\Upsilon_j^{-1}(\alpha,\beta_j^*)+\Phi^{-1}(\alpha) \tag{16.63}
\]
where
\[
\Upsilon_j^{-1}(\alpha,\beta_j^*)=
\begin{cases}
\Phi_j^{-1}(\alpha), & \text{if } \beta_j^*\ge 0\\
\Phi_j^{-1}(1-\alpha), & \text{if } \beta_j^*<0
\end{cases} \tag{16.64}
\]
for j = 1, 2, · · · , p, and Φ−1(α) is the inverse uncertainty distribution of N (ê, σ̂), i.e.,
\[
\Phi^{-1}(\alpha)=\hat{e}+\frac{\hat{\sigma}\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}. \tag{16.65}
\]
From \(\hat{\Psi}^{-1}\), we may also derive the uncertainty distribution Ψ̂ of ŷ. Take α (e.g., 95%) as the confidence level, and find the minimum value b such that
\[
\hat{\Psi}(\mu+b)-\hat{\Psi}(\mu-b)\ge\alpha. \tag{16.66}
\]
Since M{µ − b ≤ ŷ ≤ µ + b} ≥ Ψ̂(µ + b) − Ψ̂(µ − b) ≥ α, Lio-Liu [74] suggested that the α confidence interval of response variable y is [µ − b, µ + b], which is often abbreviated as
\[
\mu\pm b. \tag{16.67}
\]
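Since Ψ̂ is only available through its inverse (16.63), the minimum b can be found by inverting Ψ̂⁻¹ numerically and scanning over candidate half-widths. The sketch below assumes one regressor taking a linear uncertain value, a normal disturbance term, and placeholder numbers throughout; it is one possible numerical route, not a prescribed algorithm.

```python
# Sketch: minimal half-width b with Psi_hat(mu+b) - Psi_hat(mu-b) >= alpha.
# Psi_hat(x) is evaluated by inverting (16.63) with a root-bracketing solver;
# all numbers below are placeholders.
import numpy as np
from scipy.optimize import brentq

e_hat, sigma_hat = 0.0, 2.0            # assumed disturbance moments
b0, b1 = 30.0, 1.5                     # assumed fitted model y = b0 + b1*x
xl, xr = 5.0, 6.0                      # new explanatory variable ~ L(xl, xr)

def inv_forecast(a):                   # (16.63) for one regressor, b1 >= 0
    x_part = xl + (xr - xl) * a
    normal = e_hat + sigma_hat * np.sqrt(3) / np.pi * np.log(a / (1 - a))
    return b0 + b1 * x_part + normal   # strictly increasing in a

def cdf(x):                            # Psi_hat(x) by inverting inv_forecast
    if x <= inv_forecast(1e-9):  return 0.0
    if x >= inv_forecast(1 - 1e-9): return 1.0
    return brentq(lambda a: inv_forecast(a) - x, 1e-9, 1 - 1e-9)

mu = b0 + b1 * (xl + xr) / 2 + e_hat   # forecast value (16.62)
alpha = 0.95
b = min(t for t in np.linspace(0.0, 50.0, 5001)
        if cdf(mu + t) - cdf(mu - t) >= alpha)
print(mu, b)                           # confidence interval mu ± b
```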

Exercise 16.4: Let (x̃1 , x̃2 , · · · , x̃p ) be a new explanatory vector, where x̃1 , x̃2 , · · · , x̃p are independent uncertain variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φp , respectively. Assume (i) the fitted linear regression model is
\[
y=\beta_0^*+\sum_{j=1}^{p}\beta_j^*x_j, \tag{16.68}
\]
and (ii) the disturbance term ε follows linear uncertainty distribution with expected value ê and variance σ̂ 2 , and is independent of x̃1 , x̃2 , · · · , x̃p . What is the α confidence interval of response variable y? (Hint: The linear uncertain variable L(ê − √3 σ̂, ê + √3 σ̂) has expected value ê and variance σ̂ 2 .)

Exercise 16.5: Let x̃ be a new explanatory variable with regular uncertainty


distribution Φ. Assume (i) the fitted asymptotic regression model is

y = β0∗ − β1∗ exp(−β2∗ x), β1∗ > 0, β2∗ > 0, (16.69)



and (ii) the disturbance term ε follows normal uncertainty distribution with
expected value ê and variance σ̂ 2 , and is independent of x̃. What are the
forecast value and α confidence interval of response variable y?

Example 16.6: Suppose that there exist 24 imprecisely observed data


(x̃i1 , x̃i2 , x̃i3 , ỹi ), i = 1, 2, · · · , 24. For each i, x̃i1 , x̃i2 , x̃i3 , ỹi are independent
linear uncertain variables. See Table 16.1. Let us show how the uncertain
regression analysis is used to determine the functional relationship between
(x1 , x2 , x3 ) and y.

Table 16.1: 24 Imprecisely Observed Data

No. x1 x2 x3 y
1 L(3, 4) L(9, 10) L(6, 7) L(33, 36)
2 L(5, 6) L(20, 22) L(6, 7) L(40, 43)
3 L(5, 6) L(18, 20) L(7, 8) L(38, 41)
4 L(5, 6) L(33, 36) L(6, 7) L(46, 49)
5 L(4, 5) L(31, 34) L(7, 8) L(41, 44)
6 L(6, 7) L(13, 15) L(5, 6) L(37, 40)
7 L(6, 7) L(25, 28) L(6, 7) L(39, 42)
8 L(5, 6) L(30, 33) L(4, 5) L(40, 43)
9 L(3, 4) L(5, 6) L(5, 6) L(30, 33)
10 L(7, 8) L(47, 50) L(8, 9) L(52, 55)
11 L(4, 5) L(25, 28) L(5, 6) L(38, 41)
12 L(4, 5) L(11, 13) L(6, 7) L(31, 34)
13 L(8, 9) L(23, 26) L(7, 8) L(43, 46)
14 L(6, 7) L(35, 38) L(7, 8) L(44, 47)
15 L(6, 7) L(39, 44) L(5, 6) L(42, 45)
16 L(3, 4) L(21, 24) L(4, 5) L(33, 36)
17 L(6, 7) L(7, 8) L(5, 6) L(34, 37)
18 L(7, 8) L(40, 43) L(7, 8) L(48, 51)
19 L(4, 5) L(35, 38) L(6, 7) L(38, 41)
20 L(4, 5) L(23, 26) L(3, 4) L(35, 38)
21 L(5, 6) L(33, 36) L(4, 5) L(40, 43)
22 L(5, 6) L(27, 30) L(4, 5) L(36, 39)
23 L(4, 5) L(34, 37) L(8, 9) L(45, 48)
24 L(3, 4) L(15, 17) L(5, 6) L(35, 38)

In order to determine it, we employ the uncertain linear regression model,

y = β0 + β1 x1 + β2 x2 + β3 x3 + ε. (16.70)

By solving the minimization problem (16.44), we get the least squares esti-

mate
(β0∗ , β1∗ , β2∗ , β3∗ ) = (21.5196, 0.8678, 0.3110, 1.0053). (16.71)
Thus the fitted linear regression model is

y = 21.5196 + 0.8678x1 + 0.3110x2 + 1.0053x3 . (16.72)

By using the formulas (16.54) and (16.55), we get that the expected value and variance of the disturbance term ε are

ê = 0.0000, σ̂ 2 = 5.6305, (16.73)

respectively. Now let

(x̃1 , x̃2 , x̃3 ) ∼ (L(5, 6), L(28, 30), L(6, 7)) (16.74)

be a new uncertain explanatory vector. Since x̃1 , x̃2 , x̃3 , ε are independent, calculating the formula (16.62) gives the forecast value of response variable y as
µ = 41.8460. (16.75)
Taking the confidence level α = 95%, if the disturbance term ε is assumed to
follow normal uncertainty distribution, then

b = 5.9780 (16.76)

is the minimum value such that (16.66) holds. Therefore, the 95% confidence
interval of response variable y is

41.8460 ± 5.9780. (16.77)

16.6 Uncertain Time Series Analysis


An uncertain time series is a sequence of imprecisely observed values that are
characterized in terms of uncertain variables. Mathematically, an uncertain
time series is represented by

X = {X1 , X2 , · · · , Xn } (16.78)

where Xt are imprecisely observed values (i.e., uncertain variables) at times


t, t = 1, 2, · · · , n, respectively. A basic problem of uncertain time series
analysis is to predict the value of Xn+1 based on previously observed values
X1 , X2 , · · · , Xn .
The simplest approach for modelling uncertain time series is the autoregressive model,
\[
X_t=a_0+\sum_{i=1}^{k}a_iX_{t-i}+\varepsilon_t \tag{16.79}
\]

where a0 , a1 , · · · , ak are unknown parameters, εt is a disturbance term, and


k is called the order of the autoregressive model.
Based on the imprecisely observed values X1 , X2 , · · · , Xn , Yang-Liu [166]
suggested that the least squares estimate of a0 , a1 , · · · , ak in the autoregres-
sive model (16.79) is the solution of the minimization problem,
\[
\min_{a_0,a_1,\cdots,a_k}\ \sum_{t=k+1}^{n}E\left[\left(X_t-a_0-\sum_{i=1}^{k}a_iX_{t-i}\right)^{2}\right]. \tag{16.80}
\]
If the minimization solution is a∗0 , a∗1 , · · · , a∗k , then the fitted autoregressive model is
\[
X_t=a_0^*+\sum_{i=1}^{k}a_i^*X_{t-i}. \tag{16.81}
\]

Theorem 16.3 (Yang-Liu [166]) Let X1 , X2 , · · · , Xn be imprecisely observed


values characterized in terms of independent uncertain variables with regular
uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Then the least squares
estimate of a0 , a1 , · · · , ak in the autoregressive model
\[
X_t=a_0+\sum_{i=1}^{k}a_iX_{t-i}+\varepsilon_t \tag{16.82}
\]
solves the minimization problem,
\[
\min_{a_0,a_1,\cdots,a_k}\ \sum_{t=k+1}^{n}\int_0^1\left(\Phi_t^{-1}(\alpha)-a_0-\sum_{i=1}^{k}a_i\Upsilon_{t-i}^{-1}(\alpha,a_i)\right)^{2}\mathrm{d}\alpha \tag{16.83}
\]
where
\[
\Upsilon_{t-i}^{-1}(\alpha,a_i)=
\begin{cases}
\Phi_{t-i}^{-1}(1-\alpha), & \text{if } a_i\ge 0\\
\Phi_{t-i}^{-1}(\alpha), & \text{if } a_i<0
\end{cases} \tag{16.84}
\]
for i = 1, 2, · · · , k.
Proof: For each index t, the inverse uncertainty distribution of the uncertain variable
\[
X_t-a_0-\sum_{i=1}^{k}a_iX_{t-i}
\]
is just
\[
F_t^{-1}(\alpha)=\Phi_t^{-1}(\alpha)-a_0-\sum_{i=1}^{k}a_i\Upsilon_{t-i}^{-1}(\alpha,a_i).
\]
It follows from Theorem 2.42 that
\[
E\left[\left(X_t-a_0-\sum_{i=1}^{k}a_iX_{t-i}\right)^{2}\right]=\int_0^1\left(\Phi_t^{-1}(\alpha)-a_0-\sum_{i=1}^{k}a_i\Upsilon_{t-i}^{-1}(\alpha,a_i)\right)^{2}\mathrm{d}\alpha.
\]

Hence the minimization problem (16.83) is equivalent to (16.80). The theo-


rem is thus proved.

Residual Analysis
Definition 16.2 (Yang-Liu [166]) Let X1 , X2 , · · · , Xn be imprecisely observed values, and let the fitted autoregressive model be
\[
X_t=a_0^*+\sum_{i=1}^{k}a_i^*X_{t-i}. \tag{16.85}
\]
Then for each index t (t = k + 1, k + 2, · · · , n), the difference between the actual observed value and the value predicted by the model,
\[
\hat{\varepsilon}_t=X_t-a_0^*-\sum_{i=1}^{k}a_i^*X_{t-i} \tag{16.86}
\]
is called the t-th residual.

If disturbance terms εk+1 , εk+2 , · · · , εn are assumed to be iid uncertain variables (hereafter called the iid hypothesis), then the expected value of the disturbance terms can be estimated as the average of the expected values of the residuals, i.e.,
\[
\hat{e}=\frac{1}{n-k}\sum_{t=k+1}^{n}E[\hat{\varepsilon}_t] \tag{16.87}
\]
and the variance can be estimated as
\[
\hat{\sigma}^2=\frac{1}{n-k}\sum_{t=k+1}^{n}E[(\hat{\varepsilon}_t-\hat{e})^2] \tag{16.88}
\]
where ε̂t are the t-th residuals, t = k + 1, k + 2, · · · , n, respectively.

Theorem 16.4 (Yang-Liu [166]) Let X1 , X2 , · · · , Xn be imprecisely observed


values characterized in terms of independent uncertain variables with regu-
lar uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively, and let the fitted
autoregressive model be
\[
X_t=a_0^*+\sum_{i=1}^{k}a_i^*X_{t-i}. \tag{16.89}
\]
Then the estimated expected value of the disturbance terms under the iid hypothesis is
\[
\hat{e}=\frac{1}{n-k}\sum_{t=k+1}^{n}\int_0^1\left(\Phi_t^{-1}(\alpha)-a_0^*-\sum_{i=1}^{k}a_i^*\Upsilon_{t-i}^{-1}(\alpha,a_i^*)\right)\mathrm{d}\alpha \tag{16.90}
\]
and the estimated variance is
\[
\hat{\sigma}^2=\frac{1}{n-k}\sum_{t=k+1}^{n}\int_0^1\left(\Phi_t^{-1}(\alpha)-a_0^*-\sum_{i=1}^{k}a_i^*\Upsilon_{t-i}^{-1}(\alpha,a_i^*)-\hat{e}\right)^{2}\mathrm{d}\alpha \tag{16.91}
\]
where
\[
\Upsilon_{t-i}^{-1}(\alpha,a_i^*)=
\begin{cases}
\Phi_{t-i}^{-1}(1-\alpha), & \text{if } a_i^*\ge 0\\
\Phi_{t-i}^{-1}(\alpha), & \text{if } a_i^*<0
\end{cases} \tag{16.92}
\]
for i = 1, 2, · · · , k.

Proof: For each index t, the inverse uncertainty distribution of the uncertain variable
\[
X_t-a_0^*-\sum_{i=1}^{k}a_i^*X_{t-i}
\]
is just
\[
F_t^{-1}(\alpha)=\Phi_t^{-1}(\alpha)-a_0^*-\sum_{i=1}^{k}a_i^*\Upsilon_{t-i}^{-1}(\alpha,a_i^*).
\]
It follows from Theorems 2.25 and 2.42 that (16.90) and (16.91) hold.

Forecast Value and Confidence Interval
Now let X1 , X2 , · · · , Xn be imprecisely observed values characterized in terms of independent uncertain variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Assume (i) the fitted autoregressive model is
\[
X_t=a_0^*+\sum_{i=1}^{k}a_i^*X_{t-i}, \tag{16.93}
\]
and (ii) the disturbance term εn+1 has expected value ê and variance σ̂ 2 , and is independent of X1 , X2 , · · · , Xn . Yang-Liu [166] suggested that the forecast uncertain variable of Xn+1 based on X1 , X2 , · · · , Xn is determined by
\[
\hat{X}_{n+1}=a_0^*+\sum_{i=1}^{k}a_i^*X_{n+1-i}+\varepsilon_{n+1}, \tag{16.94}
\]
and the forecast value is defined as the expected value of the forecast uncertain variable X̂n+1 , i.e.,
\[
\mu=a_0^*+\sum_{i=1}^{k}a_i^*E[X_{n+1-i}]+\hat{e}. \tag{16.95}
\]

If we suppose further that the disturbance term εn+1 follows normal uncertainty distribution, then the inverse uncertainty distribution of the forecast uncertain variable X̂n+1 is
\[
\hat{\Phi}_{n+1}^{-1}(\alpha)=a_0^*+\sum_{i=1}^{k}a_i^*\Upsilon_{n+1-i}^{-1}(\alpha,a_i^*)+\Phi^{-1}(\alpha) \tag{16.96}
\]
where
\[
\Upsilon_{n+1-i}^{-1}(\alpha,a_i^*)=
\begin{cases}
\Phi_{n+1-i}^{-1}(\alpha), & \text{if } a_i^*\ge 0\\
\Phi_{n+1-i}^{-1}(1-\alpha), & \text{if } a_i^*<0
\end{cases} \tag{16.97}
\]
for i = 1, 2, · · · , k, and Φ−1(α) is the inverse uncertainty distribution of N (ê, σ̂), i.e.,
\[
\Phi^{-1}(\alpha)=\hat{e}+\frac{\hat{\sigma}\sqrt{3}}{\pi}\ln\frac{\alpha}{1-\alpha}. \tag{16.98}
\]
From \(\hat{\Phi}_{n+1}^{-1}\), we may also derive the uncertainty distribution \(\hat{\Phi}_{n+1}\) of X̂n+1 . Take α (e.g., 95%) as the confidence level, and find the minimum value b such that
\[
\hat{\Phi}_{n+1}(\mu+b)-\hat{\Phi}_{n+1}(\mu-b)\ge\alpha. \tag{16.99}
\]
Since M{µ − b ≤ X̂n+1 ≤ µ + b} ≥ Φ̂n+1 (µ + b) − Φ̂n+1 (µ − b) ≥ α, Yang-Liu [166] suggested that the α confidence interval of Xn+1 is [µ − b, µ + b], which is often abbreviated as
\[
\mu\pm b. \tag{16.100}
\]

Exercise 16.6: Let X1 , X2 , · · · , Xn be imprecisely observed values characterized in terms of independent uncertain variables with regular uncertainty distributions Φ1 , Φ2 , · · · , Φn , respectively. Assume (i) the fitted autoregressive model is
\[
X_t=a_0^*+\sum_{i=1}^{k}a_i^*X_{t-i}, \tag{16.101}
\]
and (ii) the disturbance term εn+1 follows linear uncertainty distribution with expected value ê and variance σ̂ 2 , and is independent of X1 , X2 , · · · , Xn . What is the α confidence interval of Xn+1 ? (Hint: The linear uncertain variable L(ê − √3 σ̂, ê + √3 σ̂) has expected value ê and variance σ̂ 2 .)

Example 16.7: Assume there exist 20 imprecisely observed carbon emis-


sions X1 , X2 , · · · , X20 that are characterized in terms of independent linear
uncertain variables. See Table 16.2. Let us show how the uncertain time
series analysis is used to forecast the carbon emission in the 21st year.
In order to forecast it, we employ the 2-order uncertain autoregressive
model,
Xt = a0 + a1 Xt−1 + a2 Xt−2 + εt . (16.102)

Table 16.2: Imprecisely Observed Carbon Emissions over 20 Years

X1 X2 X3 X4 X5
L(330, 341) L(333, 346) L(335, 347) L(338, 350) L(340, 354)
X6 X7 X8 X9 X10
L(343, 359) L(344, 364) L(346, 366) L(350, 366) L(355, 369)
X11 X12 X13 X14 X15
L(360, 372) L(362, 376) L(365, 381) L(370, 384) L(373, 390)
X16 X17 X18 X19 X20
L(379, 391) L(380, 398) L(384, 402) L(388, 410) L(390, 415)

By solving the minimization problem (16.83), we get the least squares esti-
mate
(a∗0 , a∗1 , a∗2 ) = (28.4715, 0.2367, 0.7018). (16.103)
Thus the fitted autoregressive model is

Xt = 28.4715 + 0.2367Xt−1 + 0.7018Xt−2 . (16.104)

By using the formulas (16.90) and (16.91), we get that the expected value and variance of the disturbance term ε21 are

ê = 0.0000, σ̂ 2 = 84.7422, (16.105)

respectively. Assuming that the disturbance term ε21 is independent of X20 and X19 , calculating the formula (16.95) gives the forecast value of the carbon emission in the 21st year (i.e., X21 ) as

µ = 403.7361. (16.106)

Taking the confidence level α = 95%, if the disturbance term ε21 is assumed
to follow normal uncertainty distribution, then

b = 28.7376 (16.107)

is the minimum value such that (16.99) holds. Therefore, the 95% confidence
interval of carbon emission in the 21st year (i.e., X21 ) is

403.7361 ± 28.7376. (16.108)
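As a quick arithmetic check, (16.95) can be recomputed from the expected values E[L(a, b)] = (a + b)/2 of the last two observations in Table 16.2; the small discrepancy against (16.106) comes from rounding in the printed coefficients (16.103).

```python
# Check of (16.95) for Example 16.7: E[L(a, b)] = (a + b)/2 for linear
# uncertain variables, so mu = a0* + a1*E[X20] + a2*E[X19] + e_hat.
a0, a1, a2, e_hat = 28.4715, 0.2367, 0.7018, 0.0
EX20 = (390 + 415) / 2          # X20 ~ L(390, 415)
EX19 = (388 + 410) / 2          # X19 ~ L(388, 410)
mu = a0 + a1 * EX20 + a2 * EX19 + e_hat
print(mu)                       # ~ 403.76, close to (16.106) up to rounding
```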

16.7 Bibliographic Notes


The study of uncertain statistics was started by Liu [84] in 2010 in which a
questionnaire survey for collecting expert’s experimental data was designed.

It was shown, among others, by Chen-Ralescu [11] that the questionnaire survey may successfully acquire the expert's experimental data.
Parametric uncertain statistics assumes that the uncertainty distribution
to be determined has a known functional form but with unknown parameters.
In order to estimate the unknown parameters, Liu [84] suggested the princi-
ple of least squares, and Wang-Peng [154] proposed the method of moments.
Nonparametric uncertain statistics does not rely on the expert’s experimen-
tal data belonging to any particular uncertainty distribution. In order to
determine the uncertainty distributions, Liu [84] introduced the linear inter-
polation method (i.e., empirical uncertainty distribution), and Chen-Ralescu
[11] proposed a series of spline interpolation methods. When multiple do-
main experts are available, Wang-Gao-Guo [152] recast Delphi method as a
process to determine uncertainty distributions.
In order to determine membership functions, a questionnaire survey for
collecting expert’s experimental data was designed by Liu [85]. Based on
expert’s experimental data, Liu [85] also suggested the linear interpolation
method and the principle of least squares to determine membership functions.
When multiple domain experts are available, Delphi method was introduced
to uncertain statistics by Guo-Wang-Wang-Chen [53].
Uncertain regression analysis is used to model the relationship between
explanatory variables and response variables when the imprecise observations
are characterized in terms of uncertain variables. For that matter, Yao-
Liu [187] suggested the principle of least squares to estimate the unknown
parameters in the regression models. Lio-Liu [74] analyzed the residual and
confidence interval of forecast values.
Uncertain time series analysis was first presented by Yang-Liu [166] in order to predict future values based on previously observed imprecise values that are characterized in terms of uncertain variables.
Appendix A

Uncertain Random Variable

Uncertainty and randomness are two basic types of indeterminacy. Uncertain


random variable was initialized by Liu [106] in 2013 for modelling complex
systems with not only uncertainty but also randomness. This appendix will
introduce the concepts of chance measure, uncertain random variable, chance
distribution, operational law, expected value, variance, and law of large num-
bers. As applications of chance theory, this appendix will also provide uncer-
tain random programming, uncertain random risk analysis, uncertain random
reliability analysis, uncertain random graph, uncertain random network, and
uncertain random process.

A.1 Chance Measure


Let (Γ, L, M) be an uncertainty space and let (Ω, A, Pr) be a probability
space. Then the product (Γ, L, M) × (Ω, A, Pr) is called a chance space.
Essentially, it is another triplet,

(Γ × Ω, L × A, M × Pr) (A.1)

where Γ × Ω is the universal set, L × A is the product σ-algebra, and M × Pr


is the product measure.
The universal set Γ × Ω is clearly the set of all ordered pairs of the form
(γ, ω), where γ ∈ Γ and ω ∈ Ω. That is,

Γ × Ω = {(γ, ω) | γ ∈ Γ, ω ∈ Ω} . (A.2)

The product σ-algebra L × A is the smallest σ-algebra containing mea-


surable rectangles of the form Λ × A, where Λ ∈ L and A ∈ A. Any element
in L × A is called an event in the chance space.

What is the product measure M × Pr? In order to answer this question,


let us consider an event Θ in L × A. For each ω ∈ Ω, the cross section

Θω = {γ ∈ Γ | (γ, ω) ∈ Θ} (A.3)

is clearly an event in L. Thus the uncertain measure of Θω , i.e.,

M{Θω } = M {γ ∈ Γ | (γ, ω) ∈ Θ} (A.4)

exists for each ω ∈ Ω. If M{Θω } is measurable with respect to ω, then it


is a random variable. Now we define M × Pr of Θ as the average value of
M{Θω } in the sense of probability measure (i.e., the expected value), and
call it chance measure represented by Ch{Θ}.

[Figure A.1: An Event Θ in L × A and its Cross Section Θω]

Definition A.1 (Liu [106]) Let (Γ, L, M) × (Ω, A, Pr) be a chance space, and let Θ ∈ L × A be an event. Then the chance measure of Θ is defined as
\[
\mathrm{Ch}\{\Theta\}=\int_0^1\Pr\left\{\omega\in\Omega\;\middle|\;\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta\}\ge x\right\}\mathrm{d}x. \tag{A.5}
\]

Exercise A.1: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure, and take a probability space (Ω, A, Pr) to be
also [0, 1] with Borel algebra and Lebesgue measure. Then

Θ = {(γ, ω) ∈ Γ × Ω | γ + ω ≤ 1} (A.6)

is an event on the chance space (Γ, L, M) × (Ω, A, Pr). Show that
\[
\mathrm{Ch}\{\Theta\}=\frac{1}{2}. \tag{A.7}
\]
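For this event the cross section {γ ∈ Γ | γ + ω ≤ 1} = [0, 1 − ω] has Lebesgue (uncertain) measure 1 − ω, so (A.5) reduces to ∫₀¹ Pr{ω | 1 − ω ≥ x} dx. A small numeric check of this reduction (the grid sizes are arbitrary choices):

```python
# Numeric check of Exercise A.1 via definition (A.5): for fixed omega the
# cross-section measure is M{gamma : gamma + omega <= 1} = 1 - omega.
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
omegas = np.linspace(0.0, 1.0, 1001)
# Pr{omega : 1 - omega >= x} under Lebesgue measure on [0, 1] equals 1 - x
pr = [np.mean(1 - omegas >= x) for x in xs]
print(np.trapz(pr, xs))        # ~ 0.5, as claimed in (A.7)
```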

Exercise A.2: Take an uncertainty space (Γ, L, M) to be [0, 1] with Borel


algebra and Lebesgue measure, and take a probability space (Ω, A, Pr) to be
also [0, 1] with Borel algebra and Lebesgue measure. Then

\[
\Theta=\left\{(\gamma,\omega)\in\Gamma\times\Omega\;\middle|\;(\gamma-0.5)^2+(\omega-0.5)^2\le 0.5^2\right\} \tag{A.8}
\]
is an event on the chance space (Γ, L, M) × (Ω, A, Pr). Show that
\[
\mathrm{Ch}\{\Theta\}=\frac{\pi}{4}. \tag{A.9}
\]
4
Theorem A.1 (Liu [106]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space. Then

Ch{Λ × A} = M{Λ} × Pr{A} (A.10)

for any Λ ∈ L and any A ∈ A. Especially, we have

Ch{∅} = 0, Ch{Γ × Ω} = 1. (A.11)

Proof: Let us first prove the identity (A.10). When A is nonempty, we have

{γ ∈ Γ | (γ, ω) ∈ Λ × A} = Λ

and
M{γ ∈ Γ | (γ, ω) ∈ Λ × A} = M{Λ}.
For any real number x, if M{Λ} ≥ x, then

Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} = Pr{A}.

If M{Λ} < x, then

Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Λ × A} ≥ x} = Pr{∅} = 0.

Thus
\[
\mathrm{Ch}\{\Lambda\times A\}=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Lambda\times A\}\ge x\}\,\mathrm{d}x=\int_0^{\mathcal{M}\{\Lambda\}}\Pr\{A\}\,\mathrm{d}x+\int_{\mathcal{M}\{\Lambda\}}^{1}0\,\mathrm{d}x=\mathcal{M}\{\Lambda\}\times\Pr\{A\}.
\]

Furthermore, it follows from (A.10) that

Ch{∅} = M{∅} × Pr{∅} = 0,

Ch{Γ × Ω} = M{Γ} × Pr{Ω} = 1.


The theorem is thus verified.

Theorem A.2 (Liu [106], Monotonicity Theorem) The chance measure is


a monotone increasing set function. That is, for any events Θ1 and Θ2 with
Θ1 ⊂ Θ2 , we have
Ch{Θ1 } ≤ Ch{Θ2 }. (A.12)

Proof: Since Θ1 and Θ2 are two events with Θ1 ⊂ Θ2 , we immediately have

{γ ∈ Γ | (γ, ω) ∈ Θ1 } ⊂ {γ ∈ Γ | (γ, ω) ∈ Θ2 }

and
M{γ ∈ Γ | (γ, ω) ∈ Θ1 } ≤ M{γ ∈ Γ | (γ, ω) ∈ Θ2 }.
Thus for any real number x, we have

Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ1 } ≥ x}
≤ Pr {ω ∈ Ω | M{γ ∈ Γ | (γ, ω) ∈ Θ2 } ≥ x} .

By the definition of chance measure, we get
\[
\mathrm{Ch}\{\Theta_1\}=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_1\}\ge x\}\,\mathrm{d}x\le\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_2\}\ge x\}\,\mathrm{d}x=\mathrm{Ch}\{\Theta_2\}.
\]

That is, Ch{Θ} is a monotone increasing function with respect to Θ. The


theorem is thus verified.

Theorem A.3 (Liu [106], Duality Theorem) The chance measure is self-
dual. That is, for any event Θ, we have

Ch{Θ} + Ch{Θc } = 1. (A.13)

Proof: Since both uncertain measure and probability measure are self-dual, we have
\[
\begin{aligned}
\mathrm{Ch}\{\Theta\}&=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta\}\ge x\}\,\mathrm{d}x\\
&=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta^c\}\le 1-x\}\,\mathrm{d}x\\
&=\int_0^1\left(1-\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta^c\}>1-x\}\right)\mathrm{d}x\\
&=1-\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta^c\}>x\}\,\mathrm{d}x\\
&=1-\mathrm{Ch}\{\Theta^c\}.
\end{aligned}
\]
That is, Ch{Θ} + Ch{Θc } = 1, i.e., the chance measure is self-dual.



Theorem A.4 (Hou [55], Subadditivity Theorem) The chance measure is subadditive. That is, for any countable sequence of events Θ1 , Θ2 , · · · , we have
\[
\mathrm{Ch}\left\{\bigcup_{i=1}^{\infty}\Theta_i\right\}\le\sum_{i=1}^{\infty}\mathrm{Ch}\{\Theta_i\}. \tag{A.14}
\]
Proof: At first, it follows from the subadditivity of uncertain measure that
\[
\mathcal{M}\left\{\gamma\in\Gamma\;\middle|\;(\gamma,\omega)\in\bigcup_{i=1}^{\infty}\Theta_i\right\}\le\sum_{i=1}^{\infty}\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_i\}.
\]
Thus for any real number x, we have
\[
\Pr\left\{\omega\in\Omega\;\middle|\;\mathcal{M}\left\{\gamma\in\Gamma\;\middle|\;(\gamma,\omega)\in\bigcup_{i=1}^{\infty}\Theta_i\right\}\ge x\right\}\le\Pr\left\{\omega\in\Omega\;\middle|\;\sum_{i=1}^{\infty}\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_i\}\ge x\right\}.
\]
By the definition of chance measure, we get
\[
\begin{aligned}
\mathrm{Ch}\left\{\bigcup_{i=1}^{\infty}\Theta_i\right\}&=\int_0^1\Pr\left\{\omega\in\Omega\;\middle|\;\mathcal{M}\left\{\gamma\in\Gamma\;\middle|\;(\gamma,\omega)\in\bigcup_{i=1}^{\infty}\Theta_i\right\}\ge x\right\}\mathrm{d}x\\
&\le\int_0^1\Pr\left\{\omega\in\Omega\;\middle|\;\sum_{i=1}^{\infty}\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_i\}\ge x\right\}\mathrm{d}x\\
&\le\int_0^{+\infty}\Pr\left\{\omega\in\Omega\;\middle|\;\sum_{i=1}^{\infty}\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_i\}\ge x\right\}\mathrm{d}x\\
&=\sum_{i=1}^{\infty}\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid(\gamma,\omega)\in\Theta_i\}\ge x\}\,\mathrm{d}x\\
&=\sum_{i=1}^{\infty}\mathrm{Ch}\{\Theta_i\}.
\end{aligned}
\]
That is, the chance measure is subadditive.

A.2 Uncertain Random Variable


Theoretically, an uncertain random variable is a measurable function on the
chance space. It is usually used to deal with measurable functions of uncertain
variables and random variables.
Definition A.2 (Liu [106]) An uncertain random variable is a function ξ
from a chance space (Γ, L, M) × (Ω, A, Pr) to the set of real numbers such
that {ξ ∈ B} is an event in L × A for any Borel set B of real numbers.

Remark A.1: An uncertain random variable ξ(γ, ω) degenerates to a ran-


dom variable if it does not vary with γ. Thus a random variable is a special
uncertain random variable.

Remark A.2: An uncertain random variable ξ(γ, ω) degenerates to an un-


certain variable if it does not vary with ω. Thus an uncertain variable is a
special uncertain random variable.
Theorem A.5 Let ξ1 , ξ2 , · · ·, ξn be uncertain random variables on the chance
space (Γ, L, M) × (Ω, A, Pr), and let f be a measurable function. Then
ξ = f (ξ1 , ξ2 , · · · , ξn ) (A.15)
is an uncertain random variable determined by
ξ(γ, ω) = f (ξ1 (γ, ω), ξ2 (γ, ω), · · · , ξn (γ, ω)) (A.16)
for all (γ, ω) ∈ Γ × Ω.
Proof: Since ξ1 , ξ2 , · · · , ξn are uncertain random variables, we know that
they are measurable functions on the chance space, and ξ = f (ξ1 , ξ2 , · · · , ξn )
is also a measurable function. Hence ξ is an uncertain random variable.

Example A.1: A random variable η plus an uncertain variable τ makes an


uncertain random variable ξ, i.e.,
ξ(γ, ω) = η(ω) + τ (γ) (A.17)
for all (γ, ω) ∈ Γ × Ω.

Example A.2: A random variable η times an uncertain variable τ makes


an uncertain random variable ξ, i.e.,
ξ(γ, ω) = η(ω) · τ (γ) (A.18)
for all (γ, ω) ∈ Γ × Ω.
Theorem A.6 (Liu [106]) Let ξ be an uncertain random variable on the chance space (Γ, L, M) × (Ω, A, Pr), and let B be a Borel set of real numbers. Then {ξ ∈ B} is an uncertain random event with chance measure
\[
\mathrm{Ch}\{\xi\in B\}=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid\xi(\gamma,\omega)\in B\}\ge x\}\,\mathrm{d}x. \tag{A.19}
\]

Proof: Since {ξ ∈ B} is an event in the chance space, the equation (A.19)


follows from Definition A.1 immediately.

Remark A.3: If the uncertain random variable degenerates to a random


variable η, then Ch{η ∈ B} = Ch{Γ × (η ∈ B)} = M{Γ} × Pr{η ∈ B} =
Pr{η ∈ B}. That is,
Ch{η ∈ B} = Pr{η ∈ B}. (A.20)

If the uncertain random variable degenerates to an uncertain variable τ , then


Ch{τ ∈ B} = Ch{(τ ∈ B) × Ω} = M{τ ∈ B} × Pr{Ω} = M{τ ∈ B}. That is,
Ch{τ ∈ B} = M{τ ∈ B}. (A.21)
Theorem A.7 (Liu [106]) Let ξ be an uncertain random variable. Then the
chance measure Ch{ξ ∈ B} is a monotone increasing function of B and
Ch{ξ ∈ ∅} = 0, Ch{ξ ∈ <} = 1. (A.22)
Proof: Let B1 and B2 be Borel sets of real numbers with B1 ⊂ B2 . Then
we immediately have {ξ ∈ B1 } ⊂ {ξ ∈ B2 }. It follows from the monotonicity
of chance measure that
Ch{ξ ∈ B1 } ≤ Ch{ξ ∈ B2 }.
Hence Ch{ξ ∈ B} is a monotone increasing function of B. Furthermore, we
have
Ch{ξ ∈ ∅} = Ch{∅} = 0,
Ch{ξ ∈ <} = Ch{Γ × Ω} = 1.
The theorem is verified.
Theorem A.8 (Liu [106]) Let ξ be an uncertain random variable. Then for
any Borel set B of real numbers, we have
Ch{ξ ∈ B} + Ch{ξ ∈ B c } = 1. (A.23)
Proof: It follows from {ξ ∈ B}c = {ξ ∈ B c } and the duality of chance
measure immediately.

A.3 Chance Distribution


Definition A.3 (Liu [106]) Let ξ be an uncertain random variable. Then
its chance distribution is defined by
Φ(x) = Ch{ξ ≤ x} (A.24)
for any x ∈ <.

Example A.3: As a special uncertain random variable, the chance distri-


bution of a random variable η is just its probability distribution, that is,
Φ(x) = Ch{η ≤ x} = Pr{η ≤ x}. (A.25)

Example A.4: As a special uncertain random variable, the chance distri-


bution of an uncertain variable τ is just its uncertainty distribution, that
is,
Φ(x) = Ch{τ ≤ x} = M{τ ≤ x}. (A.26)

Theorem A.9 (Liu [106], Sufficient and Necessary Condition for Chance
Distribution) A function Φ : < → [0, 1] is a chance distribution if and only if
it is a monotone increasing function except Φ(x) ≡ 0 and Φ(x) ≡ 1.
Proof: Assume Φ is a chance distribution of uncertain random variable ξ.
Let x1 and x2 be two real numbers with x1 < x2 . It follows from Theorem A.7
that
Φ(x1 ) = Ch{ξ ≤ x1 } ≤ Ch{ξ ≤ x2 } = Φ(x2 ).
Hence the chance distribution Φ is a monotone increasing function. Furthermore, if Φ(x) ≡ 0, then
\[
\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid\xi(\gamma,\omega)\le x\}\ge r\}\,\mathrm{d}r\equiv 0.
\]
Thus for almost all ω ∈ Ω, we have
\[
\mathcal{M}\{\gamma\in\Gamma\mid\xi(\gamma,\omega)\le x\}\equiv 0,\quad\forall x\in\Re
\]
which is in contradiction to the asymptotic theorem, and then Φ(x) ≢ 0 is verified. Similarly, if Φ(x) ≡ 1, then
\[
\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid\xi(\gamma,\omega)\le x\}\ge r\}\,\mathrm{d}r\equiv 1.
\]
Thus for almost all ω ∈ Ω, we have
\[
\mathcal{M}\{\gamma\in\Gamma\mid\xi(\gamma,\omega)\le x\}\equiv 1,\quad\forall x\in\Re
\]
which is also in contradiction to the asymptotic theorem, and then Φ(x) ≢ 1 is proved.
Conversely, suppose Φ : < → [0, 1] is a monotone increasing function but
Φ(x) 6≡ 0 and Φ(x) 6≡ 1. It follows from Peng-Iwamura theorem that there is
an uncertain variable whose uncertainty distribution is just Φ(x). Since an
uncertain variable is a special uncertain random variable, we know that Φ is
a chance distribution.
Theorem A.10 (Liu [106], Chance Inversion Theorem) Let ξ be an uncer-
tain random variable with chance distribution Φ. Then for any real number
x, we have
Ch{ξ ≤ x} = Φ(x), Ch{ξ > x} = 1 − Φ(x). (A.27)
Proof: The equation Ch{ξ ≤ x} = Φ(x) follows from the definition of chance
distribution immediately. By using the duality of chance measure, we get
Ch{ξ > x} = 1 − Ch{ξ ≤ x} = 1 − Φ(x).

Remark A.4: When the chance distribution Φ is a continuous function, we


also have
Ch{ξ < x} = Φ(x), Ch{ξ ≥ x} = 1 − Φ(x). (A.28)

A.4 Operational Law


Assume η1 , η2 , · · · , ηm are independent random variables with probability
distributions Ψ1 , Ψ2 , · · · , Ψm , and τ1 , τ2 , · · · , τn are independent uncertain
variables with uncertainty distributions Υ1 , Υ2 , · · ·, Υn , respectively. What
is the chance distribution of the uncertain random variable

ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn )? (A.29)

This section will provide an operational law to answer this question.

Theorem A.11 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , respectively, and let τ1 , τ2 ,
· · · , τn be uncertain variables. Assume f is a measurable function. Then the
uncertain random variable

ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.30)

has a chance distribution


\[
\Phi(x)=\int_{\Re^m}F(x;y_1,y_2,\cdots,y_m)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m) \tag{A.31}
\]

where F (x; y1 , y2 , · · · , ym ) is the uncertainty distribution of the uncertain


variable f (y1 , y2 , · · ·, ym , τ1 , τ2 , · · ·, τn ).

Proof: It follows from Theorem A.6 that the uncertain random variable ξ has a chance distribution
\[
\begin{aligned}
\Phi(x)&=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{\gamma\in\Gamma\mid\xi(\gamma,\omega)\le x\}\ge r\}\,\mathrm{d}r\\
&=\int_0^1\Pr\{\omega\in\Omega\mid\mathcal{M}\{f(\eta_1(\omega),\cdots,\eta_m(\omega),\tau_1,\cdots,\tau_n)\le x\}\ge r\}\,\mathrm{d}r\\
&=\int_{\Re^m}\mathcal{M}\{f(y_1,y_2,\cdots,y_m,\tau_1,\tau_2,\cdots,\tau_n)\le x\}\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m)\\
&=\int_{\Re^m}F(x;y_1,y_2,\cdots,y_m)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m)
\end{aligned}
\]
where F (x; y1 , y2 , · · ·, ym ) is just the uncertainty distribution of the uncertain variable f (y1 , y2 , · · ·, ym , τ1 , τ2 , · · ·, τn ). The theorem is thus verified.

Exercise A.3: Let η1 , η2 , · · · , ηm be independent random variables with


probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be indepen-
dent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , re-
spectively. Show that the sum

ξ = η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn (A.32)
has a chance distribution
\[
\Phi(x)=\int_{-\infty}^{+\infty}\Upsilon(x-y)\,\mathrm{d}\Psi(y) \tag{A.33}
\]
where
\[
\Psi(y)=\int_{y_1+y_2+\cdots+y_m\le y}\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m) \tag{A.34}
\]
is the probability distribution of η1 + η2 + · · · + ηm , and
\[
\Upsilon(z)=\sup_{z_1+z_2+\cdots+z_n=z}\Upsilon_1(z_1)\wedge\Upsilon_2(z_2)\wedge\cdots\wedge\Upsilon_n(z_n) \tag{A.35}
\]
is the uncertainty distribution of τ1 + τ2 + · · · + τn .
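Numerically, (A.33) smooths the uncertainty distribution Υ by the probability distribution Ψ. A minimal sketch for the special case m = n = 1, with η assumed standard normal and τ ~ L(0, 1); both distribution choices and the grid are illustrative:

```python
# Sketch of (A.33): chance distribution of eta + tau with eta ~ N(0,1)
# (random) and tau ~ L(0, 1) (uncertain).
import numpy as np
from scipy.stats import norm

def Upsilon(z):                       # uncertainty distribution of L(0, 1)
    return np.clip(z, 0.0, 1.0)

ys = np.linspace(-6, 6, 2001)         # integration grid for dPsi(y)

def Phi(x):                           # chance distribution of eta + tau
    return np.trapz(Upsilon(x - ys) * norm.pdf(ys), ys)

print(Phi(0.5))                       # Ch{eta + tau <= 0.5}
```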

Exercise A.4: Let η1 , η2 , · · · , ηm be independent positive random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn
be independent positive uncertain variables with uncertainty distributions
Υ1 , Υ2 , · · · , Υn , respectively. Show that the product

ξ = η1 η2 · · · ηm τ1 τ2 · · · τn (A.36)

has a chance distribution
\[
\Phi(x)=\int_0^{+\infty}\Upsilon(x/y)\,\mathrm{d}\Psi(y) \tag{A.37}
\]
where
\[
\Psi(y)=\int_{y_1y_2\cdots y_m\le y}\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m) \tag{A.38}
\]
is the probability distribution of η1 η2 · · · ηm , and
\[
\Upsilon(z)=\sup_{z_1z_2\cdots z_n=z}\Upsilon_1(z_1)\wedge\Upsilon_2(z_2)\wedge\cdots\wedge\Upsilon_n(z_n) \tag{A.39}
\]
is the uncertainty distribution of τ1 τ2 · · · τn .

Exercise A.5: Let η1 , η2 , · · · , ηm be independent random variables with


probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be indepen-
dent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , re-
spectively. Show that the minimum

ξ = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn (A.40)

has a chance distribution

Φ(x) = Ψ(x) + Υ(x) − Ψ(x)Υ(x) (A.41)



where

Ψ(x) = 1 − (1 − Ψ1 (x))(1 − Ψ2 (x)) · · · (1 − Ψm (x)) (A.42)

is the probability distribution of η1 ∧ η2 ∧ · · · ∧ ηm , and

Υ(x) = Υ1 (x) ∨ Υ2 (x) ∨ · · · ∨ Υn (x) (A.43)

is the uncertainty distribution of τ1 ∧ τ2 ∧ · · · ∧ τn .

Exercise A.6: Let η1 , η2 , · · · , ηm be independent random variables with


probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be indepen-
dent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , re-
spectively. Show that the maximum

ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (A.44)

has a chance distribution

Φ(x) = Ψ(x)Υ(x) (A.45)

where
Ψ(x) = Ψ1 (x)Ψ2 (x) · · · Ψm (x) (A.46)
is the probability distribution of η1 ∨ η2 ∨ · · · ∨ ηm , and

Υ(x) = Υ1 (x) ∧ Υ2 (x) ∧ · · · ∧ Υn (x) (A.47)

is the uncertainty distribution of τ1 ∨ τ2 ∨ · · · ∨ τn .

Theorem A.12 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with continuous uncertainty distributions Υ1 ,
Υ2 , · · · , Υn , respectively. Assume f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) is strictly
increasing with respect to τ1 , τ2 , · · · , τk and strictly decreasing with respect to
τk+1 , τk+2 , · · · , τn . Then the uncertain random variable

ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.48)

has a chance distribution
\[
\Phi(x)=\int_{\Re^m}F(x;y_1,y_2,\cdots,y_m)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m) \tag{A.49}
\]
where
\[
F(x;y_1,\cdots,y_m)=\sup_{f(y_1,\cdots,y_m,x_1,\cdots,x_n)=x}\left(\min_{1\le i\le k}\Upsilon_i(x_i)\wedge\min_{k+1\le i\le n}(1-\Upsilon_i(x_i))\right).
\]

Proof: It follows from Theorems 2.20 and A.11 immediately.



Theorem A.13 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 ,
· · · , Υn , respectively. Assume f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) is strictly in-
creasing with respect to τ1 , τ2 , · · · , τk and strictly decreasing with respect to
τk+1 , τk+2 , · · · , τn . Then the uncertain random variable

ξ = f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) (A.50)

has a chance distribution
\[
\Phi(x)=\int_{\Re^m}F(x;y_1,y_2,\cdots,y_m)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m) \tag{A.51}
\]
where F (x; y1 , y2 , · · · , ym ) is the root α of the equation
\[
f(y_1,y_2,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))=x.
\]
Proof: It follows from Theorem 2.14 that f (y1 , y2 , · · · , ym , τ1 , τ2 , · · · , τn ) is an uncertain variable whose inverse uncertainty distribution is
\[
G^{-1}(\alpha)=f(y_1,y_2,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha)).
\]
Since M{f (y1 , y2 , · · ·, ym , τ1 , τ2 , · · ·, τn ) ≤ x} = G(x), it is the solution α of the equation G−1(α) = x. Thus (A.51) follows from Theorem A.11 immediately.

Remark A.5: Sometimes, the equation in the above theorem may not have a root. In this case, if
\[
f(y_1,y_2,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))<x
\]
for all α, then we set the root α = 1; and if
\[
f(y_1,y_2,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))>x
\]
for all α, then we set the root α = 0. The root α may be estimated by the bisection method because
\[
f(y_1,y_2,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))
\]
is a strictly increasing function with respect to α.
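A sketch of this bisection, written for a generic strictly increasing function h(α); the boundary conventions of Remark A.5 are made explicit, and the example at the bottom (f(y, τ) = y + τ with τ ~ L(0, 1)) is an illustrative choice:

```python
# Bisection for the root alpha of h(alpha) = x, as described in Remark A.5.
def root_alpha(h, x, tol=1e-8):
    """h must be strictly increasing on (0, 1); returns alpha with
    h(alpha) = x, or the boundary values 1.0 / 0.0 when no root exists."""
    lo, hi = tol, 1 - tol
    if h(hi) < x:  return 1.0          # h < x everywhere: set root to 1
    if h(lo) > x:  return 0.0          # h > x everywhere: set root to 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# example: y = 2, tau ~ L(0, 1) so Upsilon^{-1}(alpha) = alpha; solve y+tau=2.3
h = lambda a: 2 + a
print(root_alpha(h, 2.3))              # ~ 0.3
```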

Order Statistics
Definition A.4 (Gao-Sun-Ralescu [38], Order Statistic) Let ξ1 , ξ2 , · · · , ξn
be uncertain random variables, and let k be an index with 1 ≤ k ≤ n. Then

ξ = k-min[ξ1 , ξ2 , · · · , ξn ] (A.52)

is called the kth order statistic of ξ1 , ξ2 , · · · , ξn .



Theorem A.14 (Gao-Sun-Ralescu [38]) Let η1 , η2 , · · · , ηn be independent


random variables with probability distributions Ψ1 , Ψ2 , · · · , Ψn , and let τ1 , τ2 ,
· · · , τn be independent uncertain variables with uncertainty distributions Υ1 ,
Υ2 , · · · , Υn , respectively. If f1 , f2 , · · · , fn are continuous and strictly increas-
ing functions, then the kth order statistic of f1 (η1 , τ1 ), f2 (η2 , τ2 ), · · ·, fn (ηn , τn )
has a chance distribution
\[
\Phi(x)=\int_{\Re^n}k\text{-}\max\left[\,\sup_{f_1(y_1,z_1)=x}\Upsilon_1(z_1),\ \sup_{f_2(y_2,z_2)=x}\Upsilon_2(z_2),\ \cdots,\ \sup_{f_n(y_n,z_n)=x}\Upsilon_n(z_n)\right]\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_n(y_n).
\]

Proof: For each index i and each real number yi , since fi is a strictly increasing function, the uncertain variable fi (yi , τi ) has an uncertainty distribution
\[
F_i(x;y_i)=\sup_{f_i(y_i,z_i)=x}\Upsilon_i(z_i).
\]
Theorem 2.17 states that the kth order statistic of f1 (y1 , τ1 ), f2 (y2 , τ2 ), · · · , fn (yn , τn ) has an uncertainty distribution
\[
F(x;y_1,y_2,\cdots,y_n)=k\text{-}\max\left[\,\sup_{f_1(y_1,z_1)=x}\Upsilon_1(z_1),\ \cdots,\ \sup_{f_n(y_n,z_n)=x}\Upsilon_n(z_n)\right].
\]

Thus the theorem follows from the operational law of uncertain random vari-
ables immediately.

Exercise A.7: Let η1 , η2 , · · · , ηn be independent random variables with probability distributions Ψ1 , Ψ2 , · · · , Ψn , and let τ1 , τ2 , · · · , τn be independent uncertain variables with continuous uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. Assume f1 , f2 , · · · , fn are continuous and strictly decreasing functions. Show that the kth order statistic of f1 (η1 , τ1 ), f2 (η2 , τ2 ), · · · , fn (ηn , τn ) has a chance distribution
\[
\Phi(x)=\int_{\Re^n}k\text{-}\max\left[\,\sup_{f_1(y_1,z_1)=x}(1-\Upsilon_1(z_1)),\ \cdots,\ \sup_{f_n(y_n,z_n)=x}(1-\Upsilon_n(z_n))\right]\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_n(y_n).
\]

Exercise A.8: Let η1 , η2 , · · · , ηm be independent random variables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be independent uncertain variables with uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. Show that the kth order statistic of η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn has a chance distribution
\[
\Phi(x)=\int_{\Re^m}k\text{-}\max\left[I(y_1\le x),\cdots,I(y_m\le x),\Upsilon_1(x),\Upsilon_2(x),\cdots,\Upsilon_n(x)\right]\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m). \tag{A.53}
\]

Operational Law for Boolean System


Theorem A.15 (Liu [107]) Assume η1 , η2 , · · · , ηm are independent Boolean random variables, i.e.,
\[
\eta_i=\begin{cases}1 & \text{with probability measure } a_i\\ 0 & \text{with probability measure } 1-a_i\end{cases} \tag{A.54}
\]
for i = 1, 2, · · · , m, and τ1 , τ2 , · · · , τn are independent Boolean uncertain variables, i.e.,
\[
\tau_j=\begin{cases}1 & \text{with uncertain measure } b_j\\ 0 & \text{with uncertain measure } 1-b_j\end{cases} \tag{A.55}
\]
for j = 1, 2, · · · , n. If f is a Boolean function, then
\[
\xi=f(\eta_1,\cdots,\eta_m,\tau_1,\cdots,\tau_n) \tag{A.56}
\]
is a Boolean uncertain random variable such that
\[
\mathrm{Ch}\{\xi=1\}=\sum_{(x_1,\cdots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^{m}\mu_i(x_i)\right)f^{*}(x_1,\cdots,x_m) \tag{A.57}
\]
where
\[
f^{*}(x_1,\cdots,x_m)=
\begin{cases}
\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\,\min_{1\le j\le n}\nu_j(y_j), & \text{if}\ \displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\,\min_{1\le j\le n}\nu_j(y_j)<0.5\\[3ex]
1-\displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=0}\,\min_{1\le j\le n}\nu_j(y_j), & \text{if}\ \displaystyle\sup_{f(x_1,\cdots,x_m,y_1,\cdots,y_n)=1}\,\min_{1\le j\le n}\nu_j(y_j)\ge 0.5,
\end{cases} \tag{A.58}
\]
\[
\mu_i(x_i)=\begin{cases}a_i, & \text{if } x_i=1\\ 1-a_i, & \text{if } x_i=0\end{cases}\quad(i=1,2,\cdots,m), \tag{A.59}
\]
\[
\nu_j(y_j)=\begin{cases}b_j, & \text{if } y_j=1\\ 1-b_j, & \text{if } y_j=0\end{cases}\quad(j=1,2,\cdots,n). \tag{A.60}
\]

Proof: At first, when (x1 , · · · , xm ) is given, f (x1 , · · · , xm , τ1 , · · · , τn ) is essentially a Boolean function of uncertain variables. It follows from the operational law of uncertain variables that
\[
\mathcal{M}\{f(x_1,\cdots,x_m,\tau_1,\cdots,\tau_n)=1\}=f^{*}(x_1,\cdots,x_m)
\]
that is determined by (A.58). On the other hand, it follows from the operational law of uncertain random variables that
\[
\mathrm{Ch}\{\xi=1\}=\sum_{(x_1,\cdots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^{m}\mu_i(x_i)\right)\mathcal{M}\{f(x_1,\cdots,x_m,\tau_1,\cdots,\tau_n)=1\}.
\]
Thus (A.57) is verified.

Remark A.6: When the uncertain variables disappear, the operational law becomes
\[
\Pr\{\xi=1\}=\sum_{(x_1,x_2,\cdots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^{m}\mu_i(x_i)\right)f(x_1,x_2,\cdots,x_m). \tag{A.61}
\]

Remark A.7: When the random variables disappear, the operational law becomes
\[
\mathcal{M}\{\xi=1\}=
\begin{cases}
\displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=1}\,\min_{1\le j\le n}\nu_j(y_j), & \text{if}\ \displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=1}\,\min_{1\le j\le n}\nu_j(y_j)<0.5\\[3ex]
1-\displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=0}\,\min_{1\le j\le n}\nu_j(y_j), & \text{if}\ \displaystyle\sup_{f(y_1,y_2,\cdots,y_n)=1}\,\min_{1\le j\le n}\nu_j(y_j)\ge 0.5.
\end{cases} \tag{A.62}
\]

Exercise A.9: Let η1 , η2 , · · · , ηm be independent Boolean random variables


defined by (A.54) and let τ1 , τ2 , · · · , τn be independent Boolean uncertain
variables defined by (A.55). Then the minimum
ξ = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn (A.63)
is a Boolean uncertain random variable. Show that
Ch{ξ = 1} = a1 a2 · · · am (b1 ∧ b2 ∧ · · · ∧ bn ). (A.64)
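The operational law (A.57) can be checked by brute-force enumeration of the 2^m values of the random part. The sketch below does this for the minimum system and compares against (A.64); the reliabilities a_i and b_j are invented numbers, and the fact that M{τ1 ∧ · · · ∧ τn = 1} = b1 ∧ · · · ∧ bn is used for the uncertain part.

```python
# Enumerate (A.57) for the Boolean min system and compare with (A.64).
from itertools import product

a = [0.9, 0.8, 0.7]     # Pr{eta_i = 1}, assumed values
b = [0.6, 0.85]         # M{tau_j = 1}, assumed values

def f_star(xs):         # M{min(xs, tau_1, ..., tau_n) = 1}
    if min(xs) == 0:
        return 0.0      # system already fails regardless of the tau's
    return min(b)       # minimum of independent Boolean uncertain variables

ch = 0.0
for xs in product([0, 1], repeat=len(a)):
    weight = 1.0
    for ai, xi in zip(a, xs):
        weight *= ai if xi == 1 else 1 - ai
    ch += weight * f_star(xs)

print(ch, a[0] * a[1] * a[2] * min(b))   # both ~ 0.9*0.8*0.7*0.6 = 0.3024
```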

Exercise A.10: Let η1 , η2 , · · · , ηm be independent Boolean random vari-


ables defined by (A.54) and let τ1 , τ2 , · · · , τn be independent Boolean uncer-
tain variables defined by (A.55). Then the maximum
ξ = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn (A.65)

is a Boolean uncertain random variable. Show that

Ch{ξ = 1} = 1 − (1 − a1 )(1 − a2 ) · · · (1 − am )(1 − b1 ∨ b2 ∨ · · · ∨ bn ). (A.66)

Exercise A.11: Let η1 , η2 , · · · , ηm be independent Boolean random vari-


ables defined by (A.54) and let τ1 , τ2 , · · · , τn be independent Boolean uncer-
tain variables defined by (A.55). Then the kth order statistic

ξ = k-min [η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ] (A.67)

is a Boolean uncertain random variable. Show that
\[
\mathrm{Ch}\{\xi=1\}=\sum_{(x_1,\cdots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^{m}\mu_i(x_i)\right)k\text{-}\min\,[x_1,\cdots,x_m,b_1,\cdots,b_n]
\]
where
\[
\mu_i(x_i)=\begin{cases}a_i, & \text{if } x_i=1\\ 1-a_i, & \text{if } x_i=0\end{cases}\quad(i=1,2,\cdots,m). \tag{A.68}
\]

Exercise A.12: Let η1 , η2 , · · · , ηm be independent Boolean random vari-


ables defined by (A.54) and let τ1 , τ2 , · · · , τn be independent Boolean uncer-
tain variables defined by (A.55). Then

ξ = k-max [η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ] (A.69)

is the (n − k + 1)th order statistic. Show that
\[
\mathrm{Ch}\{\xi=1\}=\sum_{(x_1,\cdots,x_m)\in\{0,1\}^m}\left(\prod_{i=1}^{m}\mu_i(x_i)\right)k\text{-}\max\,[x_1,\cdots,x_m,b_1,\cdots,b_n]
\]
where
\[
\mu_i(x_i)=\begin{cases}a_i, & \text{if } x_i=1\\ 1-a_i, & \text{if } x_i=0\end{cases}\quad(i=1,2,\cdots,m). \tag{A.70}
\]

A.5 Expected Value


Definition A.5 (Liu [106]) Let ξ be an uncertain random variable. Then its expected value is defined by
\[
E[\xi]=\int_0^{+\infty}\mathrm{Ch}\{\xi\ge x\}\,\mathrm{d}x-\int_{-\infty}^{0}\mathrm{Ch}\{\xi\le x\}\,\mathrm{d}x \tag{A.71}
\]
provided that at least one of the two integrals is finite.



Theorem A.16 (Liu [106]) Let ξ be an uncertain random variable with chance distribution Φ. Then
\[
E[\xi]=\int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x-\int_{-\infty}^{0}\Phi(x)\,\mathrm{d}x. \tag{A.72}
\]
Proof: It follows from the chance inversion theorem that for almost all numbers x, we have Ch{ξ ≥ x} = 1 − Φ(x) and Ch{ξ ≤ x} = Φ(x). By using the definition of expected value operator, we obtain
\[
E[\xi]=\int_0^{+\infty}\mathrm{Ch}\{\xi\ge x\}\,\mathrm{d}x-\int_{-\infty}^{0}\mathrm{Ch}\{\xi\le x\}\,\mathrm{d}x=\int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x-\int_{-\infty}^{0}\Phi(x)\,\mathrm{d}x.
\]
Thus we obtain the equation (A.72).

Theorem A.17 Let ξ be an uncertain random variable with chance distribution Φ. Then
\[
E[\xi]=\int_{-\infty}^{+\infty}x\,\mathrm{d}\Phi(x). \tag{A.73}
\]
Proof: It follows from the change of variables of integral and Theorem A.16 that the expected value is
\[
E[\xi]=\int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x-\int_{-\infty}^{0}\Phi(x)\,\mathrm{d}x=\int_0^{+\infty}x\,\mathrm{d}\Phi(x)+\int_{-\infty}^{0}x\,\mathrm{d}\Phi(x)=\int_{-\infty}^{+\infty}x\,\mathrm{d}\Phi(x).
\]
The theorem is proved.

Theorem A.18 Let ξ be an uncertain random variable with regular chance distribution Φ. Then
\[
E[\xi]=\int_0^1\Phi^{-1}(\alpha)\,\mathrm{d}\alpha. \tag{A.74}
\]
Proof: It follows from the change of variables of integral and Theorem A.16 that the expected value is
\[
E[\xi]=\int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x-\int_{-\infty}^{0}\Phi(x)\,\mathrm{d}x=\int_{\Phi(0)}^{1}\Phi^{-1}(\alpha)\,\mathrm{d}\alpha+\int_0^{\Phi(0)}\Phi^{-1}(\alpha)\,\mathrm{d}\alpha=\int_0^1\Phi^{-1}(\alpha)\,\mathrm{d}\alpha.
\]
The theorem is proved.



Theorem A.19 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random variables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , respectively, let τ1 , τ2 , · · · , τn be uncertain variables, and let f be a measurable function. Then
\[
\xi=f(\eta_1,\eta_2,\cdots,\eta_m,\tau_1,\tau_2,\cdots,\tau_n) \tag{A.75}
\]
has an expected value
\[
E[\xi]=\int_{\Re^m}E[f(y_1,y_2,\cdots,y_m,\tau_1,\tau_2,\cdots,\tau_n)]\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m)
\]
where E[f (y1 , y2 , · · · , ym , τ1 , τ2 , · · · , τn )] is the expected value of the uncertain variable f (y1 , y2 , · · · , ym , τ1 , τ2 , · · · , τn ) for any real numbers y1 , y2 , · · · , ym .

Proof: For simplicity, we only prove the case m = n = 2. Write the uncertainty distribution of f (y1 , y2 , τ1 , τ2 ) by F (x; y1 , y2 ) for any real numbers y1 and y2 . Then
\[
E[f(y_1,y_2,\tau_1,\tau_2)]=\int_0^{+\infty}(1-F(x;y_1,y_2))\,\mathrm{d}x-\int_{-\infty}^{0}F(x;y_1,y_2)\,\mathrm{d}x.
\]
On the other hand, the uncertain random variable ξ = f (η1 , η2 , τ1 , τ2 ) has a chance distribution
\[
\Phi(x)=\int_{\Re^2}F(x;y_1,y_2)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2).
\]
It follows from Theorem A.16 that
\[
\begin{aligned}
E[\xi]&=\int_0^{+\infty}(1-\Phi(x))\,\mathrm{d}x-\int_{-\infty}^{0}\Phi(x)\,\mathrm{d}x\\
&=\int_0^{+\infty}\left(1-\int_{\Re^2}F(x;y_1,y_2)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\right)\mathrm{d}x-\int_{-\infty}^{0}\int_{\Re^2}F(x;y_1,y_2)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\,\mathrm{d}x\\
&=\int_{\Re^2}\left(\int_0^{+\infty}(1-F(x;y_1,y_2))\,\mathrm{d}x-\int_{-\infty}^{0}F(x;y_1,y_2)\,\mathrm{d}x\right)\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\\
&=\int_{\Re^2}E[f(y_1,y_2,\tau_1,\tau_2)]\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2).
\end{aligned}
\]
Thus the theorem is proved.

Exercise A.13: Let η be a random variable and let τ be an uncertain


variable. Show that
E[η + τ ] = E[η] + E[τ ] (A.76)

and
E[ητ ] = E[η]E[τ ]. (A.77)

Exercise A.14: Let η be a random variable with probability distribution Ψ, and let τ be an uncertain variable with regular uncertainty distribution Υ. Show that
\[
E[\eta\vee\tau]=\int_{\Re}\int_0^1\left(y\vee\Upsilon^{-1}(\alpha)\right)\mathrm{d}\alpha\,\mathrm{d}\Psi(y) \tag{A.78}
\]
and
\[
E[\eta\wedge\tau]=\int_{\Re}\int_0^1\left(y\wedge\Upsilon^{-1}(\alpha)\right)\mathrm{d}\alpha\,\mathrm{d}\Psi(y). \tag{A.79}
\]
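A numeric version of (A.78), with η assumed standard normal and τ ~ L(0, 1) (so Υ⁻¹(α) = α); both distribution choices and the integration grids are illustrative:

```python
# Numeric evaluation of (A.78): E[eta ∨ tau] for eta ~ N(0,1) (random)
# and tau ~ L(0, 1) (uncertain).
import numpy as np
from scipy.stats import norm

alphas = np.linspace(0.001, 0.999, 999)   # inner integral over alpha
ys = np.linspace(-6, 6, 2001)             # outer integral over y

inner = np.array([np.mean(np.maximum(y, alphas)) for y in ys])
expected = np.trapz(inner * norm.pdf(ys), ys)   # dPsi(y) = pdf(y) dy
print(expected)
```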

Theorem A.20 (Liu [107], Linearity of Expected Value Operator) Assume


η1 and η2 are random variables (not necessarily independent), τ1 and τ2 are
independent uncertain variables, and f1 and f2 are measurable functions.
Then

E[f1 (η1 , τ1 ) + f2 (η2 , τ2 )] = E[f1 (η1 , τ1 )] + E[f2 (η2 , τ2 )]. (A.80)

Proof: Since τ1 and τ2 are independent uncertain variables, for any real
numbers y1 and y2 , the functions f1 (y1 , τ1 ) and f2 (y2 , τ2 ) are also independent
uncertain variables. Thus

E[f1 (y1 , τ1 ) + f2 (y2 , τ2 )] = E[f1 (y1 , τ1 )] + E[f2 (y2 , τ2 )].

Let Ψ1 and Ψ2 be the probability distributions of random variables η1 and η2 , respectively. Then we have
\[
\begin{aligned}
E[f_1(\eta_1,\tau_1)+f_2(\eta_2,\tau_2)]&=\int_{\Re^2}E[f_1(y_1,\tau_1)+f_2(y_2,\tau_2)]\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\\
&=\int_{\Re^2}\left(E[f_1(y_1,\tau_1)]+E[f_2(y_2,\tau_2)]\right)\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\\
&=\int_{\Re}E[f_1(y_1,\tau_1)]\,\mathrm{d}\Psi_1(y_1)+\int_{\Re}E[f_2(y_2,\tau_2)]\,\mathrm{d}\Psi_2(y_2)\\
&=E[f_1(\eta_1,\tau_1)]+E[f_2(\eta_2,\tau_2)].
\end{aligned}
\]
The theorem is proved.

Exercise A.15: Assume η1 and η2 are random variables, and τ1 and τ2 are
independent uncertain variables. Show that

E[η1 ∨ τ1 + η2 ∧ τ2 ] = E[η1 ∨ τ1 ] + E[η2 ∧ τ2 ]. (A.81)



Theorem A.21 (Liu [106]) Let ξ be an uncertain random variable, and let f be a nonnegative even function. If f is decreasing on (−∞, 0] and increasing on [0, ∞), then for any given number t > 0, we have
\[
\mathrm{Ch}\{|\xi|\ge t\}\le\frac{E[f(\xi)]}{f(t)}. \tag{A.82}
\]
Proof: It is clear that Ch{|ξ| ≥ f −1(r)} is a monotone decreasing function of r on [0, ∞). It follows from the nonnegativity of f (ξ) that
\[
\begin{aligned}
E[f(\xi)]&=\int_0^{+\infty}\mathrm{Ch}\{f(\xi)\ge x\}\,\mathrm{d}x=\int_0^{+\infty}\mathrm{Ch}\{|\xi|\ge f^{-1}(x)\}\,\mathrm{d}x\\
&\ge\int_0^{f(t)}\mathrm{Ch}\{|\xi|\ge f^{-1}(x)\}\,\mathrm{d}x\ge\int_0^{f(t)}\mathrm{Ch}\{|\xi|\ge f^{-1}(f(t))\}\,\mathrm{d}x\\
&=\int_0^{f(t)}\mathrm{Ch}\{|\xi|\ge t\}\,\mathrm{d}x=f(t)\cdot\mathrm{Ch}\{|\xi|\ge t\}
\end{aligned}
\]
which proves the inequality.


Theorem A.22 (Liu [106], Markov Inequality) Let ξ be an uncertain random variable. Then for any given numbers t > 0 and p > 0, we have
\[
\mathrm{Ch}\{|\xi|\ge t\}\le\frac{E[|\xi|^p]}{t^p}. \tag{A.83}
\]
Proof: It is a special case of Theorem A.21 when f (x) = |x|p .

A.6 Variance
Definition A.6 (Liu [106]) Let ξ be an uncertain random variable with finite expected value e. Then the variance of ξ is
\[
V[\xi]=E[(\xi-e)^2]. \tag{A.84}
\]
Since (ξ − e)2 is a nonnegative uncertain random variable, we also have
\[
V[\xi]=\int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,\mathrm{d}x. \tag{A.85}
\]

Theorem A.23 (Liu [106]) If ξ is an uncertain random variable with finite


expected value, a and b are real numbers, then
V [aξ + b] = a2 V [ξ]. (A.86)
Proof: Let e be the expected value of ξ. Then aξ + b has an expected value
ae + b. Thus the variance is
V [aξ + b] = E[(aξ + b − (ae + b))2 ] = E[a2 (ξ − e)2 ] = a2 V [ξ].
The theorem is verified.

Theorem A.24 (Liu [106]) Let ξ be an uncertain random variable with ex-
pected value e. Then V [ξ] = 0 if and only if Ch{ξ = e} = 1.

Proof: We first assume V [ξ] = 0. It follows from the equation (A.85) that
\[
\int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,\mathrm{d}x=0
\]
which implies Ch{(ξ − e)2 ≥ x} = 0 for any x > 0. Hence we have
\[
\mathrm{Ch}\{(\xi-e)^2=0\}=1.
\]
That is, Ch{ξ = e} = 1. Conversely, assume Ch{ξ = e} = 1. Then we immediately have Ch{(ξ − e)2 = 0} = 1 and Ch{(ξ − e)2 ≥ x} = 0 for any x > 0. Thus
\[
V[\xi]=\int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,\mathrm{d}x=0.
\]
The theorem is proved.

Theorem A.25 (Liu [106], Chebyshev Inequality) Let ξ be an uncertain random variable whose variance exists. Then for any given number t > 0, we have
\[
\mathrm{Ch}\{|\xi-E[\xi]|\ge t\}\le\frac{V[\xi]}{t^2}. \tag{A.87}
\]
Proof: It is a special case of Theorem A.21 when the uncertain random variable ξ is replaced with ξ − E[ξ], and f (x) = x2 .

How to Obtain Variance from Distributions?
Let ξ be an uncertain random variable with expected value e. If we only know its chance distribution Φ, then the variance
\[
\begin{aligned}
V[\xi]&=\int_0^{+\infty}\mathrm{Ch}\{(\xi-e)^2\ge x\}\,\mathrm{d}x\\
&=\int_0^{+\infty}\mathrm{Ch}\{(\xi\ge e+\sqrt{x})\cup(\xi\le e-\sqrt{x})\}\,\mathrm{d}x\\
&\le\int_0^{+\infty}\left(\mathrm{Ch}\{\xi\ge e+\sqrt{x}\}+\mathrm{Ch}\{\xi\le e-\sqrt{x}\}\right)\mathrm{d}x\\
&=\int_0^{+\infty}\left(1-\Phi(e+\sqrt{x})+\Phi(e-\sqrt{x})\right)\mathrm{d}x.
\end{aligned}
\]
Thus we have the following stipulation.



Stipulation A.1 (Guo-Wang [52]) Let ξ be an uncertain random variable with chance distribution Φ and finite expected value e. Then
\[
V[\xi]=\int_0^{+\infty}\left(1-\Phi(e+\sqrt{x})+\Phi(e-\sqrt{x})\right)\mathrm{d}x. \tag{A.88}
\]

Theorem A.26 (Sheng-Yao [138]) Let ξ be an uncertain random variable with chance distribution Φ and finite expected value e. Then
\[
V[\xi]=\int_{-\infty}^{+\infty}(x-e)^2\,\mathrm{d}\Phi(x). \tag{A.89}
\]
Proof: This theorem is based on Stipulation A.1 that says the variance of ξ is
\[
V[\xi]=\int_0^{+\infty}(1-\Phi(e+\sqrt{y}))\,\mathrm{d}y+\int_0^{+\infty}\Phi(e-\sqrt{y})\,\mathrm{d}y.
\]
Substituting e + √y with x and y with (x − e)2 , the change of variables and integration by parts produce
\[
\int_0^{+\infty}(1-\Phi(e+\sqrt{y}))\,\mathrm{d}y=\int_e^{+\infty}(1-\Phi(x))\,\mathrm{d}(x-e)^2=\int_e^{+\infty}(x-e)^2\,\mathrm{d}\Phi(x).
\]
Similarly, substituting e − √y with x and y with (x − e)2 , we obtain
\[
\int_0^{+\infty}\Phi(e-\sqrt{y})\,\mathrm{d}y=\int_e^{-\infty}\Phi(x)\,\mathrm{d}(x-e)^2=\int_{-\infty}^{e}(x-e)^2\,\mathrm{d}\Phi(x).
\]
It follows that the variance is
\[
V[\xi]=\int_e^{+\infty}(x-e)^2\,\mathrm{d}\Phi(x)+\int_{-\infty}^{e}(x-e)^2\,\mathrm{d}\Phi(x)=\int_{-\infty}^{+\infty}(x-e)^2\,\mathrm{d}\Phi(x).
\]
The theorem is verified.

Theorem A.27 (Sheng-Yao [138]) Let ξ be an uncertain random variable with regular chance distribution Φ and finite expected value e. Then
\[
V[\xi]=\int_0^1(\Phi^{-1}(\alpha)-e)^2\,\mathrm{d}\alpha. \tag{A.90}
\]
Proof: Substituting Φ(x) with α and x with Φ−1(α), it follows from the change of variables of integral and Theorem A.26 that the variance is
\[
V[\xi]=\int_{-\infty}^{+\infty}(x-e)^2\,\mathrm{d}\Phi(x)=\int_0^1(\Phi^{-1}(\alpha)-e)^2\,\mathrm{d}\alpha.
\]
The theorem is verified.
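For instance, applying (A.74) and (A.90) to a linear uncertain variable L(a, b), a special uncertain random variable, should give e = (a + b)/2 and variance (b − a)²/12; a quick numeric confirmation with invented endpoints:

```python
# Check of (A.74) and (A.90) for tau ~ L(a, b): the quantile integrals
# should give e = (a + b)/2 and V = (b - a)^2 / 12.
import numpy as np

a, b = 2.0, 5.0
alphas = np.linspace(0.0005, 0.9995, 2001)
q = a + (b - a) * alphas               # Phi^{-1}(alpha) for L(a, b)
e = np.trapz(q, alphas)                # expected value via (A.74), ~ 3.5
v = np.trapz((q - e) ** 2, alphas)     # variance via (A.90), ~ 0.75
print(e, v, (b - a) ** 2 / 12)
```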



Theorem A.28 (Guo-Wang [52]) Let η1 , η2 , · · · , ηm be independent random variables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. Assume f (η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ) is strictly increasing with respect to τ1 , τ2 , · · · , τk and strictly decreasing with respect to τk+1 , τk+2 , · · · , τn . Then
\[
\xi=f(\eta_1,\eta_2,\cdots,\eta_m,\tau_1,\tau_2,\cdots,\tau_n) \tag{A.91}
\]
has a variance
\[
V[\xi]=\int_{\Re^m}\int_0^{+\infty}\left(1-F(e+\sqrt{x};y_1,y_2,\cdots,y_m)+F(e-\sqrt{x};y_1,y_2,\cdots,y_m)\right)\mathrm{d}x\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m)
\]
where F (x; y1 , y2 , · · · , ym ) is the root α of the equation
\[
f(y_1,y_2,\cdots,y_m,\Upsilon_1^{-1}(\alpha),\cdots,\Upsilon_k^{-1}(\alpha),\Upsilon_{k+1}^{-1}(1-\alpha),\cdots,\Upsilon_n^{-1}(1-\alpha))=x.
\]
Proof: It follows from the operational law of uncertain random variables that ξ has a chance distribution
\[
\Phi(x)=\int_{\Re^m}F(x;y_1,y_2,\cdots,y_m)\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2)\cdots\mathrm{d}\Psi_m(y_m)
\]
where F (x; y1 , y2 , · · · , ym ) is the uncertainty distribution of the uncertain variable f (y1 , y2 , · · · , ym , τ1 , τ2 , · · · , τn ). Thus the theorem follows from Stipulation A.1 immediately.

Exercise A.16: Let η be a random variable with probability distribution Ψ, and let τ be an uncertain variable with uncertainty distribution Υ. Show that the sum
\[
\xi=\eta+\tau \tag{A.92}
\]
has a variance
\[
V[\xi]=\int_{-\infty}^{+\infty}\int_0^{+\infty}\left(1-\Upsilon(e+\sqrt{x}-y)+\Upsilon(e-\sqrt{x}-y)\right)\mathrm{d}x\,\mathrm{d}\Psi(y). \tag{A.93}
\]

A.7 Law of Large Numbers


Theorem A.29 (Yao-Gao [182], Law of Large Numbers) Let η1 , η2 , · · · be iid random variables with a common probability distribution Ψ, and let τ1 , τ2 , · · · be iid uncertain variables. Assume f is a strictly monotone function. Then
\[
S_n=f(\eta_1,\tau_1)+f(\eta_2,\tau_2)+\cdots+f(\eta_n,\tau_n) \tag{A.94}
\]
is a sequence of uncertain random variables and
\[
\frac{S_n}{n}\to\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y) \tag{A.95}
\]
in the sense of convergence in distribution as n → ∞.

Proof: According to the definition of convergence in distribution, it suffices to prove
\[
\lim_{n\to\infty}\mathrm{Ch}\left\{\frac{S_n}{n}\le\int_{-\infty}^{+\infty}f(y,z)\,\mathrm{d}\Psi(y)\right\}=\mathcal{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le\int_{-\infty}^{+\infty}f(y,z)\,\mathrm{d}\Psi(y)\right\} \tag{A.96}
\]
for any real number z with
\[
\lim_{w\to z}\mathcal{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le\int_{-\infty}^{+\infty}f(y,w)\,\mathrm{d}\Psi(y)\right\}=\mathcal{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le\int_{-\infty}^{+\infty}f(y,z)\,\mathrm{d}\Psi(y)\right\}.
\]
The argument breaks into two cases. Case 1: Assume f (y, z) is strictly increasing with respect to z. Let Υ denote the common uncertainty distribution of τ1 , τ2 , · · · It is clear that
\[
\mathcal{M}\{f(y,\tau_1)\le f(y,z)\}=\Upsilon(z)
\]
for any real numbers y and z. Thus we have
\[
\mathcal{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le\int_{-\infty}^{+\infty}f(y,z)\,\mathrm{d}\Psi(y)\right\}=\Upsilon(z). \tag{A.97}
\]
In addition, since f (η1 , z), f (η2 , z), · · · are a sequence of iid random variables, the law of large numbers for random variables tells us that
\[
\frac{f(\eta_1,z)+f(\eta_2,z)+\cdots+f(\eta_n,z)}{n}\to\int_{-\infty}^{+\infty}f(y,z)\,\mathrm{d}\Psi(y),\quad\text{a.s.}
\]
as n → ∞. Thus
\[
\lim_{n\to\infty}\mathrm{Ch}\left\{\frac{S_n}{n}\le\int_{-\infty}^{+\infty}f(y,z)\,\mathrm{d}\Psi(y)\right\}=\Upsilon(z). \tag{A.98}
\]
It follows from (A.97) and (A.98) that (A.96) holds. Case 2: Assume f (y, z) is strictly decreasing with respect to z. Then −f (y, z) is strictly increasing with respect to z. By using Case 1 we obtain
\[
\lim_{n\to\infty}\mathrm{Ch}\left\{-\frac{S_n}{n}<-z\right\}=\mathcal{M}\left\{-\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)<-z\right\}.
\]
That is,
\[
\lim_{n\to\infty}\mathrm{Ch}\left\{\frac{S_n}{n}>z\right\}=\mathcal{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)>z\right\}.
\]
It follows from the duality property that
\[
\lim_{n\to\infty}\mathrm{Ch}\left\{\frac{S_n}{n}\le z\right\}=\mathcal{M}\left\{\int_{-\infty}^{+\infty}f(y,\tau_1)\,\mathrm{d}\Psi(y)\le z\right\}.
\]
The theorem is thus proved.

Exercise A.17: Let η1 , η2 , · · · be iid random variables, and let τ1 , τ2 , · · · be


iid uncertain variables. Define

Sn = (η1 + τ1 ) + (η2 + τ2 ) + · · · + (ηn + τn ). (A.99)

Show that
\[
\frac{S_n}{n}\to E[\eta_1]+\tau_1 \tag{A.100}
\]
in the sense of convergence in distribution as n → ∞.

Exercise A.18: Let η1 , η2 , · · · be iid positive random variables, and let


τ1 , τ2 , · · · be iid positive uncertain variables. Define

Sn = η1 τ1 + η2 τ2 + · · · + ηn τn . (A.101)

Show that
\[
\frac{S_n}{n}\to E[\eta_1]\,\tau_1 \tag{A.102}
\]
in the sense of convergence in distribution as n → ∞.

A.8 Uncertain Random Programming


Assume that x is a decision vector, and ξ is an uncertain random vector.
Since an uncertain random objective function f (x, ξ) cannot be directly min-
imized, we may minimize its expected value, i.e.,

\[
\min_{x}\ E[f(x,\xi)]. \tag{A.103}
\]

Since the uncertain random constraints gj (x, ξ) ≤ 0, j = 1, 2, · · · , p do not


make a crisp feasible set, it is naturally desired that the uncertain random
constraints hold with confidence levels α1 , α2 , · · · , αp . Then we have a set of
chance constraints,

Ch{gj (x, ξ) ≤ 0} ≥ αj , j = 1, 2, · · · , p. (A.104)



In order to obtain a decision with minimum expected objective value subject to a set of chance constraints, Liu [107] proposed the following uncertain random programming model,
\[
\begin{cases}
\displaystyle\min_{x}\ E[f(x,\xi)]\\
\text{subject to:}\\
\quad\mathrm{Ch}\{g_j(x,\xi)\le 0\}\ge\alpha_j,\quad j=1,2,\cdots,p.
\end{cases} \tag{A.105}
\]

Definition A.7 (Liu [107]) A vector x is called a feasible solution to the


uncertain random programming model (A.105) if

Ch{gj (x, ξ) ≤ 0} ≥ αj (A.106)

for j = 1, 2, · · · , p.

Definition A.8 (Liu [107]) A feasible solution x∗ is called an optimal solu-


tion to the uncertain random programming model (A.105) if

E[f (x∗ , ξ)] ≤ E[f (x, ξ)] (A.107)

for any feasible solution x.

Theorem A.30 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 ,
· · · , Υn , respectively. If f (x, η1 , · · · , ηm , τ1 , · · · , τn ) is a strictly increasing
function or a strictly decreasing function with respect to τ1 , · · · , τn , then the
expected function
E[f (x, η1 , · · · , ηm , τ1 , · · · , τn )] (A.108)
is equal to
Z Z 1
f (x, y1 , · · · , ym , Υ−1 −1
1 (α), · · · , Υn (α))dαdΨ1 (y1 ) · · · dΨm (ym ).
<m 0

Proof: Since f (x, y1 , · · · , ym , τ1 , · · · , τn ) is a strictly increasing function or


a strictly decreasing function with respect to τ1 , · · · , τn , we have
E[f(x, y1, · · · , ym, τ1, · · · , τn)] = ∫_0^1 f(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) dα.

It follows from Theorem A.19 that the result holds.

Remark A.8: If f (x, η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with re-


spect to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn , then
the integrand in the formula of expected value E[f (x, η1 , · · · , ηm , τ1 , · · · , τn )]
should be replaced with

f(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · · , Υn^{−1}(1 − α)).
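To make this expected-value formula concrete, here is a minimal numerical sketch of Theorem A.30 under purely illustrative assumptions of my own: m = n = 1, η uniform on [0, 1], τ a linear uncertain variable L(1, 3) with Υ^{−1}(α) = 1 + 2α, and f(x, y, t) = x + y·t, which is strictly increasing in t for y ≥ 0. The outer integral over Ψ is estimated by Monte Carlo and the inner integral over α by the midpoint rule; none of the names below come from the book.

```python
import random

def upsilon_inv(alpha, a=1.0, b=3.0):
    # inverse uncertainty distribution of the linear uncertain variable L(a, b)
    return a + (b - a) * alpha

def f(x, y, t):
    # illustrative objective, strictly increasing in t when y >= 0
    return x + y * t

def expected_value(x, samples=10_000, grid=100):
    """Estimate E[f(x, eta, tau)] via the formula of Theorem A.30."""
    total = 0.0
    for _ in range(samples):        # Monte Carlo over Psi (uniform on [0, 1])
        y = random.random()
        # midpoint rule for the inner integral over alpha in (0, 1)
        inner = sum(f(x, y, upsilon_inv((k + 0.5) / grid))
                    for k in range(grid)) / grid
        total += inner
    return total / samples

# With these choices E[f] = x + (a + b)/4 = x + 1 exactly, so the estimate
# below should be close to 1.
print(expected_value(0.0))
```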

Theorem A.31 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 ,
· · · , Υn , respectively. If gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) is a strictly increasing
function with respect to τ1 , · · · , τn , then the chance constraint

Ch{gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) ≤ 0} ≥ αj (A.109)

holds if and only if

∫_{ℝ^m} Gj(x, y1, · · · , ym) dΨ1(y1) · · · dΨm(ym) ≥ αj (A.110)

where Gj (x, y1 , · · · , ym ) is the root α of the equation

gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) = 0. (A.111)

Proof: It follows from Theorem A.6 that the left side of the chance constraint
(A.109) is

Ch{gj(x, η1, · · · , ηm, τ1, · · · , τn) ≤ 0}
= ∫_0^1 Pr{ω ∈ Ω | M{gj(x, η1(ω), · · · , ηm(ω), τ1, · · · , τn) ≤ 0} ≥ r} dr
= ∫_{ℝ^m} M{gj(x, y1, · · · , ym, τ1, · · · , τn) ≤ 0} dΨ1(y1) · · · dΨm(ym)
= ∫_{ℝ^m} Gj(x, y1, · · · , ym) dΨ1(y1) · · · dΨm(ym)

where Gj (x, y1 , · · · , ym ) = M{gj (x, y1 , · · · , ym , τ1 , · · · , τn ) ≤ 0} is the root


α of the equation (A.111). Hence the chance constraint (A.109) holds if and
only if (A.110) is true. The theorem is verified.

Remark A.9: Sometimes, the equation (A.111) may not have a root. In
this case, if
gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) < 0 (A.112)
for all α, then we set the root α = 1; and if

gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) > 0 (A.113)

for all α, then we set the root α = 0.

Remark A.10: The root α may be estimated by the bisection method because gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) is a strictly increasing function with respect to α.
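A minimal sketch of this bisection, with the boundary conventions of Remark A.9 built in; the function g below stands for the map α ↦ gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) and is assumed strictly increasing.

```python
def root_alpha(g, tol=1e-8):
    """Bisection for the root alpha of g(alpha) = 0 on (0, 1), where g is
    strictly increasing.  Following Remark A.9, the root is taken to be 1
    if g < 0 throughout and 0 if g > 0 throughout."""
    eps = 1e-12
    if g(1.0 - eps) < 0:
        return 1.0            # g < 0 for all alpha
    if g(eps) > 0:
        return 0.0            # g > 0 for all alpha
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: with the linear uncertain variable L(1, 3),
# g(alpha) = Upsilon_inv(alpha) - 2 = 1 + 2*alpha - 2 has root alpha = 0.5.
print(root_alpha(lambda a: 1 + 2 * a - 2))
```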

Remark A.11: If gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) is strictly increasing with


respect to τ1 , · · · , τk and strictly decreasing with respect to τk+1 , · · · , τn ,
then the equation (A.111) becomes
gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υk^{−1}(α), Υ_{k+1}^{−1}(1 − α), · · · , Υn^{−1}(1 − α)) = 0.

Theorem A.32 (Liu [107]) Let η1 , η2 , · · · , ηm be independent random vari-


ables with probability distributions Ψ1 , Ψ2 , · · · , Ψm , and let τ1 , τ2 , · · · , τn be
independent uncertain variables with regular uncertainty distributions Υ1 , Υ2 ,
· · · , Υn , respectively. If the objective function f (x, η1 , · · · , ηm , τ1 , · · · , τn )
and constraint functions gj (x, η1 , · · · , ηm , τ1 , · · · , τn ) are strictly increasing
functions with respect to τ1 , · · · , τn for j = 1, 2, · · · , p, then the uncertain
random programming
min_x E[f(x, η1, · · · , ηm, τ1, · · · , τn)]
subject to:
Ch{gj(x, η1, · · · , ηm, τ1, · · · , τn) ≤ 0} ≥ αj, j = 1, 2, · · · , p

is equivalent to the crisp mathematical programming

min_x ∫_{ℝ^m} ∫_0^1 f(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) dα dΨ1(y1) · · · dΨm(ym)
subject to:
∫_{ℝ^m} Gj(x, y1, · · · , ym) dΨ1(y1) · · · dΨm(ym) ≥ αj, j = 1, 2, · · · , p

where Gj (x, y1 , · · · , ym ) are the roots α of the equations


gj(x, y1, · · · , ym, Υ1^{−1}(α), · · · , Υn^{−1}(α)) = 0 (A.114)
for j = 1, 2, · · · , p, respectively.
Proof: It follows from Theorems A.30 and A.31 immediately.
After an uncertain random programming is converted into a crisp math-
ematical programming, we may solve it by any classical numerical methods
(e.g. iterative method) or intelligent algorithms (e.g. genetic algorithm).
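As a toy continuation of the sketches above (hypothetical and unconstrained), once the model has been reduced to a crisp program any classical method applies; here a coarse grid search minimizes the crisp objective from the expected-value sketch over x ∈ [−1, 1].

```python
# reuses expected_value() from the sketch after Theorem A.30
best_x = min((k / 10 for k in range(-10, 11)),
             key=lambda x: expected_value(x, samples=1_000))
print(best_x)   # close to -1, since E[f(x, eta, tau)] = x + 1 here
```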

A.9 Uncertain Random Risk Analysis


The study of uncertain random risk analysis was started by Liu-Ralescu [108]
with the concept of risk index.
Definition A.9 (Liu-Ralescu [108]) Assume that a system contains uncer-
tain random factors ξ1 , ξ2 , · · ·, ξn , and has a loss function f . Then the risk
index is the chance measure that the system is loss-positive, i.e.,
Risk = Ch{f (ξ1 , ξ2 , · · · , ξn ) > 0}. (A.115)

If all uncertain random factors degenerate to random ones, then the risk
index is the probability measure that the system is loss-positive (Roy [131]).
If all uncertain random factors degenerate to uncertain ones, then the risk
index is the uncertain measure that the system is loss-positive (Liu [83]).

Theorem A.33 Assume that a system contains uncertain random factors


ξ1 , ξ2 , · · · , ξn , and has a loss function f . If f (ξ1 , ξ2 , · · · , ξn ) has a chance
distribution Φ, then the risk index is

Risk = 1 − Φ(0). (A.116)

Proof: It follows from the definition of risk index and self-duality of chance
measure that
Risk = Ch{f (ξ1 , ξ2 , · · · , ξn ) > 0}
= 1 − Ch{f (ξ1 , ξ2 , · · · , ξn ) ≤ 0}
= 1 − Φ(0).
The theorem is proved.

Theorem A.34 (Liu-Ralescu [108], Risk Index Theorem) Assume a system


contains independent random variables η1 , η2 , · · · , ηm with probability distri-
butions Ψ1 , Ψ2 , · · ·, Ψm and independent uncertain variables τ1 , τ2 , · · ·, τn with
regular uncertainty distributions Υ1 , Υ2 , · · ·, Υn , respectively. If the loss func-
tion f (η1 , · · ·, ηm , τ1 , · · ·, τn ) is strictly increasing with respect to τ1 , · · · , τk
and strictly decreasing with respect to τk+1 , · · · , τn , then the risk index is
Risk = ∫_{ℝ^m} G(y1, · · · , ym) dΨ1(y1) · · · dΨm(ym) (A.117)

where G(y1 , · · · , ym ) is the root α of the equation

f(y1, · · · , ym, Υ1^{−1}(1 − α), · · · , Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · · , Υn^{−1}(α)) = 0.

Proof: It follows from the definition of risk index and Theorem A.6 that
Risk = Ch{f(η1, · · · , ηm, τ1, · · · , τn) > 0}
= ∫_0^1 Pr{ω ∈ Ω | M{f(η1(ω), · · · , ηm(ω), τ1, · · · , τn) > 0} ≥ r} dr
= ∫_{ℝ^m} M{f(y1, · · · , ym, τ1, · · · , τn) > 0} dΨ1(y1) · · · dΨm(ym)
= ∫_{ℝ^m} G(y1, · · · , ym) dΨ1(y1) · · · dΨm(ym)

where G(y1 , · · · , ym ) = M{f (y1 , · · · , ym , τ1 , · · · , τn ) > 0} is the root α of the


equation

f(y1, · · · , ym, Υ1^{−1}(1 − α), · · · , Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · · , Υn^{−1}(α)) = 0.

The theorem is thus verified.

Remark A.12: Sometimes, the equation may not have a root. In this case,
if

f(y1, · · · , ym, Υ1^{−1}(1 − α), · · · , Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · · , Υn^{−1}(α)) < 0

for all α, then we set the root α = 0; and if

f(y1, · · · , ym, Υ1^{−1}(1 − α), · · · , Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · · , Υn^{−1}(α)) > 0

for all α, then we set the root α = 1.

Remark A.13: The root α may be estimated by the bisection method because f(y1, · · · , ym, Υ1^{−1}(1 − α), · · · , Υk^{−1}(1 − α), Υ_{k+1}^{−1}(α), · · · , Υn^{−1}(α)) is a strictly decreasing function with respect to α.

Exercise A.19: (Series System) Consider a series system in which there are
m elements whose lifetimes are independent random variables η1 , η2 , · · · , ηm
with continuous probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements
whose lifetimes are independent uncertain variables τ1 , τ2 , · · · , τn with con-
tinuous uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is
understood as the case that the system fails before the time T , then the loss
function is

f = T − η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (A.118)

Show that the risk index is

Risk = a + b − ab (A.119)

where

a = 1 − (1 − Ψ1 (T ))(1 − Ψ2 (T )) · · · (1 − Ψm (T )), (A.120)

b = Υ1 (T ) ∨ Υ2 (T ) ∨ · · · ∨ Υn (T ). (A.121)
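A small numeric illustration of this risk index with made-up numbers: two random elements with Ψ1(T) = 0.1 and Ψ2(T) = 0.2, and two uncertain elements with Υ1(T) = 0.3 and Υ2(T) = 0.15.

```python
psi = [0.1, 0.2]        # Psi_i(T) for the random elements (illustrative)
upsilon = [0.3, 0.15]   # Upsilon_j(T) for the uncertain elements

a = 1.0
for p in psi:
    a *= 1.0 - p
a = 1.0 - a             # a = 1 - (1 - 0.1)(1 - 0.2) = 0.28
b = max(upsilon)        # b = 0.3

print(a + b - a * b)    # Risk = 0.28 + 0.3 - 0.084 = 0.496
```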

Exercise A.20: (Parallel System) Consider a parallel system in which there


are m elements whose lifetimes are independent random variables η1 , η2 , · · · ,
ηm with continuous probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements
whose lifetimes are independent uncertain variables τ1 , τ2 , · · · , τn with con-
tinuous uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is
understood as the case that the system fails before the time T , then the loss
function is

f = T − η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (A.122)

Show that the risk index is


Risk = ab (A.123)

where
a = Ψ1 (T )Ψ2 (T ) · · · Ψm (T ), (A.124)
b = Υ1 (T ) ∧ Υ2 (T ) ∧ · · · ∧ Υn (T ). (A.125)

Exercise A.21: (k-out-of-(m + n) System) Consider a k-out-of-(m + n)


system in which there are m elements whose lifetimes are independent random
variables η1 , η2 , · · · , ηm with probability distributions Ψ1 , Ψ2 , · · · , Ψm and n
elements whose lifetimes are independent uncertain variables τ1 , τ2 , · · · , τn
with regular uncertainty distributions Υ1 , Υ2 , · · · , Υn , respectively. If the
loss is understood as the case that the system fails before the time T , then
the loss function is

f = T − k-max[η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ]. (A.126)

Show that the risk index is


Risk = ∫_{ℝ^m} G(y1, y2, · · · , ym) dΨ1(y1) dΨ2(y2) · · · dΨm(ym) (A.127)

where G(y1 , y2 , · · · , ym ) is the root α of the equation

k-max[y1, y2, · · · , ym, Υ1^{−1}(α), Υ2^{−1}(α), · · · , Υn^{−1}(α)] = T. (A.128)

Exercise A.22: (Standby System) Consider a standby system in which


there are m elements whose lifetimes are independent random variables η1 , η2 ,
· · · , ηm with probability distributions Ψ1 , Ψ2 , · · · , Ψm and n elements whose
lifetimes are independent uncertain variables τ1 , τ2 , · · · , τn with regular un-
certainty distributions Υ1 , Υ2 , · · · , Υn , respectively. If the loss is understood
as the case that the system fails before the time T , then the loss function is

f = T − (η1 + η2 + · · · + ηm + τ1 + τ2 + · · · + τn ). (A.129)

Show that the risk index is


Risk = ∫_{ℝ^m} G(y1, y2, · · · , ym) dΨ1(y1) dΨ2(y2) · · · dΨm(ym) (A.130)

where G(y1 , y2 , · · · , ym ) is the root α of the equation

Υ1^{−1}(α) + Υ2^{−1}(α) + · · · + Υn^{−1}(α) = T − (y1 + y2 + · · · + ym). (A.131)

Remark A.14: As a substitute of risk index, Liu-Ralescu [110] suggested a


concept of value-at-risk,

VaR(α) = sup{x | Ch{f (ξ1 , ξ2 , · · · , ξn ) ≥ x} ≥ α}. (A.132)



Note that VaR(α) represents the maximum possible loss when α percent
of the right tail distribution is ignored. In other words, the loss will ex-
ceed VaR(α) with chance measure α. If the chance distribution Φ(x) of
f (ξ1 , ξ2 , · · · , ξn ) is continuous, then

VaR(α) = sup {x | Φ(x) ≤ 1 − α} . (A.133)

If its inverse chance distribution Φ−1 (α) exists, then

VaR(α) = Φ−1 (1 − α). (A.134)

It is also easy to show that VaR(α) is a monotone decreasing function with


respect to α. When the uncertain random variables degenerate to random
variables, the value-at-risk becomes the one in Morgan [115]. When the
uncertain random variables degenerate to uncertain variables, the value-at-
risk becomes the one in Peng [121].
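For instance (illustrative numbers only), if the loss has the linear chance distribution Φ(x) = x/10 on [0, 10], then Φ^{−1}(α) = 10α and the value-at-risk is immediate.

```python
def value_at_risk(alpha):
    # VaR(alpha) = Phi^{-1}(1 - alpha) for the toy distribution Phi(x) = x/10
    return 10 * (1 - alpha)

print(value_at_risk(0.1))  # 9.0: the loss exceeds 9 with chance measure 0.1
```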

Remark A.15: Liu-Ralescu [112] proposed a concept of expected loss that


is the expected value of the loss f (ξ1 , ξ2 , · · · , ξn ) given f (ξ1 , ξ2 , · · · , ξn ) > 0,
i.e.,
L = ∫_0^{+∞} Ch{f(ξ1, ξ2, · · · , ξn) ≥ x} dx. (A.135)
If Φ(x) is the chance distribution of the loss f(ξ1, ξ2, · · · , ξn), then we immediately have
L = ∫_0^{+∞} (1 − Φ(x)) dx. (A.136)

If its inverse chance distribution Φ−1 (α) exists, then the expected loss is
L = ∫_0^1 (Φ^{−1}(α))^+ dα. (A.137)

A.10 Uncertain Random Reliability Analysis


The study of uncertain random reliability analysis was started by Wen-Kang
[155] with the concept of reliability index.

Definition A.10 (Wen-Kang [155]) Assume a Boolean system has uncer-


tain random elements ξ1 , ξ2 , · · · , ξn and a structure function f . Then the
reliability index is the chance measure that the system is working, i.e.,

Reliability = Ch{f (ξ1 , ξ2 , · · · , ξn ) = 1}. (A.138)

If all uncertain random elements degenerate to random ones, then the


reliability index is the probability measure that the system is working. If all
uncertain random elements degenerate to uncertain ones, then the reliability
index (Liu [83]) is the uncertain measure that the system is working.

Theorem A.35 (Wen-Kang [155], Reliability Index Theorem) Assume that


a system has a structure function f and contains independent random ele-
ments η1 , η2 , · · · , ηm with reliabilities a1 , a2 , · · · , am , and independent uncer-
tain elements τ1 , τ2 , · · · , τn with reliabilities b1 , b2 , · · · , bn , respectively. Then
the reliability index is
Reliability = Σ_{(x1,··· ,xm)∈{0,1}^m} ( ∏_{i=1}^m µi(xi) ) f*(x1, · · · , xm) (A.139)

where

f*(x1, · · · , xm) = sup_{f(x1,··· ,xm,y1,··· ,yn)=1} min_{1≤j≤n} νj(yj),
    if sup_{f(x1,··· ,xm,y1,··· ,yn)=1} min_{1≤j≤n} νj(yj) < 0.5;
f*(x1, · · · , xm) = 1 − sup_{f(x1,··· ,xm,y1,··· ,yn)=0} min_{1≤j≤n} νj(yj),
    if sup_{f(x1,··· ,xm,y1,··· ,yn)=1} min_{1≤j≤n} νj(yj) ≥ 0.5,    (A.140)

µi(xi) = ai if xi = 1, and 1 − ai if xi = 0 (i = 1, 2, · · · , m), (A.141)

νj(yj) = bj if yj = 1, and 1 − bj if yj = 0 (j = 1, 2, · · · , n). (A.142)

Proof: It follows from Definition A.10 and Theorem A.15 immediately.
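The formula of Theorem A.35 can be evaluated directly by enumerating the 2^m states of the random elements and, inside f*, the 2^n states of the uncertain elements. The sketch below does exactly that for a generic Boolean structure function; the encoding and names are my own, and the series system of Exercise A.23 below is used as a check.

```python
from itertools import product

def f_star(f, x, b):
    """Truth value of the uncertain part given random states x, as in (A.140)."""
    nu = lambda ys: min(bj if yj == 1 else 1.0 - bj
                        for yj, bj in zip(ys, b))
    sup1 = max((nu(ys) for ys in product((0, 1), repeat=len(b))
                if f(*x, *ys) == 1), default=0.0)
    if sup1 < 0.5:
        return sup1
    sup0 = max((nu(ys) for ys in product((0, 1), repeat=len(b))
                if f(*x, *ys) == 0), default=0.0)
    return 1.0 - sup0

def reliability(f, a, b):
    """Reliability index of Theorem A.35 by exhaustive enumeration."""
    total = 0.0
    for x in product((0, 1), repeat=len(a)):
        weight = 1.0                  # product of mu_i(x_i), see (A.141)
        for xi, ai in zip(x, a):
            weight *= ai if xi == 1 else 1.0 - ai
        total += weight * f_star(f, x, b)
    return total

# Series system (Exercise A.23): the structure function is the minimum.
series = lambda *states: min(states)
print(reliability(series, [0.9, 0.8], [0.7, 0.6]))
# 0.432 = a1 * a2 * (b1 ^ b2), matching the closed form of Exercise A.23
```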

Exercise A.23: (Series System) Consider a series system in which there are
m independent random elements η1 , η2 , · · ·, ηm with reliabilities a1 , a2 , · · ·, am ,
and n independent uncertain elements τ1 , τ2 , · · ·, τn with reliabilities b1 , b2 , · · · ,
bn , respectively. Note that the structure function is

f = η1 ∧ η2 ∧ · · · ∧ ηm ∧ τ1 ∧ τ2 ∧ · · · ∧ τn . (A.143)

Show that the reliability index is

Reliability = a1 a2 · · · am (b1 ∧ b2 ∧ · · · ∧ bn ). (A.144)

Exercise A.24: (Parallel System) Consider a parallel system in which


there are m independent random elements η1 , η2 , · · · , ηm with reliabilities
a1 , a2 , · · · , am , and n independent uncertain elements τ1 , τ2 , · · · , τn with re-
liabilities b1 , b2 , · · · , bn , respectively. Note that the structure function is

f = η1 ∨ η2 ∨ · · · ∨ ηm ∨ τ1 ∨ τ2 ∨ · · · ∨ τn . (A.145)

Show that the reliability index is


Reliability = 1 − (1 − a1 )(1 − a2 ) · · · (1 − am )(1 − b1 ∨ b2 ∨ · · · ∨ bn ). (A.146)

Exercise A.25: (k-out-of-(m + n) System) Consider a k-out-of-(m + n) sys-


tem in which there are m independent random elements η1 , η2 , · · · , ηm with
reliabilities a1 , a2 , · · ·, am , and n independent uncertain elements τ1 , τ2 , · · ·, τn
with reliabilities b1 , b2 , · · · , bn , respectively. Note that the structure function
is
f = k-max [η1 , η2 , · · · , ηm , τ1 , τ2 , · · · , τn ]. (A.147)
Show that the reliability index is
Reliability = Σ_{(x1,··· ,xm)∈{0,1}^m} ( ∏_{i=1}^m µi(xi) ) k-max[x1, · · · , xm, b1, · · · , bn]

where µi(xi) = ai if xi = 1, and 1 − ai if xi = 0 (i = 1, 2, · · · , m). (A.148)

A.11 Uncertain Random Graph


In classic graph theory, the edges and vertices are all deterministic: each either exists or does not. However, in practical applications, some indeterminate factors
will no doubt appear in graphs. Thus it is reasonable to assume that in a
graph some edges exist with some degrees in probability measure and others
exist with some degrees in uncertain measure. In order to model this type of
graph, Liu [93] presented a concept of uncertain random graph.
We say a graph is of order n if it has n vertices labeled by 1, 2, · · · , n. In
this section, we assume the graph is always of order n, and has a collection
of vertices,
V = {1, 2, · · · , n}. (A.149)
Let us define two collections of edges,
U = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are uncertain edges}, (A.150)
R = {(i, j) | 1 ≤ i < j ≤ n and (i, j) are random edges}. (A.151)
Note that all deterministic edges are regarded as special uncertain ones. Then
U ∪ R = {(i, j) | 1 ≤ i < j ≤ n}, which contains n(n − 1)/2 edges. We will call
T =
⎡ α11 α12 · · · α1n ⎤
⎢ α21 α22 · · · α2n ⎥
⎢  ⋮    ⋮    ⋱    ⋮  ⎥
⎣ αn1 αn2 · · · αnn ⎦    (A.152)

an uncertain random adjacency matrix if αij represent the truth values in


uncertain measure or probability measure that the edges between vertices
i and j exist, i, j = 1, 2, · · · , n, respectively. Note that αii = 0 for i =
1, 2, · · · , n, and T is a symmetric matrix, i.e., αij = αji for i, j = 1, 2, · · · , n.
[Figure A.2 shows an uncertain random graph on the vertices 1, 2, 3, 4, together with its uncertain random adjacency matrix]

⎡  0   0.8   0   0.5 ⎤
⎢ 0.8   0    1    0  ⎥
⎢  0    1    0   0.3 ⎥
⎣ 0.5   0   0.3   0  ⎦

Figure A.2: An Uncertain Random Graph

Definition A.11 (Liu [93]) Assume V is the collection of vertices, U is the


collection of uncertain edges, R is the collection of random edges, and T is
the uncertain random adjacency matrix. Then the quartette (V, U, R, T) is
said to be an uncertain random graph.

Please note that the uncertain random graph becomes a random graph
(Erdős-Rényi [29], Gilbert [51]) if the collection U of uncertain edges vanishes;
and becomes an uncertain graph (Gao-Gao [43]) if the collection R of random
edges vanishes.
In order to deal with uncertain random graph, let us introduce some
symbols. Write
X =
⎡ x11 x12 · · · x1n ⎤
⎢ x21 x22 · · · x2n ⎥
⎢  ⋮    ⋮    ⋱    ⋮  ⎥
⎣ xn1 xn2 · · · xnn ⎦    (A.153)

and

X = { X | xij = 0 or 1 if (i, j) ∈ R; xij = 0 if (i, j) ∈ U; xij = xji, i, j = 1, 2, · · · , n; xii = 0, i = 1, 2, · · · , n }. (A.154)
For each given matrix

Y =
⎡ y11 y12 · · · y1n ⎤
⎢ y21 y22 · · · y2n ⎥
⎢  ⋮    ⋮    ⋱    ⋮  ⎥
⎣ yn1 yn2 · · · ynn ⎦ ,   (A.155)

the extension class of Y is defined by

Y* = { X | xij = yij if (i, j) ∈ R; xij = 0 or 1 if (i, j) ∈ U; xij = xji, i, j = 1, 2, · · · , n; xii = 0, i = 1, 2, · · · , n }. (A.156)
Example A.5: (Liu [93], Connectivity Index) An uncertain random graph is


connected for some realizations of uncertain and random edges, and discon-
nected for some other realizations. In order to show how likely an uncertain
random graph is connected, a connectivity index of an uncertain random
graph is defined as the chance measure that the uncertain random graph is
connected. Let (V, U, R, T) be an uncertain random graph. Liu [93] proved
that the connectivity index is
ρ = Σ_{Y∈X} ( ∏_{(i,j)∈R} νij(Y) ) f*(Y) (A.157)

where

f*(Y) = sup_{X∈Y*, f(X)=1} min_{(i,j)∈U} νij(X), if sup_{X∈Y*, f(X)=1} min_{(i,j)∈U} νij(X) < 0.5;
f*(Y) = 1 − sup_{X∈Y*, f(X)=0} min_{(i,j)∈U} νij(X), if sup_{X∈Y*, f(X)=1} min_{(i,j)∈U} νij(X) ≥ 0.5,

νij(X) = αij if xij = 1, and 1 − αij if xij = 0, for (i, j) ∈ U, (A.158)

f(X) = 1 if I + X + X² + · · · + X^{n−1} > 0, and 0 otherwise, (A.159)

and X and Y* are defined by (A.154) and (A.156), respectively.
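As a sketch of how (A.157) can be evaluated on a small instance, the code below encodes the graph of Figure A.2 with 0-based vertex labels, arbitrarily treating the deterministic edge (2,3) as the single random edge (with probability 1) and the other three as uncertain edges; the encodings are illustrative rather than from the book.

```python
from itertools import product

n = 4
uncertain = {(0, 1): 0.8, (0, 3): 0.5, (2, 3): 0.3}   # alpha_ij, uncertain
random_e = {(1, 2): 1.0}                              # alpha_ij, random

def connected(edges):
    """Depth-first search connectivity test on {0, ..., n-1}."""
    adj = {v: [] for v in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def f_star(random_present):
    """Uncertain measure of connectivity given the random-edge states."""
    U = list(uncertain)
    nu = lambda states: min(uncertain[e] if s == 1 else 1 - uncertain[e]
                            for e, s in zip(U, states))
    def sup(target):
        vals = [nu(states) for states in product((0, 1), repeat=len(U))
                if connected([e for e, s in zip(U, states) if s == 1]
                             + random_present) == target]
        return max(vals, default=0.0)
    s1 = sup(True)
    return s1 if s1 < 0.5 else 1.0 - sup(False)

rho, R = 0.0, list(random_e)
for states in product((0, 1), repeat=len(R)):
    weight = 1.0
    for e, s in zip(R, states):
        weight *= random_e[e] if s == 1 else 1 - random_e[e]
    rho += weight * f_star([e for e, s in zip(R, states) if s == 1])
print(rho)   # 0.5 with these numbers
```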

Remark A.16: If the uncertain random graph becomes a random graph,


then the connectivity index is
ρ = Σ_{X∈X} ( ∏_{1≤i<j≤n} νij(X) ) f(X) (A.160)

where

X = { X | xij = 0 or 1 and xij = xji for i, j = 1, 2, · · · , n; xii = 0 for i = 1, 2, · · · , n }. (A.161)

Remark A.17: (Gao-Gao [43]) If the uncertain random graph becomes an


uncertain graph, then the connectivity index is

ρ = sup_{X∈X, f(X)=1} min_{1≤i<j≤n} νij(X), if sup_{X∈X, f(X)=1} min_{1≤i<j≤n} νij(X) < 0.5;
ρ = 1 − sup_{X∈X, f(X)=0} min_{1≤i<j≤n} νij(X), if sup_{X∈X, f(X)=1} min_{1≤i<j≤n} νij(X) ≥ 0.5

where X becomes

X = { X | xij = 0 or 1 and xij = xji for i, j = 1, 2, · · · , n; xii = 0 for i = 1, 2, · · · , n }. (A.162)

Exercise A.26: (Zhang-Peng-Li [198]) An Euler circuit in the graph is a


circuit that passes through each edge exactly once. In other words, a graph
has an Euler circuit if it can be drawn on paper without ever lifting the pencil
and without retracing over any edge. It has been proved that a graph has
an Euler circuit if and only if it is connected and each vertex has an even
degree (i.e., the number of edges that are adjacent to that vertex). In order to
measure how likely an uncertain random graph has an Euler circuit, an Euler
index is defined as the chance measure that the uncertain random graph has
an Euler circuit. Please give a formula for calculating Euler index.

A.12 Uncertain Random Network


The term network is a synonym for a weighted graph, where the weights may
be understood as cost, distance or time consumed. Assume that in a network
some weights are random variables and others are uncertain variables. In
order to model this type of network, Liu [93] presented a concept of uncertain
random network.
In this section, we assume the uncertain random network is always of
order n, and has a collection of nodes,
N = {1, 2, · · · , n} (A.163)
where “1” is always the source node, and “n” is always the destination node.
Let us define two collections of arcs,
U = {(i, j) | (i, j) are uncertain arcs}, (A.164)
R = {(i, j) | (i, j) are random arcs}. (A.165)
Note that all deterministic arcs are regarded as special uncertain ones. Let
wij denote the weights of arcs (i, j), (i, j) ∈ U ∪ R, respectively. Then wij
are uncertain variables if (i, j) ∈ U, and random variables if (i, j) ∈ R. Write
W = {wij | (i, j) ∈ U ∪ R}. (A.166)

Definition A.12 (Liu [93]) Assume N is the collection of nodes, U is the


collection of uncertain arcs, R is the collection of random arcs, and W is the
collection of uncertain and random weights. Then the quartette (N, U, R, W)
is said to be an uncertain random network.

Please note that the uncertain random network becomes a random net-
work (Frank-Hakimi [30]) if all weights are random variables; and becomes
an uncertain network (Liu [84]) if all weights are uncertain variables.
[Network drawing omitted; see the description below.]

Figure A.3: An Uncertain Random Network

Figure A.3 shows an uncertain random network (N, U, R, W) of order 6 in


which
N = {1, 2, 3, 4, 5, 6}, (A.167)

U = {(1, 2), (1, 3), (2, 4), (2, 5), (3, 4), (3, 5)}, (A.168)

R = {(4, 6), (5, 6)}, (A.169)

W = {w12 , w13 , w24 , w25 , w34 , w35 , w46 , w56 }. (A.170)

Example A.6: (Liu [93], Shortest Path Distribution) Consider an uncertain


random network (N, U, R, W). Assume the uncertain weights wij have regular
uncertainty distributions Υij for (i, j) ∈ U, and the random weights wij have
probability distributions Ψij for (i, j) ∈ R, respectively. Then the shortest
path distribution from a source node to a destination node is
Φ(x) = ∫_0^{+∞} · · · ∫_0^{+∞} F(x; yij, (i, j) ∈ R) ∏_{(i,j)∈R} dΨij(yij) (A.171)

where F (x; yij , (i, j) ∈ R) is the root α of the equation

f(Υij^{−1}(α), (i, j) ∈ U; yij, (i, j) ∈ R) = x (A.172)

and f is the length of the shortest path, which may be calculated by the Dijkstra algorithm (Dijkstra [25]) when the weights are yij for (i, j) ∈ R and Υij^{−1}(α) for (i, j) ∈ U, respectively.
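A Monte Carlo sketch of (A.171) on a toy three-node network (all numbers invented for illustration): source 1, destination 3, one uncertain arc (1,2) with linear weight L(1, 3), and random arcs (1,3) and (2,3) with uniform weights on [2, 6] and [0, 1]. For each sample of the random weights, the root α is found by bisection, since the shortest path length is nondecreasing in α.

```python
import heapq, random

arcs_uncertain = {(1, 2): lambda a: 1 + 2 * a}    # Upsilon_12^{-1}(alpha)
arcs_random = {(1, 3): (2, 6), (2, 3): (0, 1)}    # uniform weight bounds

def shortest(weights, source=1, dest=3):
    """Dijkstra algorithm on the weighted arcs."""
    adj = {}
    for (i, j), w in weights.items():
        adj.setdefault(i, []).append((j, w))
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == dest:
            return d
        for u, w in adj.get(v, []):
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return float("inf")

def Phi(x, samples=2_000, iters=40):
    """Estimate the shortest path distribution at x via (A.171)."""
    total = 0.0
    for _ in range(samples):
        y = {arc: random.uniform(*b) for arc, b in arcs_random.items()}
        def length(alpha):
            w = dict(y)
            w.update({arc: inv(alpha) for arc, inv in arcs_uncertain.items()})
            return shortest(w)
        if length(0.0) >= x:        # f > x for every alpha: root is 0
            alpha = 0.0
        elif length(1.0) <= x:      # f <= x for every alpha: root is 1
            alpha = 1.0
        else:                       # bisection for the root of f(alpha) = x
            lo, hi = 0.0, 1.0
            for _ in range(iters):
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if length(mid) < x else (lo, mid)
            alpha = (lo + hi) / 2
        total += alpha
    return total / samples

print(Phi(2.5))   # chance that the shortest path length is at most 2.5
```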

Remark A.18: If the uncertain random network becomes a random net-


work, then the shortest path distribution is
Φ(x) = ∫_{f(yij, (i,j)∈R) ≤ x} ∏_{(i,j)∈R} dΨij(yij). (A.173)

Remark A.19: (Gao [45]) If the uncertain random network becomes an


uncertain network, then the inverse shortest path distribution is

Φ^{−1}(α) = f(Υij^{−1}(α), (i, j) ∈ U). (A.174)

Exercise A.27: (Sheng-Gao [139]) The maximum flow problem is to find a flow with maximum value from a source node to a destination node in an uncertain random network. What is the maximum flow distribution?

A.13 Uncertain Random Process


Uncertain random process is a sequence of uncertain random variables in-
dexed by time. A formal definition is given below.

Definition A.13 (Gao-Yao [31]) Let (Γ, L, M)×(Ω, A, Pr) be a chance space
and let T be a totally ordered set (e.g. time). An uncertain random process is
a function Xt (γ, ω) from T × (Γ, L, M) × (Ω, A, Pr) to the set of real numbers
such that {Xt ∈ B} is an event in L × A for any Borel set B of real numbers
at each time t.

Example A.7: A stochastic process is a sequence of random variables in-


dexed by time, and then is a special type of uncertain random process.

Example A.8: An uncertain process is a sequence of uncertain variables


indexed by time, and then is a special type of uncertain random process.

Example A.9: Let Yt be a stochastic process, and let Zt be an uncertain


process. If f is a measurable function, then

Xt = f (Yt , Zt ) (A.175)

is an uncertain random process.

Definition A.14 (Gao-Yao [31]) Let η1 , η2 , · · · be iid random variables, let


τ1 , τ2 , · · · be iid uncertain variables, and let f be a positive and strictly mono-
tone function. Define S0 = 0 and

Sn = f (η1 , τ1 ) + f (η2 , τ2 ) + · · · + f (ηn , τn ) (A.176)



for n ≥ 1. Then

Nt = max{n ≥ 0 | Sn ≤ t} (A.177)

is called an uncertain random renewal process with interarrival times f (η1 , τ1 ),


f (η2 , τ2 ), · · ·

Theorem A.36 (Gao-Yao [31]) Let η1 , η2 , · · · be iid random variables with


a common probability distribution Ψ, let τ1 , τ2 , · · · be iid uncertain variables,
and let f be a positive and strictly monotone function. Assume Nt is an
uncertain random renewal process with interarrival times f (η1 , τ1 ), f (η2 , τ2 ),
· · · Then the average renewal number
Nt/t → ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )^{−1} (A.178)

in the sense of convergence in distribution as t → ∞.

Proof: Write Sn = f(η1, τ1) + f(η2, τ2) + · · · + f(ηn, τn) for all n ≥ 1. Let x be a continuous point of the uncertainty distribution of

( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )^{−1}.

It is clear that 1/x is a continuous point of the uncertainty distribution of

∫_{−∞}^{+∞} f(y, τ1) dΨ(y).

At first, it follows from the definition of uncertain random renewal process that

Ch{Nt/t ≤ x} = Ch{S_{⌊tx⌋+1} > t} = Ch{S_{⌊tx⌋+1}/(⌊tx⌋ + 1) > t/(⌊tx⌋ + 1)}

where ⌊tx⌋ represents the maximal integer less than or equal to tx. Since ⌊tx⌋ ≤ tx < ⌊tx⌋ + 1, we immediately have

(⌊tx⌋/(⌊tx⌋ + 1)) · (1/x) ≤ t/(⌊tx⌋ + 1) < 1/x

and then

Ch{S_{⌊tx⌋+1}/(⌊tx⌋ + 1) > 1/x} ≤ Ch{S_{⌊tx⌋+1}/(⌊tx⌋ + 1) > t/(⌊tx⌋ + 1)} ≤ Ch{S_{⌊tx⌋+1}/⌊tx⌋ > 1/x}.

It follows from the law of large numbers for uncertain random variables that

lim_{t→∞} Ch{S_{⌊tx⌋+1}/(⌊tx⌋ + 1) > 1/x} = 1 − lim_{t→∞} Ch{S_{⌊tx⌋+1}/(⌊tx⌋ + 1) ≤ 1/x}
= 1 − M{ ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ 1/x }
= M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )^{−1} ≤ x }

and

lim_{t→∞} Ch{S_{⌊tx⌋+1}/⌊tx⌋ > 1/x} = 1 − lim_{t→∞} Ch{ ((⌊tx⌋ + 1)/⌊tx⌋) · (S_{⌊tx⌋+1}/(⌊tx⌋ + 1)) ≤ 1/x }
= 1 − M{ ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) ≤ 1/x }
= M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )^{−1} ≤ x }.

From the above three relations we get

lim_{t→∞} Ch{S_{⌊tx⌋+1}/(⌊tx⌋ + 1) > t/(⌊tx⌋ + 1)} = M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )^{−1} ≤ x }

and then

lim_{t→∞} Ch{Nt/t ≤ x} = M{ ( ∫_{−∞}^{+∞} f(y, τ1) dΨ(y) )^{−1} ≤ x }.

The theorem is thus verified.

Exercise A.28: Let η1 , η2 , · · · be iid positive random variables, and let


τ1 , τ2 , · · · be iid positive uncertain variables. Assume Nt is an uncertain
random renewal process with interarrival times η1 + τ1 , η2 + τ2 , · · · Show that
Nt/t → 1/(E[η1] + τ1) (A.179)
in the sense of convergence in distribution as t → ∞.

Exercise A.29: Let η1 , η2 , · · · be iid positive random variables, and let


τ1 , τ2 , · · · be iid positive uncertain variables. Assume Nt is an uncertain
random renewal process with interarrival times η1 τ1 , η2 τ2 , · · · Show that
Nt/t → 1/(E[η1]τ1) (A.180)
in the sense of convergence in distribution as t → ∞.

Theorem A.37 (Yao-Zhou [183]) Let η1 , η2 , · · · be iid random interarrival


times, and let τ1 , τ2 , · · · be iid uncertain rewards. Assume Nt is a stochastic
renewal process with interarrival times η1 , η2 , · · · Then
Rt = Σ_{i=1}^{Nt} τi (A.181)

is an uncertain random renewal reward process, and

Rt/t → τ1/E[η1] (A.182)
in the sense of convergence in distribution as t → ∞.

Proof: Let Υ denote the uncertainty distribution of τ1. Then for each realization of Nt, the uncertain variable

(1/Nt) Σ_{i=1}^{Nt} τi

follows the uncertainty distribution Υ. In addition, by the definition of chance distribution, we have

Ch{Rt/t ≤ x} = ∫_0^1 Pr{ M{Rt/t ≤ x} ≥ r } dr
= ∫_0^1 Pr{ M{ (1/Nt) Σ_{i=1}^{Nt} τi ≤ tx/Nt } ≥ r } dr
= ∫_0^1 Pr{ Υ(tx/Nt) ≥ r } dr

for any real number x. Since Nt is a stochastic renewal process with iid interarrival times η1, η2, · · · , we have

t/Nt → E[η1], a.s.

as t → ∞. It follows from the Lebesgue dominated convergence theorem that

lim_{t→∞} Ch{Rt/t ≤ x} = lim_{t→∞} ∫_0^1 Pr{ Υ(tx/Nt) ≥ r } dr
= ∫_0^1 Pr{ Υ(E[η1]x) ≥ r } dr = Υ(E[η1]x)

that is just the uncertainty distribution of τ1 /E[η1 ]. The theorem is thus


proved.

Theorem A.38 (Yao-Zhou [188]) Let η1 , η2 , · · · be iid random rewards, and


let τ1 , τ2 , · · · be iid uncertain interarrival times. Assume Nt is an uncertain
renewal process with interarrival times τ1 , τ2 , · · · Then
Rt = Σ_{i=1}^{Nt} ηi (A.183)

is an uncertain random renewal reward process, and

Rt/t → E[η1]/τ1 (A.184)
in the sense of convergence in distribution as t → ∞.

Proof: Let Υ denote the uncertainty distribution of τ1. It follows from the definition of chance distribution that for any real number x, we have

Ch{Rt/t ≤ x} = ∫_0^1 Pr{ M{Rt/t ≤ x} ≥ r } dr
= ∫_0^1 Pr{ M{ (1/x) · (1/Nt) Σ_{i=1}^{Nt} ηi ≤ t/Nt } ≥ r } dr.

Since Nt is an uncertain renewal process with iid interarrival times τ1, τ2, · · · , by using Theorem 12.3, we have

t/Nt → τ1

in the sense of convergence in distribution as t → ∞. In addition, for each realization of Nt, the law of large numbers for random variables says

(1/Nt) Σ_{i=1}^{Nt} ηi → E[η1], a.s.

as t → ∞. It follows from the Lebesgue dominated convergence theorem that

lim_{t→∞} Ch{Rt/t ≤ x} = ∫_0^1 Pr{ 1 − Υ(E[η1]/x) ≥ r } dr = 1 − Υ(E[η1]/x)

that is just the uncertainty distribution of E[η1 ]/τ1 . The theorem is thus
proved.

Theorem A.39 (Yao-Gao [179]) Let η1, η2, · · · be iid random on-times, and let τ1, τ2, · · · be iid uncertain off-times. Assume Nt is an uncertain random renewal process with interarrival times η1 + τ1, η2 + τ2, · · · Then

At = t − Σ_{i=1}^{Nt} τi, if Σ_{i=1}^{Nt} (ηi + τi) ≤ t < Σ_{i=1}^{Nt} (ηi + τi) + η_{Nt+1};
At = Σ_{i=1}^{Nt+1} ηi, if Σ_{i=1}^{Nt} (ηi + τi) + η_{Nt+1} ≤ t < Σ_{i=1}^{Nt+1} (ηi + τi)    (A.185)

is an uncertain random alternating renewal process (i.e., the total time at which the system is on up to time t), and

At/t → E[η1]/(E[η1] + τ1) (A.186)

in the sense of convergence in distribution as t → ∞.

Proof: Let Φ denote the uncertainty distribution of τ1, and let Υ be the uncertainty distribution of E[η1]/(E[η1] + τ1). Then at each continuity point x of Υ, we have

Υ(x) = M{E[η1]/(E[η1] + τ1) ≤ x} = M{τ1 ≥ E[η1](1 − x)/x}
= 1 − M{τ1 < E[η1](1 − x)/x} = 1 − Φ(E[η1](1 − x)/x).

On the one hand, by the Lebesgue dominated convergence theorem and the continuity of probability measure, we have

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } = lim_{t→∞} ∫_0^1 Pr{ M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≥ r } dr
= ∫_0^1 lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≥ r } dr
= ∫_0^1 Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≥ r } dr.

Note that

M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } = M{ ∪_{k=0}^∞ ( ((1/t) Σ_{i=1}^k ηi ≤ x) ∩ (Nt = k) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^k ηi ≤ tx) ∩ (Σ_{i=1}^{k+1} (ηi + τi) > t) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^k ηi ≤ tx) ∩ (tx + η_{k+1} + Σ_{i=1}^{k+1} τi > t) ) }
= M{ ∪_{k=0}^∞ ( (k ≤ N*_{tx}) ∩ (η_{k+1}/t + (1/t) Σ_{i=1}^{k+1} τi > 1 − x) ) }

where N*_t is a stochastic renewal process with random interarrival times η1, η2, · · · Since

η_{k+1}/t → 0 as t → ∞

and

Σ_{i=1}^{k+1} τi ∼ (k + 1)τ1,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≤ lim_{t→∞} M{ ∪_{k=0}^∞ ( (k ≤ N*_{tx}) ∩ (τ1 > (t − tx)/(k + 1)) ) }
= lim_{t→∞} M{ ∪_{k=0}^{N*_{tx}} ( τ1 > (t − tx)/(k + 1) ) }
= lim_{t→∞} M{ τ1 > (t − tx)/(N*_{tx} + 1) }
= 1 − lim_{t→∞} Φ( (t − tx)/(N*_{tx} + 1) ).

By the elementary renewal theorem in probability, we have

N*_{tx}/(tx) → 1/E[η1], a.s.

as t → ∞, and then

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≤ 1 − Φ(E[η1](1 − x)/x) = Υ(x).

Thus

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x } ≤ ∫_0^1 Pr{Υ(x) ≥ r} dr = Υ(x). (A.187)

On the other hand, by the Lebesgue dominated convergence theorem and the continuity of probability measure, we have

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } = lim_{t→∞} ∫_0^1 Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≥ r } dr
= ∫_0^1 lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≥ r } dr
= ∫_0^1 Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≥ r } dr.

Note that

M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } = M{ ∪_{k=0}^∞ ( ((1/t) Σ_{i=1}^{k+1} ηi > x) ∩ (Nt = k) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^{k+1} ηi > tx) ∩ (Σ_{i=1}^k (ηi + τi) ≤ t) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^{k+1} ηi > tx) ∩ (tx − η_{k+1} + Σ_{i=1}^k τi ≤ t) ) }
= M{ ∪_{k=0}^∞ ( (N*_{tx} ≤ k) ∩ ((1/t) Σ_{i=1}^k τi − η_{k+1}/t ≤ 1 − x) ) }.

Since

Σ_{i=1}^k τi ∼ kτ1

and

η_{k+1}/t → 0 as t → ∞,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≤ lim_{t→∞} M{ ∪_{k=0}^∞ ( (N*_{tx} ≤ k) ∩ ((1/t) Σ_{i=1}^k τi ≤ 1 − x) ) }
= lim_{t→∞} M{ ∪_{k=N*_{tx}}^∞ ( τ1 ≤ (t − tx)/k ) }
= lim_{t→∞} M{ τ1 ≤ (t − tx)/N*_{tx} }
= lim_{t→∞} Φ( (t − tx)/N*_{tx} ).

By the elementary renewal theorem, we have

N*_{tx}/(tx) → 1/E[η1], a.s.

as t → ∞, and then

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≤ Φ(E[η1](1 − x)/x) = 1 − Υ(x).

Thus

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi > x } ≤ ∫_0^1 Pr{1 − Υ(x) ≥ r} dr = 1 − Υ(x).

By using the duality property of chance measure, we get

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi ≤ x } ≥ Υ(x). (A.188)

Since

(1/t) Σ_{i=1}^{Nt} ηi ≤ At/t ≤ (1/t) Σ_{i=1}^{Nt+1} ηi,

we obtain

Ch{ (1/t) Σ_{i=1}^{Nt+1} ηi ≤ x } ≤ Ch{At/t ≤ x} ≤ Ch{ (1/t) Σ_{i=1}^{Nt} ηi ≤ x }.

It follows from (A.187) and (A.188) that

lim_{t→∞} Ch{At/t ≤ x} = Υ(x).

Hence the availability rate At/t converges in distribution to E[η1]/(E[η1] + τ1) as t → ∞. The theorem is proved.

Theorem A.40 (Yao-Gao [179]) Let τ1, τ2, · · · be iid uncertain on-times, and let η1, η2, · · · be iid random off-times. Assume Nt is an uncertain random renewal process with interarrival times τ1 + η1, τ2 + η2, · · · Then

At = t − Σ_{i=1}^{Nt} ηi, if Σ_{i=1}^{Nt} (τi + ηi) ≤ t < Σ_{i=1}^{Nt} (τi + ηi) + τ_{Nt+1};
At = Σ_{i=1}^{Nt+1} τi, if Σ_{i=1}^{Nt} (τi + ηi) + τ_{Nt+1} ≤ t < Σ_{i=1}^{Nt+1} (τi + ηi)    (A.189)

is an uncertain random alternating renewal process (i.e., the total time at which the system is on up to time t), and

At/t → τ1/(τ1 + E[η1]) (A.190)

in the sense of convergence in distribution as t → ∞.

Proof: Let Φ denote the uncertainty distribution of τ1, and let Υ be the uncertainty distribution of τ1/(τ1 + E[η1]). Then at each continuity point x of Υ, we have

Υ(x) = M{τ1/(τ1 + E[η1]) ≤ x} = M{τ1 ≤ E[η1]x/(1 − x)} = Φ(E[η1]x/(1 − x)).

On the one hand, by the Lebesgue dominated convergence theorem and the continuity of probability measure, we have

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } = lim_{t→∞} ∫_0^1 Pr{ M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≥ r } dr
= ∫_0^1 lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≥ r } dr
= ∫_0^1 Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≥ r } dr.

Note that

M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } = M{ ∪_{k=0}^∞ ( ((1/t) Σ_{i=1}^k τi ≤ x) ∩ (Nt = k) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^k τi ≤ tx) ∩ (Σ_{i=1}^{k+1} (τi + ηi) > t) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^k τi ≤ tx) ∩ (tx + τ_{k+1} + Σ_{i=1}^{k+1} ηi > t) ) }
= M{ ∪_{k=0}^∞ ( (Σ_{i=1}^k τi ≤ tx) ∩ (τ_{k+1}/t + (1/t) Σ_{i=1}^{k+1} ηi > 1 − x) ) }.

Since

Σ_{i=1}^k τi ∼ kτ1

and

τ_{k+1}/t → 0 as t → ∞,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x }
≤ lim_{t→∞} M{ ∪_{k=0}^∞ ( (τ1 ≤ tx/k) ∩ ((1/t) Σ_{i=1}^{k+1} ηi > 1 − x) ) }
= lim_{t→∞} M{ ∪_{k=0}^∞ ( (τ1 ≤ tx/k) ∩ (N*_{t−tx} ≤ k) ) }
= lim_{t→∞} M{ ∪_{k=N*_{t−tx}}^∞ ( τ1 ≤ tx/k ) }
= lim_{t→∞} M{ τ1 ≤ tx/N*_{t−tx} }
= lim_{t→∞} Φ( tx/N*_{t−tx} )

where N*_t is a stochastic renewal process with random interarrival times η1, η2, · · · By the elementary renewal theorem, we have

N*_{t−tx}/(t − tx) → 1/E[η1], a.s.

as t → ∞, and then

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≤ Φ(E[η1]x/(1 − x)) = Υ(x).

Thus

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt} τi ≤ x } ≤ ∫_0^1 Pr{Υ(x) ≥ r} dr = Υ(x). (A.191)

On the other hand, by the Lebesgue dominated convergence theorem and the continuity of probability measure, we have

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} τi > x } = lim_{t→∞} ∫_0^1 Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≥ r } dr
= ∫_0^1 lim_{t→∞} Pr{ M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≥ r } dr
= ∫_0^1 Pr{ lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≥ r } dr.

Note that

M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } = M{ ∪_{k=0}^∞ ( ((1/t) Σ_{i=1}^{k+1} τi > x) ∩ (Nt = k) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^{k+1} τi > tx) ∩ (Σ_{i=1}^k (τi + ηi) ≤ t) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^{k+1} τi > tx) ∩ (tx − τ_{k+1} + Σ_{i=1}^k ηi ≤ t) ) }
≤ M{ ∪_{k=0}^∞ ( (Σ_{i=1}^{k+1} τi > tx) ∩ ((1/t) Σ_{i=1}^k ηi − τ_{k+1}/t ≤ 1 − x) ) }.

Since

Σ_{i=1}^{k+1} τi ∼ (k + 1)τ1

and

τ_{k+1}/t → 0 as t → ∞,

we have

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} τi > x }
≤ lim_{t→∞} M{ ∪_{k=0}^∞ ( (τ1 > tx/(k + 1)) ∩ ((1/t) Σ_{i=1}^k ηi ≤ 1 − x) ) }
= lim_{t→∞} M{ ∪_{k=0}^∞ ( (τ1 > tx/(k + 1)) ∩ (N*_{t−tx} ≥ k) ) }
= lim_{t→∞} M{ ∪_{k=0}^{N*_{t−tx}} ( τ1 > tx/(k + 1) ) }
= lim_{t→∞} M{ τ1 > tx/(N*_{t−tx} + 1) }
= 1 − lim_{t→∞} Φ( tx/(N*_{t−tx} + 1) ).

By the elementary renewal theorem, we have

N*_{t−tx}/(t − tx) → 1/E[η1], a.s.

as t → ∞, and then

lim_{t→∞} M{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≤ 1 − Φ(E[η1]x/(1 − x)) = 1 − Υ(x).

Thus

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} τi > x } ≤ ∫_0^1 Pr{1 − Υ(x) ≥ r} dr = 1 − Υ(x).

By using the duality property of chance measure, we get

lim_{t→∞} Ch{ (1/t) Σ_{i=1}^{Nt+1} τi ≤ x } ≥ Υ(x). (A.192)

Since

(1/t) Σ_{i=1}^{Nt} τi ≤ At/t ≤ (1/t) Σ_{i=1}^{Nt+1} τi,

we obtain

Ch{ (1/t) Σ_{i=1}^{Nt+1} τi ≤ x } ≤ Ch{At/t ≤ x} ≤ Ch{ (1/t) Σ_{i=1}^{Nt} τi ≤ x }.

It follows from (A.191) and (A.192) that

lim_{t→∞} Ch{At/t ≤ x} = Υ(x).

Hence the availability rate At/t converges in distribution to τ1/(τ1 + E[η1]) as t → ∞. The theorem is proved.

A.14 Bibliographic Notes


Probability theory was developed by Kolmogorov [70] in 1933 for modelling
frequencies, while uncertainty theory was founded by Liu [77] in 2007 for
modelling belief degrees. However, in many cases, uncertainty and random-
ness simultaneously appear in a complex system. In order to describe this
phenomenon, uncertain random variable was initialized by Liu [106] in 2013
with the concepts of chance measure and chance distribution. As an impor-
tant contribution, Liu [107] presented an operational law of uncertain random
variables. Furthermore, Yao-Gao [182], Gao-Sheng [33] and Gao-Ralescu [40]
verified some laws of large numbers for uncertain random variables.
Stochastic programming was first studied by Dantzig [22] in 1965, while
uncertain programming was first proposed by Liu [79] in 2009. In order to
model optimization problems with not only uncertainty but also randomness,
uncertain random programming was founded by Liu [107] in 2013. As ex-
tensions, Zhou-Yang-Wang [206] proposed uncertain random multiobjective
programming for optimizing multiple, noncommensurable and conflicting ob-
jectives, Qin [127] proposed uncertain random goal programming in order to
satisfy as many goals as possible in the order specified, and Ke-Su-Ni [66]
proposed uncertain random multilevel programming for studying decentral-
ized decision systems in which the leader and followers may have their own
decision variables and objective functions. After that, uncertain random pro-
gramming was developed steadily and applied widely.
Probabilistic risk analysis dates back to 1952, when Roy [131] proposed his safety-first criterion for portfolio selection. Another important con-
tribution is the probabilistic value-at-risk methodology developed by Morgan
[115] in 1996. On the other hand, uncertain risk analysis was proposed by
Liu [83] in 2010 for evaluating the risk index that is the uncertain measure
of an uncertain system being loss-positive. More generally, in order to quan-
tify the risk of uncertain random systems, Liu-Ralescu [108] invented the
tool of uncertain random risk analysis in 2014. Furthermore, the value-at-
risk methodology was presented by Liu-Ralescu [110], and the expected loss

methodology was investigated by Liu-Ralescu [112] for dealing with uncertain


random systems.
Probabilistic reliability analysis traces back to 1944, when Pugsley
[125] first proposed structural accident rates for the aeronautics industry.
Nowadays, probabilistic reliability analysis has become a widely used disci-
pline. As a new methodology, uncertain reliability analysis was developed
by Liu [83] in 2010 for evaluating the reliability index. More generally, for
dealing with uncertain random systems, Wen-Kang [155] presented the tool
of uncertain random reliability analysis and defined the reliability index in
2016. In addition, Gao-Yao [35] analyzed the importance index in uncertain
random system.
Random graph was defined by Erdős-Rényi [29] in 1959 and indepen-
dently by Gilbert [51] at nearly the same time. As an alternative, uncertain
graph was proposed by Gao-Gao [43] in 2013 via uncertainty theory. Assum-
ing some edges exist with some degrees in probability measure and others
exist with some degrees in uncertain measure, Liu [93] defined the concept
of uncertain random graph and analyzed the connectivity index in 2014. Af-
ter that, Zhang-Peng-Li [198] discussed the Euler index of uncertain random
graph.
Random network was first investigated by Frank-Hakimi [30] in 1965 for
modelling communication network with random capacities. From then on,
the random network was well developed and widely applied. As a break-
through approach, uncertain network was first explored by Liu [84] in 2010 for
modelling project scheduling problem with uncertain duration times. More
generally, assuming some weights are random variables and others are uncer-
tain variables, Liu [93] initialized the concept of uncertain random network
and discussed the shortest path problem in 2014. Following that, uncertain
random network was explored by many researchers. For example, Sheng-Gao
[139] investigated the maximum flow problem, and Sheng-Qin-Shi [142] dealt
with the minimum spanning tree problem of uncertain random network.
One of the earliest investigations of stochastic process was Bachelier [1] in
1900, and the study of uncertain process was started by Liu [78] in 2008. In
order to deal with uncertain random phenomenon evolving in time, Gao-Yao
[31] presented an uncertain random process in the light of chance theory in
2015. Gao-Yao [31] also proposed an uncertain random renewal process. As
extensions, Yao-Zhou [183] discussed an uncertain random renewal reward
process, and Yao-Gao [179] investigated an uncertain random alternating
renewal process.
Appendix B

Urn Problems

The basic urn problem is to determine the probability of drawing one col-
ored ball from an urn containing differently colored balls. This appendix
will design some new urn problems in order to illustrate probability theory,
uncertainty theory and chance theory.

B.1 Urn Problems

Assume I have filled 100 urns each with 100 balls that are either red or black.
You are told that the compositions (red versus black) in those urns are iid,
but the distribution function is completely unknown to you.
(i) How many balls do you think are red in the first urn?
(ii) How many balls do you think are red in the 100 urns?
(iii) How likely do you think the number of red balls is 10,000?
Let us first consider those problems by probability theory. Since you do
not know the number of red balls completely, Laplace criterion makes you
assign equal probabilities to the possible outcomes 0, 1, 2, · · · , 100. Thus, for
each i with 1 ≤ i ≤ 100, the number of red balls in the ith urn is a random
variable,

ξi = k with probability 1/101, k = 0, 1, 2, · · · , 100.

Note that we have to regard ξ1 , ξ2 , · · · , ξn as iid random variables according


to our assumption. Therefore, the total number of red balls in the 100 urns
is
ξ = ξ1 + ξ2 + · · · + ξ100

that is the random variable

ξ = k with probability Σ_{k1+k2+···+k100=k} (1/101)^{100}, k = 0, 1, 2, · · · , 10000,

where the sum is over ki ∈ {0, 1, 2, · · · , 100}, i = 1, 2, · · · , 100, and whose probability distribution is

Φ(x) = Σ_{k1+k2+···+k100≤x} (1/101)^{100}.

Especially, since the total number of red balls is 10,000 if and only if the 100
urns each contain 100 red balls, the probability measure of the total number
of red balls being 10,000 is

Pr{ξ = 10,000} = Pr{ξi = 100, i = 1, 2, · · · , 100}
= ∏_{i=1}^{100} Pr{ξi = 100} = (1/101)^{100}
≈ 3.6 × 10^{−201}.
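A one-line sanity check of this product (the only assumption is ordinary floating-point arithmetic, which still resolves 10^{−201}):

```python
# the 100 iid urn counts each equal 100 with probability 1/101
print((1 / 101) ** 100)   # roughly 3.7e-201; the text rounds this quantity
                          # to 3.6 x 10^-201
```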

Let us reconsider those problems by uncertainty theory. Since you do not


know the number of red balls completely, you have to assign equal uncertain
measures (belief degrees) to the possible outcomes 0, 1, 2, · · · , 100. Thus, for
each i with 1 ≤ i ≤ 100, the number of red balls in the ith urn is an uncertain
variable,
ηi = k with uncertain measure 1/101, k = 0, 1, 2, · · · , 100.
Note that we have to regard η1 , η2 , · · · , ηn as iid uncertain variables according
to our assumption. Therefore, the total number of red balls in the 100 urns
is
η = η1 + η2 + · · · + η100
that is the uncertain variable
η = k with uncertain measure 1/101, k = 0, 1, 2, · · · , 10000,
whose uncertainty distribution is

Ψ(x) = 0, if x < 0;
Ψ(x) = (k + 1)/101, if 100k ≤ x < 100(k + 1), k = 0, 1, 2, · · · , 99;
Ψ(x) = 1, if x ≥ 10000.

Especially, since the total number of red balls is 10,000 if and only if the 100
urns each contain 100 red balls, the uncertain measure of the total number
of red balls being 10,000 is

M{η = 10,000} = M{ηi = 100, i = 1, 2, · · · , 100}
= ⋀_{i=1}^{100} M{ηi = 100} = ⋀_{i=1}^{100} 1/101
= 1/101.
For those 100 urns with unknown compositions of colored balls, consider
the following two options:

A: You lose $1,000,000 if the total number of red balls is 10,000, and receive
$1 otherwise;
B: Don’t bet.

What is your choice between A and B? If probability theory is used, then the
probability of the total number of red balls being 10,000 is 3.6 × 10−201 , and
the expected income of A is

A = 1 × (1 − 3.6 × 10−201 ) − 1, 000, 000 × 3.6 × 10−201 ≈ $1.

Since the income of B is always $0, we have

A > B.

That is, probability theory makes you choose A. If uncertainty theory is used,
then the uncertain measure of the total number of red balls being 10,000 is
1/101, and the expected income of A is
A = 1 × (1 − 1/101) − 1,000,000 × (1/101) ≈ −$9,900.

Since the income of B is always $0, we have

A < B.

That is, uncertainty theory makes you choose B. Who do you believe?
Now I would like to show you how I filled the 100 urns. First I took a
distribution function (please recognize that I have the option to choose my
preferred distribution function),
Υ(x) = 0 if x < 100, and 1 if x ≥ 100,

that is just the constant 100. Next I generated a random number k from
the distribution function Υ, and filled the first urn with k red balls and
100 − k black balls. Then I generated a new random number k from Υ, and
filled the second urn with k red balls and 100 − k black balls. Repeated
this process until 100 urns were filled. Note that 100, 100, · · · , 100 are indeed
iid, and the total number of red balls happens to be 10,000. You would lose
$1,000,000 if you used probability theory. Could you believe that uncertainty
theory is better than probability theory to deal with unknown-composition
urn problem?

B.2 Ellsberg Experiment


This problem is from Ellsberg experiment. An urn contains 30 red balls and
60 other balls that are either black or yellow in unknown proportion. Let
us randomly draw one ball from the urn. What is your choice between two
gambles:
Gamble A: You receive $100 if a red ball is drawn;
Gamble B: You receive $100 if a black ball is drawn?
Here I would like to propose a new problem: What is your choice if Gamble B
is replaced with
Gamble C: You receive $110 if a black ball is drawn?
Appendix C

Frequently Asked
Questions

This appendix will answer some frequently asked questions related to prob-
ability theory and uncertainty theory as well as their applications. This
appendix will also show why fuzzy set is a wrong model in both theory and
practice. Finally, I will clarify what uncertainty is.

C.1 What is the meaning that an object follows the laws of probability theory?
We say an object (e.g. frequency) follows the laws of probability theory if
it meets not only the three axioms (Kolmogorov [70]) but also the product
probability theorem of probability theory:
Axiom 1 (Normality Axiom) Pr{Ω} = 1 for the universal set Ω;
Axiom 2 (Nonnegativity Axiom) Pr{A} ≥ 0 for any event A;
Axiom 3 (Additivity Axiom) For every countable sequence of mutually dis-
joint events A1 , A2 , · · · , we have
Pr{ ∪_{i=1}^∞ Ai } = Σ_{i=1}^∞ Pr{Ai}; (C.1)

Theorem (Product Probability Theorem) Let (Ωk , Ak , Prk ) be probability


spaces for k = 1, 2, · · · Then there is a unique probability measure Pr such
that

Pr{ ∏_{k=1}^∞ Ak } = ∏_{k=1}^∞ Prk{Ak} (C.2)

where Ak are arbitrarily chosen events from Ak for k = 1, 2, · · · , respectively.



It is easy for us to understand why we need to justify that the object meets
the three axioms. However, some readers may wonder why we also need to
justify that the object meets the product probability theorem. The reason is
that the product probability theorem cannot be deduced from Kolmogorov’s axioms unless we presuppose that the product probability meets the three axioms.
In other words, an object does not necessarily satisfy the product probability
theorem if it is only justified to meet the three axioms. Would that surprise
you?
Please keep in mind that “an object follows the laws of probability the-
ory” is equivalent to “an object meets the three axioms plus the product
probability theorem”. This assertion is stronger than “an object meets the
three axioms of Kolmogorov”. In other words, the three axioms do not ensure
that an object follows the laws of probability theory.
There exist two broad categories of interpretations of probability, one is
frequency interpretation and the other is belief interpretation. The frequency
interpretation takes the probability to be the frequency with which an event
happens (Venn [146], Reichenbach [129], von Mises [147]), while the belief
interpretation takes the probability to be the degree to which we believe an
event will happen (Ramsey [128], de Finetti [23], Savage [133]).
The debate between the two interpretations has been lasting from the
nineteenth century. Personally, I agree with the frequency interpretation,
but strongly oppose the belief interpretation of probability because frequency
follows the laws of probability theory but belief degree does not. The detailed
reasons will be given in the following a few sections.

C.2 Why does frequency follow the laws of probability theory?
In order to show that the frequency follows the laws of probability theory, we
must verify that the frequency meets not only the three axioms of Kolmogorov
but also the product probability theorem.
First, the frequency of the universal set takes value 1 because the uni-
versal set always happens. Thus the frequency meets the normality axiom.
Second, it is obvious that the frequency is a number between 0 and 1. Thus
the frequency of any event is nonnegative, and the frequency meets the non-
negativity axiom. Third, for any disjoint events A and B, if A happens α
times and B happens β times (in percentage), it is clear that the union A ∪ B
happens α + β times. This means the frequency is additive and then meets
the additivity axiom. Finally, numerous experiments showed that if A and B
are two events from different probability spaces (essentially they come from
two different experiments) and happen α and β times, respectively, then the
product A × B happens α × β times. See Figure C.1. Thus the frequency
meets the product probability theorem. Hence the frequency does follow the
laws of probability theory. In fact, frequency is the only empirical basis for

probability theory.

Figure C.1: Let A and B be two events from different probability spaces
(essentially they come from two different experiments). If A happens α times
and B happens β times, then the product A × B happens α × β times, where
α and β are understood as percentage numbers.

C.3 Why is probability theory not suitable for modelling belief degree?
In order to obtain the belief degree of some event, the decision maker needs
to launch a consultation process with a domain expert. The decision maker is
the user of belief degree while the domain expert is the holder. For justifying
whether probability theory is suitable for modelling belief degree or not, we
must check if the belief degree follows the laws of probability theory.
First, “1” means “complete belief ” and we cannot be in more belief than
“complete belief ”. This means the belief degree of any event cannot exceed
1. Furthermore, the belief degree of the universal set takes value 1 because it
is completely believable. Hence the belief degree meets the normality axiom
of probability theory.
Second, the belief degree meets the nonnegativity axiom because “0”
means “complete disbelief ” and we cannot be in more disbelief than “com-
plete disbelief ”.
Third, de Finetti [23] interpreted the belief degree of an event as the fair
betting ratio (price/stake) of a bet that offers $1 if the event happens and
nothing otherwise. For example, if the domain expert thinks the belief degree
of an event A is α, then the price of the bet about A is α × 100¢. Here the
word “fair” means both the domain expert and the decision maker are willing
to either buy or sell this bet at this price.
Besides, Ramsey [128] suggested a Dutch book argument¹ that says the belief degree is irrational if there exists a book that guarantees you a loss. For the moment, let us assume we agree with it.

¹ A Dutch book in a betting market is a set of bets which guarantees a loss, regardless of the outcome of the gamble. For example, let A be a bet that offers $1 if A happens, let B be a bet that offers $1 if B happens, and let A ∨ B be a bet that offers $1 if either A or B happens. If the prices of A, B and A ∨ B are 30¢, 40¢ and 80¢, respectively, and you (i) sell A, (ii) sell B, and (iii) buy A ∨ B, then you are guaranteed to lose 10¢ no matter what happens. Thus there exists a Dutch book, and the prices are considered to be irrational.
Let A1 be a bet that offers $1 if A1 happens, and let A2 be a bet that
offers $1 if A2 happens. Assume the belief degrees of A1 and A2 are α1
and α2 , respectively. This means the prices of A1 and A2 are $α1 and $α2 ,
respectively. Now we consider the bet A1 ∪ A2 that offers $1 if either A1 or
A2 happens, and denote the belief degree of A1 ∪ A2 by α. This means the
price of A1 ∪ A2 is $α. If α > α1 + α2 , then you (i) sell A1 , (ii) sell A2 , and
(iii) buy A1 ∪ A2 . It is clear that you are guaranteed to lose α − α1 − α2 > 0.
Thus there exists a Dutch book and the assumption α > α1 + α2 is irrational.
If α < α1 + α2 , then you (i) buy A1 , (ii) buy A2 , and (iii) sell A1 ∪ A2 . It is
clear that you are guaranteed to lose α1 + α2 − α > 0. Thus there exists a
Dutch book and the assumption α < α1 + α2 is irrational. Hence we have to
assume α = α1 + α2 and the belief degree meets the additivity axiom (but
this assertion is questionable because you cannot reverse “buy” and “sell”
arbitrarily due to the unequal status of the decision maker and the domain
expert).
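To make the arithmetic concrete, the following is a minimal Python sketch of the guaranteed losses claimed above; the belief degrees α1 = 0.3 and α2 = 0.4 are hypothetical numbers, not part of the argument.

```python
# A minimal sketch of the Dutch book arithmetic above, with hypothetical
# belief degrees alpha1 = 0.3 and alpha2 = 0.4 (all prices in dollars).
def guaranteed_loss(alpha1, alpha2, alpha):
    """Guaranteed loss of the trading strategies described in the text."""
    if alpha > alpha1 + alpha2:
        # sell A1, sell A2, buy A1 U A2: lose alpha - alpha1 - alpha2
        return alpha - alpha1 - alpha2
    if alpha < alpha1 + alpha2:
        # buy A1, buy A2, sell A1 U A2: lose alpha1 + alpha2 - alpha
        return alpha1 + alpha2 - alpha
    return 0.0  # additive price: no Dutch book exists

print(guaranteed_loss(0.3, 0.4, 0.8))  # about 0.1: union overpriced
print(guaranteed_loss(0.3, 0.4, 0.6))  # about 0.1: union underpriced
print(guaranteed_loss(0.3, 0.4, 0.7))  # 0.0: the additive price
```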
Until now we have verified that the belief degree meets the three axioms
of probability theory. Almost all subjectivists stop here and assert that belief
degree follows the laws of probability theory. Unfortunately, the evidence is
not enough for this conclusion because we have not verified whether belief
degree meets the product probability theorem or not. In fact, it is impossible
for us to prove belief degree meets the product probability theorem through
the Dutch book argument.
Recall the example of truck-cross-over-bridge on Page 6. Let Ai represent the event that the strength of the ith bridge is greater than 90 tons, i = 1, 2, · · · , 50. For each i, since your belief degree for Ai is 75%, you are willing to pay 75¢ for the bet that offers $1 if Ai happens.
follow the laws of probability theory, then it would be fair to pay

75% × 75% × · · · × 75% (50 times) × 100¢ ≈ 0.00006¢ (C.3)

for a bet that offers $1 if A1 × A2 × · · · × A50 happens. Notice that the odds are over a million to one, and A1 × A2 × · · · × A50 definitely happens because the real strengths of the 50 bridges range from 95 to 110 tons. All of us would be happy to take this bet. But who is willing to offer such a bet? It seems that no one does, and so the belief degree of A1 × A2 × · · · × A50 is not the product of the individual belief degrees. Hence the belief degree does not follow the laws of probability theory.
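As a quick numerical check of the figure in (C.3):

```python
# A quick check of (C.3): the "fair" price in cents of the joint bet if
# the 50 belief degrees of 75% multiplied like probabilities.
price_in_cents = 0.75 ** 50 * 100
print(f"{price_in_cents:.5f} cents")  # prints 0.00006 cents
```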
It is thus concluded that the belief interpretation of probability is unacceptable. The main mistake of subjectivists is that they only justify that the

belief degree meets the three axioms of probability theory, but do not check
if it meets the product probability theorem.

C.4 What goes wrong with Cox’s theorem?


Some people affirm that probability theory is the only legitimate approach.
Perhaps this misconception is rooted in Cox’s theorem [19] that any measure
of belief is “isomorphic” to a probability measure. However, uncertain mea-
sure is considered coherent but not isomorphic to any probability measure.
What goes wrong with Cox’s theorem? Personally I think that Cox’s theo-
rem presumes the truth value of conjunction P ∧ Q is a twice differentiable
function f of truth values of the two propositions P and Q, i.e.,

T (P ∧ Q) = f (T (P ), T (Q)) (C.4)

and then excludes uncertain measure from its start because the function
f (x, y) = x ∧ y used in uncertainty theory is not differentiable with respect
to x and y. In fact, there does not exist any evidence that the truth value
of conjunction is completely determined by the truth values of individual
propositions, let alone a twice differentiable function.
On the one hand, it is recognized that probability theory is a legitimate approach to deal with frequencies. On the other hand, it cannot possibly be the only way to model indeterminacy. In fact, it has been demonstrated in this book that uncertainty theory is successful in dealing with belief degrees.

C.5 What is the difference between probability theory and uncertainty theory?
The difference between probability theory (Kolmogorov [70]) and uncertainty
theory (Liu [77]) does not lie in whether the measures are additive or not,
but how the product measures are defined. The product probability measure
is the product of the probability measures of the individual events, i.e.,

Pr{Λ1 × Λ2 } = Pr{Λ1 } × Pr{Λ2 }, (C.5)

while the product uncertain measure is the minimum of the uncertain mea-
sures of the individual events, i.e.,

M{Λ1 × Λ2 } = M{Λ1 } ∧ M{Λ2 }. (C.6)

In short, we may say that probability theory is a “product” mathematical system, and uncertainty theory is a “minimum” mathematical system. This difference implies that random variables and uncertain variables obey different operational laws.
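A minimal sketch of the two operational laws, with hypothetical component measures 0.8 and 0.6 for events from two different spaces:

```python
# Contrasting the product probability (C.5) with the product uncertain
# measure (C.6) for hypothetical component measures 0.8 and 0.6.
m1, m2 = 0.8, 0.6
print("product probability (C.5):      ", m1 * m2)      # 0.48
print("product uncertain measure (C.6):", min(m1, m2))  # 0.6
```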
Probability theory and uncertainty theory are complementary mathematical systems that provide two acceptable mathematical models to deal with
the indeterminate world. Probability theory is a branch of mathematics for
modelling frequencies, while uncertainty theory is a branch of mathematics
for modelling belief degrees.

C.6 Why do I think fuzzy set theory is bad mathematics?
A fuzzy set is defined by its membership function µ which assigns to each
element x a real number µ(x) in the interval [0, 1], where the value of µ(x)
represents the grade of membership of x in the fuzzy set. This definition was
given by Zadeh [192] in 1965. Since then, fuzzy set theory has spread broadly. Although I strongly respect Professor Lotfi Zadeh's achievements, I
have to declare that fuzzy set theory is bad mathematics.
A very strange phenomenon in academia is that different people have
different fuzzy set theories. Even so, we have to admit that every version of
fuzzy set theory contains at least the following four items. The first one is a
fuzzy set ξ with membership function µ. The next one is a complement set
ξ c with membership function

λ(x) = 1 − µ(x). (C.7)

The third one is a possibility measure defined by the three axioms,

Pos{Ω} = 1 for the universal set Ω, (C.8)

Pos{∅} = 0 for the empty set ∅, (C.9)

Pos{Λ1 ∪ Λ2 } = Pos{Λ1 } ∨ Pos{Λ2 } for any events Λ1 and Λ2 . (C.10)


And the fourth one is a relation between membership function and possibility
measure (Zadeh [193]),
µ(x) = Pos{x ∈ ξ}. (C.11)
Now for any point x, it is clear that {x ∈ ξ} and {x ∈ ξ c } are opposite events², and then

{x ∈ ξ} ∪ {x ∈ ξ c } = Ω. (C.12)
On the one hand, by using the possibility axioms, we have

Pos{x ∈ ξ} ∨ Pos{x ∈ ξ c } = Pos{Ω} = 1. (C.13)


²Please do not challenge this proposition; otherwise classical mathematics would have to be completely rewritten. Perhaps some fuzzists insist that {x ∈ ξ} and {x ∈ ξ c } are not opposite. Here I would like to advise them not to think so, because it contradicts the fact that ξ c has the membership function λ(x) = 1 − µ(x).
On the other hand, by using the relation (C.11), we have

Pos{x ∈ ξ} = µ(x), (C.14)

Pos{x ∈ ξ c } = 1 − µ(x). (C.15)


It follows from (C.13), (C.14) and (C.15) that

µ(x) ∨ (1 − µ(x)) = 1. (C.16)

Hence
µ(x) = 0 or 1. (C.17)
This result shows that the membership function µ can only be the indicator function of a crisp set. In other words, only crisp sets can simultaneously satisfy (C.7)∼(C.11). In this sense, fuzzy set theory collapses mathematically to classical set theory. That is, fuzzy set theory is nothing but classical set theory.
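The step from (C.16) to (C.17) can be confirmed numerically by scanning a fine grid of candidate membership degrees:

```python
# A tiny numeric confirmation of (C.16)-(C.17): among candidate
# membership degrees u in [0, 1], max(u, 1 - u) = 1 holds only at 0 and 1.
import numpy as np

u = np.linspace(0.0, 1.0, 100001)
print(u[np.maximum(u, 1.0 - u) == 1.0])  # prints [0. 1.]
```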
Furthermore, both in theory and in practice, an inclusion relation between fuzzy sets is needed. Thus fuzzy set theory also assumes a formula (Zadeh [193]),

Pos{ξ ⊂ B} = sup_{x∈B} µ(x) (C.18)

for any crisp set B. Now consider two crisp intervals [1, 2] and [2, 3]. It is completely unacceptable in the mathematical community that [1, 2] is included in [2, 3], i.e., the inclusion relation

[1, 2] ⊂ [2, 3] (C.19)

is 100% wrong. Note that [1, 2] is a special fuzzy set whose membership
function is

µ(x) = 1 if 1 ≤ x ≤ 2, and 0 otherwise. (C.20)
It follows from the formula (C.18) that

Pos{[1, 2] ⊂ [2, 3]} = sup_{x∈[2,3]} µ(x) = 1. (C.21)

That is, fuzzy set theory says that [1, 2] ⊂ [2, 3] is 100% right. Are you willing
to accept this result? If not, then (C.18) is in conflict with the inclusion rela-
tion in classical set theory. In other words, nothing can simultaneously satisfy
(C.7)∼(C.11) and (C.18). Therefore, fuzzy set theory is not self-consistent in
mathematics and may lead to wrong results in practice.
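The paradoxical value in (C.21) can be reproduced numerically; the grid below is only an illustration of the supremum in (C.18):

```python
# A minimal numeric check of (C.18)-(C.21): the possibility that the
# crisp interval [1, 2] is "included" in [2, 3].
import numpy as np

def mu(x):
    # membership function (C.20) of the crisp interval [1, 2]
    return np.where((1.0 <= x) & (x <= 2.0), 1.0, 0.0)

B = np.linspace(2.0, 3.0, 10001)  # a grid over the crisp set [2, 3]
print(mu(B).max())                # prints 1.0, since mu(2) = 1
```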
Perhaps some fuzzists may argue that they never use possibility measure
in fuzzy set theory. Here I would like to remind them that the membership
degree µ(x) is just the possibility measure that the fuzzy set ξ contains the
point x (i.e., x belongs to ξ). Please also keep in mind that we cannot
distinguish fuzzy set from random set (Robbins [130] and Matheron [113])
and uncertain set (Liu [82]) if the underlying measures are not available.
From the above discussion, we can see that fuzzy set theory is not self-
consistent in mathematics and may lead to wrong results in practice. There-
fore, I would like to conclude that fuzzy set theory is bad mathematics. To
express this more frankly, fuzzy set theory cannot be called mathematics.
Can we improve fuzzy set theory? Yes, we can. But the change is so big
that I have to give the revision a new name called uncertain set theory. See
Chapter 8.

C.7 Why is fuzzy variable not suitable for modelling indeterminate quantity?
A fuzzy variable is a function from a possibility space to the set of real
numbers (Nahmias [116]). Some people think that fuzzy variable is a suitable
tool for modelling indeterminate quantity. Is it really true? Unfortunately,
the answer is negative.
Let us reconsider the counterexample of truck-cross-over-bridge (Liu [86]).
If the bridge strength is regarded as a fuzzy variable ξ, then we may assign
it a membership function, say

µ(x) = 0,             if x ≤ 80
       (x − 80)/10,   if 80 ≤ x ≤ 90
       1,             if 90 ≤ x ≤ 110      (C.22)
       (120 − x)/10,  if 110 ≤ x ≤ 120
       0,             if x ≥ 120

that is just the trapezoidal fuzzy variable (80, 90, 110, 120). Please do not argue about why I choose such a membership function, because the particular choice is not important to the focus of the debate. Based on the membership function µ and the definition of possibility measure

Pos{ξ ∈ B} = sup_{x∈B} µ(x), (C.23)

it is easy for us to infer that

Pos{“bridge strength” = 100} = 1, (C.24)
Pos{“bridge strength” ≠ 100} = 1. (C.25)
Thus we immediately conclude the following three propositions:
(a) the bridge strength is “exactly 100 tons” with possibility measure 1,
(b) the bridge strength is “not 100 tons” with possibility measure 1,
(c) “exactly 100 tons” is as possible as “not 100 tons”.
The first proposition says we are 100% sure that the bridge strength is “exactly 100 tons”, neither less nor more. What a coincidence that would be! It is doubtless that the belief degree of “exactly 100 tons” is almost zero, and nobody is so naive as to expect that “exactly 100 tons” is the true bridge strength. The second proposition sounds good. The third proposition says “exactly 100 tons” and “not 100 tons” have the same possibility measure. Thus we have to regard them as “equally likely”. Consider a bet: you get $1 if the bridge strength is “exactly 100 tons”, and pay $1 if the bridge strength is “not 100 tons”. Do you think the bet is fair? It seems that no one thinks so. Hence the conclusion (c) is unacceptable because “exactly 100 tons” is almost impossible compared with “not 100 tons”. This paradox shows that indeterminate quantities like the bridge strength cannot be quantified by possibility measure, and hence they are not fuzzy concepts.
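The two possibility values (C.24) and (C.25) can likewise be reproduced from the trapezoidal membership function (C.22); the grid below is only an illustration:

```python
# A minimal check of (C.24)-(C.25) under the possibility measure (C.23)
# with the trapezoidal membership function (C.22).
import numpy as np

def mu(x):
    return np.clip(np.minimum((x - 80.0) / 10.0, (120.0 - x) / 10.0), 0.0, 1.0)

xs = np.linspace(60.0, 140.0, 160001)
pos_eq = mu(100.0)                                # Pos{strength = 100}
pos_ne = mu(xs[np.abs(xs - 100.0) > 1e-9]).max()  # Pos{strength != 100}
print(pos_eq, pos_ne)                             # prints 1.0 1.0
```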

C.8 What is the difference between uncertainty theory and possibility theory?
The essential difference between uncertainty theory (Liu [77]) and possibility
theory (Zadeh [193]) is that the former assumes

M{Λ1 ∪ Λ2 } = M{Λ1 } ∨ M{Λ2 } (C.26)

only for independent events Λ1 and Λ2 , and the latter holds

Pos{Λ1 ∪ Λ2 } = Pos{Λ1 } ∨ Pos{Λ2 } (C.27)

for any events Λ1 and Λ2, no matter whether they are independent or not. A lot of surveys showed that the measure of a union of events is usually greater than the maximum of the measures of the individual events when they are not independent. This fact indicates that human brains do not behave according to fuzziness.
Both uncertainty theory and possibility theory attempt to model belief
degrees, where the former uses the tool of uncertain measure and the latter
uses the tool of possibility measure. Thus they are complete competitors.
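A minimal illustration of the difference, using the duality axiom M{Λ} + M{Λc} = 1 of uncertainty theory and a hypothetical belief degree M{Λ} = 0.6 for the dependent pair (Λ, Λc):

```python
# For the dependent pair (event, complement), the uncertain measure of
# the union exceeds the maximum, which (C.27) forbids but (C.26) allows.
m_event = 0.6                  # hypothetical M{L}
m_complement = 1.0 - m_event   # M{L^c} = 0.4 by the duality axiom
m_union = 1.0                  # L union L^c is the universal set
print(m_union > max(m_event, m_complement))  # True
```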

C.9 Why is stochastic differential equation not suitable for modelling stock price?
The origin of stochastic finance theory can be traced to Louis Bachelier’s
doctoral dissertation Théorie de la Speculation in 1900. However, Bache-
lier’s work had little impact for more than a half century. After Kiyosi Ito
invented stochastic calculus [56] in 1944 and stochastic differential equation
[57] in 1951, stochastic finance theory was well developed among others by
Samuelson [132], Black-Scholes [3] and Merton [114] during the 1960s and
1970s.
Traditionally, stochastic finance theory presumes that the stock price (in-
cluding interest rate and currency exchange rate) follows Ito’s stochastic dif-
ferential equation. Is it really reasonable? In fact, this widely accepted
presumption was challenged among others by Liu [89] via some paradoxes.
First Paradox: As an example, let us assume that the stock price Xt follows the differential equation

dXt/dt = eXt + σXt · “noise” (C.28)
where e is the log-drift, σ is the log-diffusion, and “noise” is a stochastic
process. Now we take the mathematical interpretation of the “noise” term
as

“noise” = dWt/dt (C.29)
where Wt is a Wiener process³. Thus the stock price Xt follows the stochastic differential equation

dXt/dt = eXt + σXt · dWt/dt. (C.30)
Note that the “noise” term

dWt/dt ∼ N(0, 1/dt) (C.31)

is a normal random variable whose expected value is 0 and whose variance tends to ∞. This setting is very different from other disciplines (e.g. statistics) that usually take

N(0, 1) (whose variance is 1 rather than ∞) (C.32)

as the “noise” term. In addition, since the right-hand side of (C.30) has an infinite variance at any time t, the left-hand side (i.e., the instantaneous growth rate dXt/dt of the stock price) must also have an infinite variance at every time. However, the growth rate usually has a finite variance in practice; at the very least, it cannot have an infinite variance at every time. Thus it is impossible that the real stock price Xt follows Ito's stochastic differential equation.

³A stochastic process Wt is said to be a Wiener process if (i) W0 = 0 and almost all sample paths are continuous (but non-Lipschitz), (ii) Wt has stationary and independent increments, and (iii) every increment Ws+t − Ws is a normal random variable with expected value 0 and variance t.
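A small simulation of this variance blow-up; the increment dW ∼ N(0, dt) is the standard Wiener assumption, and the sample size is arbitrary:

```python
# The "noise" dW/dt has variance dt/dt^2 = 1/dt, which blows up as dt -> 0.
import numpy as np

rng = np.random.default_rng(1)
for dt in (1e-1, 1e-3, 1e-5):
    noise = rng.normal(0.0, np.sqrt(dt), 100_000) / dt
    print(dt, round(noise.var(), 1))  # roughly 1/dt: 10, 1000, 100000
```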
Second Paradox: Roughly speaking, the sample path of the stochastic differential equation (C.30) is increasing with probability 0.5 and decreasing with probability 0.5 at each time, no matter what happened before. However, in practice, when the stock price is greatly increasing at the moment, usually it will continue to increase; when the stock price is greatly decreasing, usually it will continue to decrease. This means that the stock price in the real world does not behave like Ito's stochastic differential equation.
Third Paradox: It follows from the stochastic differential equation (C.30)
that Xt is a geometric Wiener process, i.e.,

Xt = X0 exp((e − σ²/2)t + σWt) (C.33)

from which we derive

Wt = (ln Xt − ln X0 − (e − σ²/2)t)/σ (C.34)
whose increment is

∆Wt = (ln Xt+∆t − ln Xt − (e − σ²/2)∆t)/σ. (C.35)
Write

A = −(e − σ²/2)∆t/σ. (C.36)
Note that the stock price Xt is actually a step function of time with a finite
number of jumps although it looks like a curve. During a fixed period (e.g.
one week), without loss of generality, we assume that Xt is observed to have
100 jumps. Now we divide the period into 10000 equal intervals. Then we
may observe 10000 samples of Xt . It follows from (C.35) that ∆Wt has 10000
samples that consist of 9900 A’s and 100 other numbers:

A, A, · · · , A, B, C, · · · , Z. (C.37)

Nobody can believe that those 10000 samples follow a normal probability
distribution with expected value 0 and variance ∆t. This fact is in contra-
diction with the property of Wiener process that the increment ∆Wt is a
normal random variable. Therefore, the real stock price Xt does not follow
the stochastic differential equation.
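A rough simulation of this counting argument; the log-drift e = 0.05, the log-diffusion σ = 0.2, and the jump sizes are hypothetical values chosen only for illustration:

```python
# A price path that is a step function with 100 jumps, observed on 10000
# equal intervals; the recovered increments (C.35) are far from N(0, dt).
import numpy as np

rng = np.random.default_rng(0)
n, jumps = 10_000, 100
e, sigma, dt = 0.05, 0.2, 1.0 / n

jump_sizes = np.zeros(n)
jump_sizes[rng.choice(n, size=jumps, replace=False)] = rng.normal(0.0, 0.01, jumps)
log_x = np.concatenate(([0.0], np.cumsum(jump_sizes)))  # log of the price path

dW = (np.diff(log_x) - (e - sigma**2 / 2) * dt) / sigma  # increments via (C.35)
A = -(e - sigma**2 / 2) * dt / sigma                     # the constant (C.36)
print(np.sum(np.isclose(dW, A)))  # 9900 of the 10000 samples equal A
```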
Perhaps some people think that the stock price does behave like a geometric Wiener process (or Ornstein-Uhlenbeck process) at the macroscopic level even though they recognize the paradox at the microscopic level. However, as the very core of stochastic finance theory, Ito's calculus is built precisely on the microscopic structure (i.e., the differential dWt) of the Wiener process rather than on its macroscopic structure. More precisely, Ito's calculus depends on the presumption that dWt is a normal random variable with expected value 0 and variance dt. This unreasonable presumption is what causes the second-order term in Ito's formula for Xt = h(t, Wt),

dXt = (∂h/∂t)(t, Wt)dt + (∂h/∂w)(t, Wt)dWt + (1/2)(∂²h/∂w²)(t, Wt)dt. (C.38)
[Figure C.2 here: a histogram of the 10000 samples of ∆Wt, with about 99% of the mass at the single value A, shown against a continuous density curve.]

Figure C.2: There does not exist any continuous probability distribution (curve) that can approximate the frequency (histogram) of ∆Wt. Hence it is impossible that the real stock price Xt follows any Ito stochastic differential equation.

In fact, the increment of the stock price cannot follow any continuous probability distribution.
On the basis of the above three paradoxes, personally I do not think Ito's calculus can serve as the essential tool of finance theory, because Ito's stochastic differential equation is unable to model stock prices. As a substitute, uncertain calculus may be a potential mathematical foundation of finance theory. We will have a theory of uncertain finance if the stock price, interest rate and exchange rate are assumed to follow uncertain differential equations.

C.10 In what situations should we use uncertainty theory?
Keep in mind that uncertainty theory is not suitable for modelling frequen-
cies. Personally, I think we should use uncertainty theory in the following
five situations.
(i) We should use uncertainty theory (here it refers to uncertain variable)
to quantify the future when no samples are available. In this case, we have to
invite some domain experts to evaluate the belief degree that each event will
happen, and uncertainty theory is just the tool to deal with belief degrees.
(ii) We should use uncertainty theory (here it refers to uncertain variable)
to quantify the future when an emergency arises, e.g., war, flood, earthquake,
accident, and even rumour. In fact, in this case, all historical data are no
longer valid to predict the future. Essentially, this situation equates to (i).
(iii) We should use uncertainty theory (here it refers to uncertain vari-
able) to quantify the past when precise observations or measurements are
impossible to perform, e.g., carbon emission, social benefit and oil reserves.
In this case, we have to invite some domain experts to estimate them, thus
obtaining their uncertainty distributions.
(iv) We should use uncertainty theory (here it refers to uncertain set) to model unsharp concepts, e.g., “young”, “tall”, “warm”, and “most”, due to
the ambiguity of human language.
(v) We should use uncertainty theory (here it refers to uncertain differ-
ential equation) to model dynamic systems with continuous-time noise, e.g.,
stock price, heat conduction, and population growth.

C.11 How did “uncertainty” evolve over the past 100 years?
After the word “randomness” was used to represent probabilistic phenomena,
Knight (1921) and Keynes (1936) started to use the word “uncertainty” to
represent any non-probabilistic phenomena. The academic community also
calls it Knightian uncertainty, Keynesian uncertainty, or true uncertainty.
Unfortunately, it seems impossible for us to develop a mathematical theory
to deal with such a broad class of uncertainty because “non-probability”
represents too many things. This disadvantage prevented uncertainty in the sense of Knight and Keynes from becoming a scientific term. Despite that, we have to recognize that they made great progress in breaking the monopoly of probability theory.
However, there existed two major retrogressions on this issue during that
period. The first retrogression arose from Ramsey (1931) with the Dutch
book argument that “proves” belief degree follows the laws of probability
theory. On the one hand, I strongly disagree with the Dutch book argument.
On the other hand, even if we accept the Dutch book argument, we can only
prove belief degree meets the normality, nonnegativity and additivity axioms
of probability theory, but cannot prove it meets the product probability the-
orem. In other words, Dutch book argument cannot prove probability theory
is able to model belief degree. The second retrogression arose from Cox’s
theorem (1946) that belief degree is isomorphic to a probability measure.
Many people do not notice that Cox’s theorem is based on an unreasonable
assumption, and then mistakenly believe that uncertainty and probability
are synonymous. This idea remains alive today under the name of subjective
probability. Yet numerous experiments demonstrated that belief degree does
not follow the laws of probability theory.
An influential exploration by Zadeh (1965) was the fuzzy set theory that
was widely said to be successfully applied in many areas of our life. However,
fuzzy set theory has neither evolved as a mathematical system nor become
a suitable tool for rationally modelling belief degrees. The main mistake of fuzzy set theory is the wrong assumption that the belief degree of a union of events is the maximum of the belief degrees of the individual events, no matter whether they are independent or not. A lot of surveys showed that human brains do not behave according to fuzziness in the sense of Zadeh.
The latest development was uncertainty theory founded by Liu (2007).
Nowadays, uncertainty theory has become a branch of pure mathematics
that is not only a formal study of an abstract structure (i.e., uncertainty
space) but also applicable to modelling belief degrees. Perhaps some readers
may complain that I never clarify what uncertainty is in this book. I think
we can answer it this way now. Uncertainty is anything that follows the laws
of uncertainty theory (i.e., the four axioms of uncertainty theory). From then
on, “uncertainty” became a scientific terminology on the basis of uncertainty
theory.

C.12 How can we distinguish between randomness and uncertainty in practice?
There are two types of indeterminacy: randomness and uncertainty. Ran-
domness is anything that follows the laws of probability theory (i.e., the three
axioms of probability theory plus product probability theorem), and uncer-
tainty is anything that follows the laws of uncertainty theory (i.e., the four
axioms of uncertainty theory).
Of course, we can distinguish between randomness and uncertainty by the
above definitions. However, in practice, we can quickly distinguish between
them in this way. For any given indeterminate quantity, we first produce
a distribution function no matter what method is used. If we believe the
distribution function is close enough to the frequency, then it can be treated
as randomness. Otherwise, it has to be treated as uncertainty.
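A hedged sketch of this practical rule; the normal fit, the file name, and the 0.05 threshold are illustrative assumptions, not part of the theory:

```python
# Fit a distribution to the data, then judge whether it is close enough
# to the observed frequency (here via the Kolmogorov-Smirnov statistic).
import numpy as np
from scipy import stats

data = np.loadtxt("observations.txt")  # hypothetical sample file
mu_hat, sd_hat = data.mean(), data.std(ddof=1)
ks = stats.kstest(data, "norm", args=(mu_hat, sd_hat))
verdict = "randomness" if ks.statistic < 0.05 else "uncertainty"
print(ks.statistic, verdict)
```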
Probability theory provides a rigorous mathematical foundation to study
randomness, while uncertainty theory provides a rigorous mathematical foun-
dation to study uncertainty.
Bibliography

[1] Bachelier L, Théorie de la spéculation, Annales Scientifiques de l'École Normale Supérieure, Vol.17, 21-86, 1900.
[2] Barbacioru IC, Uncertainty functional differential equations for finance, Sur-
veys in Mathematics and its Applications, Vol.5, 275-284, 2010.
[3] Black F, and Scholes M, The pricing of options and corporate liabilities, Journal of Political Economy, Vol.81, 637-654, 1973.
[4] Charnes A, and Cooper WW, Management Models and Industrial Applica-
tions of Linear Programming, Wiley, New York, 1961.
[5] Chen XW, and Liu B, Existence and uniqueness theorem for uncertain dif-
ferential equations, Fuzzy Optimization and Decision Making, Vol.9, No.1,
69-81, 2010.
[6] Chen XW, American option pricing formula for uncertain financial market,
International Journal of Operations Research, Vol.8, No.2, 32-37, 2011.
[7] Chen XW, and Ralescu DA, A note on truth value in uncertain logic, Expert
Systems with Applications, Vol.38, No.12, 15582-15586, 2011.
[8] Chen XW, and Dai W, Maximum entropy principle for uncertain variables,
International Journal of Fuzzy Systems, Vol.13, No.3, 232-236, 2011.
[9] Chen XW, Kar S, and Ralescu DA, Cross-entropy measure of uncertain vari-
ables, Information Sciences, Vol.201, 53-60, 2012.
[10] Chen XW, Variation analysis of uncertain stationary independent increment
process, European Journal of Operational Research, Vol.222, No.2, 312-316,
2012.
[11] Chen XW, and Ralescu DA, B-spline method of uncertain statistics with
applications to estimate travel distance, Journal of Uncertain Systems, Vol.6,
No.4, 256-262, 2012.
[12] Chen XW, Liu YH, and Ralescu DA, Uncertain stock model with periodic
dividends, Fuzzy Optimization and Decision Making, Vol.12, No.1, 111-123,
2013.
[13] Chen XW, and Ralescu DA, Liu process and uncertain calculus, Journal of
Uncertainty Analysis and Applications, Vol.1, Article 3, 2013.
[14] Chen XW, and Gao J, Uncertain term structure model of interest rate, Soft
Computing, Vol.17, No.4, 597-604, 2013.
[15] Chen XW, Li XF, and Ralescu DA, A note on uncertain sequence, Inter-
national Journal of Uncertainty, Fuzziness and Knowledge-Based Systems,
Vol.22, No.2, 305-314, 2014.
[16] Chen XW, Uncertain calculus with finite variation processes, Soft Computing,
Vol.19, No.10, 2905-2912, 2015.
[17] Chen XW, and Gao J, Two-factor term structure model with uncertain
volatility risk, Soft Computing, to be published.
[18] Chen XW, Theory of Uncertain Finance, http://orsc.edu.cn/chen/tuf.pdf.
[19] Cox RT, Probability, frequency and reasonable expectation, American Jour-
nal of Physics, Vol.14, 1-13, 1946.
[20] Dai W, and Chen XW, Entropy of function of uncertain variables, Mathe-
matical and Computer Modelling, Vol.55, Nos.3-4, 754-760, 2012.
[21] Dai W, Quadratic entropy of uncertain variables, Soft Computing, to be pub-
lished.
[22] Dantzig GB, Linear programming under uncertainty, Management Science,
Vol.1, 197-206, 1955.
[23] de Finetti B, La prévision: ses lois logiques, ses sources subjectives, Annales
de l’Institut Henri Poincaré, Vol.7, 1-68, 1937.
[24] de Luca A, and Termini S, A definition of nonprobabilistic entropy in the
setting of fuzzy sets theory, Information and Control, Vol.20, 301-312, 1972.
[25] Dijkstra EW, A note on two problems in connexion with graphs, Numerische Mathematik, Vol.1, No.1, 269-271, 1959.
[26] Ding SB, Uncertain minimum cost flow problem, Soft Computing, Vol.18,
No.11, 2201-2207, 2014.
[27] Dubois D, and Prade H, Possibility Theory: An Approach to Computerized
Processing of Uncertainty, Plenum, New York, 1988.
[28] Elkan C, The paradoxical success of fuzzy logic, IEEE Expert, Vol.9, No.4,
3-8, 1994.
[29] Erdős P, and Rényi A, On random graphs, Publicationes Mathematicae, Vol.6,
290-297, 1959.
[30] Frank H, and Hakimi SL, Probabilistic flows through a communication net-
work, IEEE Transactions on Circuit Theory, Vol.12, 413-414, 1965.
[31] Gao J, and Yao K, Some concepts and theorems of uncertain random process,
International Journal of Intelligent Systems, Vol.30, No.1, 52-65, 2015.
[32] Gao R, Milne method for solving uncertain differential equations, Applied
Mathematics and Computation, Vol.274, 774-785, 2016.
[33] Gao R, and Sheng YH, Law of large numbers for uncertain random variables
with different chance distributions, Journal of Intelligent & Fuzzy Systems,
Vol.31, No.3, 1227-1234, 2016.
[34] Gao R, and Yao K, Importance index of component in uncertain reliability
system, Journal of Uncertainty Analysis and Applications, Vol.4, Article 7,
2016.
[35] Gao R, and Yao K, Importance index of components in uncertain random systems, Knowledge-Based Systems, Vol.109, 208-217, 2016.
[36] Gao R, and Ahmadzade H, Moment analysis of uncertain stationary inde-
pendent increment processes, Journal of Uncertain Systems, Vol.10, No.4,
260-268, 2016.
[37] Gao R, Uncertain wave equation with infinite half-boundary, Applied Math-
ematics and Computation, Vol.304, 28-40, 2017.
[38] Gao R, Sun Y, and Ralescu DA, Order statistics of uncertain random vari-
ables with application to k-out-of-n system, Fuzzy Optimization and Decision
Making, Vol.16, No.2, 159-181, 2017.
[39] Gao R, and Chen XW, Some concepts and properties of uncertain fields,
Journal of Intelligent & Fuzzy Systems, Vol.32, No.6, 4367-4378, 2017.
[40] Gao R, and Ralescu DA, Convergence in distribution for uncertain random
variables, IEEE Transactions on Fuzzy Systems, to be published.
[41] Gao X, Some properties of continuous uncertain measure, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.17, No.3, 419-
426, 2009.
[42] Gao X, Gao Y, and Ralescu DA, On Liu’s inference rule for uncertain sys-
tems, International Journal of Uncertainty, Fuzziness and Knowledge-Based
Systems, Vol.18, No.1, 1-11, 2010.
[43] Gao XL, and Gao Y, Connectedness index of uncertain graphs, International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.21, No.1,
127-137, 2013.
[44] Gao XL, Regularity index of uncertain graph, Journal of Intelligent & Fuzzy
Systems, Vol.27, No.4, 1671-1678, 2014.
[45] Gao Y, Shortest path problem with uncertain arc lengths, Computers and
Mathematics with Applications, Vol.62, No.6, 2591-2600, 2011.
[46] Gao Y, Uncertain inference control for balancing inverted pendulum, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 481-492, 2012.
[47] Gao Y, Existence and uniqueness theorem on uncertain differential equations
with local Lipschitz condition, Journal of Uncertain Systems, Vol.6, No.3,
223-232, 2012.
[48] Gao Y, Gao R, and Yang LX, Analysis of order statistics of uncertain vari-
ables, Journal of Uncertainty Analysis and Applications, Vol.3, Article 1,
2015.
[49] Gao Y, and Qin ZF, On computing the edge-connectivity of an uncertain
graph, IEEE Transactions on Fuzzy Systems, Vol.24, No.4, 981-991, 2016.
[50] Ge XT, and Zhu Y, Existence and uniqueness theorem for uncertain delay
differential equations, Journal of Computational Information Systems, Vol.8,
No.20, 8341-8347, 2012.
[51] Gilbert EN, Random graphs, Annals of Mathematical Statistics, Vol.30, No.4,
1141-1144, 1959.
[52] Guo HY, and Wang XS, Variance of uncertain random variables, Journal of
Uncertainty Analysis and Applications, Vol.2, Article 6, 2014.
[53] Guo HY, Wang XS, Wang LL, and Chen D, Delphi method for estimating
membership function of uncertain set, Journal of Uncertainty Analysis and
Applications, Vol.4, Article 3, 2016.
[54] Han SW, Peng ZX, and Wang SQ, The maximum flow problem of uncertain
network, Information Sciences, Vol.265, 167-175, 2014.
[55] Hou YC, Subadditivity of chance measure, Journal of Uncertainty Analysis
and Applications, Vol.2, Article 14, 2014.
[56] Ito K, Stochastic integral, Proceedings of the Japan Academy Series A, Vol.20,
No.8, 519-524, 1944.
[57] Ito K, On stochastic differential equations, Memoirs of the American Math-
ematical Society, No.4, 1-51, 1951.
[58] Iwamura K, and Kageyama M, Exact construction of Liu process, Applied
Mathematical Sciences, Vol.6, No.58, 2871-2880, 2012.
[59] Iwamura K, and Xu YL, Estimating the variance of the square of canonical
process, Applied Mathematical Sciences, Vol.7, No.75, 3731-3738, 2013.
[60] Jaynes ET, Information theory and statistical mechanics, Physical Reviews,
Vol.106, No.4, 620-630, 1957.
[61] Jaynes ET, Probability Theory: The Logic of Science, Cambridge University
Press, 2003.
[62] Ji XY, and Zhou J, Option pricing for an uncertain stock model with jumps,
Soft Computing, Vol.19, No.11, 3323-3329, 2015.
[63] Jia LF, and Dai W, Uncertain forced vibration equation of spring mass sys-
tem, Technical Report, 2017.
[64] Jiao DY, and Yao K, An interest rate model in uncertain environment, Soft
Computing, Vol.19, No.3, 775-780, 2015.
[65] Kahneman D, and Tversky A, Prospect theory: An analysis of decision under
risk, Econometrica, Vol.47, No.2, 263-292, 1979.
[66] Ke H, Su TY, and Ni YD, Uncertain random multilevel programming with
application to product control problem, Soft Computing, Vol.19, No.6, 1739-
1746, 2015.
[67] Ke H, and Yao K, Block replacement policy in uncertain environment, Reli-
ability Engineering & System Safety, Vol.148, 119-124, 2016.
[68] Keynes JM, The General Theory of Employment, Interest, and Money, Har-
court, New York, 1936.
[69] Knight FH, Risk, Uncertainty, and Profit, Houghton Mifflin, Boston, 1921.
[70] Kolmogorov AN, Grundbegriffe der Wahrscheinlichkeitsrechnung, Julius
Springer, Berlin, 1933.
[71] Li SG, Peng J, and Zhang B, Multifactor uncertain differential equation,
Journal of Uncertainty Analysis and Applications, Vol.3, Article 7, 2015.
[72] Li X, and Liu B, Hybrid logic and uncertain logic, Journal of Uncertain
Systems, Vol.3, No.2, 83-94, 2009.
[73] Lio W, and Liu B, Uncertain data envelopment analysis with imprecisely
observed inputs and outputs, Fuzzy Optimization and Decision Making, to
be published.
[74] Lio W, and Liu B, Residual and confidence interval for uncertain regression
model with imprecise observations, Technical Report, 2017.
[75] Liu B, Theory and Practice of Uncertain Programming, Physica-Verlag, Hei-
delberg, 2002.
[76] Liu B, and Liu YK, Expected value of fuzzy variable and fuzzy expected value
models, IEEE Transactions on Fuzzy Systems, Vol.10, No.4, 445-450, 2002.
[77] Liu B, Uncertainty Theory, 2nd edn, Springer-Verlag, Berlin, 2007.
[78] Liu B, Fuzzy process, hybrid process and uncertain process, Journal of Un-
certain Systems, Vol.2, No.1, 3-16, 2008.
[79] Liu B, Theory and Practice of Uncertain Programming, 2nd edn, Springer-
Verlag, Berlin, 2009.
[80] Liu B, Some research problems in uncertainty theory, Journal of Uncertain
Systems, Vol.3, No.1, 3-10, 2009.
[81] Liu B, Uncertain entailment and modus ponens in the framework of uncertain
logic, Journal of Uncertain Systems, Vol.3, No.4, 243-251, 2009.
[82] Liu B, Uncertain set theory and uncertain inference rule with application to
uncertain control, Journal of Uncertain Systems, Vol.4, No.2, 83-98, 2010.
[83] Liu B, Uncertain risk analysis and uncertain reliability analysis, Journal of
Uncertain Systems, Vol.4, No.3, 163-170, 2010.
[84] Liu B, Uncertainty Theory: A Branch of Mathematics for Modeling Human
Uncertainty, Springer-Verlag, Berlin, 2010.
[85] Liu B, Uncertain logic for modeling human language, Journal of Uncertain
Systems, Vol.5, No.1, 3-20, 2011.
[86] Liu B, Why is there a need for uncertainty theory? Journal of Uncertain
Systems, Vol.6, No.1, 3-10, 2012.
[87] Liu B, and Yao K, Uncertain integral with respect to multiple canonical
processes, Journal of Uncertain Systems, Vol.6, No.4, 250-255, 2012.
[88] Liu B, Membership functions and operational law of uncertain sets, Fuzzy
Optimization and Decision Making, Vol.11, No.4, 387-410, 2012.
[89] Liu B, Toward uncertain finance theory, Journal of Uncertainty Analysis and
Applications, Vol.1, Article 1, 2013.
[90] Liu B, Extreme value theorems of uncertain process with application to in-
surance risk model, Soft Computing, Vol.17, No.4, 549-556, 2013.
[91] Liu B, A new definition of independence of uncertain sets, Fuzzy Optimization
and Decision Making, Vol.12, No.4, 451-461, 2013.
[92] Liu B, Polyrectangular theorem and independence of uncertain vectors, Jour-
nal of Uncertainty Analysis and Applications, Vol.1, Article 9, 2013.
[93] Liu B, Uncertain random graph and uncertain random network, Journal of
Uncertain Systems, Vol.8, No.1, 3-12, 2014.
[94] Liu B, Uncertainty distribution and independence of uncertain processes, Fuzzy Optimization and Decision Making, Vol.13, No.3, 259-271, 2014.
[95] Liu B, Uncertainty Theory, 4th edn, Springer-Verlag, Berlin, 2015.
[96] Liu B, and Chen XW, Uncertain multiobjective programming and uncertain
goal programming, Journal of Uncertainty Analysis and Applications, Vol.3,
Article 10, 2015.
[97] Liu B, and Yao K, Uncertain multilevel programming: Algorithm and appli-
cations, Computers & Industrial Engineering, Vol.89, 235-240, 2015.
[98] Liu B, Some preliminary results about uncertain matrix, Journal of Uncer-
tainty Analysis and Applications, Vol.4, Article 11, 2016.
[99] Liu B, Totally ordered uncertain sets, Fuzzy Optimization and Decision Mak-
ing, to be published.
[100] Liu HJ, and Fei WY, Neutral uncertain delay differential equations, Infor-
mation: An International Interdisciplinary Journal, Vol.16, No.2, 1225-1232,
2013.
[101] Liu HJ, Ke H, and Fei WY, Almost sure stability for uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.13, No.4, 463-473,
2014.
[102] Liu JJ, Uncertain comprehensive evaluation method, Journal of Information
& Computational Science, Vol.8, No.2, 336-344, 2011.
[103] Liu W, and Xu JP, Some properties on expected value operator for uncertain
variables, Information: An International Interdisciplinary Journal, Vol.13,
No.5, 1693-1699, 2010.
[104] Liu YH, and Ha MH, Expected value of function of uncertain variables, Jour-
nal of Uncertain Systems, Vol.4, No.3, 181-186, 2010.
[105] Liu YH, An analytic method for solving uncertain differential equations, Jour-
nal of Uncertain Systems, Vol.6, No.4, 244-249, 2012.
[106] Liu YH, Uncertain random variables: A mixture of uncertainty and random-
ness, Soft Computing, Vol.17, No.4, 625-634, 2013.
[107] Liu YH, Uncertain random programming with applications, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.2, 153-169, 2013.
[108] Liu YH, and Ralescu DA, Risk index in uncertain random risk analysis, In-
ternational Journal of Uncertainty, Fuzziness & Knowledge-Based Systems,
Vol.22, No.4, 491-504, 2014.
[109] Liu YH, Chen XW, and Ralescu DA, Uncertain currency model and currency
option pricing, International Journal of Intelligent Systems, Vol.30, No.1, 40-
51, 2015.
[110] Liu YH, and Ralescu DA, Value-at-risk in uncertain random risk analysis,
Information Sciences, Vol.391, 1-8, 2017.
[111] Liu YH, and Yao K, Uncertain random logic and uncertain random entail-
ment, Journal of Ambient Intelligence and Humanized Computing, Vol.8,
No.5, 695-706, 2017.
[112] Liu YH, and Ralescu DA, Expected loss of uncertain random systems, Soft
Computing, to be published.
[113] Matheron G, Random Sets and Integral Geometry, Wiley, New York, 1975.
[114] Merton RC, Theory of rational option pricing, Bell Journal of Economics and
Management Science, Vol.4, 141-183, 1973.
[115] Morgan JP, Risk Metrics TM – Technical Document, 4th edn, Morgan Guar-
anty Trust Companies, New York, 1996.
[116] Nahmias S, Fuzzy variables, Fuzzy Sets and Systems, Vol.1, 97-110, 1978.
[117] Nejad ZM, and Ghaffari-Hadigheh A, A novel DEA model based on uncer-
tainty theory, Annals of Operations Research, to be published.
[118] Nilsson NJ, Probabilistic logic, Artificial Intelligence, Vol.28, 71-87, 1986.
[119] Ning YF, Ke H, and Fu ZF, Triangular entropy of uncertain variables with
application to portfolio selection, Soft Computing, Vol.19, No.8, 2203-2209,
2015.
[120] Peng J, and Yao K, A new option pricing model for stocks in uncertainty
markets, International Journal of Operations Research, Vol.8, No.2, 18-26,
2011.
[121] Peng J, Risk metrics of loss function for uncertain system, Fuzzy Optimization
and Decision Making, Vol.12, No.1, 53-64, 2013.
[122] Peng ZX, and Iwamura K, A sufficient and necessary condition of uncertainty
distribution, Journal of Interdisciplinary Mathematics, Vol.13, No.3, 277-285,
2010.
[123] Peng ZX, and Iwamura K, Some properties of product uncertain measure,
Journal of Uncertain Systems, Vol.6, No.4, 263-269, 2012.
[124] Peng ZX, and Chen XW, Uncertain systems are universal approximators,
Journal of Uncertainty Analysis and Applications, Vol.2, Article 13, 2014.
[125] Pugsley AG, A philosophy of strength factors, Aircraft Engineering and
Aerospace Technology, Vol.16, No.1, 18-19, 1944.
[126] Qin ZF, and Gao X, Fractional Liu process with application to finance, Math-
ematical and Computer Modelling, Vol.50, Nos.9-10, 1538-1543, 2009.
[127] Qin ZF, Uncertain random goal programming, Fuzzy Optimization and De-
cision Making, to be published.
[128] Ramsey FP, Truth and probability, In Foundations of Mathematics and Other
Logical Essays, Humanities Press, New York, 1931.
[129] Reichenbach H, The Theory of Probability, University of California Press,
Berkeley, 1948.
[130] Robbins HE, On the measure of a random set, Annals of Mathematical Statis-
tics, Vol.15, No.1, 70-74, 1944.
[131] Roy AD, Safety-first and the holding of assets, Econometrica, Vol.20, 431-449, 1952.
[132] Samuelson PA, Rational theory of warrant pricing, Industrial Management
Review, Vol.6, 13-31, 1965.
[133] Savage LJ, The Foundations of Statistics, Wiley, New York, 1954.
[134] Savage LJ, The Foundations of Statistical Inference, Methuen, London, 1962.
[135] Shannon CE, The Mathematical Theory of Communication, The University
of Illinois Press, Urbana, 1949.
[136] Shen YY, and Yao K, A mean-reverting currency model in an uncertain
environment, Soft Computing, Vol.20, No.10, 4131-4138, 2016.
[137] Sheng YH, and Wang CG, Stability in the p-th moment for uncertain differen-
tial equation, Journal of Intelligent & Fuzzy Systems, Vol.26, No.3, 1263-1271,
2014.
[138] Sheng YH, and Yao K, Some formulas of variance of uncertain random vari-
able, Journal of Uncertainty Analysis and Applications, Vol.2, Article 12,
2014.
[139] Sheng YH, and Gao J, Chance distribution of the maximum flow of uncertain
random network, Journal of Uncertainty Analysis and Applications, Vol.2,
Article 15, 2014.
[140] Sheng YH, and Kar S, Some results of moments of uncertain variable through
inverse uncertainty distribution, Fuzzy Optimization and Decision Making,
Vol.14, No.1, 57-76, 2015.
[141] Sheng YH, and Gao J, Exponential stability of uncertain differential equation,
Soft Computing, Vol.20, No.9, 3673-3678, 2016.
[142] Sheng YH, Qin ZF, and Shi G, Minimum spanning tree problem of uncertain
random network, Journal of Intelligent Manufacturing, Vol.28, No.3, 565-574,
2017.
[143] Sheng YH, Gao R, and Zhang ZQ, Uncertain population model with age-
structure, Journal of Intelligent & Fuzzy Systems, Vol.33, No.2, 853-858,
2017.
[144] Sun JJ, and Chen XW, Asian option pricing formula for uncertain financial
market, Journal of Uncertainty Analysis and Applications, Vol.3, Article 11,
2015.
[145] Tian JF, Inequalities and mathematical properties of uncertain variables,
Fuzzy Optimization and Decision Making, Vol.10, No.4, 357-368, 2011.
[146] Venn J, The Logic of Chance, MacMillan, London, 1866.
[147] von Mises R, Wahrscheinlichkeit, Statistik und Wahrheit, Springer, Berlin,
1928.
[148] von Mises R, Wahrscheinlichkeitsrechnung und ihre Anwendung in der Statis-
tik und Theoretischen Physik, Leipzig and Wien, Franz Deuticke, 1931.
[149] Wang X, Ning YF, Moughal TA, and Chen XM, Adams-Simpson method for
solving uncertain differential equation, Applied Mathematics and Computa-
tion, Vol.271, 209-219, 2015.
[150] Wang X, and Ning YF, An uncertain currency model with floating interest
rates, Soft Computing, Vol.21, No.22, 6739-6754, 2017.
[151] Wang XS, Gao ZC, and Guo HY, Uncertain hypothesis testing for two ex-
perts’ empirical data, Mathematical and Computer Modelling, Vol.55, 1478-
1482, 2012.
[152] Wang XS, Gao ZC, and Guo HY, Delphi method for estimating uncer-
tainty distributions, Information: An International Interdisciplinary Journal,
Vol.15, No.2, 449-460, 2012.
[153] Wang XS, and Ha MH, Quadratic entropy of uncertain sets, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 99-109, 2013.
[154] Wang XS, and Peng ZX, Method of moments for estimating uncertainty dis-
tributions, Journal of Uncertainty Analysis and Applications, Vol.2, Article
5, 2014.
[155] Wen ML, and Kang R, Reliability analysis in uncertain random system, Fuzzy
Optimization and Decision Making, Vol.15, No.4, 491-506, 2016.
[156] Wen ML, Zhang QY, Kang R, and Yang Y, Some new ranking criteria in data
envelopment analysis under uncertain environment, Computers & Industrial
Engineering, Vol.110, 498-504, 2017.
[157] Wiener N, Differential space, Journal of Mathematical Physics, Vol.2, 131-
174, 1923.
[158] Yang XF, and Gao J, Uncertain differential games with application to capi-
talism, Journal of Uncertainty Analysis and Applications, Vol.1, Article 17,
2013.
[159] Yang XF, and Gao J, Some results of moments of uncertain set, Journal of
Intelligent & Fuzzy Systems, Vol.28, No.6, 2433-2442, 2015.
[160] Yang XF, and Ralescu DA, Adams method for solving uncertain differential
equations, Applied Mathematics and Computation, Vol.270, 993-1003, 2015.
[161] Yang XF, and Shen YY, Runge-Kutta method for solving uncertain differ-
ential equations, Journal of Uncertainty Analysis and Applications, Vol.3,
Article 17, 2015.
[162] Yang XF, and Gao J, Linear-quadratic uncertain differential game with appli-
cation to resource extraction problem, IEEE Transactions on Fuzzy Systems,
Vol.24, No.4, 819-826, 2016.
[163] Yang XF, Ni YD, and Zhang YS, Stability in inverse distribution for uncertain
differential equations, Journal of Intelligent & Fuzzy Systems, Vol.32, No.3,
2051-2059, 2017.
[164] Yang XF, and Yao K, Uncertain partial differential equation with application
to heat conduction, Fuzzy Optimization and Decision Making, Vol.16, No.3,
379-403, 2017.
[165] Yang XF, Gao J, and Ni YD, Resolution principle in uncertain random envi-
ronment, IEEE Transactions on Fuzzy Systems, to be published.
[166] Yang XF, and Liu B, Uncertain time series analysis with imprecise observa-
tions, Technical Report, 2017.
[167] Yang XH, On comonotonic functions of uncertain variables, Fuzzy Optimiza-
tion and Decision Making, Vol.12, No.1, 89-98, 2013.
[168] Yao K, Uncertain calculus with renewal process, Fuzzy Optimization and
Decision Making, Vol.11, No.3, 285-297, 2012.
[169] Yao K, and Li X, Uncertain alternating renewal process and its application,
IEEE Transactions on Fuzzy Systems, Vol.20, No.6, 1154-1160, 2012.
[170] Yao K, Gao J, and Gao Y, Some stability theorems of uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.12, No.1, 3-13, 2013.
[171] Yao K, Extreme values and integral of solution of uncertain differential equa-
tion, Journal of Uncertainty Analysis and Applications, Vol.1, Article 2, 2013.
[172] Yao K, and Ralescu DA, Age replacement policy in uncertain environment,
Iranian Journal of Fuzzy Systems, Vol.10, No.2, 29-39, 2013.
[173] Yao K, and Chen XW, A numerical method for solving uncertain differential
equations, Journal of Intelligent & Fuzzy Systems, Vol.25, No.3, 825-832,
2013.
[174] Yao K, A type of nonlinear uncertain differential equations with analytic
solution, Journal of Uncertainty Analysis and Applications, Vol.1, Article 8,
2013.
[175] Yao K, and Ke H, Entropy operator for membership function of uncertain
set, Applied Mathematics and Computation, Vol.242, 898-906, 2014.
[176] Yao K, A no-arbitrage theorem for uncertain stock model, Fuzzy Optimization
and Decision Making, Vol.14, No.2, 227-242, 2015.
[177] Yao K, Ke H, and Sheng YH, Stability in mean for uncertain differential
equation, Fuzzy Optimization and Decision Making, Vol.14, No.3, 365-379,
2015.
[178] Yao K, A formula to calculate the variance of uncertain variable, Soft Com-
puting, Vol.19, No.10, 2947-2953, 2015.
[179] Yao K, and Gao J, Uncertain random alternating renewal process with appli-
cation to interval availability, IEEE Transactions on Fuzzy Systems, Vol.23,
No.5, 1333-1342, 2015.
[180] Yao K, Inclusion relationship of uncertain sets, Journal of Uncertainty Anal-
ysis and Applications, Vol.3, Article 13, 2015.
[181] Yao K, Uncertain contour process and its application in stock model with
floating interest rate, Fuzzy Optimization and Decision Making, Vol.14, No.4,
399-424, 2015.
[182] Yao K, and Gao J, Law of large numbers for uncertain random variables,
IEEE Transactions on Fuzzy Systems, Vol.24, No.3, 615-621, 2016.
[183] Yao K, and Zhou J, Uncertain random renewal reward process with appli-
cation to block replacement policy, IEEE Transactions on Fuzzy Systems,
Vol.24, No.6, 1637-1647, 2016.
[184] Yao K, Uncertain Differential Equations, Springer-Verlag, Berlin, 2016.
[185] Yao K, Ruin time of uncertain insurance risk process, IEEE Transactions on
Fuzzy Systems, to be published.
[186] Yao K, Conditional uncertain set and conditional membership function, Fuzzy
Optimization and Decision Making, to be published.
[187] Yao K, and Liu B, Uncertain regression analysis: An approach for imprecise
observations, Soft Computing, to be published.
[188] Yao K, and Zhou J, Renewal reward process with uncertain interarrival times
and random rewards, IEEE Transactions on Fuzzy Systems, to be published.
[189] Yao K, Extreme value and time integral of uncertain independent increment
process, http://orsc.edu.cn/online/130302.pdf.
[190] You C, Some convergence theorems of uncertain sequences, Mathematical and
Computer Modelling, Vol.49, Nos.3-4, 482-487, 2009.
[191] Yu XC, A stock model with jumps for uncertain markets, International Jour-
nal of Uncertainty, Fuzziness & Knowledge-Based Systems, Vol.20, No.3, 421-
432, 2012.
[192] Zadeh LA, Fuzzy sets, Information and Control, Vol.8, 338-353, 1965.
[193] Zadeh LA, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and
Systems, Vol.1, 3-28, 1978.
[194] Zadeh LA, A theory of approximate reasoning, In: J Hayes, D Michie and
RM Thrall, eds., Mathematical Frontiers of the Social and Policy Sciences,
Westview Press, Boulder, Cororado, 69-129, 1979.
[195] Zeng ZG, Wen ML, Kang R, Belief reliability: A new metrics for products’ re-
liability, Fuzzy Optimization and Decision Making, Vol.12, No.1, 15-27, 2013.
[196] Zeng ZG, Kang R, Wen ML, and Zio E, Uncertainty theory as a basis for
belief reliability, Information Sciences, Vol.429, 26-36, 2018.
[197] Zhang B, and Peng J, Euler index in uncertain graph, Applied Mathematics
and Computation, Vol.218, No.20, 10279-10288, 2012.
[198] Zhang B, Peng J, and Li SG, Euler index of uncertain random graph, Inter-
national Journal of Computer Mathematics, Vol.94, No.2, 217-229, 2017.
[199] Zhang CX, and Guo CR, Uncertain block replacement policy with no re-
placement at failure, Journal of Intelligent & Fuzzy Systems, Vol.27, No.4,
1991-1997, 2014.
[200] Zhang XF, Ning YF, and Meng GW, Delayed renewal process with uncertain
interarrival times, Fuzzy Optimization and Decision Making, Vol.12, No.1,
79-87, 2013.
[201] Zhang XF, and Li X, A semantic study of the first-order predicate logic with
uncertainty involved, Fuzzy Optimization and Decision Making, Vol.13, No.4,
357-367, 2014.
[202] Zhang Y, Gao J, and Huang ZY, Hamming method for solving uncertain
differential equations, Applied Mathematics and Computation, Vol.313, 331-
341, 2017.
[203] Zhang ZM, Some discussions on uncertain measure, Fuzzy Optimization and
Decision Making, Vol.10, No.1, 31-43, 2011.
[204] Zhang ZQ, and Liu WQ, Geometric average Asian option pricing for uncertain
financial market, Journal of Uncertain Systems, Vol.8, No.4, 317-320, 2014.
[205] Zhang ZQ, Ralescu DA, and Liu WQ, Valuation of interest rate ceiling and
floor in uncertain financial market, Fuzzy Optimization and Decision Making,
Vol.15, No.2, 139-154, 2016.
[206] Zhou J, Yang F, and Wang K, Multi-objective optimization in uncertain ran-
dom environments, Fuzzy Optimization and Decision Making, Vol.13, No.4,
397-413, 2014.
[207] Zhu Y, Uncertain optimal control with application to a portfolio selection
model, Cybernetics and Systems, Vol.41, No.7, 535-547, 2010.
List of Frequently Used Symbols

M uncertain measure
(Γ, L, M) uncertainty space
ξ, η, τ uncertain variables
Φ, Ψ, Υ uncertainty distributions
Φ−1 , Ψ−1 , Υ−1 inverse uncertainty distributions
µ, ν, λ membership functions
µ−1 , ν−1 , λ−1 inverse membership functions
L(a, b) linear uncertain variable
Z(a, b, c) zigzag uncertain variable
N (e, σ) normal uncertain variable
LOGN (e, σ) lognormal uncertain variable
(a, b, c) triangular uncertain set
(a, b, c, d) trapezoidal uncertain set
E expected value
V variance
H entropy
Xt , Yt , Zt uncertain processes
Ct Liu process
Nt renewal process
Q uncertain quantifier
(Q, S, P ) uncertain proposition
∀ universal quantifier
∃ existential quantifier
∨ maximum operator
∧ minimum operator
¬ negation symbol
Pr probability measure
(Ω, A, Pr) probability space
Ch chance measure
k-max the kth largest value
k-min the kth smallest value
∅ the empty set
ℜ the set of real numbers
iid independent and identically distributed
Index

age replacement policy, 306
algebra, 11
α-path, 343
alternating renewal process, 310
American option, 363
Asian option, 365
asymptotic theorem, 17
autoregressive model, 403
belief degree, 3
betting ratio, 471
bisection method, 58
block replacement policy, 299
Boolean function, 66
Boolean uncertain variable, 66
Borel algebra, 12
Borel set, 12
bridge system, 150
chain rule, 327
chance distribution, 417
chance inversion theorem, 418
chance measure, 412
change of variables, 327
Chebyshev inequality, 82
Chen-Ralescu theorem, 157
comonotonic function, 77
complement of uncertain set, 177, 205
complete uncertainty space, 19
conditional uncertainty, 29, 95, 230
confidence interval, 401, 407
containment, 213
convergence almost surely, 98
convergence in distribution, 99
convergence in mean, 99
convergence in measure, 99
currency option, 378
Delphi method, 392
De Morgan's law, 180
diffusion, 319, 324
distance, 87, 225
disturbance term, 397, 404
drift, 319, 324
dual quantifier, 241
duality axiom, 13
Ellsberg experiment, 468
empirical membership function, 394
empirical uncertainty distribution, 42
entropy, 89, 226
Euler method, 355
European option, 359
event, 13
expected loss, 143, 442
expected value, 71, 216, 426
expert's experimental data, 385, 393
extreme value theorem, 62, 282
fair price principle, 360
feasible solution, 113
first hitting time, 285, 351
forecast value, 401, 406
frequency, 2
fundamental theorem of calculus, 325
fuzzy set, 474
goal programming, 130
hazard distribution, 144
Hölder's inequality, 79
hypothetical syllogism, 170
imaginary inclusion, 216
inclusion, 213
independence, 25, 46, 197
independent increment, 280
indeterminacy, 1
individual feature data, 235
inference rule, 261
integration by parts, 328
interest rate ceiling, 375
interest rate floor, 377
intersection of uncertain sets, 177, 202
inverse membership function, 195
inverse uncertainty distribution, 44
inverted pendulum, 268
investment risk analysis, 142
Ito's formula, 479
Jensen's inequality, 80
k-out-of-n system, 134
law of contradiction, xiv, 179
law of excluded middle, xiv, 179
law of large numbers, 433
law of truth conservation, xiv
Lebesgue measure, 15
linear uncertain variable, 40
linguistic summarizer, 256
Liu integral, 320
Liu process, 315
logical equivalence theorem, 250
lognormal uncertain variable, 41
loss function, 133
machine scheduling problem, 118
Markov inequality, 79
maximum entropy principle, 94
maximum flow problem, 449
maximum uncertainty principle, xiv
measurable function, 33
measurable set, 12
measure inversion formula, 182
measure inversion theorem, 42
membership function, 182
method of moments, 390
Minkowski inequality, 80
modus ponens, 168
modus tollens, 169
moment, 84
monotone quantifier, 239
monotonicity theorem, 16
multilevel programming, 131
multiobjective programming, 129
multivariate normal distribution, 107
Nash equilibrium, 132
negated quantifier, 240
nonempty uncertain set, 177
normal uncertain variable, 41
normal uncertain vector, 106
normality axiom, 13
operational law, 48, 200, 279, 419
optimal solution, 114
option pricing, 359
order statistic, 61, 422
parallel system, 134
Pareto solution, 129
Peng-Iwamura theorem, 38
polyrectangular theorem, 27
portfolio selection, 370
possibility measure, 474
power set, 12
principle of least squares, 388, 394
product axiom, 20
product probability theorem, 469
product uncertain measure, 20
project scheduling problem, 125
randomness, definition of, 482
regression model, 397
regular membership function, 197
regular uncertainty distribution, 43
reliability index, 149, 442
renewal process, 295, 449
renewal reward process, 300
residual, 399, 405
risk index, 135, 438
ruin index, 303
ruin time, 304
rule-base, 265
Runge-Kutta method, 356
sample path, 274
series system, 133
shortest path problem, 448
σ-algebra, 11
stability, 341
Stackelberg-Nash equilibrium, 132
standby system, 134
stationary increment, 290
strictly decreasing function, 54
strictly increasing function, 48
strictly monotone function, 55
structural risk analysis, 138
structure function, 147
subadditivity axiom, 13
time integral, 286, 353
totally ordered uncertain set, 189
trapezoidal uncertain set, 186
triangular uncertain set, 186
truth value, 155, 250
uncertain calculus, 315
uncertain control, 268
uncertain currency model, 378
uncertain differential equation, 331
uncertain entailment, 166
uncertain finance, 359
uncertain graph, 444
uncertain inference, 261
uncertain insurance model, 302
uncertain integral, 320
uncertain interest rate model, 374
uncertain logic, 235
uncertain matrix, 108
uncertain measure, 14
uncertain network, 447
uncertain process, 273
uncertain programming, 113
uncertain proposition, 153, 249
uncertain quantifier, 236
uncertain random process, 449
uncertain random programming, 435
uncertain random variable, 415
uncertain regression analysis, 396
uncertain reliability analysis, 148
uncertain renewal process, 295
uncertain risk analysis, 133
uncertain sequence, 98
uncertain set, 173
uncertain statistics, 385
uncertain stock model, 359
uncertain system, 265
uncertain time series analysis, 403
uncertain variable, 33
uncertain vector, 104
uncertainty, definition of, 482
uncertainty distribution, 36, 274
uncertainty space, 18
unimodal quantifier, 239
union of uncertain sets, 177, 200
urn problem, 465
value-at-risk, 142, 441
variance, 81, 223, 430
vehicle routing problem, 121
Wiener process, 478
Yao-Chen formula, 344
zero-coupon bond, 374
zigzag uncertain variable, 41
Baoding Liu
Uncertainty Theory
When no samples are available to estimate a probability distribution, we have
to invite domain experts to evaluate the belief degree that each event will
happen. Some people think that belief degrees should be modeled by subjective
probability or by fuzzy set theory. However, both approaches are usually
inappropriate here, because both may lead to counterintuitive results in this
case. In order to deal rationally with personal belief degrees, uncertainty
theory was founded in 2007 and has since been studied by many researchers.
Nowadays, uncertainty theory has become a branch of mathematics.
This is an introductory textbook on uncertainty theory, uncertain programming,
uncertain risk analysis, uncertain reliability analysis, uncertain set,
uncertain logic, uncertain inference, uncertain process, uncertain calculus,
uncertain differential equation, and uncertain statistics. This textbook also
shows applications of uncertainty theory to scheduling, logistics, network
optimization, data mining, control, and finance.
Axiom 1. (Normality Axiom) M{Γ} = 1 for the universal set Γ.
Axiom 2. (Duality Axiom) M{Λ} + M{Λc } = 1 for any event Λ.
Axiom 3. (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2, · · ·, we have
$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty}\Lambda_i\right\} \le \sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\}.$$
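To see Axioms 1–3 at work, here is a minimal sketch in Python (not from the book; the space and measure values below are hypothetical, chosen to satisfy the axioms) that enumerates every event of a small finite uncertainty space and checks normality, duality, and subadditivity:

```python
# Minimal sketch: check Axioms 1-3 on the finite space Gamma = {g1, g2, g3}.
# The measure values below are hypothetical, chosen to satisfy the axioms.
from itertools import chain, combinations

GAMMA = frozenset({"g1", "g2", "g3"})

# Uncertain measure assigned to every event (subset of GAMMA).
M = {
    frozenset():             0.0,
    frozenset({"g1"}):       0.6,
    frozenset({"g2"}):       0.3,
    frozenset({"g3"}):       0.2,
    frozenset({"g1", "g2"}): 0.8,
    frozenset({"g1", "g3"}): 0.7,
    frozenset({"g2", "g3"}): 0.4,
    GAMMA:                   1.0,
}

def events(universe):
    """Enumerate all subsets of the universe (its power set)."""
    s = list(universe)
    subsets = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    return [frozenset(t) for t in subsets]

# Axiom 1 (normality): M{Gamma} = 1.
assert M[GAMMA] == 1.0

# Axiom 2 (duality): M{A} + M{A^c} = 1 for every event A.
for A in events(GAMMA):
    assert abs(M[A] + M[GAMMA - A] - 1.0) < 1e-9

# Axiom 3 (subadditivity): M{A union B} <= M{A} + M{B} for all events A, B.
for A in events(GAMMA):
    for B in events(GAMMA):
        assert M[A | B] <= M[A] + M[B] + 1e-9

print("Axioms 1-3 hold for this assignment.")
```

Note that on a finite space the countable union in Axiom 3 reduces to finite unions, which is what the double loop checks.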
Axiom 4. (Product Axiom) Let (Γk, Lk, Mk) be uncertainty spaces for k = 1, 2, · · ·. The product uncertain measure M is an uncertain measure satisfying
$$\mathcal{M}\left\{\prod_{k=1}^{\infty}\Lambda_k\right\} = \bigwedge_{k=1}^{\infty}\mathcal{M}_k\{\Lambda_k\}$$
where Λk are arbitrarily chosen events from Lk for k = 1, 2, · · ·, respectively.
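The product axiom directly prescribes measures only for rectangles Λ1 × Λ2 × · · ·; other events are handled by the extension used in defining the product uncertain measure (Section 1.4). A short sketch of the minimum rule, with hypothetical event names and marginal values:

```python
# Minimal sketch of the product axiom: the measure of a rectangle
# Lambda1 x Lambda2 is the minimum of the marginal measures.
# The event names and measure values below are hypothetical.
M1 = {"A": 0.7, "Ac": 0.3}  # events and measures in (Gamma1, L1, M1)
M2 = {"B": 0.4, "Bc": 0.6}  # events and measures in (Gamma2, L2, M2)

for e1, m1 in M1.items():
    for e2, m2 in M2.items():
        # M{e1 x e2} = M1{e1} ^ M2{e2}, where ^ is the minimum operator
        print(f"M{{{e1} x {e2}}} = {min(m1, m2)}")
```

For instance, the rectangle A × B above receives measure 0.7 ∧ 0.4 = 0.4.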
[Figure: two plots side by side, the left labeled "Probability" and the right labeled "Uncertainty"]
Probability theory is a branch of mathematics for modelling frequencies, while
uncertainty theory is a branch of mathematics for modelling belief degrees.
