1 Introduction

1.1 Motivation
The ever-growing industrial application of rolling element bearings has been a constant source of motivation for design engineers to come up with a design technology that yields long-lasting, efficient and highly reliable bearings. These objectives are hard to satisfy, which makes the design a numerically challenging problem, and they further need to be optimized collectively. The numerical toughness and the need for collective optimization together demand the application of (evolutionary) multiobjective optimization. The objective functions to be optimized are dynamic capacity (Cd), static capacity (Cs) and minimum film thickness (hmin).
1.2 Previous work
Due to the aforementioned toughness of the problem, there have been very few attempts to optimise the objectives simultaneously. Wan [Cha91] proposed separate objective functions with some specified geometric constraints. The reported work in which multiobjective optimization was implemented [Kal01] uses a weighted combination of the individual objective functions, namely dynamic capacity, static capacity and minimum film thickness (all three to be maximized), thereby converting the multiobjective problem into a scalar optimization problem. In that work, however, the weights of the combination were chosen randomly, and the results obtained leave room for improvement. Both deterministic and stochastic approaches are tried in [Kal01]: a penalty function method forms the deterministic approach, while simulated annealing and a genetic algorithm form the stochastic ones. The results were good when dealing with single objectives, but including more than one objective in the genetic algorithm formulation led to inconclusive results.
1.3 Our work

1.4 Outline
The rest of the report is organized as follows. In section 2 we present the geometry and thus give a physical framework to the problem, followed by the mathematical modeling of the problem in section 3 as a set of objective functions, parameters and constraints that restrict the feasible parameter space. Section 5 debates whether to use a deterministic or a stochastic approach, followed by section 6, where we introduce and explain the details of the multiobjective evolutionary algorithm used. Section 7 then discusses the application of the algorithm to our problem and the results obtained. We end the report with conclusions in section 8, followed by the references. In the appendix we give an overview of the implementation code, its compilation and usage; refer to [Kal01] for the important symbols used in this report.
2 Geometry

Ball and roller bearings appear to be simple, but their internal geometry is quite complex. Different kinds of stresses, deflections and load distributions affect the performance and life of the bearing. The common nomenclature of a typical ball bearing is shown in figure 1.
3 Problem formulation

3.1 Parameters
The design parameters chosen in this problem are the diameter of the balls (Db), the mean diameter (Dm), the number of balls (Z), the curvature radius coefficient of the inner raceway groove (fi) and the curvature radius coefficient of the outer raceway groove (fo). The given geometry is the bearing bore (d) and the outside diameter (D).

Apart from the varying parameters, we also have some constants defined. For this problem there are six constants in total: KDmin, KDmax, ε, e, φo and α. The values of these constants are fixed. All angles are measured in radians, distances in millimetres (mm) and forces in newtons (N).
KDmin = 0.5
KDmax = 0.8
ε     = 0.2
e     = 0.1
φo    = 4.712388
α     = 0

3.2 Constraints
There are eight inequality constraints in total. Six of them are used to define parametric bounds; the remaining two are kept as problem constraints and are handled by the optimization algorithm. The eight problem constraints are:
2Db − KDmin (D − d) ≥ 0        (1)

KDmax (D − d) − 2Db ≥ 0        (2)

Dm − (0.5 − e)(D + d) ≥ 0      (3)

(0.5 + e)(D + d) − Dm ≥ 0      (4)

fi − 0.515 ≥ 0                 (5)

fo − 0.515 ≥ 0                 (6)

φo/(2 sin⁻¹(Db/Dm)) − Z + 1 ≥ 0    (7)

0.5(D − Dm − Db) − εDb ≥ 0     (8)
We can rewrite the first six to formulate parametric bounds for four of the five parameters. The remaining parameter, Z (the number of balls), is given an intuitive bound of 6 to 50, based on our previous experience.
KDmin (D − d)/2 ≤ Db ≤ KDmax (D − d)/2      (9)

(0.5 − e)(D + d) ≤ Dm ≤ (0.5 + e)(D + d)    (10)

0.515 ≤ fi ≤ 0.6                            (11)

0.515 ≤ fo ≤ 0.6                            (12)
The remaining two give the problem constraints; these are the ones employed to constrain the optimization.
φo/(2 sin⁻¹(Db/Dm)) − Z + 1 ≥ 0    (13)

0.5(D − Dm − Db) − εDb ≥ 0         (14)

3.3 Objective functions
3.3.1 Dynamic Capacity (Cd)

This objective captures the requirement of longest fatigue life. We list the formula specific to deep groove ball bearings; the symbols used are explained in detail in [Kal01]. In the following equation, the factor γ appears in reduced form (complete form: γ = Db cos α/Dm; here α = 0). The objective to be maximized is Cd.
Cd = fc Z^(2/3) Db^1.8         for Db ≤ 25.4 mm
Cd = 3.647 fc Z^(2/3) Db^1.4   for Db > 25.4 mm        (15)

with

fc = 37.91 [1 + {1.04 ((1 − γ)/(1 + γ))^1.72 (fi (2fo − 1)/(fo (2fi − 1)))^0.41}^(10/3)]^(−0.3) × [γ^0.3 (1 − γ)^1.39/(1 + γ)^(1/3)] × [2fi/(2fi − 1)]^0.41

γ = Db/Dm
3.3.2 Static Capacity (Cs)

Static capacity is defined for both the inner and the outer raceway. The objective function takes the minimum of the two values and maximizes it. In the following equations, Cs has to be maximized; i represents the number of rows (here i = 1), and cos α = 1, since α = 0 for a deep groove bearing. The contact parameters a* and b* are found from separate equations, not given here to keep things concise; read [Kal01] for more details.
C(s,i) = φs Z Db² cos α/(4 − 1/fi + 2γ/(1 − γ))

C(s,o) = φs Z Db² cos α/(4 − 1/fo − 2γ/(1 + γ))        (16)

Cs = min(C(s,i), C(s,o))

where the factor φs collects the contact parameters a* and b* (see [Kal01]).

3.3.3 Minimum Film Thickness (hmin)
The third and last objective function is the minimum film thickness, which directly relates to the longest wear life. We want the least amount of wear; therefore, for a given bearing outline and given operating conditions, the film thickness should be maximum. This quantity also differs between the inner and outer raceway, so we take the minimum of the two and maximize it. First we list some constants in addition to the global constants of the problem; for their significance and more information read [Kal01].
α1 (pressure-viscosity coefficient)  = 1 × 10⁻⁸ Pa⁻¹
η0 (lubricant viscosity)             = 0.02 Pa·s
ni (rotational speed)                = 5000 rpm
E0 (effective modulus of elasticity) = 2.25 × 10¹¹ Pa
Fr (radial load)                     = 15000 N        (17)
Q = 5Fr/(i Z cos α)

R(x,i) = Db (1 − γ)/2        R(y,i) = fi Db/(2fi − 1)
R(x,o) = Db (1 + γ)/2        R(y,o) = fo Db/(2fo − 1)

hmin,i = 3.63 α1^0.49 R(x,i)^0.466 E0^(−0.117) Q^(−0.073) [π η0 ni Dm (1 − γ²)/120]^0.68 [1 − exp(−0.703 (R(y,i)/R(x,i))^0.636)]

hmin,o = 3.63 α1^0.49 R(x,o)^0.466 E0^(−0.117) Q^(−0.073) [π η0 ni Dm (1 − γ²)/120]^0.68 [1 − exp(−0.703 (R(y,o)/R(x,o))^0.636)]        (18)

hmin = min(hmin,i, hmin,o)
4 Previous approach

In this section we give an overview of the previous approach used to tackle this problem. A weight based approach was used: the idea is to combine the three objectives into one single scalar function, and then use a deterministic or stochastic method to handle this single objective. The individual functions have to be normalized when combining them; this is required since each objective function has its own upper and lower limits, and normalization gives the combination uniformity. To explain normalization, take the case of some objective function f(X) with a and b as its lower and upper limits. We can then normalize the function as

f̄(X) = (f(X) − a)/(b − a)        (19)
The objective function obtained upon combining all three individual objectives using weights can be given as

f(X) = w1 (Cd − a1)/(b1 − a1) + w2 (Cs − a2)/(b2 − a2) + w3 (hmin − a3)/(b3 − a3)        (20)
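Eq. (20) is straightforward to compute; a minimal sketch, where the array layout and names are ours:

```c
/* Weighted sum of the three normalized objectives, eq. (20).
   f = (Cd, Cs, hmin); a and b hold each objective's lower and upper
   limits; w holds the weights.  A sketch -- the layout is ours. */
static double weighted_objective(const double f[3], const double w[3],
                                 const double a[3], const double b[3])
{
    double s = 0.0;
    for (int k = 0; k < 3; ++k)
        s += w[k] * (f[k] - a[k]) / (b[k] - a[k]);  /* eq. (19) per term */
    return s;
}
```

With weights summing to one, the result stays in [0, 1] whenever each objective lies inside its limits.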
Now we can run any optimization algorithm on this particular objective function with the given parameter set, applying the constraints mentioned earlier in this report. The biggest drawback of this approach is that we have no fixed criterion for choosing the weights, and each selection yields some arbitrary trade-off point. In contrast, the MOEA used by us guarantees a good distribution of trade-off points covering the majority of the Pareto front.
5 Deterministic vs. stochastic approach

The decision between a deterministic and a stochastic approach is one of the most difficult to make. The choice has to take into account a number of factors, contrasted below:

Deterministic                                   Stochastic
Low computation cost                            High computation cost
Search is directed                              Search is random
Easy to determine the limiting constraints      Not easy to determine the limiting constraints
Could get stuck in local optima in most cases   Random search helps to reach the global optimum
Convergence is guaranteed                       Convergence is assumed when improvement between generations becomes very low
Good with inequality constraints                Sensitive to genetic operating parameters
Increase in complexity is not so drastic        Good for a new, incompletely specified problem
6 Multiobjective evolutionary algorithm

As we saw earlier, the different objectives were combined with the help of weighted summation. This was the approach used in the previous work [Kal01], but it also requires choosing well-suited weights for the different objectives in order to obtain useful non-dominated points on the Pareto front [Mie99].
6.1 Genetic algorithm

The genetic algorithm gets its name from the biological concept of genes. Here we represent each candidate solution as a chromosome, which is nothing but a representation of the solution as a string of values. A set of such initial chromosomes forms a population. The best individuals of the population are chosen for mating, after which they are probabilistically crossed over with others. The last step is mutation, applied with a very small probability. These three steps form one complete cycle, or generation, as it is termed in GA. We keep repeating these steps until we notice only nominal improvement from one generation to the next, which is effectively taken to be the occurrence of convergence.
6.2
6.3 Non-dominated sorting

This is the first stage of any non-dominated-sorting based algorithm. It performs a quick sort of the solution space and extracts the set of non-dominated points, i.e. the Pareto points.
6.4 Ranking

In this component, we strip the non-dominated fronts one by one from the solution space and assign them ranks: the solution points belonging to the first non-dominated front are given rank 1, those belonging to the next non-dominated front rank 2, and so on. This way we obtain a set of ranked non-dominated fronts.
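The front stripping described above can be sketched as follows. This is an O(n² × fronts) toy version assuming all m objectives are maximized; the real NSGA-II uses cleverer bookkeeping, and all names here are ours:

```c
/* 1 if point a dominates point b (all m objectives maximized) */
static int dominates(const double *a, const double *b, int m)
{
    int strictly_better = 0;
    for (int k = 0; k < m; ++k) {
        if (a[k] < b[k]) return 0;       /* a worse somewhere: no dominance */
        if (a[k] > b[k]) strictly_better = 1;
    }
    return strictly_better;
}

/* rank[i] = 1 for the first non-dominated front, 2 for the second, ...
   f holds n points of m objectives each, stored row-wise. */
static void front_ranks(const double *f, int n, int m, int *rank)
{
    int remaining = n;
    for (int i = 0; i < n; ++i) rank[i] = 0;       /* 0 = not ranked yet */
    for (int r = 1; remaining > 0; ++r) {
        /* a point joins front r iff nothing outside earlier fronts dominates it */
        for (int i = 0; i < n; ++i) {
            if (rank[i] > 0) continue;
            int dominated = 0;
            for (int j = 0; j < n && !dominated; ++j)
                if (j != i && rank[j] <= 0 &&
                    dominates(f + j * m, f + i * m, m))
                    dominated = 1;
            if (!dominated) rank[i] = -r;          /* provisional mark */
        }
        for (int i = 0; i < n; ++i)                /* commit front r */
            if (rank[i] == -r) { rank[i] = r; --remaining; }
    }
}
```

For three 2-objective points (2,2), (3,1), (1,1), the first two are mutually non-dominated (rank 1) while (1,1) is dominated and falls into the second front.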
6.5 Density estimation

It is very important to keep the solution points well spread out, so efficient measures are required for controlling the crowding in any one region. To this end, every solution point is assigned a distance measure: an estimate of the largest cuboid enclosing that solution point without including any other solution point. A graphical illustration is given in figure 3.
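The cuboid estimate is accumulated one objective at a time over each front, pre-sorted by that objective; summed over all objectives it yields the crowding distance. A sketch, with array layout and names of our own choosing:

```c
#include <float.h>

/* Add the contribution of one objective fk to the crowding distances of
   a front.  idx[0..n-1] lists the front's members sorted by fk ascending;
   dist accumulates across objectives.  Boundary points get infinite
   distance so they are always kept. */
static void crowding_add(const double *fk, const int *idx, int n, double *dist)
{
    if (n < 3) {                      /* nothing between the boundaries */
        for (int s = 0; s < n; ++s) dist[idx[s]] = DBL_MAX;
        return;
    }
    dist[idx[0]] = dist[idx[n - 1]] = DBL_MAX;
    double span = fk[idx[n - 1]] - fk[idx[0]];
    if (span <= 0.0) return;          /* degenerate front: all values equal */
    for (int s = 1; s < n - 1; ++s)
        if (dist[idx[s]] < DBL_MAX)   /* skip points already marked infinite */
            dist[idx[s]] += (fk[idx[s + 1]] - fk[idx[s - 1]]) / span;
}
```

For objective values 1, 2, 4 the middle point receives (4 − 1)/(4 − 1) = 1 for this objective, while both boundary points become infinite.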
6.6
This operator is used to rank the individuals (solutions) on the basis of two
things,
Rank: from non-dominated front they are belonging to, front 1, rank 1,
front 2, rank 2 and so on.
Distance: Crowding distance, from previous section on density estimation.
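As a sketch, the operator reduces to a few lines (the function name is ours):

```c
/* Crowded comparison: returns 1 if solution a is preferred over b.
   The lower front rank wins; at equal rank the larger crowding
   distance (the less crowded solution) wins. */
static int crowded_prefer(int rank_a, double dist_a,
                          int rank_b, double dist_b)
{
    if (rank_a != rank_b)
        return rank_a < rank_b;
    return dist_a > dist_b;
}
```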
6.7 Main loop

This part defines the main control loop, in which all the components described above are effectively employed; a graphical representation is shown in figure 4. The main loop takes the parent (Pt) and child (Qt) populations of the t-th generation and generates the parent (Pt+1) and child (Qt+1) populations of the (t+1)-th generation. The parent population of a generation is the incoming population from the previous generation; the child population is the one obtained after selection, crossover and mutation.
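The survivor-selection half of this loop, building Pt+1 from the 2N members of Pt ∪ Qt, can be sketched as below. Sorting the whole combined population under the crowded comparison is our simplification of the front-by-front fill; with per-front crowding distances already computed, it selects the same N individuals:

```c
#include <stdlib.h>

/* One individual as seen by environmental selection: its front rank,
   crowding distance, and an id into the real population storage.
   The struct and function names are ours, not the report's code. */
struct ind { int rank; double dist; int id; };

/* qsort comparator implementing the crowded-comparison order */
static int cmp_crowded(const void *pa, const void *pb)
{
    const struct ind *a = pa, *b = pb;
    if (a->rank != b->rank) return a->rank - b->rank;  /* lower rank first */
    if (a->dist > b->dist) return -1;                  /* larger distance first */
    return a->dist < b->dist ? 1 : 0;
}

/* From the combined population r of size two_n, keep the n best ids. */
static void select_survivors(struct ind *r, int two_n, int n, int *keep)
{
    qsort(r, (size_t)two_n, sizeof *r, cmp_crowded);
    for (int i = 0; i < n; ++i)
        keep[i] = r[i].id;
}
```

The child population Qt+1 is then produced from these survivors by the usual selection, crossover and mutation steps.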
7 Results

The best solution of the previous work compared with the new results:

                         Previous    New
Dynamic capacity         7516        7578
Static capacity          4685        4873
Minimum film thickness   0.23088     0.23088
[Figure: Pareto-front plots of dynamic capacity (Cd) vs. static capacity (Cs)]
The final pair is that of static capacity and minimum film thickness; the graphs are shown in figure 7. The best solution is that from plot 7(c). We see a large cavity in the Pareto front, which can be analyzed to give insightful information about bearing design. The other parameters are similar: low values of the distribution indices, a large population, and a generation count ranging from 30 to 80.
That good results are obtained with low values of the distribution indices has a simple interpretation. The distribution indices represent a kind of averaging effect in mutation and crossover (real mutation and real crossover). We want less of this averaging effect because we want less crowding around any one particular solution and instead a spread of solutions (non-dominated points). Basically, we want more randomness in the optimization run to get a better spread of solutions, and therefore less sharing, i.e. lower distribution indices.
For the final case, we take the optimization of all three objective functions simultaneously. Here the results obtained have a direct basis of comparison with the older work, and the improvement we see is significant: in fact, even the best solution obtained by the previous approach is completely dominated by solutions obtained by ours. Table 3 shows the final results. We must keep in mind that with the previous approach, obtaining each solution point meant a complete optimization run; in contrast, our approach gives nearly 600-700 Pareto points in one run.
Objectives                Weights   Solution vector
Dynamic capacity          0.4       7393
Static capacity           0.4       4439
Minimum film thickness    0.2       0.20236
Dynamic capacity          0.4       7238
Static capacity           0.2       4623
Minimum film thickness    0.4       0.2141
Objectives                Solution number   Solution vector
Dynamic capacity                            7472
Static capacity           1                 4773.61
Minimum film thickness                      0.21397
Dynamic capacity                            7330
Static capacity           2                 4692
Minimum film thickness                      0.2154
...                       ...               ...
Dynamic capacity                            7164
Static capacity           n                 4830
Minimum film thickness                      0.2225

Table 3: Old approach (top): only 2 solution vectors were given in the previous work. New approach (bottom): n is of the order of 600-700.
8 Conclusion

In this technical report we have introduced the problem statement and discussed the previous, weight based approach, pointing out its shortcomings. We have also debated the applicability of deterministic and stochastic approaches to the problem at hand, finding the stochastic approach better suited to handling this multiobjective problem, and have justified the use of NSGA II to solve the multiobjective design optimization problem for rolling element bearings. After a brief look at the NSGA II algorithm and its advantages, we described its application to our problem. The results obtained are very encouraging for both the 2-objective and the 3-objective optimizations, and the obtained graphs give insightful information about the performance variations of rolling element bearings.
References

[Cha91]

[Coe]

[Kal01]

[Mie99]

[SGA03] G. Stehr, H. Graeb, and K. Antreich. Performance trade-off analysis of analog circuits by normal-boundary intersection. In 40th Design Automation Conference (DAC) 2003, Anaheim, CA, USA, June 2003.

[Zit99]
[Figures: Pareto-front plots of dynamic capacity (Cd) vs. minimum film thickness (hmin), and of static capacity (Cs) vs. minimum film thickness (hmin)]
Implementation
This section gives a brief overview of the code used to produce the results shown in this report. We explain the important code files; the other files in the code directory can be safely ignored.
nsga2.c: Main code, describing the control flow of the NSGA II algorithm.

bearing-constants.h: Contains the bearing constants (E, e, etc.). The user can modify it if need be.

config-gen.c: An important file that generates the inputs provided to NSGA II. Everything from the mutation and crossover probabilities to the parametric bounds can be modified here.

elliptic.h: Need not be touched; it defines the common functions used for finding the objective function values, especially a* and b* (refer to section 3).

func-con.h: Contains the definitions of the objective functions and the constraints. It only matters when the user needs to modify the objective functions, which is not foreseen.
compile: One file that does it all. Run it after any modification to any part of the code: it first compiles everything, then asks for D (outer diameter) and d (bore diameter) as bearing input, and finally asks for the functions to be optimized, after which it starts the optimization run and writes the results to several *.out files.
Therefore, to run the optimization, just typing ./compile will do; it takes care of all other inputs. If you want to set some optimization parameters manually, please edit the config-gen.c file accordingly. The default settings were found to be optimal.