

A Comparative Study Using Genetic Algorithm and Particle Swarm Optimization for Lower Order System Modelling
S.N. Sivanandam(1) and S.N. Deepa(2)
(1) Professor and Head, (2) Research Scholar
Department of Computer Science and Engineering, PSG College of Technology, Coimbatore 641 004, India
E-mail: snsivanandam@yahoo.co.in, deepapsg@rediffmail.com

Abstract - In recent years Evolutionary Computation has grown to a large extent. Among the various Evolutionary Computation approaches, Genetic Algorithms and Particle Swarm Optimization are widely used in optimization problems. The two approaches find a solution to a given objective function but employ different procedures and computational techniques, so their performance can be evaluated and compared. The problem area chosen is lower order system modelling as used in control systems engineering. Particle Swarm Optimization and Genetic Algorithm each obtain a lower order approximant that reflects the characteristics of the original higher order system, and the performance of the two methods is compared. The integral square error is used as the indicator for selecting the lower order model.

Keywords - Particle Swarm Optimization - Genetic Algorithm - Lower Order System Modelling - Integral Square Error.

1. Introduction

During the early 1950s researchers studied evolutionary systems as an

optimisation tool, with an introduction to the basics of evolutionary computing. Until the 1960s research on evolutionary systems proceeded in parallel with Genetic Algorithm (GA) research. At this stage, evolutionary programming was developed around the concepts of evolution, selection and mutation. John Holland [1] introduced the concept of the Genetic Algorithm by applying the principles of Charles Darwin's theory of evolution in natural biology. A genetic algorithm starts with a population of random chromosomes. The algorithm evaluates these structures and allocates reproductive opportunities such that chromosomes which encode a better solution to the problem are given more chances to reproduce. While selecting the best candidates, new fitter offspring are produced and reinserted, and the less fit are removed. The exchange of characteristics between chromosomes takes place using operators such as crossover and mutation. The solution is defined with respect to the current population. GA operation fundamentally rests on the Schema theorem. GAs are recognized as good function optimisers and are used broadly in pattern discovery, image processing, signal processing, and in training neural networks. Particle swarm optimization (PSO) is a population based stochastic optimization technique developed by Eberhart and

International Journal of the Computer, the Internet and Management, Vol. 17, No. 3 (September - December, 2009), pp. 1-10


Kennedy [2] in 1995, inspired by the social behavior of bird flocking and fish schooling. The PSO method is a member of the wide category of Swarm Intelligence methods [3]. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialised with a population of random solutions and searches for optima by updating generations. However, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. PSO can be easily implemented and is computationally inexpensive, since its memory and CPU speed requirements are low. Also, it does not require gradient information of the objective function being considered, only its values. PSO is proving itself to be an efficient method for several optimization problems, and in certain cases it does not suffer from the problems encountered by other Evolutionary Computation techniques. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied. Although PSO typically moves quickly towards the best general area of the solution space for a problem, it often has difficulty in making the fine grained search required to find the absolute best point. Many control system applications, such as satellite attitude control, fighter aircraft control, model-based predictive control, control of fuel injectors and automobile spark timing, possess a mathematical model of the process with high order, which makes the system defined complex. These higher order models are cumbersome to handle. As a result, lower order system modelling can be performed, which helps in alleviating the computational complexity and implementation difficulties involved in the

design of controllers and compensators for higher order systems. Further, the use of microcontrollers and microprocessors in the design and implementation of control system components has increased the importance of lower order system modelling [4-6]. Thus, in this paper, Genetic Algorithm and Particle Swarm Optimization are applied independently to a higher order system and a suitable lower order model is obtained.

2. Problem Definition

Consider an nth order linear time invariant system with q inputs and r outputs, described in the time domain by the state space equations
x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t)     (1)

where x is the n dimensional state vector, u is the q dimensional control vector and y is the r dimensional output vector, with q <= n and r <= n. Also, A is the n x n system matrix, B is the n x q input matrix and C is the r x n output matrix.
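For concreteness, the conversion from the state space form of equation (1) to a transfer function can be sketched numerically with SciPy; the second order matrices below are illustrative values, not taken from the paper:

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative 2nd order single-input single-output system (example values)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # n x n system matrix
B = np.array([[0.0], [1.0]])   # n x q input matrix
C = np.array([[1.0, 0.0]])     # r x n output matrix
D = np.zeros((1, 1))           # no direct feedthrough, as in y = Cx

# Polynomial coefficients of G(s) = C (sI - A)^{-1} B, highest power first
num, den = ss2tf(A, B, C, D)
print(num)  # numerator coefficients: [[0, 0, 1]]
print(den)  # denominator coefficients of s^2 + 3s + 2: [1, 3, 2]
```

For this choice of A, B and C the result is G(s) = 1 / (s^2 + 3s + 2), which is the form the rest of the paper works with.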

Alternatively, equation (1) can be described in the frequency domain by the transfer function matrix of order r x q as
G(s) = N(s)/D(s) = [ sum_{i=0}^{n-1} A_i s^i ] / [ sum_{i=0}^{n} a_i s^i ]     (2)

where N(s) is the numerator matrix polynomial and D(s) is the common denominator polynomial of the higher order system. Also, A_i are the constant coefficient matrices of the numerator polynomial and a_i the constant coefficients of the denominator polynomial.


Irrespective of the form of the original system G(s) in equation (1) or (2), the problem is to find an mth order model Rm(s), with m < n, of the form given in equation (3), such that the reduced model retains the important characteristics of the original system and approximates its response as closely as possible for the same type of inputs, with minimum integral square error.

Rm(s) = Nm(s)/Dm(s) = [ sum_{i=0}^{m-1} B_i s^i ] / [ sum_{i=0}^{m} b_i s^i ]     (3)

where Nm(s) and Dm(s) are the numerator matrix polynomial and the common denominator polynomial of the reduced order model respectively, and B_i and b_i are the constant coefficients of the numerator and denominator polynomials of the same order. Mathematically, the integral square error [7] can be expressed as

E = sum_{t=0}^{tau} (Y_t - y_t)^2     (4)

where Y_t is the unit step time response of the given higher order system at the tth instant in the time interval 0 <= t <= tau, where tau is to be chosen, and y_t is the unit step time response of the lower order system at the tth instant. The objective is to model a system Rm(s) which closely approximates G(s) for a specified set of inputs.

3. Genetic Algorithm Operation

To illustrate the working of the genetic algorithm, the steps to realise a basic GA [8] are as follows:

Step 1: Represent the problem variable domain as a chromosome of fixed length; choose the size of the chromosome population, the crossover rate and the mutation rate.
Step 2: Define a fitness function to measure the performance of an individual chromosome in the problem domain.
Step 3: Generate an initial population randomly.
Step 4: Compute the fitness of each individual chromosome.
Step 5: Select a pair of chromosomes from the current population for mating. Parent chromosomes are selected with a probability related to their fitness: chromosomes with high fitness have a higher probability of being selected for mating than chromosomes with low fitness.
Step 6: Apply genetic operators such as crossover, mutation, inversion and diploidy to create a pair of new offspring chromosomes.
Step 7: Place the created offspring chromosomes in the new population.
Step 8: Repeat from Step 5 until the size of the new population equals that of the initial population.
Step 9: Replace the initial (parent) chromosome population with the new (offspring) population.
Step 10: Go to Step 4 and repeat the process until the stopping condition is satisfied.

GA is an iterative process in which each iteration is called a generation. A typical number of generations for a simple GA can vary from 50 to over 500. In general, the GA is terminated after a specified number of generations is reached, and the best chromosomes in the population are then examined.
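The ten steps above can be sketched as a short program. This is a generic illustration (real-coded chromosomes, roulette-wheel selection, arithmetic crossover and a stand-in fitness function), not the authors' implementation:

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.001):
    """Minimal real-coded GA sketch following Steps 1-10 (illustrative only).

    fitness must return a positive value, since roulette-wheel selection
    treats it as a probability weight; the GA maximizes it.
    """
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]     # Step 3: random
           for _ in range(pop_size)]                         # initial population
    for _ in range(generations):                             # Step 10: main loop
        fits = [fitness(ind) for ind in pop]                 # Step 4: evaluate
        total = sum(fits)

        def select():                                        # Step 5: roulette wheel
            r, acc = random.uniform(0, total), 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]

        new_pop = []
        while len(new_pop) < pop_size:                       # Step 8: fill new pop
            p1, p2 = select(), select()
            if random.random() < crossover_rate:             # Step 6: arithmetic
                a = random.random()                          # crossover
                c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
                c2 = [a * y + (1 - a) * x for x, y in zip(p1, p2)]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                               # Step 6: mutation
                for i, (lo, hi) in enumerate(bounds):
                    if random.random() < mutation_rate:
                        c[i] = random.uniform(lo, hi)
            new_pop.extend([c1, c2])                         # Step 7: place offspring
        pop = new_pop[:pop_size]                             # Step 9: replace pop
    return max(pop, key=fitness)

# Usage: maximize f(x) = 10 - (x - 2)^2 on [0, 5]; the optimum is near x = 2
random.seed(3)
best = genetic_algorithm(lambda v: 10 - (v[0] - 2) ** 2, [(0.0, 5.0)])
print(best)
```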

4. Particle Swarm Optimization Algorithm Operation

Particle Swarm Optimization [9-14] optimizes an objective function by undertaking a population based search. The population consists of potential solutions, named particles, which are a metaphor for birds in flocks. The particles are randomly initialized and fly freely across the multi dimensional search space. During flight, each particle updates its own velocity and position based on the best experience of its own and of the entire population. The steps involved in the Particle Swarm Optimization algorithm [15] are as follows:

Step 1: Initialization. The velocities and positions of all particles are randomly set within pre-defined ranges.

Step 2: Velocity updating. At each iteration, the velocities of all particles are updated according to

v_i = w v_i + c1 R1 (p_{i,best} - p_i) + c2 R2 (g_best - p_i)     (5)

where p_i and v_i are the position and velocity of particle i respectively; p_{i,best} and g_best are the positions with the best objective value found so far by particle i and by the entire population respectively; w is a parameter controlling the dynamics of flying; R1 and R2 are random variables in the range [0,1]; and c1 and c2 are factors controlling the weighting of the corresponding terms. The random variables give the PSO its ability of stochastic searching.

Step 3: Position updating. The positions of all particles are updated according to

p_i = p_i + v_i     (6)

After updating, p_i should be checked and limited to the allowed range.

Step 4: Memory updating. Update p_{i,best} and g_best when the condition is met:

p_{i,best} = p_i if f(p_i) > f(p_{i,best})
g_best = p_i if f(p_i) > f(g_best)     (7)

where f(x) is the objective function to be optimized.

Step 5: Stopping condition. The algorithm repeats Steps 2 to 4 until a stopping condition is met, such as a pre-defined number of iterations. Once stopped, the algorithm reports the values of g_best and f(g_best) as its solution.

PSO [16] utilizes several searching points, and the searching points gradually approach the global optimal point using their pbest and gbest. The initial positions of pbest and gbest are different; however, using the different directions of pbest and gbest, all agents gradually get close to the global optimum.
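Steps 1 to 5 can likewise be sketched in code. Again this is a generic illustration rather than the authors' implementation; the inertia weight w, the position clipping and the zero initial velocities are reasonable defaults of our own choosing:

```python
import random

def pso(objective, bounds, n_particles=40, iterations=100, w=0.7, c1=0.5, c2=0.5):
    """Minimal PSO sketch following Steps 1-5 (illustrative; maximizes objective)."""
    dim = len(bounds)
    # Step 1: random positions (velocities start at zero in this sketch)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iterations):                       # Step 5: stopping condition
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 2: velocity update, equation (5)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Step 3: position update, equation (6), limited to allowed range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            # Step 4: memory update, equation (7)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Usage: maximize f(x) = 10 - (x - 2)^2 on [0, 5]; the optimum is near x = 2
random.seed(7)
best, best_f = pso(lambda v: 10 - (v[0] - 2) ** 2, [(0.0, 5.0)])
print(best, best_f)
```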


5. Lower Order System Modelling Results

Simulations were conducted on a Pentium 4, 2.8 GHz computer in the MATLAB 7 environment. The flowchart depicting the entire lower order system modelling process using Genetic Algorithm and Particle Swarm Optimization is shown in Figure 1.
The higher order system considered for lower order modelling is given by [17]:

G(s) = (35s^7 + 1086s^6 + 13285s^5 + 82402s^4 + 278376s^3 + 511812s^2 + 482964s + 194480) / (s^8 + 33s^7 + 437s^6 + 3017s^5 + 11870s^4 + 27470s^3 + 37492s^2 + 28880s + 9600)     (8)

The transient gain (TG) and steady state gain (SSG) for the given G(s) in the above equation are calculated as

TG|G(s) = 35/1 = 35  and  SSG|G(s) = 194480/9600 = 20.26     (9)

A simple auxiliary scheme, discussed in the Appendix, is used to obtain a basic lower order model R(s) from the given G(s), whose coefficients are used as initial seed values for training in Genetic Algorithm and Particle Swarm Optimization. Applying the auxiliary scheme to G(s) in equation (8), the basic lower order model R(s) is given by

R(s) = (482964s + 194480) / (37492s^2 + 28880s + 9600)     (10)

Equation (10) should be scaled and tuned to satisfy the transient and steady state gains of the given G(s). Gain adjustments are performed to keep the characteristics of R(s) on par with those of G(s). Thus, on scaling and adjusting gains, R(s) becomes

R(s) = (35s + 5.1887) / (s^2 + 0.7703s + 0.2561) = (B1 s + B0) / (b2 s^2 + b1 s + b0)     (11)

where the algorithms had to identify B0, b1 and b0. The parameters B0 = 5.1887, b1 = 0.7703 and b0 = 0.2561 are used as initial seed values for tuning in GA and PSO, with the Integral Square Error (ISE) as the objective function to be minimized. For both algorithms the population size was set to 40 individuals and the maximum number of generations to 100. The results of applying GA and PSO to the lower order system modelling problem are given in Table 1.

Table 1: Lower Order System Modelling Results using the GA and PSO Approaches

Algorithm                   | B0      | b1     | b0     | Generations | Simulation time | Integral Square Error
Genetic Algorithm           | 66.2670 | 3.3571 | 3.2708 | 100         | 24 seconds      | 1.7519
Particle Swarm Optimization | 61.9485 | 3.2467 | 3.0577 | 100         | 9 seconds       | 1.0069

For each parameter the final value determined by the respective algorithm is given, followed by the number of generations for which the simulation was run. The next column reports the CPU time in seconds required for the complete simulation of 100 generations, and the final column presents the minimized value of the objective function, the Integral Square Error. The time taken for the optimization process is higher for the GA search than for the PSO search, and the Integral Square Error (ISE) computed with PSO is also smaller than that computed with GA.
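The entries of Table 1 can be checked by simulating the unit step responses and evaluating the sampled ISE of equation (4). A sketch using SciPy follows; the time horizon and grid here are our own choices, so the error values will only approximate those reported:

```python
import numpy as np
from scipy.signal import TransferFunction, step

# Higher order system G(s) of equation (8)
G = TransferFunction(
    [35, 1086, 13285, 82402, 278376, 511812, 482964, 194480],
    [1, 33, 437, 3017, 11870, 27470, 37492, 28880, 9600])

# Reduced second order models from Table 1 (GA and PSO results)
R_ga = TransferFunction([35, 66.267], [1, 3.3571, 3.2708])
R_pso = TransferFunction([35, 61.9485], [1, 3.2467, 3.0577])

t = np.linspace(0, 10, 1000)       # horizon and sampling grid: our choice
_, yG = step(G, T=t)
_, y_ga = step(R_ga, T=t)
_, y_pso = step(R_pso, T=t)

ise_ga = np.sum((yG - y_ga) ** 2)  # equation (4), sampled on the grid
ise_pso = np.sum((yG - y_pso) ** 2)
print(ise_ga, ise_pso)             # PSO model is expected to give the smaller error
```

Note that the numerical values of the ISE depend on the choice of tau and of the sampling grid, so they need not coincide exactly with Table 1.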
[Figure 1 flowchart summary: Start -> read the numerator and denominator coefficients of the transfer function of the given higher order continuous system -> calculate the transient gain (TG) and steady state gain (SSG) -> apply the auxiliary scheme of the Appendix to G(s) to obtain the approximate lower order model R(s), then scale and tune it to maintain the gains of G(s) -> invoke the Genetic Algorithm (initialize population; evaluate fitness; perform selection; create new population via crossover; perform mutation; repeat until an optimal or good solution is found) and Particle Swarm Optimization (initialize swarm; calculate velocities; calculate new positions; evaluate swarm; update the memory of each particle; repeat until an optimal or good solution is found), each passed the numerator and denominator coefficients of R(s) -> return the optimized lower order model -> Stop.]


Figure 1: Flowchart for Lower Order System Modelling using Genetic Algorithm and Particle Swarm Optimization

From Table 1, the lower second order models generated for the given higher order system G(s) can be written as

R(s)|using GA = (35s + 66.267) / (s^2 + 3.3571s + 3.2708)     (12)

R(s)|using PSO = (35s + 61.9485) / (s^2 + 3.2467s + 3.0577)     (13)

It is observed from Table 2 that the proposed scheme yields a better value of integral square error than the other methods considered for comparison. The unit step responses of the given original higher order system of equation (8) and of the lower order models of equations (12) and (13), obtained from the GA and PSO algorithms, are shown in Figure 2. The figure shows that the output produced by both algorithms lies close to the ideal, i.e., the lower order models maintain the original characteristics of the given higher order system.
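The gains of the reduced models in equations (12) and (13) can be verified directly: the transient gain is the ratio of the leading coefficients and the steady state gain is the ratio of the constant terms, as in equation (9):

```python
# Transient gain: ratio of leading coefficients (highest power first);
# steady state gain: ratio of constant terms.
def gains(num, den):
    return num[0] / den[0], num[-1] / den[-1]

tg_G, ssg_G = gains([35, 1086, 13285, 82402, 278376, 511812, 482964, 194480],
                    [1, 33, 437, 3017, 11870, 27470, 37492, 28880, 9600])
tg_ga, ssg_ga = gains([35, 66.267], [1, 3.3571, 3.2708])     # equation (12)
tg_pso, ssg_pso = gains([35, 61.9485], [1, 3.2467, 3.0577])  # equation (13)

print(tg_G, ssg_G)      # 35.0 and about 20.26, matching equation (9)
print(tg_ga, ssg_ga)    # both reduced models preserve TG = 35 and SSG near 20.26
print(tg_pso, ssg_pso)
```

This confirms that the GA and PSO models of Table 1 both preserve the transient and steady state gains of G(s).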

Figure 2: Unit step response curves using the Genetic Algorithm and Particle Swarm Optimization approaches



The parameter values used in the GA [18] search are: crossover rate 0.1, mutation rate 0.001, and Roulette Wheel Selection as the selection method. The parameter values used in the PSO [9] search are: c1 = c2 = 0.5 and 40 particles.

6. Discussion

The strength of GAs lies in the parallel nature of their search. The genetic operators used are central to the success of the search. All GAs require some form of recombination, as this allows the creation of new solutions that inherit, by virtue of their parents' success, desirable characteristics. In general, crossover is the principal genetic operator, whereas mutation is used less frequently. Crossover attempts to create beneficial offspring solutions and to eliminate undesirable components, while the random nature of mutation is more likely to degrade a strong offspring solution than to improve it. The algorithm's power lies in the implicit parallelism inherent in the evolutionary metaphor. By restricting the reproduction of weak offspring, GAs eliminate not only that solution but also all of its descendants. This makes the algorithm converge towards high quality solutions within a few generations.

Particle Swarm Optimization shares many similarities with Evolutionary Computation (EC) techniques in general and GAs in particular. All of these techniques begin with a randomly generated population and use a fitness value to evaluate it. They all update the population and search for the optimum with stochastic techniques. The main difference between PSO and EC techniques such as GA is that PSO does not have genetic operators such as crossover and mutation. Particles update themselves with an internal velocity, and they also have a memory, which is important to the algorithm. Also, in PSO only the best particle shares its information with the others; it is a one-way information sharing mechanism, and the evolution only looks for the best solution. Compared to GA, the advantages of PSO are that it is easy to implement and there are few parameters to adjust.

7. Conclusion

In this paper a comparative study has been made of Particle Swarm Optimization and Genetic Algorithm for lower order system modelling. Overall, the simulation results indicate that both GA and PSO can be used to search for the parameters during system modelling. With respect to minimizing the objective function, the Integral Square Error, PSO reaches a smaller value than the GA does. In terms of computational time, the PSO approach is faster than the GA, although it is noted that neither algorithm takes what could be considered an unacceptably long time to determine the results. Algorithms like GA and PSO are inspired by nature and have proved themselves to be effective solutions to optimization problems. These techniques possess apparent robustness. There are various control parameters involved in these metaheuristics, and the appropriate setting of these parameters is a key point for success. In general, a metaheuristic should not be considered in isolation: the possibility of performing hybrid approaches should be considered. Additionally, for both approaches a major implementation issue is the selection of an appropriate objective function.

Acknowledgement


The authors wish to thank the All India Council for Technical Education (AICTE), India for providing the grant to carry out this research work.

Appendix - Auxiliary Scheme

Consider an nth order linear time invariant continuous higher order system represented by its transfer function as

G(s) = N(s)/D(s) = [ sum_{i=0}^{n-1} A_i s^i ] / [ sum_{i=0}^{n} a_i s^i ]     (14)

     = (A_{n-1} s^{n-1} + A_{n-2} s^{n-2} + ... + A_2 s^2 + A_1 s + A_0) / (a_n s^n + a_{n-1} s^{n-1} + ... + a_2 s^2 + a_1 s + a_0)     (15)

The auxiliary scheme for obtaining the approximated lower order models from the given higher order system is as follows:

First order:  A_0 / (a_1 s + a_0)     (16)

Second order: (A_1 s + A_0) / (a_2 s^2 + a_1 s + a_0)     (17)

...

(n-1)th order: (A_{n-2} s^{n-2} + A_{n-3} s^{n-3} + ... + A_1 s + A_0) / (a_{n-1} s^{n-1} + a_{n-2} s^{n-2} + ... + a_1 s + a_0)     (18)

Equations (16) through (18) give the lower order models formulated by the auxiliary scheme from the given higher order system G(s). Based on the requirement, a suitable lower order model can be selected. It should be noted that for a higher order system of order n, (n-1) lower order models can be formulated. This method of selecting approximate lower order models helps to set the initial values of the parameters used in the genetic algorithm and particle swarm optimisation process.
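The truncation rule of equations (16) through (18) amounts to keeping the lowest order coefficients of the numerator and denominator. A short sketch, with coefficient lists ordered highest power first as in equation (15):

```python
def auxiliary_models(num, den):
    """Return the (n-1) lower order models of equations (16) through (18).

    num: [A_{n-1}, ..., A_1, A_0]; den: [a_n, ..., a_1, a_0], highest power first.
    The kth order model keeps the k lowest order numerator coefficients and
    the k+1 lowest order denominator coefficients.
    """
    n = len(den) - 1                      # order of the original system
    models = []
    for k in range(1, n):                 # k = 1 .. n-1
        models.append((num[-k:], den[-(k + 1):]))
    return models

# Usage: applied to G(s) of equation (8), the second order model (k = 2)
# recovers R(s) of equation (10).
num = [35, 1086, 13285, 82402, 278376, 511812, 482964, 194480]
den = [1, 33, 437, 3017, 11870, 27470, 37492, 28880, 9600]
models = auxiliary_models(num, den)
print(models[1])  # ([482964, 194480], [37492, 28880, 9600])
```

For the eighth order G(s) this yields seven candidate lower order models, from which a suitable one is selected as described above.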



References

[1] J. Holland (1975), Adaptation in Natural and Artificial Systems, University of Michigan Press.
[2] J. Kennedy and R.C. Eberhart (1995), "Particle Swarm Optimization", Proceedings IEEE International Conference on Neural Networks, Vol. 4, pp. 1942-1948, IEEE Service Centre, Piscataway, NJ.
[3] J. Kennedy and R.C. Eberhart (2001), Swarm Intelligence, Morgan Kaufmann Publishers, California.
[4] R. Prasad (2000), "Pade type model order reduction for multivariable systems using Routh approximation", Computers and Electrical Engineering, Vol. 26.
[5] R. Prasad, et al. (2003), "Improved Pade approximants for multivariable systems using stability equation method", Journal of Institution of Engineers (India), Vol. 84, pp. 161-165.
[6] R. Prasad, et al. (2003), "Linear model reduction using the advantages of Mikhailov criterion and factor division", Journal of Institution of Engineers (India), Vol. 84, pp. 9-10.
[7] M. Gopal (2003), Control System Principles and Design, Tata McGraw Hill Publishing Company Ltd, New Delhi.
[8] David E. Goldberg (2000), Genetic Algorithms in Search, Optimization and Machine Learning, Pearson Education Asia Ltd, New Delhi.
[9] R.C. Eberhart and Y. Shi (2001), "Particle Swarm Optimization: developments, applications and resources", Proceedings Congress on Evolutionary Computation, IEEE Service Centre, Piscataway, NJ, Seoul, Korea.
[10] Y. Shi and R.C. Eberhart (1998), "A modified particle swarm optimizer", Proc. IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 69-73.
[11] Y. Shi and R.C. Eberhart (1998), "A modified particle swarm optimiser", Proc. IEEE Intl. Conf. Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 69-73.
[12] P.J. Angeline (1998), "Using selection to improve particle swarm optimisation", Proc. IEEE Intl. Conf. Evolutionary Computation, pp. 84-89.
[13] Xiao-Feng, et al. (2002), "A dissipative particle swarm optimisation", Congress on Evolutionary Computation, Hawaii, USA, pp. 1456-1461.
[14] J. Kennedy (2000), "Stereotyping: Improving particle swarm performance with cluster analysis", Proc. Intl. Conf. on Evolutionary Computation, pp. 1507-1512.
[15] A.M. Abdelbar and S. Abdelshahid (2003), "Swarm optimization with instinct driven particles", Proc. IEEE Congress on Evolutionary Computation, pp. 777-782.
[16] U. Baumgartner, et al. (2004), "Pareto optimality and particle swarm optimisation", IEEE Trans. Magnetics, Vol. 40, pp. 1172-1175.
[17] V. Krishnamurthy and V. Seshadri (1978), "Model reduction using Routh Stability criterion", IEEE Transactions on Automatic Control, Vol. AC-23, pp. 729-731, August.
[18] Alden H. Wright (1991), Foundations of Genetic Algorithms, Morgan Kaufmann Publishers.

