
Chapter 1 - Introduction

CHAPTER 1
1. INTRODUCTION
1.1 GIST OF THE PROJECT
An embedded system can be defined as a combination of hardware and software. In the
beginning, hardware design and software design were separated into two courses, completed
independently by hardware designers and software designers. This old approach can only
improve the performance of the hardware or the software individually; it limits the design
space and cannot optimize the performance of the system as a whole.
The concurrent design of hardware and software, starting from a partially incomplete
specification, requires close cooperation of all participants in the design process: hardware
and software designers and system architects must synchronize their work to optimize and
debug a system in a joint effort.
With the increasing complexity of embedded systems, embedded hardware-software
co-design becomes an effective way to improve design quality. Every application system has
an appropriate optimal division of function between the system's hardware and software.
System co-design can be defined as the partitioning, co-synthesis, co-verification and
co-simulation of the hardware and software parts.
$ardware"software partitioning is one of the key processes. In addition, the objective
of hardware" software co"design is to produce computer systems that have a balance of
hardware and software components, which work together to satisfy a given specification.
$ardware"software partitioning is a critical step towards achieving such a balance.
The role of hardware-software partitioning is the optimized distribution of system
functions between software and hardware components. In other words, hardware-software
partitioning determines whether each task will be implemented in software or in hardware;
this decision describes the process of hardware-software partitioning.
$ardware"software partitioning problem is a %&"hard optimization problem. 'enetic
algorithms (')* are proven to be very successful of %&"hard problem optimization problems.
In this paper, we also propose genetic algorithm to settle the hardware"software partitioning
problem.
Increased interest in embedded hardware-software co-design has been fuelled by the
demand for cheap and reliable embedded systems that can obtain reasonable execution
performance, systems which must be neither over-designed nor under-designed. We use the
cost/performance ratio to evaluate a hardware-software partitioning scheme, which differs
from present system co-design approaches using a Genetic Algorithm.
1.2 HISTORY OF GA
Genetic algorithms are a part of evolutionary computing, which is a rapidly growing
area of artificial intelligence. As the name suggests, genetic algorithms are inspired by
Darwin's theory of evolution. Simply said, a solution to a problem solved by genetic
algorithms is evolved.
The idea of evolutionary computing was introduced in the 1960s by I. Rechenberg in his
work "Evolution strategies" ("Evolutionsstrategie" in the original). His idea was then
developed by other researchers. Genetic Algorithms (GAs) were invented by John Holland and
developed by him and his students and colleagues. This led to Holland's book "Adaptation in
Natural and Artificial Systems", published in 1975.
In 1992 John Koza used genetic algorithms to evolve programs to perform certain
tasks. He called his method "genetic programming" (GP). LISP programs were used, because
programs in this language can be expressed in the form of a "parse tree", which is the object
the GA works on.
1.3 BASIC DESCRIPTION OF GENETIC ALGORITHM
Genetic algorithms are inspired by Darwin's theory of evolution. A solution to a
problem solved by genetic algorithms is evolved. The algorithm starts with a set of solutions
(represented by chromosomes) called a population. Solutions from one population are taken
and used to form a new population, in the hope that the new population will be better than
the old one.
Solutions selected to form new solutions (offspring) are chosen according to their
fitness: the more suitable they are, the more chances they have to reproduce. This is repeated
until some condition (for example, a number of generations, or sufficient improvement of the
best solution) is satisfied.
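The loop just described can be sketched as follows. This is only an illustration: the names `evolve`, and the placeholder `fitness`, `crossover` and `mutate` functions, are hypothetical and problem-specific, not taken from the report.

```python
import random

def evolve(population, fitness, crossover, mutate, generations=50):
    """Generational GA loop: select parents by fitness, recombine them,
    mutate the offspring, and repeat for a fixed number of generations."""
    for _ in range(generations):
        # Fitness-proportionate (roulette-wheel) selection of parents.
        weights = [fitness(ind) for ind in population]
        parents = random.choices(population, weights=weights, k=len(population))
        # Pair parents up, recombine and mutate to form the next population.
        next_population = []
        for i in range(0, len(parents) - 1, 2):
            for child in crossover(parents[i], parents[i + 1]):
                next_population.append(mutate(child))
        population = next_population
    # Return the fittest individual of the final generation.
    return max(population, key=fitness)
```

For example, with a binary encoding one can plug in a simple "count the 1-bits" fitness, a single-point crossover, and a small per-bit mutation rate.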
Example
Problem solving can often be expressed as searching for an extreme of a function.
That is exactly the problem considered here: some function is given, and the GA tries to
find the minimum of that function.
1.3.1 ENCODINGS AND OPTIMIZATION PROBLEMS
Usually there are only two components of most genetic algorithms that are problem
dependent: the problem encoding and the evaluation function. Consider a parameter
optimization problem where we must optimize a set of variables either to maximize some
target such as profit, or to minimize cost or some measure of error.
We might view such a problem as a black box with a series of control dials
representing different parameters; the only output of the black box is a value returned by an
evaluation function indicating how well a particular combination of parameter settings solves
the optimization problem. The goal is to set the various parameters so as to optimize some
output.
Most users of genetic algorithms are typically concerned with problems that are
nonlinear. This often implies that it is not possible to treat each parameter as an
independent variable which can be solved in isolation from the other variables.
There are interactions such that the combined effects of the parameters must be
considered in order to maximize or minimize the output of the black box. In the genetic
algorithm community, the interaction between variables is sometimes referred to as epistasis.
The first assumption typically made is that the variables representing parameters
can be represented by bit strings. This means that the variables are discretized in an
a priori fashion and that the range of the discretization corresponds to some power of 2.
For example, with 10 bits per parameter, we obtain a range with 1024 discrete values.
If the parameters are actually continuous, this discretization is not a particular
problem. This assumes, of course, that the discretization provides enough resolution to make
it possible to adjust the output with the desired level of precision. It also assumes that the
discretization is in some sense representative of the underlying function.
It is helpful to view the execution of the genetic algorithm as a two-stage process. It
starts with the current population; selection is applied to the current population to create an
intermediate population. Then recombination and mutation are applied to the intermediate
population to create the next population. The process of going from the current population to
the next population constitutes one generation in the execution of a genetic algorithm.
1.4 ENCODING OF A CHROMOSOME
The chromosome should in some way contain information about the solution it
represents. The most common encoding is a binary string. A chromosome could then look
like this:
Chromosome 1: 1101100100110110
Chromosome 2: 1101111000011110
Each chromosome is a binary string. Each bit in this string can represent some
characteristic of the solution, or the whole string can represent a number. Of course, there
are many other ways of encoding; the choice depends mainly on the problem being solved.
For example, one can directly encode integer or real numbers; sometimes it is useful to
encode permutations, and so on.
1.4.1 CROSSOVER
After we have decided what encoding to use, we can take a step to crossover.
Crossover selects genes from parent chromosomes and creates new offspring. The simplest
way to do this is to choose a crossover point at random, copy everything before this point
from the first parent, and then copy everything after the crossover point from the second
parent. Crossover can then look like this (| is the crossover point):
Chromosome 1: 11011 | 00100110110
Chromosome 2: 11011 | 11000011110
Offspring 1: 11011 | 11000011110
Offspring 2: 11011 | 00100110110
There are other ways to perform crossover; for example, we can choose more
crossover points. Crossover can be rather complicated and depends very much on the
encoding of the chromosome. A specific crossover designed for a specific problem can
improve the performance of the genetic algorithm.
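The single-point scheme described above can be sketched in Python. This is a minimal illustration (the function name is hypothetical, not from the report); the assertions mirror the worked example, with the crossover point after the fifth bit.

```python
import random

def single_point_crossover(parent1, parent2, point=None):
    """Copy everything before the crossover point from one parent and
    everything after it from the other, producing two offspring."""
    if point is None:
        # Pick a random crossover point strictly inside the string.
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```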
1.4.2 MUTATION
After crossover is performed, mutation takes place. Mutation prevents all solutions
in the population from falling into a local optimum of the solved problem. Mutation changes
the new offspring randomly. For binary encoding we can flip a few randomly chosen bits
from 1 to 0 or from 0 to 1. Mutation can then look like this:
Original offspring 1: 1101111000011110
Original offspring 2: 1101100100110110
Mutated offspring 1: 1100111000011110
Mutated offspring 2: 1101101100110110
Mutation depends on the encoding just as crossover does. For example, when encoding
permutations, mutation could be implemented as exchanging two genes.
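For binary encoding, the bit-flip mutation described above can be sketched as follows (an illustrative helper, not code from the report); each bit is flipped independently with a small probability.

```python
import random

def bit_flip_mutation(chromosome, rate=0.01):
    """Independently flip each bit of a binary string with probability `rate`."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < rate else bit
        for bit in chromosome
    )
```

With rate 0 the offspring is unchanged; with rate 1 every bit is inverted.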
1.5 PARAMETERS OF GA
1.5.1 CROSSOVER AND MUTATION PROBABILITY
There are two basic parameters of a GA: crossover probability and mutation
probability. Crossover probability says how often crossover will be performed. If there is no
crossover, offspring are exact copies of the parents. If there is crossover, offspring are made
from parts of the parents' chromosomes. If the crossover probability is 100%, then all
offspring are made by crossover. If it is 0%, the whole new generation is made from exact
copies of chromosomes from the old population (but this does not mean that the new
generation is the same!).
Crossover is performed in the hope that new chromosomes will contain the good parts
of old chromosomes and perhaps be better. However, it is good to let some part of the
population survive into the next generation. Mutation probability says how often parts of a
chromosome will be mutated.
If there is no mutation, offspring are taken after crossover (or copying) without any
change. If mutation is performed, part of the chromosome is changed. If the mutation
probability is 100%, the whole chromosome is changed; if it is 0%, nothing is changed.
Mutation is performed to prevent the GA from falling into a local extreme, but it should not
occur very often, because then the GA would in fact turn into a random search.
1.5.2 OTHER PARAMETERS
There are also some other parameters of a GA. One important parameter is the
population size, which says how many chromosomes are in the population (in one
generation). If there are too few chromosomes, the GA has few possibilities to perform
crossover and only a small part of the search space is explored. On the other hand, if there
are too many chromosomes, the GA slows down.
Research shows that beyond some limit (which depends mainly on the encoding and
the problem) it is not useful to increase the population size, because it does not make solving
the problem faster. Some recommendations for all parameters can be found in one of the
following chapters.
1.6 BINARY ENCODING
1.6.1 CROSSOVER
Single point crossover - one crossover point is selected; the binary string from the
beginning of the chromosome to the crossover point is copied from one parent, and the rest
is copied from the second parent.
11001011 + 11011111 = 11001111
Two point crossover - two crossover points are selected; the binary string from the
beginning of the chromosome to the first crossover point is copied from one parent, the part
from the first to the second crossover point is copied from the second parent, and the rest is
copied from the first parent.
11001011 + 11011111 = 11011111
Uniform crossover - bits are randomly copied from the first or from the second parent.
11001011 + 11011101 = 11011111
Arithmetic crossover - some arithmetic operation is performed to make a new offspring.
11001011 + 11011111 = 11001011 (AND)
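Two of these variants can be sketched directly (hypothetical helper names, for illustration only). The arithmetic variant here uses bitwise AND; the uniform variant copies each bit at random from one of the two parents.

```python
import random

def uniform_crossover(p1, p2):
    """Each offspring bit is copied at random from the first or second parent."""
    return "".join(random.choice(pair) for pair in zip(p1, p2))

def and_crossover(p1, p2):
    """Arithmetic crossover using bitwise AND of the two parent strings."""
    return "".join("1" if a == "1" and b == "1" else "0" for a, b in zip(p1, p2))
```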
1.6.2 MUTATION
Bit inversion - selected bits are inverted
11001001 => 10001001
1.7 PERMUTATION ENCODING
1.7.1 CROSSOVER
Single point crossover - one crossover point is selected; up to this point the permutation is
copied from the first parent, then the second parent is scanned and if a number is not yet in
the offspring, it is added.
(1 2 3 4 5 6 7 8 9) + (4 5 3 6 8 9 7 2 1) = (1 2 3 4 5 6 8 9 7)
1.7.2 MUTATION
Order changing - two numbers are selected and exchanged
(1 2 3 4 5 6 8 9 7) => (1 8 3 4 5 6 2 9 7)
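Both permutation operators above can be sketched as follows (illustrative helper names, not from the report); the assertions mirror the two worked examples.

```python
def permutation_crossover(parent1, parent2, point):
    """Copy parent1 up to the crossover point, then scan parent2 and
    append each number not yet present in the offspring."""
    child = list(parent1[:point])
    for gene in parent2:
        if gene not in child:
            child.append(gene)
    return child

def order_changing_mutation(perm, i, j):
    """Exchange the numbers at positions i and j."""
    perm = list(perm)
    perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Note that both operators always produce a valid permutation, so no repair step is needed.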
1.8 VALUE ENCODING
1.8.1 CROSSOVER
All crossovers from binary encoding can be used.
1.8.2 MUTATION
Adding a small number (for real value encoding) - a small number is added to (or subtracted
from) selected values.
(1.29 5.68 2.86 4.11 5.55) => (1.29 5.68 2.73 4.22 5.55)
1.9 TREE ENCODING
1.9.1 CROSSOVER
Tree crossover - one crossover point is selected in both parents, the parents are divided at
that point, and the parts below the crossover point are exchanged to produce new offspring.
1.9.2 MUTATION
Changing operator or number - selected nodes are changed.
1.10 CHROMOSOME ENCODING PROBLEMS
Encoding of chromosomes is one of the problems you face when starting to solve a
problem with a GA. The encoding depends very much on the problem. In this section we
introduce some encodings which have already been used with some success.
1.10.1 BINARY ENCODING
Binary encoding is the most common, mainly because the first works about GAs used
this type of encoding. In binary encoding, every chromosome is a string of bits, 0 or 1.
Chromosome A: 101100101100101011100101
Chromosome B: 111111100000110000011111
Example of chromosomes with binary encoding
Binary encoding gives many possible chromosomes even with a small number of
alleles. On the other hand, this encoding is often not natural for many problems, and
sometimes corrections must be made after crossover and/or mutation.
Example of Problem: Knapsack problem
The problem: There are things with given values and sizes. The knapsack has a given
capacity. Select things to maximize the total value of things in the knapsack without
exceeding its capacity.
Encoding: Each bit says whether the corresponding thing is in the knapsack.
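With this encoding, a fitness function for the knapsack problem might look like the sketch below (the item data are made up for illustration; assigning zero fitness to overweight selections is one common penalty choice among several).

```python
def knapsack_fitness(chromosome, values, sizes, capacity):
    """Sum the values of the selected things; return 0 if the selection
    exceeds the knapsack capacity (penalizing infeasible chromosomes)."""
    total_size = sum(s for bit, s in zip(chromosome, sizes) if bit == 1)
    if total_size > capacity:
        return 0
    return sum(v for bit, v in zip(chromosome, values) if bit == 1)
```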
1.10.2 PERMUTATION ENCODING
Permutation encoding can be used in ordering problems, such as the travelling
salesman problem or a task ordering problem. In permutation encoding, every chromosome
is a string of numbers which represents a number in a sequence.
10
Chapter 1 - Introduction
Chromosome A: 1 5 3 2 6 4 7 9 8
Chromosome B: 8 5 6 7 2 3 1 4 9
Example of chromosomes with permutation encoding
Permutation encoding is useful only for ordering problems. Even for these problems,
corrections must be made for some types of crossover and mutation to keep the chromosome
consistent (i.e. to keep a real sequence in it).
Example of Problem: Travelling salesman problem (TSP)
The problem: There are cities with given distances between them. The travelling
salesman has to visit all of them, but should not travel more than necessary. Find a sequence
of cities that minimizes the travelled distance.
Encoding: The chromosome gives the order in which the salesman will visit the cities.
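For this encoding, the quantity to minimize is the length of the closed tour. A sketch (the helper name and the city coordinates in the usage below are made up for illustration):

```python
import math

def tour_length(order, coords):
    """Total distance of the closed tour that visits cities in `order`."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = coords[order[i]]
        x2, y2 = coords[order[(i + 1) % len(order)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```

For four cities on a unit square, the tour 0-1-2-3 has length 4, while a tour that crosses the diagonals is longer.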
1.10.3 VALUE ENCODING
Direct value encoding can be used in problems where complicated values, such as
real numbers, are used. Using binary encoding for this type of problem would be very
difficult. In value encoding, every chromosome is a string of values. The values can be
anything connected to the problem, from numbers, real numbers or characters to complicated
objects.
Chromosome A: 1.2324 5.3243 0.4556 2.3293 2.4545
Chromosome B: ABDJEIFJDHDIERJFDLDFLFEGT
Chromosome C: (back), (back), (right), (forward), (left)
11
Chapter 1 - Introduction
Example of chromosomes with value encoding
Value encoding is very good for some special problems. On the other hand, for this encoding
it is often necessary to develop new crossover and mutation operators specific to the problem.
Example of Problem: Finding weights for a neural network
The problem: There is a neural network with a given architecture. Find the weights for
the inputs of neurons to train the network towards the wanted output.
Encoding: Real values in the chromosome represent the corresponding input weights.
1.10.4 TREE ENCODING
Tree encoding is used mainly for evolving programs or expressions, i.e. for genetic
programming. In tree encoding, every chromosome is a tree of some objects, such as
functions or commands of a programming language.
Chromosome A: ( + x ( / 5 y ) )
Chromosome B: ( do_until step wall )
Example of chromosomes with tree encoding
Tree encoding is good for evolving programs. The programming language LISP is often
used for this, because programs in LISP are represented directly in this form and can be
easily parsed as trees, so crossover and mutation can be done relatively easily.
1.11 GA APPLICATIONS
1.11.1 FINANCE APPLICATIONS
Models for tactical asset allocation and international equity strategies have been improved
with the use of GAs. The authors report an 82% improvement in cumulative portfolio value
over a passive benchmark model and a 48% improvement over a non-GA model designed to
improve over the passive benchmark.
Genetic algorithms are particularly well suited for financial modeling applications for
three reasons:
1. They are payoff driven. Payoffs can be improvements in predictive power or returns
over a benchmark. There is an excellent match between the tool and the problems
addressed.
2. They are inherently quantitative and well suited to parameter optimisation (unlike
most symbolic machine learning techniques).
3. They are robust, allowing a wide variety of extensions and constraints that cannot be
accommodated in traditional methods.
1.11.2 INFORMATION SYSTEMS APPLICATIONS
Distributed computer network topologies have been designed by a GA, using three
different objective functions to optimize network reliability parameters, namely diameter,
average distance, and computer network reliability. The GA has successfully designed
networks on the order of 100 nodes. A GA has also been used to determine the file allocation
for a distributed system. The objective is to maximize the programs' ability to reference files
located on remote nodes. The problem is solved with the following three different constraint
sets:
1. There is exactly one copy of each file to be distributed.
2. There may be any number of copies of each file, subject to a finite memory constraint
at each node.
3. The number of copies and the amount of memory are both limited.
1.11.3 PRODUCTION/OPERATION APPLICATIONS
A Genetic Algorithm has been used to schedule jobs in a sequence-dependent setup
environment for minimal total tardiness. All jobs are scheduled on a single machine; each
job has a processing time and a due date. The setup time of each job depends on the job that
immediately precedes it. The GA is able to find good, though not necessarily optimal,
schedules fairly quickly. A GA has also been used to schedule jobs in a non-sequence-
dependent setup environment.
The jobs are scheduled on one machine with the objective of minimizing the total
weighted penalty for earliness or tardiness relative to the jobs' due dates. However, this does
not guarantee optimal solutions for all schedules. A GA has also been developed for solving
the machine-component grouping problem required for cellular manufacturing systems. The
GA provides a collection of satisfactory solutions for a two-objective environment
(minimizing cell load variation and minimizing the volume of inter-cell movement), allowing
the decision maker to select the best alternative.
1.11.4 ROLE IN DECISION MAKING
Applying the well-established decision-processing phase model of Simon (1960),
Genetic Algorithms appear to be very well suited for supporting the design and choice phases
of decision making.
In solving a single-objective problem, the GA designs many solutions until no further
improvement (no increase in fitness) can be achieved, some predetermined number of
generations has evolved, or the allotted processing time is complete.
The fittest solution in the final generation is the one that maximizes or minimizes the
objective (fitness) function; this solution can be thought of as the GA's recommended choice.
Therefore, with single-objective problems, the user of a GA is assisted in the choice phase of
decision processing.
When solving multi-objective problems, the GA produces many solutions that are
satisfactory in terms of the objectives, and then allows the decision maker to select the best
alternative. GAs therefore assist with the design phase of decision processing in
multi-objective problems. The target of each set of test data contains different functional
nodes, different connections between nodes, and different I/O variables for each connection.
1.12 COMPARISON BETWEEN GA AND EA
The efficiency and quality of a GA depend largely on its key parameters, whose
values influence the performance of the GA: the population size, selection operator,
crossover operator, mutation operator and so on.
The GA and the associated scheduling algorithms are coded in MATLAB 6.5 with the
following parameter settings: population size (35), linear roulette-wheel selection (0.8),
double-point crossover (0.7), and random mutation (0.03-0.07). Additionally, in the
experiments, the algorithm is compared with an Evolutionary Algorithm (EA) which has the
same selection strategy and mutation parameters as the GA.

Fig 1.1.1 - Graph: GA vs EA
1.13 HARDWARE/SOFTWARE PARTITIONING
Hardware/software partitioning is the problem of dividing an application's
computations into a part that executes as sequential instructions on a microprocessor (the
"software") and a part that runs as parallel circuits on some IC fabric like an ASIC or FPGA
(the "hardware"), so as to achieve design goals set for metrics like performance, power,
size, and cost.
The circuit part commonly acts as a coprocessor for the microprocessor. For example,
a video compression application may be partitioned such that most of the frame handling
computations execute on a microprocessor, while the compute-intensive DCT (discrete
cosine transformation) part of the compression application is offloaded to execute in a fast
DCT coprocessor circuit.
Circuits can execute some computations thousands of times faster than sequential
instructions, due largely to their parallel execution. For example, if a computation consists of
100 multiplications of independent data items, a microprocessor would have to execute the
multiplications one (or a few) at a time, requiring hundreds of clock cycles, while a circuit
could potentially (subject to data availability) execute all 100 multiplications in parallel
using 100 multipliers, requiring just one or a few clock cycles. Energy reductions can also
result.
1.13.1 HARDWARE/SOFTWARE PARTITIONING MODEL
We assume that hardware-software partitioning is to find an implementation scheme
that satisfies the system constraints. It is known that embedded systems must meet tight cost,
power consumption, and performance constraints. However, these constraints cannot all be
fully met together, so we make a tradeoff among them.
In our approach, we use the cost/performance ratio to represent the performance of
every hardware or software implementation method. The following paragraphs describe our
model.
1.13.2 HARDWARE/SOFTWARE PARTITIONING USING GA
The GA is based on natural evolution and Darwinian natural selection. Darwinian
natural selection theory is a widely accepted biological doctrine of evolution. This theory
holds that organisms must struggle for existence in order to survive.
In the survival struggle, individuals which have variations advantageous in their living
environment survive more easily and have more opportunities to pass those advantageous
variations down, while individuals with disadvantageous variations are eliminated.
Therefore all the individuals that win the survival struggle have strong adaptability to
the environment. Darwin named this survival struggle natural selection. The Genetic
Algorithm simulates the process of Darwin's natural selection. The complete flow of a
Genetic Algorithm involves data encoding, population initialization, fitness evaluation,
selection, crossover, mutation and a termination condition.
1.13.3 DATA ENCODING AND FITNESS EVALUATION
A variable as a real number may be regarded as the phenotype form of the Genetic
Algorithm. The process of mapping from phenotype to genotype is considered encoding. In
this paper we employ binary code, which represents the variable value as a {0,1} binary
string.
A 1 represents hardware implementation while a 0 represents software implementation.
We use bi to represent the implementation of the i-th function block, and B to represent an
implementation scheme:
B = b1 b2 b3 ... bm
There are many function blocks, and each has n implementation means, including
hardware implementations and software implementations. If bi is 1, we choose an
implementation from the hardware implementations (Ii^H); otherwise we choose one from
the software implementations (Ii^S).
The fitness evaluation is as follows:
Fitness = 1 / (1 + cp)
where cp is the cost/performance ratio defined earlier.
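As code, the fitness evaluation is simply the sketch below (cp is the cost/performance ratio of a candidate partitioning scheme; the function name is illustrative):

```python
def fitness(cp):
    """Fitness = 1 / (1 + cp): a lower cost/performance ratio
    yields a higher fitness."""
    return 1.0 / (1.0 + cp)
```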
1.13.4 SELECTION OPERATOR
17
Chapter 1 - Introduction
The final goal of this process is to select excellent individuals and give them the
opportunity to become the parents of the next generation. According to each individual's
fitness value, some excellent individuals are inherited into the next generation by certain
selection rules.
The Genetic Algorithm employs a selection operator to express this idea, which
embodies Darwin's survival-of-the-fittest principle.
We compute the selection probability of every individual by the following function:
Pi = fitness_i / sum(fitness_k, k = 1..m)
After computing each individual's selection probability, a number in [0,1] is generated
at random to decide which individual is selected into the next generation.
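This selection rule can be sketched as a roulette wheel (an illustrative helper, not code from the report): draw a random number and walk the cumulative selection probabilities until the drawn number falls inside an individual's slice.

```python
import random

def roulette_select(population, fitnesses):
    """Draw r in [0, 1) and walk the cumulative selection probabilities
    P_i = fitness_i / sum(fitness) until r falls inside an individual's slice."""
    total = sum(fitnesses)
    r = random.random()
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f / total
        if r < cumulative:
            return individual
    return population[-1]  # guard against floating-point round-off
```

An individual with zero fitness is never selected, and individuals are selected in proportion to their fitness.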