INTRODUCTION
are able to respond to rapid changes in designs and/or demands without the
that the remaining part can be solved/has been solved independently without
affecting the solution of the problem at hand. This is in fact the common
problems, more often than not, are combinatorial in nature with a large
number of variables. This implies that the number of the possible solutions
increases very rapidly with the size of the problem being considered and the
solution. Further, the thumb rules have to be designed explicitly for each
problem based on experience. With the advent of enormous and cheap
way that the assumptions reduce the number of variables at hand. This
ensures tractability of the problem from the computational point of view but at
These approaches have provided some solutions and these have traditionally
been used in the absence of any other way of handling the complexity of the
manufacturing problems.
algorithmic computing that requires the precise statement of the step-by-step
procedure written for the use of a computer to solve the problem. The primary aim of Soft
achieve tractability, robustness and low cost. At this juncture, the major
prompted investigation of many areas of engineering and the results are quite
The main thrust of this research work was to investigate the applicability
Intelligent Manufacturing with special emphasis on the modelling and
obtain the desired final configuration. The tools store the desired geometry and
express with quantitative relationships. The metal flow, the friction at the tool-
material interface, the heat generation and transfer during plastic flow, and the
strain rates, strains) between the deformed and undeformed part, i.e. predicting
metal flow; (2) establishing the limits of formability or producibility, i.e.
defects; and (3) predicting the forces and stresses necessary to execute the
the billet (geometry and material), the tooling (geometry and material), the
the equipment used and the characteristics of the final product, and finally the
operation, i.e. obtaining the desired shape and properties, is the understanding
and control of metal flow. The direction of metal flow, the magnitude of
the formed components. Metal flow determines both the mechanical properties
folds at or below the surface. The local metal flow is in turn influenced by the
MATERIAL VARIABLES:
history (microstructure), the flow stress (or effective stress), and the
deformation, stresses and strain history) and (2) material variables (such as
FRICTION:
factor m. There are various methods of evaluating friction, i.e. estimating the
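For orientation, the friction factor m referred to here is conventionally defined through the constant shear friction model (a standard relation in the forming literature rather than one quoted from this page):

    \tau_f = m\,k, \qquad 0 \le m \le 1, \qquad k = \bar{\sigma}/\sqrt{3},

where \tau_f is the friction stress at the tool-material interface and k is the shear yield strength of the deforming material; m = 0 corresponds to a frictionless interface and m = 1 to sticking friction.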
DEFORMATION MECHANICS:
the desired product. Metal flow is influenced mainly by (1) tool geometry, (2)
influence the quality and properties of the formed product and the force and
modelling. The process modelling by the finite element method is the one
PRODUCT PROPERTIES:
The macro and the microgeometry of the product, i.e. its dimensions
variations taking place during deformation and often influence final product
properties [Ber 98, Gao 2000]. Consequently, a realistic system approach must
1.3 PROCESS MODELLING
mechanics, it would not be possible to design the dies and the equipment
modelling for computer simulation has been a major concern in modern metal-
have been developed and applied to various forming processes. The methods
most well known are the slab method, the slip-line field method, upper- (and
lower-) bound techniques, Hill's general method, and, more recently, the
several slabs. For each slab, simplifying assumptions are made mainly with
respect to stress distributions. The resulting approximate equilibrium equations
are solved with the imposition of stress compatibility between slabs and
The slip-line field method is used in plane strain for perfectly plastic
materials (constant yield stress) and uses the hyperbolic properties that the
predicting results that give good correlation with experimental work. From the
equations.
fields, among which the best one is chosen by minimising total potential
distributions.
processes when the plastic flow is unconstrained. The method is based on the
tube sinking.
For further reference, the books [Avt 68, Row 77, Che 97] provide not
only detailed descriptions of the above methods but also a wealth of solutions to
many metal forming problems using these methods. These methods have been
parameters on the detailed metal flow became possible only when the finite-
element method was developed for the analyses. Since then, the finite-element
forming processes.
FORMING PROCESSES
application, in many cases, can be justified only by its solution reliability and
the analysis of compression and other simple processes. Since those early
days, many developments of the numerical techniques have occurred as well
appropriate for cold forming processes [Owe 80]. Various aspects of this
approach were studied for the last two decades by many researchers [Mas 87,
Che 88, Bel 88, Gel 89, Mak 89, Gra 89]. However, large deformation elastic
plastic formulations require large computer capacity and large computer time.
The realisation that plastic problems of metal forming with very large
[Zie 74, Zie 75, Zie 84] and [Mon 88, Cia 89, Che 89a,
Che 89b, Cou 89, Fou 89]. The first application to the flow of hot metals with
important step forward was the explicit inclusion of thermal coupling in the
solution [Ode 73, Lee 73, Zie 74]. A noteworthy step in the development of
flow procedures was their extension to include elastic strain effects [Tho 84].
The flow formulations are most suitable for hot working, though they
can be applied in a modified form to cold working. For interface friction, this
[Che 78].
rigid plastic finite element approach and assuming fully plastic deformation of
many industrial situations in the field of metal forming. These involve large
deformations in which the strains are related to large plastic flow. The
dies and work-piece, and temperatures of billet and dies. The designer who
develops an extrusion operation must know first the power required, which
sure that the desired shape can be achieved for specified reductions in area and
analysis of extrusion processes has been realised by [Lee 79], by
elevated temperature and high strain rates requires the combined application of
c. A numerical tool, such as the finite element method, in order to utilise the
transfer. The finite difference method has been used to develop a computer code
[Sun 87], which can predict dynamic thermal profiles in the simple hot
upsetting process.
investigate special types of loadings: pure torsion, uniaxial loading, and cyclic
[Kob 84].
finite element formulation for the analysis of metal forming coupled with
dimensional finite element model for the simulation of isothermal hot forging.
study the flow of metal during the transient analysis of an axi-symmetric
and the three dimensional finite element program developed is also used to
relatively close to the final geometry. For the cold deformation of metal
extension of the Von Mises theory for dense plastic materials. There are
relatively few finite element models for the prediction of industrial forming
[Abo 88], and [Che 89b]. Forging of powder metallurgy preforms was
done by [Im 86], [Oh 87], [Wey 87], and
improving the quality at the same time. All these points are basically
metal forming evolved into a potentially accurate and flexible technique for
domain-specific code capable of meeting the demands of the user. One such
axisymmetric) of metal forming with thermal and friction coupling for both
The theoretical basis of FORGE2®, along with its features such as thermo-
problems. The automatic remeshing scheme for 6-node elements enables the
various parameters like strain rate and flow direction can be visualised. The
Often in industry, when the material chosen does not exhibit strain
one often has to choose between various lubricants for a particular process.
This choice can be made very easily once it is possible to simulate the process
with friction conditions arising out of the use of a particular lubricant and then
observing its effect on various parameters like forging force, strain and stress.
Forging processes for a product of AISI 4130 steel have been developed
development of design rules driven CAD applications [Yu 85, Ras 89] in the
forging pieces and dies, based on design rules and variational geometry has
families; then, forging design rules were identified for draft angles, radii,
flash land and preforms. The application was implemented in I/EMS from
work has been strictly concerned with the production of solid rod components,
whereas radial extrusion of tubular parts has received little attention. Major
process parameters such as the ratio between inner and outer diameter of the
tube, the ratio between wall thickness and gap height together with the design
of the die chamber are important dimensional parameters that determine the
[Pet 94].
Chapter I 18
A numerical method of construction of axi-symmetric slip-line fields
and their associated velocity fields is applied to investigate the problem of axi-
symmetric tube extrusion in [Chi 97]. Analysis of tapered preforms for upset
process design and analysis. Friction affects forming force/energy, tool life
and the product quality (surface finish, internal structure, product life etc.).
[Wan 97].
Neural Network (ANN). ANN models easily capture the intricate relationships
between various process parameters and can be easily integrated into an existing
manufacturing environment.
Computing. The primary aim of Soft Computing is to exploit the tolerance for
imprecision and uncertainty to achieve tractability, robustness and low cost. In
Soft computing may use training methods to evolve programs that are
environment and it accepts data that may be fuzzy or rough, and responds by
producing information that also has these characteristics. The choice of Soft
Computing for complex optimisation problems is justified by the fact that
these are very well suited to deal with all those problems that usually represent
objectives etc. The basic idea is to do away with the rigidity of the traditional
algorithmic computing that requires the precise statement of the step-by-step
procedure written for the use of a computer to solve the problem. Soft Computing might be
Soft Computing attempts to emulate and automate the pragmatic
general has always been viewed as intended to have just those characteristics,
which "Soft" Computing purposely avoids. In the past all the scientific and
engineering effort that has gone into computer science and engineering has
something like Soft Computing in no way negates the importance, even the
improve that discipline in every way. Soft Computing arises simply from a
themselves to solution by hard computing and that other methods exist which
Computing techniques.
via dense interconnection of simple computational elements, which are called,
via axon connections. Such a model resembles the axons and dendrites of the
nervous system. Because of its self-organising and adaptive nature, the model
provides a new parallel and distributed paradigm that has the potential to be
more robust and user-friendly than traditional schemes [Wid 90, Oba 92, Zur
technology that made neural network implementation faster and more efficient
solutions, especially in the field of learning, pattern recognition, optimisation,
its input paths by adding up the weighted sum of all inputs. The output of a PE is
connecting the inputs to the PE. There are various ways in which PEs are
there exists a variety of training (learning) rules that determine how, when and
A suitable activation function can be chosen for each layer by trial and
error method from among several commonly used functions such as TanH,
the input patterns. Differential error at each hidden layer is computed and the
corresponding delta weights are added to all the weights in the system. This is
done for each (input, desired output) pair when the delta-learning rule is in effect.
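To make this cycle concrete, the following minimal Python sketch (names, initialisation and learning rate are illustrative, not taken from the thesis) shows a single PE forming the weighted sum of its inputs, squashing it through a TanH activation, and applying a delta-style correction for each (input, desired output) pair:

    import numpy as np

    class ProcessingElement:
        """A single PE: output = activation(weighted sum of all inputs)."""

        def __init__(self, n_inputs, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.uniform(-0.5, 0.5, n_inputs)  # connection weights
            self.b = 0.0                               # bias (threshold)

        def forward(self, x):
            # TanH is one of the commonly used activation functions named above.
            return np.tanh(self.w @ np.asarray(x) + self.b)

        def delta_update(self, x, target, lr=0.1):
            """Delta-style correction for one (input, desired output) pair:
            weight changes proportional to the output error."""
            x = np.asarray(x, dtype=float)
            y = self.forward(x)
            grad = (target - y) * (1.0 - y ** 2)  # error scaled by TanH slope
            self.w += lr * grad * x
            self.b += lr * grad
            return target - y

Sweeping delta_update repeatedly over the training pairs mirrors the gradual weight synthesis described above.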
However, the convergence speed for ANNs can be improved when other rules
are used along with a suitable momentum factor. The direction of signal flow,
number of neurons in each layer, etc. are a few of the current active research
areas. The following design factors are important aspects of the
3) ANN structures;
5) Activation functions
solved, and some available training data. The parallel and distributed
that the brain contains over 100 billion (10^11) neurons of different types and
10^14 synapses in the human nervous system. Recent studies of the brain have
found that there are more than 1000 synapses on the output/input of each
neuron. A neuron is the fundamental cellular unit of the nervous system and,
in particular, the brain. The "artificial neuron" is the basic building block unit
of any ANN. Each neuron can be regarded as a simple processing element that
receives and combines signals from many other neurons through input
greater than a certain threshold, the firing of a neuron takes place resulting in
an output signal that is transmitted along a cell component called "Axon". The
the "Synapse". The strength of the signal transmitted across a synapse is called
The main objective of the first type is to develop a synthetic element for
verifying hypothesis related to biological systems. The ANNs are not used
comprises massively parallel adaptive PEs with an interconnection network.
general, neurons and axons are mathematically modelled by activation and net
ANN models are only loosely tied to the biological realities. They are tightly
(a) Parallel and distributed processing capability: they employ a large number
organisation rules.
Neural networks can be classified into different categories based on the
learning, each input pattern received from the environment is associated with a
specific desired target pattern. The weights are usually synthesised gradually,
and at each step of the learning process they are updated in order to minimise
the error between the network's output and the corresponding desired target, in
groups without the aid of a training set. The goal here is to separate the given
metric/function in terms of the output activity of the units. The weights and
capture the statistical regularities of the input data. In order to accomplish this
evaluate the criterion for each possible assignment, therefore, a method must
that attempt to maximise the probability of positive external reinforcement for
should have a good noise immunity capability, which is critical for some
applications.
f(x) is found. The cost function is easy to find for some applications, however,
in other applications, it has to be derived from a given cost criterion and some
constraints related to the problem at hand. One of the main issues related to
a local minimum instead of a global minimum. Among the techniques that are
proposed to tackle this problem are the simulated annealing and mean field-
trained based on the supervised training scheme using a large data set. An
for the trained data set and can provide smooth interpolations for the untrained
it has not been trained. The system must induce the salient feature of the input
and detect the regularity. This regularity discovery is vital for many
the entire data set, although it has been trained only by a limited portion of the
entire data set [Han 2000]. It is important to note that each ANN model tends
to impose its own prejudice in how to generalise from a finite set of training data.
task is the completion of information, that is, recovery of original data given
capture and model complex input-output relationships even without the help of
a mathematical model [Fau 94]. This property of ANNs is extremely
90, Osk 91] have been made to apply this technique as an opportunity to
shorten the reaction time of the manufacturing systems, increase the product
quality, make systems more reliable and enhance the system's intelligence by
techniques and concluded that a proper neural network model could estimate
forward neural network to learn and optimise turning processes. In that study,
approach is better suited for simple models because, for complicated models,
the conversion process will become very tedious and difficult to realise.
networks can be used to model and optimise grinding processes. In their work,
model the process and a Boltzmann factor was integrated with the BP algorithm
their review paper [Mon 92], stated that the past applications of neural
recognition and image processing, etc., the field of neural networks attracts a
and control, optimisation, routing in parallel computer systems and high-speed
computing. These areas are brought together with the common theme of
of biological neural systems [Zur 91, Kun 93, Oba 97, Kha 97, Oba 98a].
have a better chance of surviving and reproducing, whilst individuals who are
less fit are eliminated. This means that the genes from the highly fit
adapted ancestors may produce even more fit offspring. In this way, species
evolve to become more and more adaptive to their environment. A GA
values. The higher the value of the objective function, the higher is the
two steps. First, members of the newly reproduced strings in the mating pool
are mated at random. Secondly, each pair of strings undergoes the crossover
these three operations, often known as offspring or children, form the next
generation's population.
After every generation, highly fit individuals or solutions are given
in the crossover procedure, with other highly fit individuals. This produces
new offspring solutions (i.e. children), which share some characteristics taken
from both parents. Mutation is often applied after crossover by altering some
genes in the strings. The offspring can either replace the whole population
number of times (up to the point where the system ceases to improve or the
follows:
(6) Output the best solution found.
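A compact Python sketch of this cycle (a generic binary-string GA, not drawn from the thesis; population size, rates and encoding are illustrative, and the fitness function is assumed non-negative for roulette-wheel selection) might read:

    import random

    def genetic_algorithm(fitness, n_bits=16, pop_size=30, p_cross=0.8,
                          p_mut=0.01, generations=50, seed=0):
        rng = random.Random(seed)
        # Initialise a population of binary strings.
        pop = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]

        def roulette(fits):
            # Reproduction: the higher the objective value, the higher the
            # probability of entering the mating pool.
            pick, acc = rng.uniform(0, sum(fits)), 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= pick:
                    return ind
            return pop[-1]

        best = max(pop, key=fitness)
        for _ in range(generations):
            fits = [fitness(ind) for ind in pop]            # evaluate
            pool = [roulette(fits) for _ in range(pop_size)]  # mating pool
            nxt = []
            for i in range(0, pop_size - 1, 2):
                a, b = pool[i][:], pool[i + 1][:]
                if rng.random() < p_cross:                  # single-point crossover
                    cut = rng.randrange(1, n_bits)
                    a[cut:], b[cut:] = b[cut:], a[cut:]
                nxt += [a, b]
            for child in nxt:                               # mutation
                for j in range(n_bits):
                    if rng.random() < p_mut:
                        child[j] = 1 - child[j]
            pop = nxt
            best = max(pop + [best], key=fitness)
        return best                                         # best solution found

For instance, genetic_algorithm(sum) drives the population towards the all-ones string, since the objective simply counts ones.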
are more flexible than most of the other search methods. GAs manipulate
similarities among high performance strings. Other methods usually deal with
functions and their control variables directly. Because GAs work from a
population with wide sample points, the probability of reaching a false peak is
reduced. The transition rules of GAs are stochastic via sampling; most other
methods are deterministic transition rules. Therefore, GAs are more suitable
for multiple-peak functions. Also, they can work with any complicated
objective functions, as they need only to compute the function values and do
operation.
In theory, GAs cannot guarantee to attain the best solution. However,
to get the best solution. For this reason, GAs have been applied in a
1.5.2.4 SOME SALIENT DEVELOPMENTS IN GENETIC
ALGORITHM APPLICATIONS
[Dej 75] laid the foundations of GAs. Numerous papers and
applications [Gol 87]. The robustness of GAs is due to their capacity to locate
89, Dav 91]. Further research on GAs has witnessed the emergence of new
trends that break the traditional mould of 'neat' GAs that are characterised by
static crossover and mutation rates, fixed length encoding of solutions, and
length solutions for GAs in [Gol 89] and [Gol 90], and has shown that the
'messy' GAs perform very well. [Dav 89] has recommended the
of space structures on a network of workstations using a parallel processing
techniques to reduce the computational time. GAs have been extensively used
in production scheduling [Del 95, Fan 96, Mat 96, Lin 97, Bie 99].
[Miz 94] have applied GAs for optimal tool selection in minimising the
total machining time and the uncut area in milling operation involving
to tackle this problem [Ger 84, Hin 86, Laa 87, Her 91, Che 94, Bru 97].
of minimising a cost function. Ordinarily, the number of parameters contained
in the system under consideration is very large. Thus, finding the minimum
cost solution of a system is like finding a low energy state in a physical
system. Another issue of concern is that deterministic algorithms used for cost
procedures, i.e. the algorithm may get 'stuck' in local 'minima' that are not
found. In such a situation, the system must be capable of escaping from local
Both of these issues are addressed in the SA algorithm. The basic idea
"When optimising a very large and complex system (i.e. a system with many
The idea can neatly be illustrated with a 'balls and hills' diagram, as sketched below:
[Figure: 'balls and hills' diagram plotting cost against configurations, with the current configuration and its allowed downhill perturbations marked.]
• The algorithms need not get 'stuck', since transitions out of a local minimum
Specifically, gross features of the final state of the system are seen at higher
temperatures, while finer details develop at lower temperatures.
The choices the designer of SA has to make can be classified into two
categories:
• Problem specific
• Generic
set of states to which the system can move from S with non-zero
probability.
configuration
The generic parameters are:
c) Stopping criterion
below:
exp((f(y) - f(x))/t) > p, where f(·) is the value of the objective function.
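The definitions surrounding this criterion are lost at the page break, but it is recognisably the Metropolis acceptance rule. In the usual convention for cost minimisation (stated here as a reconstruction, not a quotation), a move from the current configuration x to a candidate y is accepted when

    \exp\!\left( \frac{f(x) - f(y)}{t} \right) > p, \qquad p \sim U(0, 1),

so that downhill moves are always accepted, while uphill moves survive with a probability that shrinks as the temperature t is lowered.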
To implement a finite time approximation of the SA algorithm, a set of
interest as follows:
chosen high enough to ensure that virtually all proposed transitions are
accepted, i.e. the initial acceptance ratio is close to unity. Typical values of A lie between 0.8 and 0.99. At each temperature,
• Final value of the temperature: The system is frozen and annealing stops if
temperatures.
temperature.
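Gathering the problem-specific choices (move set, cost function) and the generic parameters (initial temperature, cooling factor A, stopping criterion) gives the following minimal Python sketch of the finite-time SA loop; the numerical defaults are illustrative placeholders, not values taken from the thesis:

    import math
    import random

    def simulated_annealing(cost, neighbour, s0, t0=100.0, a=0.9,
                            moves_per_temp=50, t_final=1e-3, seed=0):
        """Finite-time SA: Metropolis sampling at each temperature,
        followed by geometric cooling t <- a * t (0.8 <= a <= 0.99)."""
        rng = random.Random(seed)
        s, t, best = s0, t0, s0
        while t > t_final:                    # stop once the system is frozen
            for _ in range(moves_per_temp):
                cand = neighbour(s, rng)      # problem-specific move set
                delta = cost(cand) - cost(s)
                # Metropolis rule: accept all downhill moves; accept uphill
                # moves with probability exp(-delta / t).
                if delta <= 0 or math.exp(-delta / t) > rng.random():
                    s = cand
                    if cost(s) < cost(best):
                        best = s
            t *= a                            # geometric cooling schedule
        return best

For example, simulated_annealing(lambda s: s * s, lambda s, rng: s + rng.uniform(-1.0, 1.0), s0=10.0) anneals a one-dimensional quadratic cost towards its minimum near zero.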
1.6 OPTIMISATION IN METAL FORMING
They derive from two distinct strategies that can be briefly described as
domain of artificial intelligence that proceeds in three steps. First the expertise
context. Then, the expertise must be gathered and stored in a convenient way.
Finally, the expert system can be developed out of this set of rules, as done for
instance in [Len 94]. But collecting the human expertise is a tough and
delicate task that requires time. Moreover it is specific to each process and
factory. So, in order to avoid this difficulty, other CAE tools have been based on
developed both from human expertise and from simplified equations. They
can be used by the expert to solve new and more complex design problems
because they are more general than the specific set of rules that are used for
the usual processes. In [Mar 96], these models have been incorporated in
software. It facilitates their use and puts them within the reach of less
complex problem, as in [Han 92] or in [Kob 89]. It also provides a general
frame for the forging process optimisation. These methods are called inverse
methods of design [Che 96a, Fou 96c, Fou 97, Fou 98]. On this basis, several
zero order methods have been developed. They do not require large
modifications of the forging software that are used as black boxes. The forging
in [Lor 98]. Usually a large number of process simulations are required to find
a satisfactory solution. On the other hand, the problem can be solved directly,
starting from the desired final part and using the backward tracing method to
find a satisfactory shape of the preform, as in [Zha 95]. Another way to speed
Method has been used in its Variational Form in [Bad 96] while its Discrete
form has been preferred in [Fou 96a, Fou 96b, Zha 97] .
presented the optimal shape design techniques for an extrusion die. Two
optimisation problems were examined. The first problem was the
determination of the die contour, which minimises the total extrusion force for
a given material. The second was to find the die shape, which would produce
a uniform exit velocity during the extrusion process. The direct search non-
gradient method of Hooke and Jeeves, and the steepest descent and Fletcher-
Powell gradient methods are used as the optimisation procedures. Jan Kusiak
concluded the following: (1) the optimisation technique can be useful in metal
added to the analysis in order to prevent designs which are totally impractical.
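As an aside, the Hooke and Jeeves procedure mentioned above is a direct search that needs only function values; a generic sketch (the textbook form in Python, with illustrative step parameters, not the implementation of the cited study) is:

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
        """Hooke and Jeeves direct pattern search: exploratory moves along
        each coordinate axis, then a pattern move along the improving
        direction; the step is refined when no move improves f."""
        def explore(base, s):
            x = list(base)
            for i in range(len(x)):
                for trial in (x[i] + s, x[i] - s):
                    cand = x[:i] + [trial] + x[i + 1:]
                    if f(cand) < f(x):
                        x = cand
                        break
            return x

        x = list(x0)
        while step > tol:
            new = explore(x, step)
            if f(new) < f(x):
                # Pattern move: extrapolate past the improved point.
                pattern = [2.0 * n - o for n, o in zip(new, x)]
                trial = explore(pattern, step)
                x = trial if f(trial) < f(new) else new
            else:
                step *= shrink    # no improvement: refine the mesh
        return x

Called as hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]), it converges to approximately (1, -2) without ever evaluating a derivative.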
Micro Genetic Algorithm was implemented. The chosen design variables were
the number of extrusion and upset operations, the amount of upset in each upset,
the included angle in each upset, and the included angle in the extrusion
A finite element model is presented in [Ven 97] to obtain the
temperature distribution in the work piece as well as in the tooling in hot and
the various forming process. The chosen design variables are die geometry,
area reduction ratios and the total number of forming stages. The selected
forming processes are multi-pass cold drawing of a tubular profile and cold
orthogonal array concept, the length of the die land, reduction percentage,
inlet angle and corner fillet were selected as process parameters. The analysis
showed that the inlet angle, friction coefficient and length of die land have the
preform tool shapes in forging. In the frame of a two step forging sequence,
algorithms. First, an objective function that represents the goal to be
achieved (the prescribed final shape of the part) and the defects to be
and derivatives.
dimensional function space. However, for a certain class of forgings the class
demonstrated for the simulated forging of a TiAl turbine disk. The functional
strain rate distribution. The only way to accurately calculate this relationship is
[Big 98] presented a shape optimisation method for
shape is taken as the starting point, and the die is moved in the reverse
direction with boundary nodes released as the die is raised. The optimum die
for each backward time increment based on geometrical features and the
analysis for an axisymmetric H-section disk is presented and it was shown that
component.
discs for uniform exit flow. Bearings for this die are to be designed so that the
material exits both holes with parallel balanced flow. The finite element
Derivatives of the objective function used during the optimisation phase are
reached after a few iterations of the optimisation procedure. Available
experimental data for a two-hole extrusion die with bearings have been used to