
Chapter 1

INTRODUCTION

1.1 INTELLIGENT MANUFACTURING

Intelligent Manufacturing Systems (IMS) are manufacturing systems that

are able to respond to rapid changes in designs and/or demands without the

intervention of humans. In the extreme, these systems are capable of

producing part types of single lot size economically. To respond to changing

demand scenarios the system must be equipped with a comprehensive

manufacturing planning and control system which incorporates a vast amount of

manufacturing knowledge in a form that is accessible rapidly. The design and

implementation of this system is one of the challenges facing the

manufacturing engineer today in the realisation of the IMS.

Real-world manufacturing engineering problems are complex. The

number of variables is usually large and therefore, the problem is managed by

looking at a "reasonable" part of the problem at a time under the assumption

that the remaining part can be solved/ has been solved independently without
affecting the solution of the problem at hand. This is in fact the common

thread running through all the disciplines.

This problem is all the more prevalent in the manufacturing engineering

research because of the very nature of the problems in manufacturing. The

problems, more often than not, are combinatorial in nature with a large

number of variables. This implies that the number of possible solutions

increases very rapidly with the size of the problem being considered and the

number of variables in the problem. This precludes the possibility of any

efficient polynomial time solutions to these problems. One possibility is to

take recourse to the development of heuristics or thumb rules based on

experience. These thumb rules provide reasonable solutions without much

computational effort. However, there is no guarantee regarding the quality of the

solution. Further, the thumb rules have to be designed explicitly for each

problem based on experience. With the advent of enormous and cheap

computational power, it is possible to explore a much larger variety of

possibilities than relying on simple-minded thumb rules and, therefore, the

emphasis on the development and use of the thumb rules is decreasing.

The other possible solution, then, is to formulate the problem in such a

way that the assumptions reduce the number of variables at hand. This

reduction in the number of variables makes it possible to use the traditional

mathematical techniques for these problems. This is the most common

approach used in manufacturing research. The assumptions, in many cases,

ensure tractability of the problem from the computational point of view but at

the cost of generality and comprehensiveness of the problem formulation.

These approaches have provided some solutions and these have traditionally

been used in the absence of any other way of handling the complexity of the

manufacturing problems.

In recent years, there has been much emphasis on the development of

alternate and novel models of computing collectively referred to as Soft


Computing. The basic idea is to do away with the rigidity of the traditional

algorithmic computing that requires the precise statement of the step by step

procedure to be employed for problem solving before a program could be

written for the use of a computer to solve the problem. The primary aim of Soft

Computing is to exploit the tolerance for imprecision and uncertainty to

achieve tractability, robustness and low cost. At this juncture, the major

components of soft computing are Fuzzy Logic, Artificial Neural Networks

(ANN), and probabilistic reasoning techniques like Simulated Annealing,

Genetic Algorithms, Chaos theory and parts of Learning Theory. The

enhanced modelling power and solution power of these approaches have

prompted investigation of many areas of engineering and the results are quite

encouraging. This has prompted the manufacturing engineers also to turn

towards these approaches in an attempt to find solutions to their problems.

The main thrust of this research work was to investigate the applicability

and relative effectiveness of the Soft Computing techniques in the domain of

Intelligent Manufacturing with special emphasis on the modelling and

optimisation of Metal Forming Processes.

1.2 METAL FORMING

The numerous advantages such as material saving, economy in

production, improved material properties, and precision have caused the

prominence of metal forming in industry for producing quality products. In

metal forming simple part geometry is transformed into a complex one to

obtain the desired final configuration. The tools store the desired geometry and

impart pressure on the deforming material through the tool-material interface.

The physical phenomena constituting a forming operation are difficult to

express with quantitative relationships. The metal flow, the friction at the tool-

material interface, the heat generation and transfer during plastic flow, and the

relationships between microstructural properties and process conditions are

difficult to predict and analyse. Often, in producing discrete parts, several

forming operations (preforming) are required to transform the initial "simple"

geometry to a final one without causing material failure. Consequently, the

most significant objective of any method of analysis is to assist the forming

engineer in process design. For a given operation, such design essentially

consists of (1) establishing the kinematic relationships (shape, velocities,

strain rates, strains) between the deformed and undeformed part, i.e. predicting

metal flow; (2) establishing the limits of formability or producibility, i.e.

determining whether it is possible to form the part without surface or internal

defects; and (3) predicting the forces and stresses necessary to execute the

forming operation so that tooling and equipment can be designed or selected.

A metal forming system comprises many subsystems such as

the billet (geometry and material), the tooling (geometry and material), the

conditions at the tool-material interface, the mechanics of plastic deformation,

the equipment used and the characteristics of the final product, and finally the

plant environment in which the process is being conducted [Sem 96]. A

schematic of such a system is shown in figure-1. The "system approach" in

metal forming allows study of the effects of process variables on product

quality and process economics. The key to a successful metal-forming

operation i.e. obtaining the desired shape and properties, is the understanding

and control of metal flow. The direction of metal flow, the magnitude of

deformation, and the temperatures involved greatly influence the properties of

the formed components. Metal flow determines both the mechanical properties

related to local deformation and the formation of defects such as cracks or

folds at or below the surface. The local metal flow is in turn influenced by the

process variables of all subsystems and components. A brief description of the

same is given below:


Figure-1 Illustration of metal forming system using open-die forging as an example:
1, billet; 2, dies; 3, interface; 4, deformation mechanics; 5, forming machine;
6, product; 7, environment

MATERIAL VARIABLES:

For a given material composition and deformation/heat treatment

history (microstructure), the flow stress (or effective stress), and the

workability (or formability) in various directions (anisotropy) are the most

important material variables in the analysis of metal-forming process. For a

given microstructure, the flow stress is expressed as a function of strain, strain

rate and temperature. It is the resistance of the material to plastic deformation. Workability or

formability is the capability of a material to deform without failure; it depends

on (1) conditions existing during deformation (such as temperature, rate of

deformation, stresses and strain history) and (2) material variables (such as

composition, voids, inclusions, and initial microstructure). In hot forming

processes, temperature gradients in the deforming material also influence

metal flow and failure phenomena.
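For illustration, one widely used empirical form writes the flow stress as a power law in strain and strain rate with an exponential temperature term. The sketch below is a hypothetical example of such a function; the constants K, n, m and beta are placeholder values, not data from this work.

import math

def flow_stress(strain, strain_rate, temperature_k,
                K=50.0, n=0.2, m=0.1, beta=2000.0):
    # Power-law form: sigma = K * eps^n * epsdot^m * exp(beta / T).
    # K, n, m, beta are illustrative material constants; real values
    # are obtained from compression or torsion tests.
    return K * strain ** n * strain_rate ** m * math.exp(beta / temperature_k)

# Example: a warm-forming condition (strain 0.5, strain rate 10 /s, 1100 K)
print(flow_stress(0.5, 10.0, 1100.0))   # roughly 340 (MPa, for these constants)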

TOOLING AND EQUIPMENT:

The selection of a machine for a given process is influenced by the

time, accuracy, and load-energy characteristics of that machine. Optimal

equipment selection requires consideration of the entire forming system.

FRICTION:

The mechanisms of interface friction are very complex. One way of

expressing friction quantitatively is through a friction coefficient μ, or a shear

factor m. There are various methods of evaluating friction, i.e. estimating the

value of μ or m [Xu 94, Wan 97].
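The two quantitative descriptions just mentioned can be stated compactly: the Coulomb model takes the interface shear stress as τ = μp, while the constant shear-factor model takes τ = m·k, where p is the normal pressure and k is the shear yield strength of the material. A minimal sketch (function and variable names are mine):

def coulomb_friction_stress(mu, normal_pressure):
    # Coulomb model: interface shear stress proportional to normal pressure.
    return mu * normal_pressure

def shear_factor_friction_stress(m, shear_yield_strength):
    # Constant shear-factor model: tau = m * k, with 0 <= m <= 1.
    return m * shear_yield_strength

# Example: the two models coincide here (30 MPa) for a die pressure of
# 300 MPa and a material with shear yield strength k = 100 MPa.
print(coulomb_friction_stress(0.1, 300.0))
print(shear_factor_friction_stress(0.3, 100.0))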

DEFORMATION MECHANICS:

In forming, material is deformed plastically to generate the shape of

the desired product. Metal flow is influenced mainly by (1) tool geometry, (2)

friction conditions, (3) characteristics of the stock material, and (4) thermal

conditions existing in the deformation zone. The details of metal flow

influence the quality and properties of the formed product and the force and

energy requirements of the process. The mechanics of deformation i.e. the

metal flow, strains, strain-rates, and stresses, can be investigated by process

modelling. Process modelling by the finite element method is the one

most commonly used.

PRODUCT PROPERTIES:

The macro- and microgeometry of the product, i.e. its dimensions

and surface finish, are influenced by process variables. The processing

conditions (temperature, strain and strain-rate) determine the microstructural

variations taking place during deformation and often influence final product

properties [Ber 98, Gao 2000]. Consequently, a realistic system approach must

include consideration of (1) the relationships between properties and

microstructure of the formed material and (2) the quantitative influences of


process conditions on metal flow and resulting microstructures.

1.3 PROCESS MODELLING

A goal in manufacturing research and development is to determine the

optimum means of producing sound products. The optimisation criteria may

vary, depending on product requirements, but establishing an appropriate

criterion requires thorough understanding of manufacturing processes. In

metal-forming technology, proper design and control requires, among other

things, the determination of deformation mechanics involved in the processes.

Without the knowledge of the influence of variables such as friction

conditions, material properties, and workpiece geometry on the process

mechanics, it would not be possible to design the dies and the equipment

adequately, or to predict and prevent the occurrence of defects. Thus, process

modelling for computer simulation has been a major concern in modern metal-

forming technology. In the past a number of approximate methods of analysis

have been developed and applied to various forming processes. The methods

most well known are the slab method, the slip-line field method, upper- (and

lower-) bound techniques, Hill's general method, and, more recently, the

finite-element method (FEM).

In the slab method, the workpiece being deformed is decomposed into

several slabs. For each slab, simplifying assumptions are made mainly with

respect to stress distributions. The resulting approximate equilibrium equations

are solved with impositions of stress compatibility between slabs and

boundary tractions. The final result is a reasonable load prediction with an

approximate stress distribution.

The slip-line field method is used in plane strain for perfectly plastic

materials (constant yield stress) and uses the hyperbolic properties that the

stress equations have in such cases. The construction of slip-line fields,

although producing an "exact" stress distribution, is still quite limited in

predicting results that give good correlation with experimental work. From the

stress distributions, velocity fields can be calculated through plasticity

equations.

The upper-bound method requires the guessing of admissible velocity

fields, among which the best one is chosen by minimising total potential

energy. Information leading to a good selection of velocity fields comes from

experimental evidence and experience. This method, with experience, can

deliver a fast and relatively accurate prediction of loads and velocity

distributions.

Hill has given a general method of analysis for metal-working

processes when the plastic flow is unconstrained. The method is based on the

virtual work-rate principle. The method was applied to the analysis of

compression with barrelling, spread in bar drawing, and thickness change in

tube sinking.

For further reference, the books [Avt 68, Row 77, Che 97] provide not

only detailed descriptions of the above methods but also a wealth of solutions to

many metal forming problems using these methods. These methods have been

useful in predicting forming loads, overall geometry changes of deforming

workpieces, and metal flow, and in determining approximate optimum process

conditions. However, accurate determination of the effects of various process

parameters on the detailed metal flow became possible only when the finite-

element method was developed for the analyses. Since then, the finite-element

method has assumed steadily increased importance in simulation of metal

forming processes.

1.4 FINITE ELEMENT MODELLING OF METAL

FORMING PROCESSES

The application of the finite-element method to metal-forming

problems began as an extension of structural analysis technique to the plastic

deformation regime. An analysis method in the area of metal-forming

application, in many cases, can be justified only by its solution reliability and

computational efficiency. This realisation has led to the development of the

numerical procedures based on the flow formulation. Initial applications of the

rigid-plastic finite-element method to metal-forming processes were mainly in

the analysis of compression and other simple processes. Since those early

days, many developments of the numerical techniques have occurred, along

with continuous growth in the field of applications.

Early applications of the Finite Element Method for metal forming

processes were made possible by [Arg 65], using the elastoplastic

formulation. The application was extended by others, viz. [Pop 66],

[Mar 67], [Yam 68] and [Zie 69]. For

complex geometries, the metal flow is three-dimensional and such a problem

was analysed with elastoplastic deformation by [Nag 73] on block

compression, while for the rigid plastic formulation,

[Kan 78] analysed three-dimensional forming processes. This approach is very

appropriate for cold forming processes [Owe 80]. Various aspects of this

approach were studied over the last two decades by many researchers [Mas 87,

Che 88, Bel 88, Gel 89, Mak 89, Gra 89]. However, large deformation elastic

plastic formulations require large computer capacity and long computation times.

The realisation that plastic problems of metal forming with very large

deformations are amenable to flow formulation treatment appears first to have

been introduced by [Goo 68] in 1968 and later many important

contributions in flow formulation were made by

[Zie 74, Zie 75, Zie 84] and [Mon 88, Cia 89, Che 89a,

Che 89b, Cou 89, Fou 89]. The first application to the flow of hot metals with

varying viscosity was made by [Cor 73]. The most

important step forward was the explicit inclusion of thermal coupling in the

solution [Ode 73, Lee 73, Zie 74]. A noteworthy step in the development of

flow procedures was their extension to include elastic strain effects [Tho 84].

The flow formulations are most suitable for hot working, though they

can be applied in a modified form to cold working. For interface friction, this

approach uses mainly the velocity dependent tangential stresses developed by

[Che 78].

[Che 79] proposed the finite element simulation using the

rigid plastic finite element approach and assuming fully plastic deformation of

a non-hardening material. To calculate the flow patterns, the strain rate

distribution and the stresses, an approach has been proposed by


[Zie 78].

Axisymmetric forward extrusion processes arise quite naturally in

many industrial situations in the field of metal forming. These involve large

deformations in which the strains are related to large plastic flow. The

deformations are strongly dependent upon contact conditions, friction between

dies and work-piece, and temperatures of billet and dies. The designer who

develops an extrusion operation must know first the power required, which

presupposes the knowledge of the load deflection curve. Next, he needs to be

sure that the desired shape can be achieved for specified reductions in area and

semi-cone angles. Finally he needs to have an accurate estimate of the stress

and strain distribution in the product. The elastic-plastic finite element

analysis of extrusion processes has been realised by [Lee 79], by

using an incremental Lagrangian approach.

Modelling of the thermo-mechanical behaviour of metallic parts at

elevated temperature and high strain rates requires the combined application of

the theories of thermodynamics, continuum mechanics, viscoplasticity and

coupled thermoviscoplasticity. The implementation of the coupled

thermoviscoplasticity to a problem requires:

a. The application of the rational theory of thermodynamics

b. A viscoplastic constitutive equation which takes into account the

temperature and rate effects, and

c. A numerical tool, such as the finite element method in order to utilise the

developed equations to solve realistic boundary value problems.

[Zie 78] have made a coupled thermal analysis in

steady-state extrusion. [Reb 80] developed the method for a

coupled analysis of non-steady state viscoplastic deformation and heat

transfer. The finite difference method has been used to develop a computer code

[Sun 87], which can predict dynamic thermal profiles in the simple hot

upsetting process.

[Cer 80] presented a coupled thermoviscoplasticity

theory based on the overstress constitutive equation and applied it to

investigate special types of loadings: pure torsion, uniaxial loading, and cyclic

loading. [Leh 85] defined a coupled thermoviscoplasticity

theory and applied it to investigate the necking process in a uniaxial tension

problem. [Rub 82] developed a thermo-elastic viscoplastic constitutive

equation based on a modification of the inviscid theory of plasticity.

[All 86] developed coupled thermoviscoplasticity equations

based on the rational theory of thermodynamics. In a similar analysis,

[Gho 86] presented slightly different equations and applied them to

solve the problem of dynamic loading of a thick walled cylinder as well as

compression of a constrained-ended cylinder.

[Par 84] and [Sun 84] have

introduced a three-dimensional finite element analysis on block compression,

and [Mor 84] on large deformation in metal forming. A study of two

finite strain plasticity models is attempted by [Im 87]. The most

common way to introduce thermal coupling into the flow formulation is that by

[Kob 84].

[Van 86] developed a combined Eulerian-Lagrangian

finite element formulation for the analysis of metal forming coupled with

thermal effects. [Sur 86, Sur 87] presented a three

dimensional finite element model for the simulation of isothermal hot forging.

[Tho 88] used the pseudo-concentration method to

study the flow of metal during the transient analysis of an axi-symmetric

forging problem. A significant contribution is made in modelling hot flat

rolling and hot shape rolling processes at CEMEF by [Ber 88],

and the three dimensional finite element program developed is also used to

simulate the forging of a connecting rod by [Soy 88]. Studies on

coupling of thermal and mechanical analysis of hot metal forming processes

are proposed by [Alb 90] and [Ana 90]. An important

step forward is the introduction of anisotropy in the numerical simulation of

the forging process by [Bay 90]. Diverse applications are found

for the visco-plastic models developed in literature. Using automatic mesh

generation and regeneration features hot closed die forging of a compressor

disk is attempted by [Han 92].

There is a continuous interest in industry for the forming processes of

non-dense metallic materials directly from powders obtained in different ways

(like atomisation) or from sintered preforms, the shape of which can be

relatively close to the final geometry. For the cold deformation of metal

powders, the constitutive equation that has been proposed is a natural

extension of the Von Mises theory for dense plastic materials. There are

relatively few finite element models for the prediction of industrial forming

processes of non-dense metals.

Hot isostatic pressing was modelled and simulated by

[Abo 88], and [Che 89b]. Forging of powder metallurgy preforms was

done by [Im 86], [Oh 87], [Wey 87], and

[Che 90b]. An industrial application of powder forming

simulation was attempted by [Han 92].

In the Nineties the forging manufacturers' challenge is to increase their

competitiveness by reducing lead time and costs of their products and

improving the quality at the same time. All these points are basically

conditioned during the forging design phase. Finite element modelling of

metal forming has evolved into a potentially accurate and flexible technique for

many industrial applications. A number of finite element codes are

commercially available. Once the problem is identified it is beneficial to use a

domain-specific code capable of meeting the demands of the user. One such

user-oriented domain specific program, FORGE2R developed by Chenot, J.L.,

and co-workers at CEMEF, France, is explained by . [Han 92].

This code is capable of modelling two-dimensional situations (plane strain and

axisymmetric) of metal forming with thermal and friction coupling for both

compressible and incompressible materials with automatic mesh regeneration.

The theoretical basis of the FORGE2R along with its features such as thermo-

viscoplastic coupling, material compressibility and automatic mesh

regeneration is reviewed in this work and an attempt is made to simulate a few

industrial forming processes taking into account the complex friction

phenomena and thermal environment. The advantages of using a domain-

specific code are amply illustrated here by applying FORGE2R to industrial

problems. Automatic remeshing scheme for 6-node elements enables the

simulation of industrial processes with complex geometry and boundary

conditions. The simulation makes it possible to study the effect of change of

material parameters with temperature. Also the effect of temperature on

various parameters like strain rate and flow direction can be visualised. The

advantage of using a robust and reliable process-specific code like FORGE2R

for simulation of industrial forging is proved in this work.

Often in industry, when the material chosen does not exhibit strain

softening, a folding defect of material can be observed in the product. Also,

one often has to choose between various lubricants for a particular process.

This choice can be made very easily once it is possible to simulate the process

with friction conditions arising out of the use of a particular lubricant and then

observing its effect on various parameters like forging force, strain and stress.

This phenomenon is studied using simulations in [Cor 91].

Forging processes for a product of AISI 4130 steel have been developed

using three modelling and design approaches in [Lee 94]. As an experimental

approach, physical modelling with plasticity has been performed. A

commercial FEM code DEFORM and a knowledge-based process planning

system have also been used as computational methods.

The most valuable resource in any design, engineering or manufacturing

environment is the knowledge of the staff in the organisation. The

development of design-rule-driven CAD applications [Yu 85, Ras 89] in the

engineering, design and manufacturing environment, is one way to keep and

preserve the experience of the company's experts and to distribute this

knowledge to a variety of people. An application to design axisymmetrical

forging pieces and dies, based on design rules and variational geometry has

been described in [Anz 94]. Axisymmetrical pieces were classified into

families; then, forging design rules were identified for draft angles, radii,

flash land and preforms. The application was implemented in I/EMS from

Intergraph using the parametric programming language (PPL) and the

I/FORMS interface. In order to reach a wider range of CAD software and

hardware platforms a prototype of the application has also been implemented

in an object-oriented language.

Radial extrusion has been studied by several researchers. Most of this

work has been strictly concerned with the production of solid rod components,

whereas radial extrusion of tubular parts has received little attention. Major

process parameters such as the ratio between inner and outer diameter of the

tube, the ratio between wall thickness and gap height together with the design

of the die chamber are important dimensional parameters that determine the

final geometry of a component. A comprehensive analysis of the radial

extrusion process using the finite element method has been realised by

[Pet 94].

A numerical method of construction of axi-symmetric slip-line fields

and their associated velocity fields is applied to investigate the problem of axi-

symmetric tube extrusion in [Chi 97]. Analysis of tapered preforms for upset

forging is recently reported in [Mus 99]. FE simulation of the influence of die-

elasticity on component dimensions in forward extrusion is reported in [Qin

97]. The accuracy of numerical simulation is influenced by the tool-workpiece

interfacial friction condition, which is a major concern in metal forming

process design and analysis. Friction affects forming force/energy, tool life

and the product quality (surface finish, internal structure, product life etc.).

Considering the influence of the bulk material, a friction model is developed by

[Wan 97].

A new modelling strategy in intelligent manufacturing is developed by

[Han 2000] for various forming processes using Artificial

Neural Network (ANN). ANN models easily capture the intricate relationships

between various process parameters and can be easily integrated into existing

manufacturing environments.

1.5 SOFT COMPUTING TECHNIQUES

In recent years there has been much emphasis on the development of

alternate and novel models of computing collectively referred to as Soft

Computing. The primary aim of Soft Computing is to exploit the tolerance for

imprecision and uncertainty to achieve tractability, robustness and low cost. In

Soft Computing, what is usually sought is an approximate solution to a

precisely formulated problem, or, more typically, an approximate solution to

an imprecisely formulated problem [Zad 94].

Soft computing may use training methods to evolve programs that are

explicitly intended to adapt their behaviour to changing data and a changing

environment and it accepts data that may be fuzzy or rough, and responds by

producing information that also has these characteristics. The choice of Soft

Computing for complex optimisation problem is justified from the fact that

these are very well suited to deal with all those problems that usually represent

nightmares for researchers and developers: integer variables, non-convex

functions, non-differentiable functions, multiple local optima, multiple

objectives etc. The basic idea is to do away with the rigidity of the traditional

algorithmic computing that requires the precise statement of the step by step

procedure to be employed for problem solving before a program could be

written for the use of computer to solve the problem. Soft Computing might be

characterised as automated intelligent estimation. It is intended to provide an

alternative to more conventional hard computing that allows a formal and

systematic treatment of problems which, due to their size, complexity or

uncertainties, are otherwise not practical to process and solve.

Soft Computing attempts to emulate and automate the pragmatic

techniques used by intelligent humans to deal adequately and quickly both

with routine problems and with crises. It is an attempt to automate what is

often called "human intuition". It is important to note that computing in

general has always been viewed as intended to have just those characteristics,

which "Soft" Computing purposely avoids. In the past all the scientific and

engineering effort that has gone into computer science and engineering has

been devoted to making computing "hard". The recognition of a need for

something like Soft Computing in no way negates the importance, even the

essential value, of conventional hard computing and the need to continue to

improve that discipline in every way. Soft Computing arises simply from a

common recognition that some important problems do not readily lend

themselves to solution by hard computing and that other methods exist which

do help solve these problems, if only it is possible to redefine the standards of

an acceptable solution. In this work we have employed the following Soft

Computing techniques.

1.5.1 ARTIFICIAL NEURAL NETWORKS

Artificial neural network (ANN) models go by many names such as

connectionist models, parallel distributed processing models, and

neuromorphic systems. These models attempt to achieve good performance

via dense interconnection of simple computational elements, which are called

processing units. These networks are considered fine-grained parallel

implementation of non-linear dynamic and static systems. An ANN is an

abstract simulation of a real nervous system that contains a collection of

processing units or processing elements (PEs) communicating with each other

via axon connections. Such a model resembles the axons and dendrites of the

nervous system. Because of its self-organising and adaptive nature, the model

provides a new parallel and distributed paradigm that has the potential to be
more robust and user-friendly than traditional schemes [Wid 90, Oba 92, Zur

91, Tra 91, Her 91, Bru 90, Oba 98b].

The study of artificial neural networks is an attempt to simulate and

understand biological processes in an intriguing manner. It is of interest to

define alternative computational paradigms that attempt to mimic the brain's

operation in several ways. Neural Networks are an alternative approach to the

traditional Von Neumann programming schemes.

Interest in neural networks has increased in recent years due partly to

some significant breakthroughs in research in architecture, learning/training

algorithms, and operational characteristics. Advances in computer hardware

technology that made neural network implementation faster and more efficient

have also contributed to the progress in research and development in neural

networks. Much of the drive has arisen because of numerous successes achieved

in demonstrating the ability of neural networks to deliver elegant and powerful

solutions, especially in the field of learning, pattern recognition, optimisation,

and classification, which have proved to be difficult and computationally

intensive for traditional Von Neumann computing schemes.

In artificial neural networks, the element that corresponds to a

biological neuron is called a processing element (PE). A simple PE combines

its input paths by adding up a weighted sum of all inputs. The output of a PE is

the signal that is generated by applying the combined inputs to an appropriate

transfer function. Learning takes place in the form of adjustment of weights

connecting the inputs to the PE. There are various ways in which PEs are

interconnected in neural networks as groups called layers or slabs, and also,

there exists a variety of training (learning) rules that determine how, when and

by what magnitudes the weights are updated.
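A minimal sketch of such a processing element follows (illustrative only; the TanH transfer function and the example weights are arbitrary choices, not values from this work):

import math

def processing_element(inputs, weights, bias=0.0):
    # A PE adds up the weighted sum of its inputs and passes the result
    # through a transfer function (here TanH).
    combined = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(combined)

# Example: a PE with three inputs
print(processing_element(inputs=[0.5, -1.0, 0.25], weights=[0.8, 0.1, -0.4]))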

A suitable activation function can be chosen for each layer by trial and

error method from among several commonly used functions such as TanH,

sigmoid, and linear. The information propagates through ANNs in response to

the input patterns. Differential error at each hidden layer is computed and the

corresponding delta weights are added to all the weights in the system. This is

done for each (input, desired output) pair when the delta-learning rule is in effect.

However, the convergence speed for ANNs can be improved when other rules

are used along with a suitable momentum factor. The direction of signal flow,

types of training and activation function, values for learning parameters,

number of neurons in each layer, etc. are a few of the current active research

areas. The following design factors are important aspects in the

characterisation of ANN models [And 88b].

1) Supervised, unsupervised or reinforcement learning paradigms;

2) Decision and approximation/optimisation formulations;

3) ANN structures;

4) Static and temporal recognition;

5) Activation functions;

6) Individual and mutual training strategies.
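Returning to the delta-learning rule with a momentum factor mentioned above, a minimal sketch of the weight update for a single PE is given below (the variable names and the exact update form are my illustrative assumptions, not the specific rule used in any cited work):

def delta_rule_update(weights, previous_deltas, inputs, error,
                      learning_rate=0.1, momentum=0.9):
    # Delta rule with momentum: error is (target - output) for the PE;
    # each weight change combines the current error-driven correction
    # with a fraction of the previous change, which typically improves
    # convergence speed.
    for i, x in enumerate(inputs):
        delta = learning_rate * error * x + momentum * previous_deltas[i]
        weights[i] += delta
        previous_deltas[i] = delta
    return weights, previous_deltas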

1.5.1.1 CHARACTERISTICS OF ANN MODELS

A fundamental feature of ANNs is their adaptive nature, where learning

by examples replaces programming in solving problems. This important

feature makes such computational paradigms very appealing in the application

domain where one has little or incomplete understanding of the problem to be

solved, but has some training data available. The parallel and distributed

architecture feature of ANNs allows for fast computation of solutions when

these networks are implemented on parallel and/or distributed computer

systems [And 88a].

One aspect of ANNs is the use of simple processing elements, which

are essentially approximate models of the neurons in the brain. It is estimated

that the brain contains over 100 billion (10^11) neurons of different types and

10^14 synapses in the human nervous system. Recent studies in the brain have

found that there are more than 1000 synapses on the output/input of each

neuron. A neuron is the fundamental cellular unit of the nervous system and,

in particular, the brain. The "artificial neuron" is the basic building block unit

of any ANN. Each neuron can be regarded as a simple processing element that

receives and combines signals from many other neurons through input

structures called "Dendrites". For a combined input signal having values

greater than a certain threshold, the firing of a neuron takes place resulting in

an output signal that is transmitted along a cell component called "Axon". The

axon of a neuron splits up and connects to dendrites of other neurons through

the "Synapse". The strength of the signal transmitted across a synapse is called

synaptic efficiency, which is modified as the brain learns [And 88a].

There are two types of ANN models:

1) The biological type, which encompasses networks mimicking biological

systems such as audio functions (cochlea) or early vision functions (retina).

The main objective of the first type is to develop a synthetic element for

verifying hypotheses related to biological systems. The ANNs are not used

directly for data processing.

2) The second type is the application-driven neuron, which depends less on

faithfulness to neurobiology. An application-driven ANN architecture

comprises massively parallel adaptive PEs with an interconnection network.

In these models the architecture is dictated by the application needs. In

general, neurons and axons are mathematically modelled by activation and net

functions respectively. The selection of these functions depends in most cases

on the application of the ANN models. In other words, application-driven

ANN models are only loosely tied to the biological realities. They are tightly

associated with advanced and intelligent processing in recognition,

optimisation and classification. Such models have the potential of offering a

revolutionary technology for modern high performance computer and

information processing. The reasons for the strengths of application-driven

ANNs can be enumerated as follows [Zur 91].

(a) Parallel and distributed processing capability: they employ a large number

of PEs that are interconnected using an interconnection scheme that

depends on the type of paradigm.

(b) Adaptiveness and self-organisation: they offer adaptive and robust

processing capabilities by adopting adaptive learning (training) and self-

organisation rules.

(c) Fault-tolerance: this is a very attractive feature for many applications.

(d) Non-linear processing: this characteristic is very important as it enhances

the network approximation, noise immunity and classification capabilities.

Neural networks can be classified into different categories based on the

selected criterion. Based on the learning method (learning algorithm), we can

divide neural networks into supervised, unsupervised, and reinforcement

learning types. Supervised learning refers to the design of a classifier in the

case that the underlying class of available samples is known. In supervised

learning, each input pattern received from the environment is associated with a

specific desired target pattern. The weights are usually synthesised gradually,

and at each step of the learning process they are updated in order to minimise

the error between the network's output and the corresponding desired target. In

the unsupervised learning case, it is necessary to classify data into a number of

groups without the aid of a training set. The goal here is to separate the given

data into M classes. The idea is to optimise some criterion or performance

metric/function in terms of the output activity of the units. The weights and

outputs in this case are usually expected to converge to representations that

capture the statistical regularities of the input data. In order to accomplish this

a clustering criterion is defined which assigns a numerical value to each

possible assignment of samples to clusters. It is too costly in general to simply

evaluate the criterion for each possible assignment; therefore, a method must

be used to find an optimal assignment. The third class is the reinforcement-

training algorithm, which is between supervised and unsupervised learning.

Reinforcement learning involves updating the network's weights in response to

an "evaluative" teacher who basically tells whether the answer is correct or

incorrect. It involves rules that may be viewed as a stochastic search mechanism

that attempt to maximise the probability of positive external reinforcement for

a given training set.
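As a concrete illustration of the clustering criterion mentioned above (my example, not one taken from the cited references), the within-cluster sum of squared distances assigns a numerical value to each possible assignment of samples to clusters; lower values indicate tighter clusters:

def clustering_criterion(samples, assignment, n_clusters):
    # Within-cluster sum of squared distances to each cluster mean.
    total = 0.0
    for k in range(n_clusters):
        members = [s for s, c in zip(samples, assignment) if c == k]
        if not members:
            continue
        mean = [sum(dim) / len(members) for dim in zip(*members)]
        total += sum(sum((si - mi) ** 2 for si, mi in zip(s, mean))
                     for s in members)
    return total

# Example: two well-separated clusters give a small criterion value.
samples = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
print(clustering_criterion(samples, assignment=[0, 0, 1, 1], n_clusters=2))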

1.5.1.2 APPLICATION CATEGORIES

We can divide the application domain of ANNs into the following

main categories: (1) classification, clustering, diagnosis and association; (2)

optimisation; (3) regression or generalisation; and (4) pattern completion. A

brief description of each category follows.

Category 1 - Classification, clustering, diagnosis, and association: In this

paradigm, input static patterns or temporal signals are to be recognised or

classified. A classifier should be trained so that when a slightly distorted

version of a stimulus is presented it can still be recognised. The network

should have a good noise immunity capability, which is critical for some

applications.

Category 2 - Optimisation: ANNs are appealing for solving optimisation

problems which involve finding the global minimum of a function. The

determination of synaptic weights is relatively easy once the energy function

f(x) is found. The cost function is easy to find for some applications; however,

in other applications, it has to be derived from a given cost criterion and some

constraints related to the problem at hand. One of the main issues related to

optimisation problems is the probability of obtaining a solution converging to

a local minimum instead of a global minimum. Among the techniques that are

proposed to tackle this problem are simulated annealing and mean-field

annealing [Kir 83].

Category 3 - Function approximation and Generalisation: The function

approximation problem has been studied extensively. Usually, the system is

trained based on the supervised training scheme using a large data set. An

ANN is considered successful if it can closely approximate the teacher values

for the trained data set and can provide smooth interpolations for the untrained

data set. Generalisation is used to yield a correct response to an input to which

it has not been trained. The system must induce the salient feature of the input

and detect the regularity. This regularity discovery is vital for many

applications [Lia 98]. It enables the system to function efficiently throughout

the entire data set, although it has been trained only by a limited portion of the

entire data set [Han 2000]. It is important to note that each ANN model tends

to impose its own prejudice in how to generalise from a finite set of training data.

Category 4 - Pattern Completion: In some classification problems, an implied

task is the completion of information, that is, recovery of original data given

only partial information.

1.5.1.3 ANN APPLICATIONS

A brief literature review of the applications of Neural Networks in

manufacturing processes is attempted here. Artificial Neural Networks can

capture and model complex input-output relationships even without the help of

Chapter I 29
a mathematical model [Fau 94]. This property of ANNs is extremely

useful in situations where it is hard to derive a mathematical model that links

the various parameters as in [Han 2000]. Some attempts by [Osk

90, Osk 91] have been made to apply this technique as an opportunity to

shorten the reaction time of the manufacturing systems, increase the product

quality, make systems more reliable and enhance the system's intelligence by

the learning capabilities of neural networks. Excellent surveys of these

attempts are found in [Mon 92] and [Zha 95].

[Mat 99] presented an approach for the condition monitoring in

reaming using the conventional Back Propagation Neural Network.

[Chr 90] evaluated different process modelling

techniques and concluded that a proper neural network model could estimate

state variables better. [Mon 86] described the use of learning

procedures in machine tool monitoring and illustrated the effectiveness of the

procedure. [Ran 89] initiated the use of a feed-

forward neural network to learn and optimise turning processes. In that study,

process optimisation was carried out using an augmented Lagrangian method

with minimising material removal rate as the objective subjected to some

constraints. [Sat 92] modelled the process of creep feed

grinding of superalloys using the standard BP algorithm. The learned model

was then converted into an explicit mathematical model to be optimised

analytically using an off-line multi-objective programming technique. Their

approach is better suited for simple models because, for complicated models,

the conversion process will become very tedious and difficult to realise.

[Lia 96] showed how Multi-Layer Perceptron (MLP) neural

networks can be used to model and optimise grinding processes. In their work,

a generalised 4-layer network was trained with the standard BP algorithm to

model the process and a Boltzmann factor was integrated with the BP algorithm

for process optimisation. [Lia 98] presented the results of their

investigation into the performance of several MLP training algorithms for

manufacturing process modelling and optimisation. [Han 99b]

investigated the successful application of Kohonen maps' self-organising

capability for solving the Group Technology Cell problem. Monostori et al., in

their review paper [Mon 92], stated that the past applications of neural

networks in intelligent manufacturing can only be regarded as initial steps.

Due to its interdisciplinary nature, encompassing computing, biology,

neuropsychology, physics, engineering, biomedicine, communications, pattern

recognition and image processing, etc., the field of neural networks attracts a

variety of researchers and developers from a broad range of backgrounds.

Today, there are many different paradigms and applications of neural

networks, reflecting the diversity of research and development groups. ANNs are viable

computational models for a wide variety of applications including pattern

recognition, speech recognition and synthesis, image compression, adaptive

interfaces between human and machines, clustering, forecasting and

prediction, diagnosis, function approximation, non-linear system modelling

and control, optimisation, routing in parallel computer systems and high-speed

networks, and associative memory. The field of neural networks links a

number of closely related areas that include parallel and distributed

computing. These areas are brought together with the common theme of

attempting to exhibit the computing method, which is witnessed in the study

of biological neural systems [Zur 91, Kun 93, Oba 97, Kha 97, Oba 98a].

1.5.2 GENETIC ALGORITHMS

Genetic Algorithms (GAs) are search algorithms based on the

mechanics of natural selection and natural genetics. A GA can be understood

as an "intelligent" probabilistic search algorithm which can be applied to a

variety of combinatorial optimisation problems.

The idea of GAs is based on the evolutionary process of biological

organisms in nature. During the course of evolution, natural populations

evolve according to the principles of natural selection and "survival of the

fittest". Individuals who are more successful in adapting to that environment

have a better chance of surviving and reproducing, whilst individuals who are

less fit are eliminated. This means that the genes from the highly fit

individuals will spread to an increasing number of individuals in each

successive generation. The combination of good characteristics from highly

adapted ancestors may produce even more fit offspring. In this way, species

evolve to become more and more adaptive to their environment. A GA

simulates these processes by taking an initial population of individuals i.e.

solutions in the search space of the problem at hand. In optimisation terms,

each individual in the population is represented by a sequence of genes or

binary numbers and is known as a chromosome. The quality of a solution is determined by its

fitness function, which evaluates a chromosome with respect to the objective

function of the optimisation problem at hand. A judiciously selected set of

chromosomes is called a population and the population at a given time is a

generation. The population size remains the same from generation to generation

and has a significant impact on the performance of the GA.

1.5.2.1 MECHANICS OF GAs

Fundamentally, a GA operates on a

generation and generally consists of three operators: reproduction, crossover and mutation. Reproduction is a process

in which individual strings are copied according to their objective function

values. The higher the value of the objective function, the higher is the

probability for copying. After reproduction, simple crossover may proceed in

two steps. First, members of the newly reproduced strings in the mating pool

are mated at random. Secondly, each pair of strings undergoes the crossover

operator. Mutation is the occasional (with small probability) random

alteration of the value of a string position. The chromosomes resulting from

these three operations, often known as offspring or children, form the next

generation's population.

After every generation, highly fit individuals or solutions are given

opportunities to reproduce by exchanging pieces of their genetic information,

in the crossover procedure, with other highly fit individuals. This produces

new offspring solutions (i.e. children), which share some characteristics taken

from both parents. Mutation is often applied after crossover by altering some

genes in the strings. The offspring can either replace the whole population

(generational approach) or only a part of it (steady-state approach). This

evaluation-selection-reproduction cycle can be repeated until a satisfactory

solution is found, or it can be iterated for a fixed number of times (up to the

point where the system ceases to improve or the population has converged to a

set of well-performing sequences).
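A minimal sketch of the three operators for binary-string chromosomes is given below (single-point crossover and bit-flip mutation are one common choice among many; the function names are mine):

import random

def reproduce(population, fitnesses):
    # Roulette-wheel reproduction: strings are copied with probability
    # proportional to their (non-negative) objective function values.
    return random.choices(population, weights=fitnesses, k=len(population))

def crossover(parent_a, parent_b):
    # Single-point crossover: swap the tails of two mated strings.
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(chromosome, rate=0.01):
    # Mutation: occasional (small-probability) random alteration of a
    # string position, here a bit flip.
    return [1 - gene if random.random() < rate else gene
            for gene in chromosome]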

1.5.2.2 BASIC STEPS OF GENETIC ALGORITHMS

As per the above description of a simple GA the basic steps are as

follows:

(1) Generate initial population as the current generation. Evaluate fitness of

individuals in the population.

(2) Perform crossover between selected parents of the current generation.

(3) Perform mutation and evaluate fitness of the children.

(4) Select parents for the next generation.

(5) Repeat steps 2 to 4 until the stopping criterion is met.

(6) Output the best solution found.

The details of various ingredients of GA are elaborated in appendix-A.
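Putting the basic steps together with the operator sketches above gives the following compact loop (a sketch under illustrative assumptions: a maximisation problem, an even population size, generational replacement, and any fitness function f that takes a chromosome and returns a non-negative value):

def genetic_algorithm(f, n_bits=16, pop_size=30, generations=100):
    # Step 1: generate and evaluate the initial population.
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    best = max(population, key=f)
    for _ in range(generations):                 # step 5: repeat until stopping
        fitnesses = [f(c) for c in population]
        pool = reproduce(population, fitnesses)  # step 4: select parents
        next_gen = []
        for i in range(0, pop_size, 2):          # step 2: crossover pairs
            a, b = crossover(pool[i], pool[i + 1])
            next_gen += [mutate(a), mutate(b)]   # step 3: mutation
        population = next_gen
        best = max(population + [best], key=f)
    return best                                  # step 6: best solution found

# Example: maximise the number of 1-bits in the string.
print(genetic_algorithm(f=sum))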

1.5.2.3 ADVANTAGES AND LIMITATIONS OF GAs

GAs do not require problem-specific information and, therefore, they

are more flexible than most of the other search methods. GAs manipulate

decisions or control variable representations at the string level to exploit

similarities among high performance strings. Other methods usually deal with

functions and their control variables directly. Because GAs work from a

population with a wide spread of sample points, the probability of reaching a false peak is

reduced. The transition rules of GAs are stochastic via sampling; most other

methods use deterministic transition rules. Therefore, GAs are more suitable

for multiple-peak functions. Also, they can work with any complicated

objective functions, as they need only compute the function values and do

not need to perform complicated differentiation or any such mathematical

operation.

In theory, GAs cannot guarantee to attain the best solution. However,

in practice, superior solutions are obtained and, sometimes, it is also possible

to get the best solution. Because of this reason, GAs have been applied in a

variety of engineering applications.

1.5.2.4 SOME SALIENT DEVELOPMENTS IN GENETIC

ALGORITHM APPLICATIONS

Genetic algorithms (GAs) have been established across a diverse

domain of disciplines. Theoretical developments by [Hol 75] and

[Dej 75] have laid the foundations of GAs. Numerous papers and

dissertations establish the validity of the technique in optimisation and control

applications [Gol 87]. The robustness of GAs is due to their capacity to locate

the global optimum in a multimodal landscape. A comprehensive review of

various developments in GA till about 1990 is available in references [Gol

89, Dav 91]. Further research on GAs has witnessed the emergence of new

trends that break the traditional mould of 'neat' GAs that are characterised by

static crossover and mutation rates, fixed length encoding of solutions, and

populations of fixed size. Goldberg has introduced the notions of variable

length solutions for GAs in [Gol 89] and [Gol 90], and has shown that the

'messy' GAs perform very well. [Dav 89] has recommended the

technique of adapting operator probabilities dynamically based on the relative

performance of the operators. [Sri 94] adopted a messy

approach to determine the probabilities of crossover and mutation.

[Gol 93] have applied GA for optimising a

truss, subject to maximum and minimum stress constraints on each member.

[Ade 93] have applied GA for optimisation of space

structures. [Kum 94] has developed a distributed GA for optimisation

of space structures on a network of workstations using a parallel processing

techniques to reduce the computational time. GAs have been extensively used

in production scheduling [Del 95, Fan 96, Mat 96, Lin 97, Bie 99].

[Miz 94] have applied GAs for optimal tool selection in minimising the

total machining time and the uncut area in milling operation involving

multiple tools. [Dug 94] described a method for design

optimisation of process variables in cold forging sequences. To minimise the

possibility of the initiation of tensile fracture in the outer race preform of a

constant velocity joint manufactured by cold forming operations, an adaptive

Micro Genetic Algorithm was implemented. [Roy 97] also

implemented micro genetic algorithm scheme for minimising a wide variety of

objective-cost functions relevant to the various forming processes.

1.5.3 SIMULATED ANNEALING

One of the main issues related to the optimisation problems is the

possibility of obtaining a solution converging to a local minimum instead of

the global minimum. Simulated Annealing (SA) is one technique that is proposed

to tackle this problem [Ger 84, Hin 86, Laa 87, Her 91, Che 94, Bru 97].

Simulated Annealing is a stochastic technique derived from statistical

mechanics, for solving combinatorial optimisation problems. It was introduced

by [Kir 83]. In practical situations, the objective is often that

of minimising a cost function. Ordinarily, the number of parameters contained

in the system under consideration is very large. Thus, finding the minimum

cost solution of a system is like finding a low energy state in a physical

system. Another issue of concern is that deterministic algorithms used for cost

minimisation suffer from a fundamental weakness of gradient descent

procedures, i.e. the algorithm may get 'stuck' in local 'minima' that are not

globally optimum. This issue is of particular concern in the case of systems

required to perform 'constraint-satisfaction tasks'. In these cases, a minimum

cost solution compatible with that of a particular input to the system is to be

found. In such a situation, the system must be capable of escaping from local

minima so as to reach the configuration that represents global minimum, given

the input of interest [Rut 89].

Both of these issues are addressed in the SA algorithm. The basic idea

of the algorithm is quite simple and can be quite succinctly stated (for a

minimisation problem) as:

"When optimising a very large and complex system (i.e. a system with many

degrees of freedom), instead of "always" going downhill, try to go downhill

"most of the times".

The idea can neatly be illustrated with a 'balls and hills' diagram, as

shown in figure-2. SA differs from conventional iterative optimisation

algorithms in two important aspects:

Figure-2 Configuration space: balls and hills (cost plotted against
configurations, showing the current configuration and an allowed downhill
perturbation)

• The algorithm need not get 'stuck', since a transition out of a local minimum

is always possible when the system operates at a non-zero temperature.

• SA exhibits a divide-and-conquer feature that is adaptive in nature.

Specifically, gross features of the final state of the system are seen at

higher temperatures, while fine details of the state appear at lower

temperatures.

The choices the designer of SA has to make can be classified into two

categories:

• Problem specific

• Generic

The problem specific parameters are:

a) Configuration: Choice of a vector for representing the state of the system.

b) Neighbourhood of the configuration: The neighbourhood of a state S is the

set of states to which the system can move from S with non-zero

probability.

c) Cost of configuration: It is the measure of the goodness of the

configuration.

d) Initial configuration: This is chosen by picking a random configuration

from the set of all possible configurations.

The generic parameters are:

a) Initial value of temperature T0.

b) Decreasing factor Beta.

c) Stopping criterion

d) Length of the iteration at each value of temperature.

The SA algorithm can be explained in the form of the pseudo-code given below, followed by a minimal runnable sketch:

1. Set the initial value of temperature and randomly choose an initial solution x.

2. Randomly and independently generate a solution from the neighbourhood of the current solution x. Let this solution be y.

3. Replace x by y according to the Boltzmann criterion, i.e. if exp((f(x) - f(y))/t) > p, where f(.) is the value of the objective function for a solution, t is the current temperature and p is a random number uniformly distributed between 0 and 1. Note that downhill moves, with f(y) <= f(x), are thus always accepted.

4. Decrease the temperature.

5. Repeat steps 2 to 4 until the stopping criterion is met.

6. Output the best solution found.
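The pseudo-code translates almost line for line into the following minimal Python sketch. The test objective, the neighbourhood move, and the parameter values (initial temperature, decrement factor, iteration length) are illustrative assumptions rather than prescribed choices.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=100.0, beta=0.9,
                        moves_per_temp=50, t_min=1e-3):
    """Minimal SA sketch: geometric cooling with Boltzmann acceptance."""
    x, t = x0, t0                          # step 1: initial solution, temperature
    best, best_cost = x, cost(x)
    while t > t_min:                       # step 5: stopping criterion
        for _ in range(moves_per_temp):    # iteration length at each temperature
            y = neighbour(x)               # step 2: random neighbouring solution
            delta = cost(y) - cost(x)
            # Step 3: Boltzmann criterion; downhill moves always accepted.
            if delta <= 0 or math.exp(-delta / t) > random.random():
                x = y
                if cost(x) < best_cost:
                    best, best_cost = x, cost(x)
        t *= beta                          # step 4: decrease the temperature
    return best, best_cost                 # step 6: best solution found

# Illustrative use on a one-dimensional multimodal cost function.
cost = lambda x: x * x + 10.0 * math.sin(3.0 * x)
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(cost, neighbour, x0=random.uniform(-10.0, 10.0)))
```

When each cost evaluation is an expensive forming simulation, as in the applications of Section 1.6, the repeated cost(x) calls would be cached, but the control flow stays the same.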

To implement a finite time approximation of the SA algorithm, a set of parameters governing the convergence of the algorithm needs to be specified. These parameters are combined in a so-called annealing schedule or cooling schedule. One commonly used annealing schedule specifies the parameters of interest as follows:

• Initial value of the temperature: The initial value T0 of the temperature is chosen high enough to ensure that virtually all proposed transitions are accepted by the simulated annealing algorithm.

• Decrement of the temperature: Ordinarily the cooling is performed exponentially, with the changes made in the values of the temperature being small. In particular, the decrement function is defined by Tk = β · Tk-1, k = 1, 2, 3, ..., where β is a constant smaller than but close to unity. Typical values of β lie between 0.8 and 0.99. At each temperature, enough transitions are attempted so that there are 10 accepted transitions per experiment on average.

• Final value of the temperature: The system is considered frozen, and annealing stops, if the desired number of acceptances is not achieved at three successive temperatures.

The annealing schedule thus specifies a finite sequence of values of the temperature and a finite number of transitions attempted at each value of the temperature; a minimal sketch of such a schedule follows.
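In the sketch below, the freeze test on three successive temperatures and the target of 10 accepted transitions follow the description above, while the trial-move budget, the decrement factor, and the random move generator are illustrative assumptions.

```python
import math
import random

def run_cooling_schedule(propose_delta, t0, beta=0.95, target_accepts=10,
                         max_trials=200, freeze_limit=3):
    """Geometric cooling with the freezing rule described above: annealing
    stops once fewer than `target_accepts` transitions are accepted at
    `freeze_limit` successive temperatures."""
    t, frozen_count = t0, 0
    while frozen_count < freeze_limit:
        accepted = 0
        for _ in range(max_trials):          # transitions attempted at this T
            delta = propose_delta()          # cost change of a trial move
            if delta <= 0 or math.exp(-delta / t) > random.random():
                accepted += 1
            if accepted >= target_accepts:   # enough accepted transitions
                break
        frozen_count = frozen_count + 1 if accepted < target_accepts else 0
        t *= beta                            # exponential decrement T_k = beta * T_{k-1}
    return t                                 # the freezing temperature

# Illustrative use: mostly-uphill trial moves, as near a local minimum, so
# acceptances dry up as the temperature falls and the system freezes.
print(run_cooling_schedule(lambda: random.gauss(1.0, 0.3), t0=10.0))
```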

1.6 OPTIMISATION IN METAL FORMING

During recent years, several Computer Aided Engineering (CAE) software packages have been developed by researchers for metal forming processes. They derive from two distinct strategies that can be briefly described as follows. A first set of techniques is based on human expertise. This is the domain of artificial intelligence, which proceeds in three steps. First, the expertise has to be properly formulated for a specific forging application in a specific context. Then, the expertise must be gathered and stored in a convenient way. Finally, the expert system can be developed out of this set of rules, as done for instance in [Len 94]. But collecting human expertise is a tough and delicate task that requires time. Moreover, it is specific to each process and factory. So, in order to avoid this difficulty, other CAE tools have been based on approximate models of the forging process. These models have been developed both from human expertise and from simplified equations. They can be used by the expert to solve new and more complex design problems because they are more general than the specific sets of rules used for the usual processes. In [Mar 96], these models have been incorporated in software, which facilitates their use and puts them within the reach of less experienced persons. However, these simple models are based on drastic approximations that limit their applicability.

The second set of techniques is based on numerical simulation. In fact, numerical simulation allows the expert to go further when studying a complex problem, as in [Han 92] or in [Kob 89]. It also provides a general frame for forging process optimisation. These methods are called inverse methods of design [Che 96a, Fou 96c, Fou 97, Fou 98]. On this basis, several zero-order methods have been developed. They do not require large modifications of the forging software, which is used as a black box. The forging optimisation problem can be solved by simplex algorithms as in [Kus 89], by genetic algorithms as in [Dug 94], or by statistical methods as in [San 98] and in [Lor 98]. Usually a large number of process simulations is required to find a satisfactory solution. On the other hand, the problem can be solved directly, starting from the desired final part and using the backward tracing method to find a satisfactory shape of the preform, as in [Zha 95]. Another way to speed up these methods is based on more complex first-order methods. They require computing the gradients of the objective functions and have been investigated following two differentiation methods: the Direct Differentiation Method has been used in its variational form in [Bad 96], while its discrete form has been preferred in [Fou 96a, Fou 96b, Zha 97].
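As an illustration of the zero-order, black-box usage described above, the following sketch wraps a mock "forging simulation" in scipy's Nelder-Mead simplex routine. The function forging_simulation, its two design variables, and the cost expression are hypothetical stand-ins for a real process simulation, not any published model.

```python
import numpy as np
from scipy.optimize import minimize

def forging_simulation(design):
    """Hypothetical stand-in for a forging simulator used as a black box.
    `design` = (die_angle, reduction_ratio); returns a scalar cost, e.g.
    a weighted sum of forming load and shape error."""
    die_angle, reduction = design
    load = (die_angle - 12.0) ** 2 + 50.0 * (reduction - 0.3) ** 2
    shape_error = abs(np.sin(die_angle / 5.0)) * reduction
    return load + 10.0 * shape_error

# Zero-order search: only cost values are needed, no gradients, so the
# simulator is never modified; this is the "black box" usage described above.
result = minimize(forging_simulation, x0=np.array([20.0, 0.5]),
                  method="Nelder-Mead")
print(result.x, result.fun)
```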

Metal forming is used to produce metal parts with desirable mechanical properties, which are conferred by a favourable material microstructure. Many studies have been made of optimising forming parameters to achieve a desired final shape; relatively few address optimisation of microstructure [Ber 98]. [Kus 89] presented optimal shape design techniques for an extrusion die. Two optimisation problems were examined. The first was the determination of the die contour which minimises the total extrusion force for a given material. The second was to find the die shape which would produce a uniform exit velocity during the extrusion process. The direct search non-gradient method of Hooke and Jeeves, and the steepest descent and Fletcher-Powell gradient methods, were used as the optimisation procedures. Kusiak concluded the following: (1) the optimisation technique can be useful in metal extrusion to control process parameters through the appropriate choice of an optimal die design; (2) optimisation of die shapes, in general, requires constraints to be added to the analysis in order to prevent designs which are totally impractical for metal forming.

[Dug 94] described a method for design optimisation of process variables in cold forging sequences. To minimise the possibility of the initiation of tensile fracture in the outer race preform of a constant velocity joint manufactured by cold forming operations, an adaptive Micro Genetic Algorithm was implemented. The chosen design variables were the preform diameter, the maximum number of forming operations, the number of extrusion and upset operations, the amount of upset and the included angle in each upset operation, and the included angle in the extrusion and upset dies. They obtained a significant reduction in damage value as a result of this optimisation process.

A finite element model is presented in [Ven 97] to obtain the temperature distribution in the workpiece as well as in the tooling in the hot and warm extrusion processes, and an optimal die profile at various process conditions is obtained by minimising the extrusion power.

[Roy 97] also implemented a micro genetic algorithm scheme for minimising a wide variety of objective-cost functions relevant to the various forming processes. The chosen design variables are die geometry, area reduction ratios and the total number of forming stages. The selected forming processes are multi-pass cold drawing of a tubular profile and cold forging of an automotive outer race preform.

[Lio 97] used the FEM and a robust design methodology to identify the controlling process parameters which can optimise the residual stresses in forged parts. In optimising the forging operation, experimental planning was performed using the orthogonal array concept; length of the die land, reduction percentage, inlet angle and corner fillet were selected as process parameters. The analysis showed that the inlet angle, friction coefficient and length of die land have the most significant effects on the optimum residual stresses.

[Fou 98] studied the problem of designing the preform tool shapes in forging. In the frame of a two-step forging sequence, the design is carried out by an inverse method that is solved by optimisation algorithms. First, an objective function Φ that represents the goal to be achieved (the prescribed final shape of the part) and the defects to be eliminated (material folds) is defined. For a given design, this function is calculated by an axisymmetric finite element forging software, FORGE2®. In order to minimise it, a quasi-Newton algorithm is selected, which also requires the derivative of Φ to be calculated. The sensitivity analysis is based on the direct differentiation method and is applied to non-linear, non-steady forging problems that include numerous remeshings. A special focus is put on the problem of folds and their specific remeshing procedure, objective function and derivatives.
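In the same first-order spirit, though greatly simplified, the sketch below feeds gradients of a mock objective to scipy's BFGS quasi-Newton routine. The functions objective and gradient are hypothetical stand-ins; in [Fou 98] the derivative comes from direct differentiation of the finite element solution rather than from an analytic formula.

```python
import numpy as np
from scipy.optimize import minimize

def objective(design):
    """Hypothetical stand-in for the shape objective: distance of the
    simulated final shape from the prescribed one plus a fold penalty."""
    shape_error = np.sum((design - np.array([1.0, 2.0])) ** 2)
    fold_penalty = np.exp(-design[0])        # penalise small first parameter
    return shape_error + fold_penalty

def gradient(design):
    """Analytic gradient of the mock objective; in [Fou 98] this quantity
    is obtained by direct differentiation of the FE forging solution."""
    g = 2.0 * (design - np.array([1.0, 2.0]))
    g[0] -= np.exp(-design[0])
    return g

# Quasi-Newton (BFGS) minimisation using the supplied derivative.
result = minimize(objective, x0=np.array([0.0, 0.0]),
                  jac=gradient, method="BFGS")
print(result.x, result.fun)
```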

[Ber 98] investigated the problem of selecting the ram velocity profile in an isothermal forging to best obtain a desired material microstructure. This is to be accomplished by tracking a prescribed strain rate profile. A weighting function, reflecting the rate at which the microstructure is forming, describes the relative importance of different parts of the billet. Finding the optimal solution generally requires a search over an infinite dimensional function space; however, for a certain class of forgings the problem reduces to solving a single ordinary differential equation. This method is demonstrated for the simulated forging of a TiAl turbine disk. The functional to be minimised contains an expression relating the ram velocity profile to the strain rate distribution. The only way to accurately calculate this relationship is through numerical simulation.
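A heavily simplified sketch of such a reduction to a single ODE: assume, for illustration, that holding a constant nominal strain rate ε̇* in upsetting requires the billet height h(t) to satisfy dh/dt = -ε̇* · h, so that the ram velocity v(t) = ε̇* · h(t) decays exponentially. The strain rate and initial height below are invented illustrative values, not the TiAl disk data of [Ber 98].

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS_DOT = 0.05   # prescribed (target) strain rate, 1/s -- illustrative value
H0 = 100.0       # initial billet height, mm -- illustrative value

# dh/dt = -eps_dot * h keeps the nominal strain rate v/h constant.
sol = solve_ivp(lambda t, h: -EPS_DOT * h, t_span=(0.0, 30.0),
                y0=[H0], dense_output=True)

times = np.linspace(0.0, 30.0, 7)
heights = sol.sol(times)[0]
ram_velocity = EPS_DOT * heights           # v(t) = eps_dot * h(t)
for t, h, v in zip(times, heights, ram_velocity):
    print(f"t = {t:5.1f} s   h = {h:7.2f} mm   ram velocity = {v:6.3f} mm/s")
```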

[Big 98] presented a shape optimisation method for the design of preform die shapes in multistage forging processes using a combination of the backward deformation method and a fuzzy decision making algorithm. In the backward deformation method, the final component shape is taken as the starting point, and the die is moved in the reverse direction with boundary nodes released as the die is raised. The optimum die shape is thereby determined by taking the optimum reverse path. A fuzzy decision making approach is developed to specify new boundary conditions for each backward time increment, based on geometrical features and the plastic deformation of the workpiece. To demonstrate this approach, a design analysis for an axisymmetric H-section disk is presented, and it was shown that the backward deformation method in conjunction with the fuzzy logic algorithm leads to a more uniform plastic strain in the simulated final component; a generic illustration of such a fuzzy decision rule is sketched below.
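Purely as a generic illustration of a fuzzy decision rule of this kind, one might score whether a boundary node should be released during a backward increment as follows. The membership functions, the geometric inputs, and the release threshold are all invented for the example and are not taken from [Big 98].

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def release_score(contact_pressure, surface_curvature):
    """Fuzzy score in [0, 1] for releasing a boundary node: high when the
    contact pressure is 'low' and the local surface curvature is 'gentle'."""
    low_pressure = triangular(contact_pressure, -1.0, 0.0, 50.0)
    gentle_curve = triangular(surface_curvature, -0.1, 0.0, 0.5)
    return min(low_pressure, gentle_curve)   # fuzzy AND as the minimum

# A node is released in this backward increment if the score clears a threshold.
print(release_score(contact_pressure=10.0, surface_curvature=0.05) > 0.5)
```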

[Pat 99] presented the optimal design of a plane strain two-hole extrusion die. This design example is considered as a test case to judge the usefulness of the method incorporated in the design of extrusion dies for uniform exit flow. Bearings for this die are to be designed so that the material exits both holes with parallel, balanced flow. The finite element method combined with techniques of mathematical programming is adopted. Derivatives of the objective function used during the optimisation phase are computed using analytical sensitivity analysis, and an optimal bearing length is reached after a few iterations of the optimisation procedure. Available experimental data for a two-hole extrusion die with bearings have been used to validate the numerical results.

